2310.09254
Entropic (Gromov) Wasserstein Flow Matching with GENOT
Optimal transport (OT) theory has reshaped the field of generative modeling: Combined with neural networks, recent \textit{Neural OT} (N-OT) solvers use OT as an inductive bias, to focus on ``thrifty'' mappings that minimize average displacement costs. This core principle has fueled the successful application of N-OT solvers to high-stakes scientific challenges, notably single-cell genomics. N-OT solvers are, however, increasingly confronted with practical challenges: while most N-OT solvers can handle squared-Euclidean costs, they must be repurposed to handle more general costs; their reliance on deterministic Monge maps as well as mass conservation constraints can easily go awry in the presence of outliers; mapping points \textit{across} heterogeneous spaces is out of their reach. While each of these challenges has been explored independently, we propose a new framework that can handle, natively, all of these needs. The \textit{generative entropic neural OT} (GENOT) framework models the conditional distribution $\pi_\varepsilon(\mathbf{y}|\mathbf{x})$ of an optimal \textit{entropic} coupling $\pi_\varepsilon$, using conditional flow matching. GENOT is generative, and can transport points \textit{across} spaces, guided by sample-based, unbalanced solutions to the Gromov-Wasserstein problem, that can use any cost. We showcase our approach on both synthetic and single-cell datasets, using GENOT to model cell development, predict cellular responses, and translate between data modalities.
Dominik Klein, Théo Uscidda, Fabian Theis, Marco Cuturi
2023-10-13T17:12:04Z
http://arxiv.org/abs/2310.09254v3
# Generative Entropic Neural Optimal Transport To Map Within and Across Spaces

###### Abstract

Learning measure-to-measure mappings is a crucial task in machine learning, featured prominently in generative modeling. Recent years have witnessed a surge of techniques that draw inspiration from optimal transport (OT) theory. Combined with neural network models, these methods, collectively known as _Neural OT_, use optimal transport as an inductive bias: such mappings should be optimal w.r.t. a given cost function, in the sense that they are able to move points in a thrifty way, within (by minimizing displacements) or across spaces (by being isometric). This principle, while intuitive, is often confronted with several practical challenges that require adapting the OT toolbox: cost functions other than the squared-Euclidean cost can be challenging to handle, the deterministic formulation of Monge maps leaves little flexibility, mapping across incomparable spaces raises multiple challenges, while the mass conservation constraint inherent to OT can provide too much credit to outliers. While each of these mismatches between practice and theory has been addressed independently in various works, we propose in this work an elegant framework to unify them, called _generative entropic neural optimal transport_ (GENOT). GENOT can accommodate any cost function; handles randomness using conditional generative models; can map points across incomparable spaces; and can be used as an _unbalanced_ solver. We evaluate our approach through experiments conducted on various synthetic datasets and demonstrate its practicality in single-cell biology. In this domain, GENOT proves to be valuable for tasks such as modeling cell development, predicting cellular responses to drugs, and translating between different data modalities of cells.

## 1 Introduction

Mapping a probability distribution onto another is a ubiquitous challenge in machine learning, with many implications in the field of generative modeling. Optimal transport (OT) has arisen in a few years as a major purveyor of tools to better address these challenges, both in theory and practice. The focus of OT lies on finding maps that can effectively transform a distribution of matter onto another, by minimizing a certain notion of cost (Santambrogio, 2015). Originally rooted in physics, the application of OT to large-dimensional problems arising in machine learning and the sciences has necessitated various modifications and adaptations. Starting with solvers that can solve approximate matching problems at large scales (Cuturi, 2013; Peyré et al., 2016; Scetbon et al., 2021, 2022), a recent plethora of OT-inspired training approaches for neural networks has emerged (Makkuva et al., 2020; Korotin et al., 2020; Asadulaev et al., 2022; Fan et al., 2020; Uscidda & Cuturi, 2023; Lipman et al., 2023; Tong et al., 2020, 2023b). As an illustration of this overall trend, the applications of OT to single-cell genomics have evolved from advanced matching problems (Schiebinger et al., 2019; Demetci et al., 2022), towards neural-based approaches that can, for instance, predict the response of cells to various perturbations (Bunne et al., 2021, 2022). Our goal in this paper is to address the various challenges that still stand in the way of applying OT to the most pressing scientific tasks.
**From Linear to Quadratic Neural OT Maps.** Optimal transport is primarily used through the Kantorovich problem to put in correspondence distributions taking values in the same space \(\mathcal{X}\), assuming the existence of a cost \(c(x,y)\) for any two points \(x,y\in\mathcal{X}\). Most of the theory is available in that regime, notably for simpler costs such as the squared Euclidean distance (Santambrogio, 2015, §1.3). We refer to such problems as _linear_ OT problems. Yet, more challenging applicative scenarios sought by practitioners involve source and target distributions that do _not_ live in the same space, e.g. \(\mathcal{X}\) and \(\mathcal{Y}\) have differing dimensions, as in (Demetci et al., 2022). The challenge in that case is that no cost function across spaces is known, requiring the use of quadratic losses (Mémoli, 2011; Sturm, 2020), yielding the so-called Gromov-Wasserstein (GW) problem. While theory is far more scarce in these regimes, practitioners have expressed major interest in that flexibility, going as far as proposing, with the Fused Gromov-Wasserstein (FGW) distance, a tool that blends both linear and quadratic approaches (Vayer et al., 2018), as in (Klein et al., 2023; Lange et al., 2023; Nitzan et al., 2019; Zeira et al., 2022). There exists, however, to our knowledge, only one formulation of a neural quadratic OT method, which is limited to learning deterministic maps for inner-product costs, and whose training procedure involves a min-max-min optimization procedure (Nekrashevich et al., 2023).

**From Deterministic to Stochastic Maps.** The classic (Monge) deterministic map can lack flexibility in practice, both at estimation and inference time. In the quadratic case, that map may not exist (Dumont et al., 2022). Practitioners may favor, instead, stochasticity, which accounts naturally, for instance, for the non-determinism of cell evolutions (Elowitz et al., 2002). Stochastic formulations can also produce a conditional distribution that can be used to quantify uncertainty. In the discrete setting, this property is fulfilled by entropy-regularized OT (EOT) (Cuturi, 2013).

**Flexibility in Mass Conservation.** In numerous real-world applications, the data acquisition process can be error-prone, resulting in outliers. To mitigate this, unbalanced OT (UOT) formulations that can discard observations have been proposed (Frogner et al., 2015; Chizat et al., 2018; Séjourné et al., 2021), with numerous applications to generative modeling (Balaji et al., 2020; Yang and Uhler, 2019) and single-cell genomics (Schiebinger et al., 2019; Eyring et al., 2022; Lubeck et al., 2022).

**Contributions.** We propose a flexible neural OT framework that satisfies all requirements above:

* We propose the first method to compute neural EOT couplings in both Kantorovich and GW settings by fitting stochastic maps to their conditional distributions (Prop. 3.1), using conditional flow matching (Lipman et al., 2023) as a building block. In particular, GENOT works with any cost function between samples.
* By showing that solving an unbalanced EOT problem is equivalent to solving a balanced one between re-weighted measures (Prop. 3.2) that can be estimated consistently (Prop. 3.3), we introduce U-GENOT to solve unbalanced EOT problems.
* We extend (U-)GENOT to solve the (unbalanced) entropic Fused GW problem (§3.3). To our knowledge, GENOT is the first neural OT method to solve a continuous Fused GW problem.
* We demonstrate the applicability of GENOT in various single-cell biology problems.
In particular, we (i) quantify lineage branching events in the developing mouse pancreas, (ii) predict cellular responses to drug perturbations along with a well-calibrated uncertainty estimation, and (iii) introduce a novel method to translate ATAC-seq data to RNA-seq data.

## 2 Background

**Notations.** We consider throughout this work two compact subsets \(\mathcal{X}\subset\mathbb{R}^{p}\), \(\mathcal{Y}\subset\mathbb{R}^{q}\), referred to as the source and the target domain, respectively. In general, \(p\neq q\). The sets of positive measures and probability measures on \(\mathcal{X}\) are denoted by \(\mathcal{M}^{+}(\mathcal{X})\) and \(\mathcal{M}^{+}_{1}(\mathcal{X})\), respectively. For \(\pi\in\mathcal{M}^{+}(\mathcal{X}\times\mathcal{Y})\), we denote its marginals by \(\pi_{1}:=p_{1}\sharp\pi\) and \(\pi_{2}:=p_{2}\sharp\pi\). Then, for \(\mu\in\mathcal{M}^{+}(\mathcal{X}),\nu\in\mathcal{M}^{+}(\mathcal{Y})\), \(\Pi(\mu,\nu)\) is the set of probability measures with respective marginals \(\mu\) and \(\nu\), i.e. \(\Pi(\mu,\nu)=\{\pi:\pi_{1}=\mu,\,\pi_{2}=\nu\}\subset\mathcal{P}(\mathcal{X}\times\mathcal{Y})\). We define \(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\) to be the relative density of \(\mu\) w.r.t. \(\nu\) and write \(\mu=\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\cdot\nu\) accordingly. For \(\rho,\gamma\in\mathcal{M}^{+}(\mathcal{X})\), \(\mathrm{KL}(\rho|\gamma)=\int_{\mathcal{X}}\log(\frac{\mathrm{d}\rho}{\mathrm{d}\gamma})\,\mathrm{d}\rho-\int_{\mathcal{X}}\mathrm{d}\rho+\int_{\mathcal{X}}\mathrm{d}\gamma\).

### 2.1 Entropic Optimal Transport

**The Entropic Kantorovich Problem.** Let \(c:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) be a cost function, \(\mu\in\mathcal{M}^{+}_{1}(\mathcal{X}),\nu\in\mathcal{M}^{+}_{1}(\mathcal{Y})\) and \(\varepsilon\geq 0\). The entropy-regularized OT problem reads

\[\min_{\pi\in\Pi(\mu,\nu)}\int_{\mathcal{X}\times\mathcal{Y}}c(\mathbf{x},\mathbf{y})\,\mathrm{d}\pi(\mathbf{x},\mathbf{y})+\varepsilon\mathrm{KL}(\pi|\mu\otimes\nu)\,.\tag{EK}\]

A solution \(\pi_{\varepsilon}^{\star}\) of (EK) always exists. With \(\varepsilon=0\), we recover the classical Kantorovich (1942) problem. When \(\varepsilon>0\), the optimal coupling \(\pi_{\varepsilon}^{\star}\) is unique. If \(\mu\) and \(\nu\) are discrete, (EK) can be solved with the Sinkhorn algorithm (Cuturi, 2013).

**The Entropic Gromov-Wasserstein Problem.** As opposed to considering an _inter-domain_ cost defined on \(\mathcal{X}\times\mathcal{Y}\), the entropic Gromov-Wasserstein problem is concerned with seeking couplings based on _intra-domain_ cost functions \(c_{\mathcal{X}}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) and \(c_{\mathcal{Y}}:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}\):

\[\min_{\pi\in\Pi(\mu,\nu)}\int_{(\mathcal{X}\times\mathcal{Y})^{2}}|c_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})-c_{\mathcal{Y}}(\mathbf{y},\mathbf{y}^{\prime})|^{2}\,\mathrm{d}\pi(\mathbf{x},\mathbf{y})\,\mathrm{d}\pi(\mathbf{x}^{\prime},\mathbf{y}^{\prime})+\varepsilon\mathrm{KL}(\pi|\mu\otimes\nu).\tag{EGW}\]

With \(\varepsilon=0\), we recover the Gromov-Wasserstein problem (Mémoli, 2011). As in the Kantorovich setting, using \(\varepsilon>0\) comes with favorable computational properties, since for discrete \(\mu\), \(\nu\), we can solve (EGW) with a mirror-descent scheme based on the Sinkhorn algorithm (Peyré et al., 2016).
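For concreteness, the discrete solver referenced above can be sketched in a few lines. The following is a minimal NumPy implementation of the Sinkhorn iterations for (EK) on uniform empirical measures; it is a sketch under our own naming choices (a log-domain variant would be preferred for small \(\varepsilon\)), not a reference implementation.

```python
import numpy as np

def sinkhorn(C, eps, n_iters=1000):
    """Entropic OT (EK) between uniform discrete measures.

    C : (n, m) cost matrix with entries c(x_i, y_j); eps : regularization.
    Returns the (n, m) entropic coupling pi_eps.
    """
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginal weights
    K = np.exp(-C / eps)                              # Gibbs kernel (may underflow for tiny eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                          # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

For (EGW), the mirror-descent scheme of Peyré et al. (2016) repeatedly linearizes the quadratic objective into a cost matrix and calls this same routine, so the sketch above is the common computational core of both problems.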
**Unbalanced Extensions.** The EOT formulations presented above can only handle measures with the same total mass. Unbalanced optimal transport (UOT) (Liero et al., 2018; Chizat et al., 2018) lifts this constraint by penalizing the deviation of \(p_{1}\sharp\pi\) from \(\mu\) and of \(p_{2}\sharp\pi\) from \(\nu\) with a divergence. Using the \(\mathrm{KL}\) divergence and introducing \(\lambda_{1},\lambda_{2}>0\) controlling how much mass variations are penalized as opposed to transportation, the unbalanced extension of (EK) seeks a measure \(\pi\in\mathcal{M}^{+}(\mathcal{X}\times\mathcal{Y})\):

\[\min_{\pi\in\mathcal{M}^{+}(\mathcal{X}\times\mathcal{Y})}\int_{\mathcal{X}\times\mathcal{Y}}c(\mathbf{x},\mathbf{y})\,\mathrm{d}\pi(\mathbf{x},\mathbf{y})+\varepsilon\mathrm{KL}(\pi|\mu\otimes\nu)+\lambda_{1}\mathrm{KL}(\pi_{1}|\mu)+\lambda_{2}\mathrm{KL}(\pi_{2}|\nu).\tag{UEK}\]

This problem can be solved efficiently in a discrete setting using a variant of the Sinkhorn algorithm (Frogner et al., 2015; Séjourné et al., 2023a). Analogously, the GW formulation (EGW) also admits an unbalanced generalization, which reads

\[\min_{\pi\in\mathcal{M}^{+}(\mathcal{X}\times\mathcal{Y})}\int_{(\mathcal{X}\times\mathcal{Y})^{2}}|c_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})-c_{\mathcal{Y}}(\mathbf{y},\mathbf{y}^{\prime})|^{2}\,\mathrm{d}\pi(\mathbf{x},\mathbf{y})\,\mathrm{d}\pi(\mathbf{x}^{\prime},\mathbf{y}^{\prime})+\varepsilon\mathrm{KL}^{\otimes}(\pi|\mu\otimes\nu)+\lambda_{1}\mathrm{KL}^{\otimes}(\pi_{1}|\mu)+\lambda_{2}\mathrm{KL}^{\otimes}(\pi_{2}|\nu),\tag{UEGW}\]

where \(\mathrm{KL}^{\otimes}(\rho|\gamma)=\mathrm{KL}(\rho\otimes\rho|\gamma\otimes\gamma)\). This can also be solved using an extension of Peyré et al. (2016)'s scheme introduced by Séjourné et al. (2023b). For both unbalanced problems (UEK) and (UEGW), instead of directly selecting \(\lambda_{i}\), we introduce \(\tau_{i}=\frac{\lambda_{i}}{\lambda_{i}+\varepsilon}\) s.t. we recover the hard marginal constraint for \(\tau_{i}=1\), i.e. when \(\lambda_{i}\to+\infty\). We write \(\tau=(\tau_{1},\tau_{2})\) accordingly.

### 2.2 Conditional Flow Matching

Provided a prior distribution \(\rho_{0}\in\mathcal{M}_{1}^{+}(\mathbb{R}^{d})\) and a time-dependent vector field \(v_{t}\), one can define a probability path \((p_{t})_{t\in[0,1]}\) starting from \(\rho_{0}\) using the flow \((\phi_{t})_{t\in[0,1]}\) induced by the ODE

\[\frac{\mathrm{d}}{\mathrm{d}t}\phi_{t}(\mathbf{z})=v_{t}(\phi_{t}(\mathbf{z})),\quad\phi_{0}(\mathbf{z})=\mathbf{z},\tag{1}\]

by setting \(p_{t}=\phi_{t}\sharp\rho_{0}\). In that case, we say that \(v_{t}\) generates the path \(p_{t}\) through the flow \(\phi_{t}\). Continuous Normalizing Flows (CNFs) (Chen et al., 2018) model the vector field with a neural network \(v_{t,\theta}\), leading to a deep parametric model of the flow, which is trained to match a terminal condition defined by a target distribution \(p_{1}=\rho_{1}\in\mathcal{M}_{1}^{+}(\mathbb{R}^{d})\). (Conditional) Flow Matching (CFM) (Lipman et al., 2023) is a simulation-free technique to train CNFs by constructing probability paths between individual data samples \(\mathbf{z}_{0}\sim\rho_{0}\), \(\mathbf{z}_{1}\sim\rho_{1}\), and minimizing the loss

\[\mathcal{L}_{\mathrm{CFM}}(\theta)=\mathbb{E}_{t\sim\mathcal{U}([0,1]),Z_{0}\sim\rho_{0},Z_{1}\sim\rho_{1}}[\|v_{t,\theta}\left((1-t)Z_{0}+tZ_{1}\right)-(Z_{1}-Z_{0})\|_{2}^{2}].\tag{2}\]

If this loss is 0, then \(v_{t,\theta}\) generates a probability path between \(\rho_{0}\) and \(\rho_{1}\), i.e. the induced flow satisfies \(\phi_{1}\sharp\rho_{0}=\rho_{1}\) (Lipman et al., 2023, Theorem 1). To sample from \(\rho_{1}\), we solve the ODE (1) with \(\mathbf{z}_{0}\sim\rho_{0}\) and obtain \(\phi_{1}(\mathbf{z}_{0})\sim\rho_{1}\).
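To make §2.2 concrete, here is a hedged PyTorch sketch of the CFM loss (2) with the straight-line interpolation \((1-t)\mathbf{z}_{0}+t\mathbf{z}_{1}\), together with Euler integration of the ODE (1); the small MLP and all names are illustrative assumptions of ours, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Small MLP v_theta(t, z); illustrative architecture."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(torch.cat([t, z], dim=-1))

def cfm_loss(v, z0, z1):
    """Monte-Carlo estimate of the CFM loss (2) on paired samples z0 ~ rho_0, z1 ~ rho_1."""
    t = torch.rand(z0.shape[0], 1)       # t ~ U([0, 1])
    zt = (1 - t) * z0 + t * z1           # straight-line probability path
    return ((v(t, zt) - (z1 - z0)) ** 2).sum(-1).mean()

@torch.no_grad()
def flow(v, z0, n_steps=100):
    """Euler integration of the ODE (1) from t = 0 to t = 1, approximating phi_1(z0)."""
    z, dt = z0.clone(), 1.0 / n_steps
    for k in range(n_steps):
        t = torch.full((z.shape[0], 1), k * dt)
        z = z + dt * v(t, z)
    return z
```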
## 3 Generative Entropic Neural Optimal Transport

In this section, we introduce GENOT, a method to learn EOT couplings by learning their conditional distributions. In §3.1, we first focus on the balanced OT case, when the source and the target measures have the same mass, and show that GENOT can solve (EK) or (EGW). Second, in §3.2, we extend GENOT to the unbalanced setting by loosening the conservation-of-mass constraint and defining U-GENOT, which can be used to solve problems (UEK) and (UEGW). Finally, in §3.3, we highlight that GENOT also addresses a fused problem, combining (EK) and (EGW).

### 3.1 Learning Entropic Optimal Couplings with GENOT

Let \(\mu\in\mathcal{M}_{1}^{+}(\mathcal{X})\), \(\nu\in\mathcal{M}_{1}^{+}(\mathcal{Y})\) and \(\pi_{\varepsilon}^{*}\) be an EOT coupling between \(\mu\) and \(\nu\), which can be a solution of problem (EK) or (EGW). The measure disintegration theorem yields

\[\mathrm{d}\pi_{\varepsilon}^{*}(\mathbf{x},\mathbf{y})=\mathrm{d}\pi_{\varepsilon,1}^{*}(\mathbf{x})\,\mathrm{d}\pi_{\varepsilon}^{*}(\mathbf{y}|\mathbf{x})=\mathrm{d}\mu(\mathbf{x})\,\mathrm{d}\pi_{\varepsilon}^{*}(\mathbf{y}|\mathbf{x})\,.\tag{3}\]

Knowing \(\mu\), we can hence fully describe \(\pi_{\varepsilon}^{*}\) via the conditional distributions \((\pi_{\varepsilon}^{*}(\cdot|\mathbf{x}))_{\mathbf{x}\in\mathcal{X}}\). The latter are also of great practical interest, as they provide a way to transport a source sample \(\mathbf{x}\sim\mu\) to the target domain \(\mathcal{Y}\): either _stochastically_, by sampling \(\mathbf{y}_{1},...,\mathbf{y}_{n}\sim\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\), or _deterministically_, by averaging over conditional samples:

\[T_{\varepsilon}(\mathbf{x}):=\mathbb{E}_{Y\sim\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})}[Y]=\mathbb{E}_{(X,Y)\sim\pi_{\varepsilon}^{*}}[Y|X=\mathbf{x}]\,.\tag{4}\]

Moreover, we can compute any statistic of \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\) to assess the uncertainty surrounding this prediction. In the following, we elaborate on our approach for calculating these conditional distributions.

**Noise Outsourcing.** Let \(\rho\in\mathcal{M}_{1}^{+}(\mathcal{Z})\) be an atomless distribution on an arbitrary Borel space \(\mathcal{Z}\), referred to as the noise. The noise outsourcing lemma (Kallenberg, 2002) states that there exists a collection of maps \(\{T^{*}(\cdot|\mathbf{x})\}_{\mathbf{x}\in\mathcal{X}}\) with \(T^{*}(\cdot|\mathbf{x}):\mathcal{Z}\to\mathcal{Y}\) s.t. for each \(\mathbf{x}\sim\mu\), \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})=T^{*}(\cdot|\mathbf{x})\sharp\rho\). More precisely, if \(\mathbf{x}\sim\mu\) and \(\mathbf{z}\sim\rho\), then \(\mathbf{y}=T^{*}(\mathbf{z}|\mathbf{x})\sim\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\). Each \(T^{*}(\cdot|\mathbf{x})\) generates a distribution from a point \(\mathbf{x}\), by "outsourcing" the noise vectors \(\mathbf{z}\sim\rho\). We refer to \(\{T^{*}(\cdot|\mathbf{x})\}_{\mathbf{x}\in\mathcal{X}}\) as a collection of _optimal conditional generators_ since they generate the conditional distributions of \(\pi_{\varepsilon}^{*}\). Conversely, noise outsourcing provides a way to define neural couplings \(\pi_{\theta}\) by parameterizing their conditional generators \(\{T_{\theta}(\cdot|\mathbf{x})\}_{\mathbf{x}\in\mathcal{X}}\) with neural networks.
To obtain \(\pi_{\theta}\approx\pi_{\varepsilon}^{*}\), we then need \(T_{\theta}(\cdot|\mathbf{x})\) to generate \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\) by outsourcing the noise \(\rho\), for any source sample \(\mathbf{x}\sim\mu\).

**Learning the Conditional Generators.** In the following, we learn a collection of maps \(\{T_{\theta}(\cdot|\mathbf{x})\}_{\mathbf{x}\in\mathcal{X}}\) fitting the constraint \(T_{\theta}(\cdot|\mathbf{x})\sharp\rho\approx\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\) for any \(\mathbf{x}\sim\mu\). Instead of directly modeling \(T_{\theta}(\cdot|\mathbf{x})\) with a neural network, we employ the CFM framework discussed in §2.2. To that end, we first set \(\mathcal{Z}=\mathbb{R}^{q}\) and the noise \(\rho=\mathcal{N}(0,I_{q})\); recall that \(q\) is the dimension of the target domain \(\mathcal{Y}\). Then, we parameterize each \(T_{\theta}(\cdot|\mathbf{x})\) implicitly as the flow induced by a neural vector field \(v_{t,\theta}(\cdot|\mathbf{x}):\mathbb{R}^{q}\to\mathbb{R}^{q}\). Namely, \(T_{\theta}(\cdot|\mathbf{x})=\phi_{1}(\cdot|\mathbf{x})\) where \(\phi_{t}(\cdot|\mathbf{x})\) solves

\[\frac{\mathrm{d}}{\mathrm{d}t}\phi_{t}(\mathbf{z}|\mathbf{x})=v_{t,\theta}(\phi_{t}(\mathbf{z}|\mathbf{x})|\mathbf{x}),\quad\phi_{0}(\mathbf{z}|\mathbf{x})=\mathbf{z}.\tag{5}\]

We stress that while \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{p}\), the flow from \(\rho\) to \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\) is defined on \(\mathbb{R}^{q}\supset\mathcal{Y}\). Hence, we can map samples _within_ the same space when \(p=q\), but also _across_ incomparable spaces when \(p\neq q\). In particular, this allows us to solve the Gromov-Wasserstein problem (EGW). Thus, for each \(\mathbf{x}\), we optimize \(v_{t,\theta}(\cdot|\mathbf{x})\) by minimizing the CFM loss (2) with source \(\rho\) and target \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\), i.e.

\[\mathbb{E}_{t\sim\mathcal{U}([0,1]),Z\sim\rho,Y\sim\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})}[\|v_{t,\theta}\left((1-t)Z+tY|\mathbf{x}\right)-(Y-Z)\|_{2}^{2}]\,.\tag{6}\]

Averaging over source samples \(\mathbf{x}\sim\mu\) and using Fubini's theorem, we arrive at the GENOT loss

\[\mathcal{L}_{\mathrm{GENOT}}(\theta)=\mathbb{E}_{t\sim\mathcal{U}([0,1]),Z\sim\rho,X\sim\mu,Y\sim\pi_{\varepsilon}^{*}(\cdot|X)}[\|v_{t,\theta}\left((1-t)Z+tY|X\right)-(Y-Z)\|_{2}^{2}]\,.\tag{7}\]

We optimize this loss by (i) estimating \(\hat{\pi}_{\varepsilon}\) from samples \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\sim_{\mathrm{i.i.d}}\mu\) and \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\sim_{\mathrm{i.i.d}}\nu\), then (ii) sampling the estimated discrete conditional distributions. We detail our training procedure in algorithm 1. GENOT can be thought of as a conditional CFM model: for each \(\mathbf{x}\), using CFM, we train a conditional vector field \(v_{t,\theta}(\cdot|\mathbf{x})\) to generate \(\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\) from the noise \(\rho\).

**Proposition 3.1** (GENOT recovers optimal conditional generators).: _Suppose that \(\mathcal{L}_{\mathrm{GENOT}}(\theta)=0\). Then the flows \(\{\phi_{1}(\cdot|\mathbf{x})\}_{\mathcal{X}}\), induced by the velocity fields \(\{v_{t,\theta}(\cdot|\mathbf{x})\}_{\mathcal{X}}\), are a collection of optimal conditional generators. Namely, if \(\mathbf{x}\sim\mu\), \(\mathbf{z}\sim\rho\) and \(\mathbf{y}=\phi_{1}(\mathbf{z}|\mathbf{x})\) denotes the solution of the ODE (5), then \(\mathbf{y}\sim\pi_{\varepsilon}^{*}(\cdot|\mathbf{x})\).
Consequently, we recover \(\pi_{\varepsilon}^{*}\)._

**GENOT Addresses Any Cost.** Thanks to Prop. 3.1, we can use GENOT to solve (EK) and (EGW) problems. In both cases, we do not impose any restrictions on the cost functions. We only need to be able to evaluate these costs on samples to estimate \(\pi^{*}_{\varepsilon}\) with a discrete solver. In particular, we can use costs that are implicitly defined and whose evaluation requires a non-differentiable sub-routine. For instance, recent works have proposed using the geodesic distance on the data manifold as cost, which can be approximated from samples by considering the shortest-path distance on the \(k\)-nn graph induced by the Euclidean distance (Demetci et al., 2022). Using such data-driven cost functions is crucial for many applications where comparing samples via an \(\ell_{p}\) distance is not meaningful, as in some single-cell genomic tasks (Huguet et al., 2022; Klein et al., 2023). A sketch of one training iteration follows.
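The sketch below spells out one such iteration, mirroring the description above (cf. algorithm 1, not reproduced here): estimate a discrete entropic coupling on a minibatch with any cost, sample targets from its conditional distributions, and take a gradient step on (7). It reuses the `sinkhorn` sketch from §2.1; the conditional network `v`, assumed to take `(t, z, x)`, and all other names are our own illustrative choices rather than the paper's implementation.

```python
import numpy as np
import torch

def genot_step(v, opt, x, y, C, eps):
    """One stochastic gradient step on the GENOT loss (7).

    v   : conditional vector field v_theta(t, z | x), a network taking (t, z, x)
    x   : (n, p) source minibatch; y : (n, q) target minibatch
    C   : (n, n) cost matrix between the samples -- any cost works, since it
          only enters through the discrete solver below
    """
    pi = sinkhorn(C, eps)                               # discrete EOT coupling (Sec. 2.1 sketch)
    cond = pi / pi.sum(axis=1, keepdims=True)           # rows approximate pi_eps(. | x_i)
    idx = [np.random.choice(len(row), p=row) for row in cond]
    x_t = torch.as_tensor(x, dtype=torch.float32)
    y_t = torch.as_tensor(y[idx], dtype=torch.float32)  # Y ~ hat-pi_eps(. | X)
    z = torch.randn_like(y_t)                           # Z ~ rho = N(0, I_q)
    t = torch.rand(len(z), 1)
    zt = (1 - t) * z + t * y_t                          # point on the conditional path
    loss = ((v(t, zt, x_t) - (y_t - z)) ** 2).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```

Because the cost only enters through `C`, replacing the Kantorovich solver by a discrete (fused) GW solver, or plugging in a graph-based cost, leaves the rest of the iteration unchanged.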
### 3.2 U-GENOT: Extension to the Unbalanced Setting

**Re-Balancing the UOT Problems.** In its standard form, GENOT imposes the marginal constraints, so it cannot directly tackle the unbalanced problems (UEK) or (UEGW). However, these unbalanced problems can be _re-balanced_. In both the Kantorovich and GW cases, we can show that the unbalanced EOT coupling \(\pi^{*}_{\varepsilon,\tau}\) between \(\mu\in\mathcal{M}^{+}(\mathcal{X})\) and \(\nu\in\mathcal{M}^{+}(\mathcal{Y})\) actually solves a balanced EOT problem between its marginals, which are re-weighted versions of \(\mu\) and \(\nu\) that have the same mass.

**Proposition 3.2** (Re-balancing the unbalanced problems).: _Let \(\pi^{*}_{\varepsilon,\tau}\) be an unbalanced EOT coupling, solution of (UEK) or (UEGW) between \(\mu\in\mathcal{M}^{+}(\mathcal{X})\) and \(\nu\in\mathcal{M}^{+}(\mathcal{Y})\). We note \(\tilde{\mu}=p_{1}\sharp\pi^{*}_{\varepsilon,\tau}\) and \(\tilde{\nu}=p_{2}\sharp\pi^{*}_{\varepsilon,\tau}\) its marginals. Then, in both cases, \(\tilde{\mu}\) (resp. \(\tilde{\nu}\)) has a density w.r.t. \(\mu\) (resp. \(\nu\)), i.e. there exist \(\eta,\xi:\mathbb{R}^{d}\to\mathbb{R}^{+}\) s.t. \(\tilde{\mu}=\eta\cdot\mu\) and \(\tilde{\nu}=\xi\cdot\nu\). Moreover, \(\tilde{\mu}\) and \(\tilde{\nu}\) have the same mass and_

1. _(Kantorovich)_ \(\pi^{*}_{\varepsilon,\tau}\) _solves the balanced problem (EK) between_ \(\tilde{\mu}\) _and_ \(\tilde{\nu}\) _with the same_ \(\varepsilon\)_._
2. _(Gromov-Wasserstein) Provided that_ \(c_{\mathcal{X}}\) _and_ \(c_{\mathcal{Y}}\) _are conditionally positive (or conditionally negative) kernels (see Def. B.1),_ \(\pi^{*}_{\varepsilon,\tau}\) _solves the balanced problem (EGW) between_ \(\tilde{\mu}\) _and_ \(\tilde{\nu}\) _with_ \(\varepsilon^{\prime}=m(\pi^{*}_{\varepsilon,\tau})\,\varepsilon\)_, where_ \(m(\pi^{*}_{\varepsilon,\tau})=\pi^{*}_{\varepsilon,\tau}(\mathcal{X}\times\mathcal{Y})\) _is the total mass of_ \(\pi^{*}_{\varepsilon,\tau}\)_._

**Remark.** In various experimental settings, \(\mu\) and \(\nu\) have mass 1 and we impose one of the two hard marginal constraints, for instance on \(\mu\), by setting \(\tau_{1}=1\). Then \(\tilde{\nu}\) also has mass 1 and \(m(\pi^{*}_{\varepsilon,\tau})=1\), so we keep the same regularization strength \(\varepsilon\) when re-balancing (UEGW).

**Learning the Coupling and the Re-Weightings Simultaneously.** Thanks to Prop. 3.2, we aim to (i) learn a balanced EOT coupling between \(\tilde{\mu}\) and \(\tilde{\nu}\) along with (ii) the re-weighting functions \(\eta,\xi\). The latter are of key interest since they enable modeling the creation and destruction of mass. We can do both simultaneously by slightly adapting the GENOT procedure. More formally, we seek to optimize the U-GENOT loss

\[\mathcal{L}_{\text{U-GENOT}}(\theta)=\mathbb{E}_{t\sim\mathcal{U}([0,1]),Z\sim\rho,X\sim\tilde{\mu},Y\sim\pi^{*}_{\varepsilon,\tau}(\cdot|X)}[\|v_{t,\theta}\left((1-t)Z+tY|X\right)-(Y-Z)\|_{2}^{2}]\tag{i}\]
\[\qquad+\mathbb{E}_{X\sim\mu}[(\eta(X)-\eta_{\theta}(X))^{2}]+\mathbb{E}_{Y\sim\nu}[(\xi(Y)-\xi_{\theta}(Y))^{2}]\,.\tag{ii}\]

As with GENOT, we simply need to estimate the unbalanced OT coupling \(\hat{\pi}_{\varepsilon,\tau}\) from samples \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\sim_{\text{i.i.d}}\mu\) and \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\sim_{\text{i.i.d}}\nu\) to estimate that loss. We build upon theoretical insights from the Kantorovich case, which we extend in practice to the Gromov-Wasserstein case.

**Proposition 3.3** (Estimation of the re-weightings).: _Let \(\hat{\pi}_{\varepsilon,\tau}\) be the solution of (UEK) computed on samples. Let \(\mathbf{a}=\hat{\pi}_{\varepsilon,\tau}\mathbf{1}_{n}\) and \(\mathbf{b}=\hat{\pi}^{\top}_{\varepsilon,\tau}\mathbf{1}_{n}\) be its marginal weights and let \(\hat{\eta}_{n}(\mathbf{x}_{i}):=n\,a_{i}\) and \(\hat{\xi}_{n}(\mathbf{y}_{i}):=n\,b_{i}\). Then, almost surely, \(\hat{\eta}_{n}(\mathbf{x}_{i})\to\eta(\mathbf{x}_{i})\) and \(\hat{\xi}_{n}(\mathbf{y}_{i})\to\xi(\mathbf{y}_{i})\)._

Using Prop. 3.2, \(\hat{\pi}_{\varepsilon,\tau}\) is a balanced EOT coupling between its marginals, which are empirical approximations of \(\tilde{\mu}\) and \(\tilde{\nu}\). We hence estimate term (i) of the loss as we do in the balanced case, by sampling from the discrete conditional distribution. Furthermore, Prop. 3.3 highlights that the estimation of \(\hat{\pi}_{\varepsilon,\tau}\) also provides a consistent estimate of the re-weighting function evaluations at each \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\). This enables the estimation of term (ii). Therefore, as with GENOT, each U-GENOT iteration only requires a call to a discrete solver. We detail our training procedure in algorithm 2.
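The corresponding estimation step can be sketched under the same caveats: the \(\tau\)-damped scaling iteration below is the standard discrete solver for (UEK) with \(\mathrm{KL}\) marginal penalties (Frogner et al., 2015; Chizat et al., 2018), and the re-weightings are read off its marginals exactly as in Prop. 3.3; the names are ours.

```python
import numpy as np

def unbalanced_sinkhorn(C, eps, tau1, tau2, n_iters=2000):
    """Discrete solver for (UEK); tau_i = lambda_i / (lambda_i + eps), and
    tau_i = 1 recovers the balanced Sinkhorn updates.

    Returns the unbalanced coupling together with the Prop. 3.3 estimates
    eta_hat(x_i) = n * a_i and xi_hat(y_j) = m * b_j of the re-weightings.
    """
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = (a / (K @ v)) ** tau1      # tau-damped scaling updates
        v = (b / (K.T @ u)) ** tau2
    pi = u[:, None] * K * v[None, :]
    eta_hat = n * pi.sum(axis=1)       # marginal weights, rescaled as in Prop. 3.3
    xi_hat = m * pi.sum(axis=0)
    return pi, eta_hat, xi_hat
```

Term (i) is then handled exactly as in the balanced sketch above, by sampling the conditionals of `pi`, while term (ii) regresses the networks \(\eta_{\theta},\xi_{\theta}\) onto `eta_hat` and `xi_hat`.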
### 3.3 Combining Kantorovich and Gromov-Wasserstein in the Fused Setting

We show in §3.1 and §3.2 how to use our method to map samples within the same space, or across incomparable spaces, by solving (EK) or (EGW) and their unbalanced extensions. On the other hand, there are cases where the source and the target domains are only _partially_ incomparable, leading to a problem that combines both OT formulations (Vayer et al., 2018). Suppose that the source and target space can be decomposed as \(\mathcal{X}=\Omega\times\bar{\mathcal{X}}\) and \(\mathcal{Y}=\Omega\times\bar{\mathcal{Y}}\), respectively. Moreover, assume we are given an inter-domain cost \(c:\Omega\times\Omega\to\mathbb{R}\) along with the intra-domain costs \(c_{\bar{\mathcal{X}}},c_{\bar{\mathcal{Y}}}\). The entropic fused Gromov-Wasserstein (FGW) problem can then be defined as

\[\min_{\pi\in\Pi(\mu,\nu)}\int_{((\Omega\times\bar{\mathcal{X}})\times(\Omega\times\bar{\mathcal{Y}}))^{2}}L\left((\mathbf{u},\mathbf{x}),(\mathbf{v},\mathbf{y}),(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\right)\mathrm{d}\pi\left((\mathbf{u},\mathbf{x}),(\mathbf{v},\mathbf{y})\right)\mathrm{d}\pi\left((\mathbf{u}^{\prime},\mathbf{x}^{\prime}),(\mathbf{v}^{\prime},\mathbf{y}^{\prime})\right)+\varepsilon\mathrm{KL}(\pi|\mu\otimes\nu)\,,\tag{EFGW}\]

where \(L\left((\mathbf{u},\mathbf{x}),(\mathbf{v},\mathbf{y}),(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\right):=\left(1-\alpha\right)c(\mathbf{u},\mathbf{v})+\alpha\left|c_{\bar{\mathcal{X}}}(\mathbf{x},\mathbf{x}^{\prime})-c_{\bar{\mathcal{Y}}}(\mathbf{y},\mathbf{y}^{\prime})\right|^{2}\) and \(\alpha\in[0,1]\) determines the influence of the components of the space decompositions. When \(\alpha=1\), we recover the pure GW setting. The above fused problem admits an unbalanced extension, which can be derived exactly in the same way as (UEGW), using the quadratic \(\mathrm{KL}^{\otimes}\) (Thual et al., 2023).

**(U-)GENOT Addresses the Fused Setting.** Whether in the balanced or unbalanced setting, we can use our method to learn a specific coupling as soon as it can be estimated from samples. We stress that the discrete solvers we use for problems (EGW) and (UEGW) are still applicable in the fused setting. As a result, we can compute discrete fused couplings and then solve (EFGW) and its unbalanced counterpart with (U-)GENOT. To illustrate this idea more precisely, take a solution \(\pi_{\alpha}^{\star}\) of (EFGW). Learning \(\pi_{\alpha}^{\star}\) with our method amounts to training vector fields \(v_{t,\theta}(\cdot|\mathbf{u},\mathbf{x})\) that are conditioned on modality pairs from the source domain, to sample modality pairs from the target domain via the induced flow: \(\mathbf{z}\sim\rho\), \(\phi_{1}(\mathbf{z}|\mathbf{u},\mathbf{x})=(\mathbf{v},\mathbf{y})\sim\pi_{\alpha}^{\star}(\cdot|\mathbf{u},\mathbf{x})\). Given each term of the fused problem (EFGW), the sampled pairs \((\mathbf{v},\mathbf{y})\) minimize the transport cost quantified by \(c\) along the first modality, while being "isometric" w.r.t. \(c_{\bar{\mathcal{X}}}\) and \(c_{\bar{\mathcal{Y}}}\) on the second modality.

## 4 Related Work

**Neural EOT.** While GENOT is the first model to learn neural EOT couplings in the (fused) Gromov-Wasserstein or the unbalanced setting, various methods have been proposed in the (balanced) Kantorovich setting. The first class of methods solves the (EK) dual problem. While some of them (Genevay et al., 2019) do not allow direct sampling according to \(\pi_{\varepsilon}^{\star}\), Daniels et al. (2021) model the conditional distribution \(\pi_{\varepsilon}^{\star}(\cdot|\mathbf{x})\). However, this method is (i) costly, as it employs Langevin sampling at inference time, and (ii) numerically unstable, as it requires the exponentiation of large numbers. Mokrov et al. (2023) proposed another approach modeling \(\pi_{\varepsilon}^{\star}(\cdot|\mathbf{x})\) leveraging energy-based models, but it is computationally expensive since it relies on Langevin sampling in each training iteration. Other Kantorovich EOT solvers build upon the link between (EK) and the Schrödinger bridge (SB) problem. They model the EOT plan as a time-evolving stochastic process with fixed marginal constraints, endowed with learnable drift and diffusion terms (De Bortoli et al., 2021; Chen et al., 2021; Vargas et al., 2021; Gushchin et al., 2022).
Although these methods have shown good performance on image data, they are very costly, since they require simulation-based training. A recent line of work proposed to train such models in a completely simulation-free manner (Tong et al., 2023; Shi et al., 2023; Liu et al., 2023) via score or flow matching. However, these methods can only be used for the squared Euclidean cost. Indeed, they rely on the fact that the marginals of the SB can be characterized as a mixture of Brownian bridges weighted by an EOT plan. However, this property holds only when we choose the Wiener process as the reference measure in the SB problem, which is limited to using \(c(\mathbf{x},\mathbf{y})=\|\mathbf{x}-\mathbf{y}\|_{2}^{2}\) in (EK) (Léonard, 2013, Eq. 1.2). On the other hand, GENOT is the first neural EOT framework that can handle any cost function, even those defined implicitly and whose evaluation requires a call to a non-differentiable sub-routine, like the geodesic distance on the data manifold. This point allows us to emphasize that our method fundamentally differs from theirs, since we do not exploit the link between EOT and SB. Our approach is purely conditional and uses flow matching only as a powerful generative black box to generate each \(\pi_{\varepsilon}^{\star}(\cdot|\mathbf{x})\) from the noise \(\rho\).

**Computation of Neural Couplings.** Another line of work considers computing neural couplings through the weak OT paradigm (Korotin et al., 2022a, b; Asadulaev et al., 2022; Gazdieva et al., 2022), by solving a challenging min-max problem. However, (i) their method only enables mapping within the same space, (ii) in the balanced setting, and (iii) cannot handle EOT problems, since they would require estimating the entropy of the neural coupling from samples at each iteration.

## 5 Experiments

We demonstrate the applicability and versatility of the GENOT framework on toy data and single-cell data to map within the same space and across incomparable spaces. Metrics are discussed in appendix C and details on the single-cell datasets can be found in appendix D. Further experimental details or results for each experiment are reported in appendix E. Setups for competing methods are listed in appendix F. Details on the implementation of GENOT can be found in appendix G. We introduce the notation GENOT-K for the GENOT model solving problem (EK), while GENOT models solving the tasks (EGW) and (EFGW) are referred to as GENOT-GW and GENOT-FGW, respectively. The prefix U- is used whenever we consider an unbalanced problem, as described in §3.2. Moreover, when reporting results based on the conditional mean of a GENOT model, we add the suffix CM to the model name. If not stated otherwise, we use the squared Euclidean distance as cost.

### 5.1 GENOT-K to map within spaces

**U-GENOT-K on simulated data.** To visualize the capability of U-GENOT-K to learn unbalanced entropy-regularized transport plans and rescaling functions, we compare its predictions with the OT plan obtained from a discrete EOT solver. Fig. 1 shows that the unbalanced entropy-regularized transport plan with \(\varepsilon=0.05\) and \(\tau_{1}=\tau_{2}=0.98\) between mixtures of Gaussians is accurately learnt by U-GENOT-K. The influence of the unbalancedness parameters \(\tau_{1},\tau_{2}\) is visualized in Fig. 7.

**U-GENOT-K for modeling single-cell trajectories.** The pancreas dataset considered so far subsets the original dataset to one cell lineage (endocrine) to prevent obtaining biologically implausible couplings.
Indeed, Table 1 shows that in the balanced case, according to the cell lineage transition score (see C.2), only \(66\%\) of the cells are mapped to the correct lineage. By loosening the conservation-of-mass constraint, U-GENOT-K helps to counteract the distributional shift introduced by different proliferation rates of cells and experimental biases.

**Prediction of cellular responses to drug perturbations with U-GENOT-K.** Neural OT maps have been successfully applied to model cellular responses to perturbations with deterministic neural OT maps (Bunne et al., 2021; Uscidda & Cuturi, 2023). Yet, these predictions lack information about the confidence of the model. GENOT enables sampling from the conditional distribution, which allows for uncertainty quantification. We consider single-cell RNA-seq data measuring the response of cells to 163 different cancer drugs (Srivastava et al., 2020). Each drug has been applied to a population of cells which can be partitioned into three different cell types. While there is no ground truth for the matching between unperturbed and perturbed cells, due to the destructive nature of sequencing technologies, we know which unperturbed subset of cells is supposed to be mapped to which perturbed subset of cells. This allows us to define an accuracy metric (appendix C.2). For the uncertainty metric, we choose again cos-var. Fig. 3 shows that for 117 out of 163 drugs the model is perfectly calibrated (appendix C.1), while it yields a negative correlation between error and uncertainty for only one drug. To improve the accuracy of GENOT-K, we leverage its unbalanced formulation. Fig. 3 shows that allowing for mass variation improves the performance for nine different cancer drugs which are known to have a strong effect. Figs. 13 and 14 confirm the results visually.

Figure 3: Left: Calibration score for the predictions of GENOT-K for modeling cellular responses to 163 cancer drugs (appendix C.1). Right: Accuracy of cellular response predictions of U-GENOT-K for different cancer drugs with varying unbalancedness parameter \(\tau=\tau_{1}=\tau_{2}\). For each \(\tau\), U-GENOT-K was run three times with different seeds.

### 5.2 GENOT-GW and GENOT-FGW to map across spaces

**GENOT-GW on simulated data.** We transport a Swiss roll in \(\mathbb{R}^{3}\) to a spiral in \(\mathbb{R}^{2}\). Fig. 4 shows that GENOT-GW successfully mimics an isometric alignment. Here, we set \(\varepsilon=0.01\) and investigate its influence in more detail in Fig. 15.

Figure 4: Mapping a Swiss roll in \(\mathbb{R}^{3}\) (top left) to a spiral in \(\mathbb{R}^{2}\) (bottom left). Center: Color code tracks where samples from the source (top) are mapped to (bottom). Right column: samples (top) and their conditional distributions.

**GENOT-GW for translating modalities of single cells.** The number of modalities which can be simultaneously measured in a single cell is limited due to technical limitations. Yet, it is important to match measurements of different modalities to obtain a more holistic view of the profile of a cell. The discrete GW formulation has been used to match measurements of cells in different modalities (Demetci et al., 2022). We use GENOT-GW to translate ATAC measurements to gene expression space on a bone marrow dataset (Luecken et al., 2021). As both modalities were measured in the same cell, the true match of each cell is known.
We compare GENOT-GW with the discrete GW formulation (see F.2) and assess the performance with the FOSCTTM ("Fractions of Samples Closer to the True Match") score (see C.2). We leverage the flexibility of GENOT and choose an approximation of the geodesic distance (Crane et al., 2013), as it is known that Euclidean distances are often not meaningful in embeddings of single-cell measurements (Moon et al., 2018). With respect to the FOSCTTM score, Fig. 6 shows three results. First, using a graph-based cost is crucial in higher dimensions. Second, out-of-sample prediction for discrete GW based on regression is competitive in lower dimensions, but not in high-dimensional spaces. Third, taking the conditional mean as prediction improves the result with respect to the FOSCTTM score. Regarding the distributional fitting property, the superiority of GENOT models is unmistakable. Crucially, Fig. 6 shows that the fitting property of GENOT models is not affected by the cost.

Figure 6: Benchmark (mean and std across three runs) of GENOT-GW models against discrete GW (GW-LR, appendix F) on translating cells between ATAC space of dimension \(d_{1}\) and RNA space of dimension \(d_{2}\) for experiment \(d_{1}/d_{2}\). Performance is measured with the FOSCTTM score (appendix C.2) and the Sinkhorn divergence between target and predicted target distribution. While on the left, we learn the EOT coupling for the squared Euclidean cost, we use the geodesic cost on the right (Crane et al., 2013).

**GENOT-FGW improves modality translation of single cells.** As the predictions yielded by GW-based models are not satisfactory, we introduce a novel method for translating between ATAC and RNA measurements by extending the model proposed by Demetci et al. (2022) to the fused setting. Therefore, we infer approximate gene expression from the ATAC measurements using gene activity (Stuart et al., 2021). We construct a joint space of the two modalities using a conditional VAE (Lopez et al., 2018). Fig. 16 shows that the additional fused term helps to obtain a significantly better alignment compared to GENOT-GW, with the best GENOT-FGW CM model (weight parameter \(\alpha=0.7\)) attaining a FOSCTTM score below \(0.05\). It is important to note that incorporating the GW terms is necessary for attaining good results, as discussed in appendix E.3. Fig. 5 visualizes the push-forward of the learnt coupling. The intertwinement of samples of the target and the predicted target in the left panel visualizes the distribution-fitting property, while the separation into cell types on the right confirms the optimality of the learnt coupling. See figures 19 and 20 for further visualizations.

Figure 5: UMAP embedding of transported cells and cells in the target distribution (left), and jointly colored by cell type (right).

When aligning multiple modalities of single cells, we cannot assume to have the same proportion of cell types in both datasets, for example due to experimental biases caused by sequencing technologies. We simulate this setting by removing cells belonging to either of the cell types _Proerythroblasts_, _Erythroblasts_, or _Normoblasts_ in the source distribution. Table 3 shows that U-GENOT-FGW preserves high accuracy while learning meaningful rescaling functions.
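For completeness, the graph-based cost entering these experiments can be approximated from samples as described in §3.1: shortest-path distances on a \(k\)-nn graph induced by the Euclidean metric. The sketch below uses scikit-learn and SciPy for this construction; it is one standard approximation (the figures above rely on the heat-method approximation of Crane et al. (2013)), and the names are ours.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def knn_geodesic_cost(X, k=15):
    """Approximate geodesic distances on the data manifold: shortest paths
    on the k-nn graph of the point cloud X, weighted by Euclidean length."""
    G = kneighbors_graph(X, n_neighbors=k, mode="distance")  # sparse k-nn graph
    D = shortest_path(G, method="D", directed=False)         # Dijkstra on the graph
    return np.asarray(D)   # (n, n) intra-domain cost matrix for any discrete solver
```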
**Conclusion.** We introduce GENOT, a versatile neural OT framework to learn cost-efficient stochastic maps within the same space and/or across incomparable spaces. GENOT is flexible to the extent that the mass conservation constraint can be loosened, and it provides tools to sample targets from an input. GENOT can be used within a wide array of tasks in single-cell biology.

## 6 Acknowledgements

Co-funded by the European Union (ERC, DeepCell - 101054957). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. F.J.T. consults for Immunai Inc., Singularity Bio B.V., CytoReason Ltd, and Cellarity, and has ownership interest in Dermagnostix GmbH and Cellarity.
2301.07538
Computing the Coefficients for Non-Periodic Highly Oscillatory Orthonormal Functions
A three term recurrence relation is derived for a basis consisting of polynomials multiplied by sines and cosines with large, but fixed frequencies. A numerical method for computing the coefficients of the three term recurrence relation is derived.
Rockford Sison
2023-01-18T13:57:21Z
http://arxiv.org/abs/2301.07538v1
# Computing the coefficients for non-periodic highly oscillatory orthonormal functions

###### Abstract.

A three term recurrence relation is derived for a basis consisting of polynomials multiplied by sines and cosines with large, but fixed frequencies. A numerical method for computing the coefficients of the three term recurrence relation is derived.

Key words and phrases: oscillatory integrals, orthogonal functions, three term recurrence relations

## 1. Introduction

Orthogonal functions, typically polynomials or sines and cosines, have a long history in solving problems [2]. These orthogonal functions give rise to three-term recurrence relations. We will examine the problem of trying to represent highly oscillatory, non-periodic functions of the form

\[f(x)\sin(\omega x)+g(x)\cos(\omega x), \tag{1.1}\]

where \(f(x)\) and \(g(x)\) are assumed to be non-oscillatory functions. If one wishes to represent (1.1) with a standard basis such as the Chebyshev or Legendre polynomials, then the number of polynomials used must scale with \(\omega\). Many papers have been written on computing the integrals of (1.1) [3][1], and on solving differential equations with oscillations [4]. In the following paper, we will present an extension of orthogonal functions and their three term recurrence relations to problems involving oscillations. We will also provide a numerically stable method for computing the coefficients of the recursion for large \(\omega\).

## 2. Creating the basis

For simplicity of formulas we focus on the case of (1.1) where \(\omega=2\pi k\). We can justify this by noticing that if \(f(x)\) and \(g(x)\) in (1.1) are non-oscillatory, and \(\omega=2\pi k+\epsilon\) where \(|\epsilon|<\pi\), then there exist non-oscillatory \(\hat{f}(x)\) and \(\hat{g}(x)\) such that

\[f(x)\sin(\omega x)+g(x)\cos(\omega x)=\hat{f}(x)\sin(2\pi kx)+\hat{g}(x)\cos(2\pi kx) \tag{2.1}\]

via straightforward applications of the addition formulas for trigonometric functions. We define the following inner product:

\[<f(x),g(x)>:=\int_{-1}^{1}f(x)g(x)\ dx \tag{2.2}\]

**Theorem 2.1**.: _The following functions form a basis for \(\{x^{k}\sin(\omega x),x^{k}\cos(\omega x)\}_{k=0}^{N}\)_

\[p_{0}(x) = \cos(\omega x) \tag{2.3}\]
\[q_{0}(x) = \sin(\omega x) \tag{2.4}\]
\[p_{1}(x) = xp_{0}(x)+\frac{1}{2\omega}q_{0}(x) \tag{2.5}\]
\[q_{1}(x) = xq_{0}(x)+\frac{1}{2\omega}p_{0}(x) \tag{2.6}\]
\[p_{k+1}(x) = xp_{k}(x)-\frac{<xp_{k},q_{k}>}{<q_{k},q_{k}>}q_{k}(x)-\frac{<xp_{k},p_{k-1}>}{<p_{k-1},p_{k-1}>}p_{k-1}(x) \tag{2.7}\]
\[q_{k+1}(x) = xq_{k}(x)-\frac{<xq_{k},p_{k}>}{<p_{k},p_{k}>}p_{k}(x)-\frac{<xq_{k},q_{k-1}>}{<q_{k-1},q_{k-1}>}q_{k-1}(x) \tag{2.8}\]

Proof.: We follow the standard proof for three term recurrence relations. First note that \(p_{k}(x)\) is even when \(k\) is even, and odd when \(k\) is odd, while \(q_{k}(x)\) is even when \(k\) is odd, and odd when \(k\) is even. It is straightforward to verify that \(p_{0}(x),q_{0}(x),p_{1}(x)\), and \(q_{1}(x)\) are all orthogonal to each other. All that remains is to prove the statement via induction. Examine \(<p_{k+1},p_{j}>\) where \(j<k-1\). Then \(<xp_{k},p_{j}>=<p_{k},xp_{j}>=0\), due to the fact that \(xp_{j}(x)\) is a polynomial (multiplied by sines and cosines) of degree less than \(k\), and by assumption, \(p_{k}\) is orthogonal to all such functions.
\[<p_{k+1},q_{k}> = <xp_{k},q_{k}>-\frac{<xp_{k},q_{k}>}{<q_{k},q_{k}>}<q_{k},q_{k}>=0\]
\[<p_{k+1},p_{k}> = 0\]
\[<p_{k+1},q_{k-1}> = 0\]
\[<p_{k+1},p_{k-1}> = <xp_{k},p_{k-1}>-\frac{<xp_{k},p_{k-1}>}{<p_{k-1},p_{k-1}>}<p_{k-1},p_{k-1}>=0\]

where the two middle lines are due to the even and odd properties. Similarly, \(q_{k+1}(x)\) is orthogonal to \(\{p_{j}(x),q_{j}(x)\}_{j=0}^{k}\). All that remains to be checked is the orthogonality of \(p_{k+1}(x)\) and \(q_{k+1}(x)\). However, one is even and the other is odd, so they must also be orthogonal.

## 3. Computing the Coefficients

We now lay out a procedure for computing the coefficients of the recursion for \(\omega\) large relative to \(N\). First we must choose a basis in which to represent the orthogonal functions. Naively, one may want to use the basis \(\{x^{k}\cos(\omega x),x^{k}\sin(\omega x)\}\) to represent the orthogonal basis. However, as \(k\) increases, this basis becomes more and more linearly dependent. This may be seen quickly by noting that the matrix whose coefficients are given by \(H^{\omega}_{i,j}=<x^{i}\cos(\omega x),x^{j}\cos(\omega x)>\) converges to the coefficients of the infamously ill-conditioned Hilbert matrix divided by two as \(\omega\) goes to positive infinity. Hence representing this space in the "monomial" basis leads to poor numerical accuracy. We will instead represent \(\{p_{j}(x),q_{j}(x)\}_{j=0}^{k}\) as products of the Legendre polynomials with sines and cosines. Indeed, one can see that in the limit as \(\omega\) goes to infinity, the functions \(\{P_{k}(x)\cos(\omega x),P_{k}(x)\sin(\omega x)\}\) become orthogonal to each other. Hence for large \(\omega\), one may expect the Legendre polynomials multiplied by sines and cosines to be a good choice:

\[<P_{k}(x)\cos(\omega x),P_{j}(x)\cos(\omega x)>=\frac{<P_{k}(x),P_{j}(x)>+<P_{k}(x),P_{j}(x)\cos(2\omega x)>}{2} \tag{3.1}\]

As \(\omega\) goes to infinity, this converges to either zero when \(j\neq k\), or \(||P_{k}||^{2}/2\) when \(j=k\). Using a known orthogonal basis to represent another has been done in [5]. It will be necessary to compute inner products of the form \(<P_{k}(x)\cos(\omega x),P_{j}(x)\cos(\omega x)>\), \(<P_{k}(x)\cos(\omega x),P_{j}(x)\sin(\omega x)>\), and \(<P_{k}(x)\sin(\omega x),P_{j}(x)\sin(\omega x)>\). We will develop a recursive algorithm for computing these coefficients. We examine the following.
\[<P_{k}(x)\cos(\omega x),P_{j}(x)\cos(\omega x)> \tag{3.2}\]
\[=\frac{<P_{k}(x),P_{j}(x)>}{2}+\frac{<P_{k}(x),P_{j}(x)\cos(2\omega x)>}{2} \tag{3.3}\]
\[=\frac{\delta_{kj}||P_{k}||^{2}}{2}+\frac{1}{2}\int_{-1}^{1}P_{k}(x)P_{j}(x)\cos(2\omega x)\ dx \tag{3.4}\]
\[=\frac{\delta_{kj}||P_{k}||^{2}}{2}+\frac{P_{k}(x)P_{j}(x)\sin(2\omega x)}{4\omega}\Bigg{|}_{-1}^{1} \tag{3.5}\]
\[\qquad-\frac{1}{4\omega}\int_{-1}^{1}(P_{k}^{\prime}(x)P_{j}(x)+P_{k}(x)P_{j}^{\prime}(x))\sin(2\omega x)\ dx \tag{3.6}\]
\[=\frac{\delta_{kj}||P_{k}||^{2}}{2}-\frac{1}{4\omega}\int_{-1}^{1}\sum_{l=0}^{k-1-2l\geq 0}(2(k-1-2l)+1)P_{k-1-2l}(x)P_{j}(x)\sin(2\omega x)\ dx \tag{3.7}\]
\[\qquad-\frac{1}{4\omega}\int_{-1}^{1}\sum_{l=0}^{j-1-2l\geq 0}(2(j-1-2l)+1)P_{k}(x)P_{j-1-2l}(x)\sin(2\omega x)\ dx \tag{3.8}\]
\[=\frac{\delta_{kj}||P_{k}||^{2}}{2}-\frac{1}{4\omega}\sum_{l=0}^{k-1-2l\geq 0}(2(k-1-2l)+1)<P_{k-1-2l}(x),P_{j}(x)\sin(2\omega x)> \tag{3.9}\]
\[\qquad-\frac{1}{4\omega}\sum_{l=0}^{j-1-2l\geq 0}(2(j-1-2l)+1)<P_{k}(x),P_{j-1-2l}(x)\sin(2\omega x)>, \tag{3.10}\]

where the boundary term in (3.5) vanishes since our assumption \(\omega=2\pi k\) gives \(\sin(2\omega)=0\). Thus the inner product we would like to compute is the sum of previous inner products with a \(\sin(2\omega x)\) instead of a \(\cos(2\omega x)\). We define the following matrices:

\[M1_{j,k}=<P_{j}(x),P_{k}(x)> \tag{3.11}\]
\[M2_{j,k}=<P_{j}(x)\cos(\omega x),P_{k}(x)\sin(\omega x)> \tag{3.12}\]
\[M3_{j,k}=<P_{j}(x)\cos(\omega x),P_{k}(x)\cos(\omega x)> \tag{3.13}\]
\[M4_{j,k}=<P_{j}(x)\sin(\omega x),P_{k}(x)\sin(\omega x)> \tag{3.14}\]
\[M5_{j,k}=<P_{j}(x),P_{k}(x)\cos(2\omega x)> \tag{3.15}\]
\[M6_{j,k}=<P_{j}(x),P_{k}(x)\sin(2\omega x)> \tag{3.16}\]

These matrices have the following relations:

\[M1_{j,k}=\delta_{jk}||P_{k}||^{2}\]
\[M2_{j,k}=\frac{M6_{j,k}}{2}\]
\[M3_{j,k}=\frac{M1_{j,k}}{2}+\frac{M5_{j,k}}{2}\]
\[M4_{j,k}=\frac{M1_{j,k}}{2}-\frac{M5_{j,k}}{2}\]
\[M5_{j,k}=\frac{(1+(-1)^{j+k})\sin(2\omega)}{2\omega}-\frac{1}{2\omega}\sum_{l=0}^{j-1-2l\geq 0}(2(j-1-2l)+1)M6_{j-1-2l,k}-\frac{1}{2\omega}\sum_{l=0}^{k-1-2l\geq 0}(2(k-1-2l)+1)M6_{j,k-1-2l}\]
\[M6_{j,k}=\frac{(-1+(-1)^{j+k})\cos(2\omega)}{2\omega}+\frac{1}{2\omega}\sum_{l=0}^{j-1-2l\geq 0}(2(j-1-2l)+1)M5_{j-1-2l,k}+\frac{1}{2\omega}\sum_{l=0}^{k-1-2l\geq 0}(2(k-1-2l)+1)M5_{j,k-1-2l}\]

The matrices satisfy the following properties. All matrices are symmetric. M1 is diagonal and can be computed via the known norms of the Legendre polynomials. M2, M3, and M4 can all be computed once M5 and M6 are known. M5 and M6 can be populated by filling in entries on successive skew diagonals: first make \(M5_{0,0}\) and \(M6_{0,0}\); then make \(M5_{1,0},M5_{0,1},M6_{1,0}\), and \(M6_{0,1}\) via the recursion; continue by making the next skew diagonal, formed of the elements \(M5_{j,k}\) and \(M6_{j,k}\) such that \(j+k=2\), and so on. Due to symmetry we only need to compute the upper halves of these matrices. The recursion relations are stable for \(2\omega>j,k\). And finally, by our assumption on \(\omega\), \(\sin(2\omega)=0\) and \(\cos(2\omega)=1\). Let \(f(x)\) and \(g(x)\) have the forms

\[f(x)=\sum_{k=0}^{N}a_{k}P_{k}(x)\cos(\omega x)+b_{k}P_{k}(x)\sin(\omega x) \tag{3.17}\]
\[g(x)=\sum_{k=0}^{M}c_{k}P_{k}(x)\cos(\omega x)+d_{k}P_{k}(x)\sin(\omega x). \tag{3.18}\]

Then we have

\[<f,g>=\vec{a}^{T}\cdot M2\cdot\vec{d}+\vec{a}^{T}\cdot M3\cdot\vec{c}+\vec{b}^{T}\cdot M2\cdot\vec{c}+\vec{b}^{T}\cdot M4\cdot\vec{d}, \tag{3.19}\]

where every M matrix has been taken to have dimensions \(N\times M\). In this framework we may compute all the coefficients of our recursion.
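For concreteness, the whole procedure fits in a short NumPy routine, sketched below under the standing assumption \(\omega=2\pi k\) (so \(\sin(2\omega)=0\) and \(\cos(2\omega)=1\)); the function and variable names are ours.

```python
import numpy as np

def build_M_matrices(N, omega):
    """Fill M5 and M6 along skew diagonals j + k = s via the recursion above,
    then assemble M1-M4.  Stable for omega > N; assumes omega = 2*pi*k."""
    s2w, c2w = np.sin(2 * omega), np.cos(2 * omega)   # = 0 and 1 by assumption
    M5 = np.zeros((N + 1, N + 1))
    M6 = np.zeros((N + 1, N + 1))
    for s in range(2 * N + 1):                        # successive skew diagonals
        for j in range(max(0, s - N), min(s, N) + 1):
            k = s - j
            sgn = (-1.0) ** (j + k)
            m5 = (1 + sgn) * s2w / (2 * omega)        # boundary terms of the recursion
            m6 = (-1 + sgn) * c2w / (2 * omega)
            for i in range(j - 1, -1, -2):            # i = j - 1 - 2l >= 0
                m5 -= (2 * i + 1) / (2 * omega) * M6[i, k]
                m6 += (2 * i + 1) / (2 * omega) * M5[i, k]
            for i in range(k - 1, -1, -2):            # i = k - 1 - 2l >= 0
                m5 -= (2 * i + 1) / (2 * omega) * M6[j, i]
                m6 += (2 * i + 1) / (2 * omega) * M5[j, i]
            M5[j, k], M6[j, k] = m5, m6               # needed entries lie on earlier diagonals
    M1 = np.diag(2.0 / (2 * np.arange(N + 1) + 1))    # ||P_k||^2 = 2 / (2k + 1)
    M2 = M6 / 2
    M3 = (M1 + M5) / 2
    M4 = (M1 - M5) / 2
    return M1, M2, M3, M4, M5, M6
```

With these matrices, the inner product (3.19) of any two functions expressed in the Legendre-times-trigonometric basis reduces to a few matrix-vector products, from which the recursion coefficients such as \(<xp_{k},q_{k}>/<q_{k},q_{k}>\) follow directly.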
However, it has been observed that the norms of the "monic" orthogonal functions decay rapidly, roughly by a factor of two at each degree. Therefore it is recommended to compute the normalized orthogonal functions instead. We only care about the case of large \(\omega\), because when \(\omega\) is small relative to \(N\), one should simply use regular quadrature. What "large \(\omega\) relative to \(N\)" means is determined by where the algorithm is stable, in particular the computation of the matrices M1 through M6. The recurrence relation for computing these matrices is stable for \(\omega>N\), where \(N\) is the size of the square matrix. Now that we have a numerical method for computing inner products, we may represent the orthogonal functions in this basis and compute the coefficients of the recursion directly. The method will be stable as long as two conditions are met: the first is \(\omega>N\), and the second is that the Legendre polynomials multiplied by sines and cosines approximate well the space of our orthogonal functions.

## 4. The Derivative Matrix

We note that we may use the recurrence relation to compute integrals and derivatives of a given basis function. A given basis function may be represented as

\[p_{k}(x)=\sum_{j=0}^{k}a_{kj}P_{j}(x)\cos(\omega x)+b_{kj}P_{j}(x)\sin(\omega x). \tag{4.1}\]

By taking the derivative of both sides we arrive at

\[p_{k}^{\prime}(x)=\sum_{j=0}^{k}a_{kj}(P_{j}^{\prime}(x)\cos(\omega x)-\omega P_{j}(x)\sin(\omega x))+b_{kj}(P_{j}^{\prime}(x)\sin(\omega x)+\omega P_{j}(x)\cos(\omega x)). \tag{4.2}\]

Given that the left hand side is in the form of polynomials multiplied by sines and cosines, we may represent it in the basis of Legendre polynomials multiplied by sines and cosines. Hence we can form a derivative matrix. Let \(\vec{b}=\{p_{0}(x),q_{0}(x),p_{1}(x),q_{1}(x),\ldots\}\). Then we have

\[\vec{b}^{\prime}=\mathbf{D}\vec{b}. \tag{4.3}\]

We note that \(\mathbf{D}\) is triangular. Explicitly, with the blocks

\[\mathbf{I_{1}}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\qquad\quad\mathbf{I_{2}}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\qquad\quad\mathbf{0}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix} \tag{4.4}\]

the derivative matrix is

\[\mathbf{D}=\begin{pmatrix}\omega\mathbf{I_{2}}&\mathbf{I_{1}}&\mathbf{0}&\mathbf{I_{1}}&\mathbf{0}&\mathbf{I_{1}}&\ldots\\ &\omega\mathbf{I_{2}}&3\mathbf{I_{1}}&\mathbf{0}&3\mathbf{I_{1}}&\mathbf{0}&\ldots\\ &&\omega\mathbf{I_{2}}&5\mathbf{I_{1}}&\mathbf{0}&5\mathbf{I_{1}}&\ldots\\ &&&\omega\mathbf{I_{2}}&7\mathbf{I_{1}}&\mathbf{0}&\ldots\\ &&&&\ddots&\ddots&\ddots\end{pmatrix} \tag{4.5}\]

We note that this \(\mathbf{D}\) matrix is simple to understand: we have a block diagonal part from the derivative landing on sine and cosine, and then an upper triangular part that is directly similar to the derivative matrix for Legendre polynomials. This is the derivative matrix for taking the derivative of the \(\{\cos(\omega x),\sin(\omega x),x\cos(\omega x),x\sin(\omega x),...,x^{N}\cos(\omega x),x^{N}\sin(\omega x)\}\) basis. In order to get the derivative matrix for the orthogonal basis, let \(\vec{f}\), \(\mathbf{D}\), and \(\vec{g}\) be in the first basis. Then we have

\[\mathbf{D}\vec{f}=\vec{g}\]
\[\mathbf{B}^{-1}\mathbf{D}\mathbf{B}(\mathbf{B}^{-1}\vec{f})=(\mathbf{B}^{-1}\vec{g}).\]

Hence our derivative matrix in the orthogonal basis is \(\mathbf{B}^{-1}\mathbf{D}\mathbf{B}\).
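As a sketch, the truncated matrix (4.5) can be assembled literally from its block pattern; the routine below follows the display, with names of our own choosing.

```python
import numpy as np

def derivative_matrix(N, omega):
    """Assemble the (2(N+1)) x (2(N+1)) block matrix D displayed in (4.5):
    omega*I2 blocks on the diagonal, (2i+1)*I1 blocks at odd offsets j - i."""
    I1 = np.eye(2)
    I2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    D = np.zeros((2 * (N + 1), 2 * (N + 1)))
    for i in range(N + 1):
        D[2*i:2*i+2, 2*i:2*i+2] = omega * I2            # trigonometric part
        for j in range(i + 1, N + 1, 2):                # offsets j - i = 1, 3, 5, ...
            D[2*i:2*i+2, 2*j:2*j+2] = (2 * i + 1) * I1  # Legendre-derivative pattern
    return D
```

The change of basis \(\mathbf{B}^{-1}\mathbf{D}\mathbf{B}\) then only requires the coefficient matrix \(\mathbf{B}\) of the orthogonal functions in the Legendre-times-trigonometric representation.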
In order to create a quadrature method from these orthonormal functions, we must also generalize quadrature methods to handle the mixed three-term recurrence relations. Computing the coefficients of these mixed three-term recurrence relations was the necessary first step. We note this method can be generalized to include any number of separate frequencies \(\omega_{1},\omega_{2},...,\omega_{N}\). The stability of the resulting numerical methods then depends on the smallest distance between any two frequencies relative to the number of orthogonal functions used. Of key interest will be the case where \(\omega_{1}=0\) and \(\omega_{2}\) is large. We believe this would lead to an Enriched Spectral Method. We also believe this method is suitable and straightforward to implement in higher dimensions on a square grid, as is standard for quadrature methods. We are interested in applying this method to problems with singularities of differing orders such as \(\{x^{k},x^{k}\log(x),x^{k}\log(x)^{2}\}_{k=0}^{N}\). We believe we can develop methods suited to handling problems where these types of singularities (crack phenomena) arise. Certain papers have constructed orthogonal polynomials for \(\{x^{k}\log(x)^{m}\}_{k=0}^{N}\) for natural numbers \(m\).
2310.16374
Joint Distributional Learning via Cramer-Wold Distance
The assumption of conditional independence among observed variables, primarily used in the Variational Autoencoder (VAE) decoder modeling, has limitations when dealing with high-dimensional datasets or complex correlation structures among observed variables. To address this issue, we introduced the Cramer-Wold distance regularization, which can be computed in closed form, to facilitate joint distributional learning for high-dimensional datasets. Additionally, we introduced a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution. Furthermore, we provide theoretical distinctions from existing methods within this category. To evaluate the synthetic data generation performance of our proposed approach, we conducted experiments on high-dimensional datasets with multiple categorical variables. Given that many readily available datasets and data science applications involve such datasets, our experiments demonstrate the effectiveness of our proposed methodology.
Seunghwan An, Jong-June Jeon
2023-10-25T05:24:23Z
http://arxiv.org/abs/2310.16374v1
# Joint Distributional Learning via Cramer-Wold Distance ###### Abstract The assumption of conditional independence among observed variables, primarily used in the Variational Autoencoder (VAE) decoder modeling, has limitations when dealing with high-dimensional datasets or complex correlation structures among observed variables. To address this issue, we introduced the Cramer-Wold distance regularization, which can be computed in closed form, to facilitate joint distributional learning for high-dimensional datasets. Additionally, we introduced a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution. Furthermore, we provide theoretical distinctions from existing methods within this category. To evaluate the synthetic data generation performance of our proposed approach, we conducted experiments on high-dimensional datasets with multiple categorical variables. Given that many readily available datasets and data science applications involve such datasets, our experiments demonstrate the effectiveness of our proposed methodology. ## 1 Introduction The Variational Autoencoder (VAE) is a generative model utilized to estimate the underlying distribution of a given dataset [25, 39]. The primary objective of VAE is to maximize the Evidence Lower Bound (ELBO) of the observation \(\mathbf{x}\), thereby enabling the generative model to produce synthetic data that closely resembles the observed dataset. Note that the generative model of VAE is written as follows: \[\int p(\mathbf{z})p(\mathbf{x}|\mathbf{z})d\mathbf{z}, \tag{1}\] where \(p(\mathbf{z})\) represents the prior distribution of the latent variable \(\mathbf{z}\) and \(p(\mathbf{x}|\mathbf{z})\) corresponds to the decoder. The ELBO is derived as a lower bound on the log-likelihood of an individual observation \(\mathbf{x}\), making it a local approximation for that specific data point. To achieve equality in the ELBO for accurately recovering the given observation, the Kullback-Leibler (KL) divergence between the proposal posterior \(q(\mathbf{z}|\mathbf{x})\) and the true posterior \(p(\mathbf{z}|\mathbf{x})\) distributions should be minimized, ideally reaching zero. This means that the proposal posterior distribution should have an infinite capacity, ensuring that the generative model can generate the synthetic data accurately. However, conventional VAE approaches typically assume that a prior distribution follows a standard Gaussian distribution. This choice offers certain advantages, such as having a closed-form KL-divergence and improved sampling efficiency. Yet it also implies that this prior has a 'finite' capacity. Consequently, the aggregated posterior [44], denoted as \(\int q(\mathbf{z}|\mathbf{x})p(\mathbf{x})d\mathbf{x}\), can significantly differ from the prior. This deviation in the distributions carries a notable implication: the generative model, as represented by (1), cannot effectively produce synthetic data that closely resembles the original data, because the decoder \(p(\mathbf{x}|\mathbf{z})\) is trained using latent variables sampled from the proposal posterior \(q(\mathbf{z}|\mathbf{x})\). Hence, generating high-quality synthetic datasets crucially depends on aligning the aggregated posterior with the chosen prior distribution [5]. This alignment process involves parameterizing the prior using trainable parameters.
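For concreteness, sampling from the generative model (1) is ancestral: draw a latent from the prior, then decode. A minimal PyTorch sketch, assuming hypothetical fitted `prior` and `decoder` objects in the style of `torch.distributions` (these names are ours, not the paper's):

```python
import torch

def sample_synthetic(prior, decoder, n: int) -> torch.Tensor:
    """Ancestral sampling from (1): z ~ p(z), then x ~ p(x|z)."""
    z = prior.sample((n,))         # latent draws from p(z)
    return decoder(z).sample()     # decoder(z) returns a torch Distribution
```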
Several previous studies, including [44, 16, 17], have parameterized the prior using a mixture (Gaussian) distribution. However, maximizing the ELBO jointly with this complex prior often necessitates intricate mathematical derivations like the density ratio trick [42] and the application of a greedy algorithm [44]. Within this context, several studies, such as [33, 40, 1, 9, 47, 13], have adopted the _two-step learning method_. This approach involves a separate training process where the alignment of the prior with the aggregated posterior is carried out independently. It not only helps alleviate the challenges associated with intricate derivations but also provides a more stable training process. Additionally, it enables the construction of a flexible learning pipeline [40]. In this paper, we introduce a novel framework for two-step learning that leverages the distributional learning of VAE [2]. Our specific focus is on applying this framework to datasets characterized by high dimensionality and containing multiple categorical attributes. We emphasize this for two compelling reasons. (_high-dimensional_) Firstly, when dealing with high-dimensional data, it is a common practice to assume conditional independence among observed variables given the latent variable. However, this assumption can pose challenges when the dimensionality of the latent variable is smaller than that of the observations, causing inaccuracies in capturing this conditional independence. We employ two regularization techniques to tackle this issue and effectively model the correlation structure among observed variables. These regularizations are based on the Cramer-Wold distance [41, 26] and classification loss [54, 38]. Notably, the Cramer-Wold distance shares similarities with the sliced-Wasserstein distance but offers the advantage of having a closed-form solution. (_multiple-categorical_) In many readily available public datasets and data science applications, it is common to encounter datasets that include categorical variables [6]. For example, among the entire dataset collection available at archive.ics.uci.edu, datasets that contain categorical and mixed columns account for approximately 65.4% of the total. Hence, we conduct experiments on publicly available real tabular datasets that consist of multiple categorical variables. These experiments showcase the excellent performance of our proposed model in generating synthetic data. Our paper makes two primary contributions, which can be summarized as follows: 1. We present a novel framework for two-step learning and provide both theoretical and numerical distinctions from existing methods within this category. 2. We specifically utilize the Cramer-Wold distance to enable joint distributional learning for multiple-categorical datasets and demonstrate its effectiveness through a series of numerical experiments. ## 2 Related Work **Distributional learning (generative modeling).** Distributional learning involves the estimation of the underlying distribution of an observed dataset. Generative models based on latent spaces aim to perform distributional learning by generating data closely resembling a given dataset. An early and prominent example of generative modeling is the VAE. However, VAE faced limitations in generative performance due to a misalignment between the distribution of representations that learned information from the observations, known as the aggregated posterior, and the prior distribution.
To address this issue, [34] introduced the Adversarial AutoEncoder (AAE), which directly minimizes the divergence between the aggregated posterior and the prior using the adversarial loss from the GAN framework. Unlike the KL-divergence, the adversarial loss remains easy to compute even when the aggregated posterior takes complex forms. Similarly, [5] proposed the penalized optimal transport (POT). The POT's objective function consists of a reconstruction loss (cost) and a penalty term that minimizes the divergence (distance) between the aggregated posterior and prior distributions. Subsequent research incorporated various divergences into this penalty term, such as Maximum Mean Discrepancy (MMD) [43], Sliced-Wasserstein distance [11], and Cramer-Wold distance [41]. Notably, Sliced-Wasserstein and Cramer-Wold distances are based on random projections of high-dimensional datasets onto one-dimensional subspaces, resolving challenges in calculating distances between multivariate distributions. While these methods commonly utilize divergence for alignment in the latent space, some studies directly introduce divergence into the data space. For instance, [12, 33, 20] employed MMD, and [11] used the sliced Wasserstein distance for reconstruction error. More recently, [2] introduced the continuous ranked probability score (CRPS), a proper scoring rule that measures the distance between the proposed cumulative distribution function (CDF) and the ground-truth CDF of the underlying distribution. They show theoretically that it is feasible to minimize the KL-divergence between the ground-truth density and the density estimated through generative modeling. **Two-step learning.** As previously discussed, within the VAE framework, optimizing the ELBO while simultaneously learning both the decoder and complex prior parameters often involves complex mathematical derivations, such as the density ratio trick [42], and a greedy algorithm [44]. The requirement for a closed-form expression of the ELBO has limited the exploration of new approaches to modeling priors. However, [40] has revealed that two-step training can be thought of as a simple combination of existing methods for fitting the decoder and prior model. This approach offers the added benefit of flexibility in the learning process, allowing for straightforward adjustments to the prior modeling when the necessary method for learning these distributions is available. [33, 40, 1] employed an AutoEncoder to fit the decoder, while [9, 47, 13] used the VAE framework in the first step of training. A common theme in these papers was the learning of the prior distribution in the second step to align with the aggregated posterior (distribution of representations) [34]. Notably, [9] theoretically demonstrated that under the assumption that observations exist on a simple Riemannian manifold, two-step learning can approximate the ground-truth measure. **Handling multiple-categorical datasets.** To train the generator and discriminator networks with multiple-categorical (discrete) variables, [8] proposes a combination of an AutoEncoder and a GAN, which is based on [53]. The AutoEncoder directly learns from high-dimensional discrete variables, while the GAN generates the continuous latent variables of the AutoEncoder's latent space. In other words, the GAN learns the distribution of the representation vectors. Subsequent studies have adopted this approach [46, 51, 31, 32]. [32] employs the VAE instead of the autoencoder.
On the other hand, [6] proposes another approach that avoids backpropagating through discrete samples by adopting the Gumbel-Softmax [22] to make sampling from discrete distributions differentiable. Further, [28, 50] incorporate the Wasserstein GAN with gradient penalty (WGAN-GP, [3]) to enhance training stability and accommodate various variable types. **Synthetic data generation.** The synthetic data generation task actively adopts the GAN framework, as it allows for nonparametric synthetic data generation [8, 38, 49, 54, 37, 28, 27, 50, 18]. In particular, [49, 54] assume that continuous columns in tabular datasets can be approximated using Gaussian mixture distributions and model their decoder accordingly. They also employ the Variational Gaussian mixture model [4], known as _mode-specific normalization_, to preprocess the continuous variables. However, this preprocessing step requires additional computational resources and hyperparameter tuning to determine the number of modes. Alternatively, other approaches proposed by [38, 54] focus on regularizing the difference between the first and second-order statistics of the observed and synthetic datasets. **Correlation structure learning.** Several studies have focused on capturing the correlation structure between variables to improve the quality of synthetic data. For instance, [51] maximizes the correlation between two different latent vectors representing diseases and drugs. Similarly, [28] introduces an alignment loss based on the \(L_{2}\) distance between correlation matrices. On the other hand, [46] modifies the Multilayer Perceptron (MLP) with Convolutional Neural Networks (CNN). **The learning of the prior.** [21, 34] have demonstrated that the aggregated posterior is the optimal prior, which maximizes the objective function of the VAE, but it can lead to overfitting. To address this issue, [45, 16] proposed approximating the optimal prior by using a finite mixture of posterior distributions with trainable pseudo-inputs. However, the performance of VampPrior [45] is sensitive to hyperparameters, such as the number of mixture components [42]. [34, 42] employed the adversarial training method to regularize the VAE model by aligning the aggregated posterior with the prior distribution. ## 3 Proposal Let \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{p}\) be an observation consisting of discrete variables. \(T_{j}\) denotes the number of levels for the discrete variable \(\mathbf{x}_{j}\), where \(j\in\{1,\cdots,p\}\). We denote the ground-truth underlying distribution (probability density function, PDF) as \(p(\mathbf{x})\). The decoder, posterior, and prior distributions are denoted as \(p(\mathbf{x}|\mathbf{z};\theta)\), \(q(\mathbf{z}|\mathbf{x};\phi)\), and \(p(\mathbf{z};\eta)\), respectively, where \(\theta,\phi,\eta\) are trainable neural network parameters. Note that the prior distribution is not fixed and is parameterized with a trainable parameter \(\eta\). The aggregated posterior [34, 45] is defined as \[q(\mathbf{z};\phi)\coloneqq\int p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)d\mathbf{x}.\] Equipped with the proposal distributions above, the generative model is defined as \[\hat{p}(\mathbf{x};\theta,\eta)\coloneqq\int p(\mathbf{x}|\mathbf{z};\theta)\cdot p(\mathbf{z};\eta)d\mathbf{z}, \tag{2}\] and it is also referred to as the estimated density function. Then, our primary objective is to approximate the ground-truth density by minimizing some divergence between the estimated and the ground-truth density functions.
We employ the forward KL-divergence, \(\mathcal{KL}(p(\mathbf{x})\|\hat{p}(\mathbf{x};\theta,\eta))\), as it is one of the most popular choices [36]. As shown in [40], \(\mathcal{KL}(p(\mathbf{x})\|\hat{p}(\mathbf{x};\theta,\eta))\) can, for two-step learning, be bounded as \[\mathcal{KL}\Big{(}p(\mathbf{x})\|\hat{p}(\mathbf{x};\theta,\eta)\Big{)} \tag{3}\] \[\leq \underbrace{\mathcal{KL}\Big{(}p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)\|p(\mathbf{x}|\mathbf{z};\theta)q(\mathbf{z};\phi)\Big{)}}_{(i)}\] \[+ \underbrace{\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta))}_{(ii)}\] (see Appendix A.1 for the detailed derivation of (3)). In (3), the terms \((i)\) and \((ii)\) represent the objectives of training steps 1 and 2, respectively. By the distributional learning of VAE [2], we can minimize \(\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta))\), because the parameter \(\phi\) is fixed during the training process of step 2. Therefore, our proposed method is mainly focused on the training process of step 1, and the differences from the existing two-step learning methods will be addressed in the following sections. ### Step 1 The objective of the step 1 training process is the term \((i)\) of (3), and it can be re-written as \[\mathcal{L}(\theta,\phi)\coloneqq \mathcal{KL}\Big{(}p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)\|p(\mathbf{x}|\mathbf{z};\theta)q(\mathbf{z};\phi)\Big{)} \tag{4}\] \[= \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[-\log p(\mathbf{x}|\mathbf{z};\theta)]\] \[+ \mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|q(\mathbf{z};\phi))]\] \[- H(p(\mathbf{x})),\] where \(H(\cdot)\) is the entropy function. We minimize (4) with respect to \(\theta,\phi\). The third term on the RHS of (4) is the negative entropy of the ground-truth density function and is constant. **Assumption 1**.: \(\mathbf{x}_{1},\cdots,\mathbf{x}_{p}\) _are mutually independent given \(\mathbf{z}\)._ Our proposed model assumes that \(p(\mathbf{x})\) is parametrized by a mixture of categorical distributions, i.e., the decoder \(p(\mathbf{x}|\mathbf{z};\theta)\) of the generative model (2) is defined as follows: \[p(\mathbf{x}|\mathbf{z};\theta) = \prod_{j=1}^{p}p(\mathbf{x}_{j}|\mathbf{z};\theta_{j}) \tag{5}\] \[= \prod_{j=1}^{p}\prod_{l=1}^{T_{j}}\pi_{l}(\mathbf{z};\theta_{j})^{\mathbb{I}(\mathbf{x}_{j}=l)},\] by Assumption 1, where \(\theta=(\theta_{1},\cdots,\theta_{p})\), \(\pi(\cdot;\theta_{j}):\mathbb{R}^{d}\mapsto\Delta^{T_{j}-1}\) is a neural network parameterized with \(\theta_{j}\), \(\Delta^{T_{j}-1}\) is the standard \((T_{j}-1)\)-simplex for all \(\mathbf{z}\in\mathbb{R}^{d}\), and the subscript \(l\) refers to the \(l\)th element of the output \(\pi\). Then, the reconstruction loss of step 1, the first term of (4), is written as follows: \[\mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}\left[-\sum_{j=1}^{p}\sum_{l=1}^{T_{j}}\mathbb{I}(\mathbf{x}_{j}=l)\cdot\log\pi_{l}(\mathbf{z};\theta_{j})\right],\] which is equivalent to the cross-entropy (classification loss) for a dataset with categorical variables. For the computation of \(\mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|q(\mathbf{z};\phi))]\), which is the second term of (4), the log-likelihood of the posterior distribution needs to be tractable. Therefore, we parameterized the posterior with the multivariate Gaussian distribution.
The posterior distribution is defined as \(q(\mathbf{z}|\mathbf{x};\phi)\coloneqq\mathcal{N}\big{(}\mathbf{z}|\mu(\mathbf{x};\phi),diag(\sigma^{2}(\mathbf{x};\phi))\big{)}\), where \(\mu:\mathbb{R}^{p}\mapsto\mathbb{R}^{d}\), \(\sigma^{2}:\mathbb{R}^{p}\mapsto\mathbb{R}^{d}_{+}\) are neural networks parameterized with \(\phi\), and \(diag(a),a\in\mathbb{R}^{d}\) denotes a diagonal matrix with diagonal elements \(a\). However, since the second term of (4) is still not tractable due to the presence of the aggregated posterior, we minimize the upper bound of the second term of (4), which is derived as follows: \[0 \leq \mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|q(\mathbf{z};\phi))]\] \[= I(\mathbf{x},\mathbf{z};\phi)\] \[\leq \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]-\mathbb{E}_{p(\mathbf{x})q(\mathbf{z};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]\] \[= \mathbb{E}_{p(\mathbf{x})}[-H(q(\mathbf{z}|\mathbf{x};\phi))]-\mathbb{E}_{p(\mathbf{x})q(\mathbf{z};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)] \tag{6}\] (the detailed derivation is shown in Appendix A.1). The bound (6) regularizes the entropy of the posterior distribution, and minimizing (6) means that no latent variable should carry information specific to a particular observation \(\mathbf{x}\). In this paper, we will refer to this upper bound as the 'entropy regularization term.' #### 3.1.1 Joint Distributional Learning Modeling the decoder under Assumption 1 offers computational efficiency, but (5) struggles to capture the joint relationships among the observed variables effectively. This limitation becomes evident when dealing with a restricted latent space, where latent variables may fail to capture the conditional independence among observed variables. Consequently, the model excels at capturing only the marginal distribution of the observed dataset, lacking in capturing intricate dependencies. To address this limitation, we employ two regularization strategies: Cramer-Wold distance regularization [41, 26] and classification loss regularization [54, 38]. The Cramer-Wold distance, while similar in spirit to the sliced-Wasserstein distance, stands out due to its closed-form solution. For the classification loss regularization, we first define the conditional distributions, \(p(\mathbf{x}_{j}|\mathbf{x}_{-j};\varphi_{j})\), which are assumed to be categorical distributions, where \(\mathbf{x}_{-j}\) denotes the vector of \(\mathbf{x}\) except for \(\mathbf{x}_{j}\) for \(j\in\{1,\cdots,p\}\), and \(\varphi=(\varphi_{1},\cdots,\varphi_{p})\). We then pre-train the one-vs-all classifiers \(p(\mathbf{x}_{j}|\mathbf{x}_{-j};\varphi_{j})\) as follows: \[\max_{\varphi}\mathbb{E}_{p(\mathbf{x})}\left[\sum_{j=1}^{p}\sum_{l=1}^{T_{j}}\mathbb{I}(\mathbf{x}_{j}=l)\cdot\log p(\mathbf{x}_{j}|\mathbf{x}_{-j};\varphi_{j})\right].
\tag{7}\] Finally, our objective function for step 1 is to minimize \[- \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}\left[\sum_{j=1}^{p}\sum_{l=1}^{T_{j}}\mathbb{I}(\mathbf{x}_{j}=l)\cdot\log\pi_{l}(\mathbf{z};\theta_{j})\right]\] \[+ \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]-\mathbb{E}_{p(\mathbf{x})q(\mathbf{z};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]\] \[+ \lambda\cdot\int_{S_{p}}\|\mathrm{sm}_{\kappa}(v^{\top}\mathbf{X})-\mathrm{sm}_{\kappa}(v^{\top}\hat{\mathbf{X}})\|_{2}^{2}d\sigma_{p}(v)\] \[- \gamma\cdot\mathbb{E}_{q(\hat{\mathbf{x}};\phi,\theta)}\left[\sum_{j=1}^{p}\sum_{l=1}^{T_{j}}\mathbb{I}(\hat{\mathbf{x}}_{j}=l)\cdot\log p(\hat{\mathbf{x}}_{j}|\hat{\mathbf{x}}_{-j};\varphi_{j}^{*})\right] \tag{8}\] with respect to \(\theta,\phi\), where \(\lambda\geq 0\), \(\gamma\geq 0\), \(S_{p}\) denotes the unit sphere in \(\mathbb{R}^{p}\), \(\sigma_{p}\) is the normalized surface measure on \(S_{p}\), and \(\varphi^{*}\) is the pre-trained parameter which is fixed during the training process. Also, we denote \(\mathbf{X}\coloneqq\{\mathbf{x}_{i}\}_{i=1}^{n}\) and \(\hat{\mathbf{X}}\coloneqq\{\hat{\mathbf{x}}_{i}\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\sim p(\mathbf{x})\), \(\hat{\mathbf{x}}_{i}\sim q(\hat{\mathbf{x}};\phi,\theta)\), and \[q(\hat{\mathbf{x}};\phi,\theta):=\int p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)p(\hat{\mathbf{x}}|\mathbf{z};\theta)d\mathbf{z}d\mathbf{x}.\] With a Gaussian kernel \(N(\cdot,\kappa)\), the smoothed distribution is defined as \(\mathrm{sm}_{\kappa}(R):=\frac{1}{n}\sum_{i=1}^{n}N(r_{i},\kappa)\), where the sample \(R=\{r_{i}\}_{i=1}^{n}\) and \(r_{i}\in\mathbb{R}\). Furthermore, in Section 4.2, we experimentally demonstrate the impact of the entropy regularization term on the synthetic data generation performance, as well as the influence of each regularization. #### 3.1.2 Comparison to Prior Works In this section, we will demonstrate that the step 1 objective function of existing two-step learning methods differs from the term \((i)\) of (3), which is the objective function of our step 1 training process. In short, the second term of (4), \(\mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|q(\mathbf{z};\phi))]\), is _not_ minimized with existing two-step learning methods. Instead, a vanilla AutoEncoder [33, 40, 1] or a VAE [9, 47, 13] is trained. Furthermore, we also observe that minimizing the second term of (4) is important for the model's synthetic data generation performance, as shown in Section 4.2. **AutoEncoder.** If a vanilla AutoEncoder is trained in step 1, then the objective function \(\mathcal{L}^{(1)}(\theta,\phi)\) can be written as: \[\min_{\theta,\phi}\mathcal{L}^{(1)}(\theta,\phi)\coloneqq\mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[-\log p(\mathbf{x}|\mathbf{z};\theta)].\] Since \(-H(p(\mathbf{x}))\) is an additive constant and the second term of (4) is always non-negative, \(\mathcal{L}^{(1)}(\theta,\phi)\leq\mathcal{L}(\theta,\phi)\) up to that constant; that is, the two-step learning method with an AutoEncoder [33, 40, 1] minimizes a lower bound of (4).
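Returning to the step-1 objective (8), the Cramer-Wold term admits a simple implementation. The following PyTorch sketch is our own illustration (not the authors' released code; names are hypothetical): it Monte Carlos the sphere integral with random unit projections and uses the exact identity \(\langle N(a,\kappa),N(b,\kappa)\rangle_{L^{2}}=N(a-b;0,2\kappa)\) for the smoothed one-dimensional samples. Treating \(\hat{\mathbf{X}}\) as differentiable decoder outputs (e.g., softmax probabilities) is our assumption.

```python
import torch

def smoothed_l2(u: torch.Tensor, w: torch.Tensor, kappa: float) -> torch.Tensor:
    """Closed-form || sm_k(u) - sm_k(w) ||_{L2(R)}^2 for 1-D samples u, w."""
    def phi(t):  # <N(a,k), N(b,k)>_{L2} = Gaussian density N(a-b; 0, 2k)
        return torch.exp(-t ** 2 / (4 * kappa)) / (4 * torch.pi * kappa) ** 0.5
    n, m = len(u), len(w)
    return (phi(u[:, None] - u[None, :]).sum() / n ** 2
            + phi(w[:, None] - w[None, :]).sum() / m ** 2
            - 2 * phi(u[:, None] - w[None, :]).sum() / (n * m))

def cramer_wold_term(x: torch.Tensor, x_hat: torch.Tensor,
                     kappa: float = 1.0, n_proj: int = 64) -> torch.Tensor:
    """Monte Carlo estimate of the sphere integral in (8) over v ~ sigma_p."""
    v = torch.randn(n_proj, x.shape[1])
    v = v / v.norm(dim=1, keepdim=True)        # uniform directions on S_p
    return torch.stack([smoothed_l2(x @ vi, x_hat @ vi, kappa)
                        for vi in v]).mean()
```

Averaging over random projections replaces the integral over \(S_{p}\); the per-projection term itself is exact, which is the advantage of the Cramer-Wold construction over generic sliced distances.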
**VAE.** For the two-step learning with VAE, the objective function \(\mathcal{L}^{(2)}(\theta,\phi)\) is: \[\min_{\theta,\phi}\mathcal{L}^{(2)}(\theta,\phi)\coloneqq \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[-\log p(\mathbf{x}|\mathbf{z};\theta)]\] \[+ \mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|p(\mathbf{z};\eta))].\] And the second term of (4) can be written as: \[\mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|q(\mathbf{z};\phi))]\] \[= \iint p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)\log\frac{q(\mathbf{z}|\mathbf{x};\phi)}{q(\mathbf{z};\phi)}\cdot\frac{p(\mathbf{z};\eta)}{p(\mathbf{z};\eta)}d\mathbf{x}d\mathbf{z}\] \[= \mathbb{E}_{p(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|p(\mathbf{z};\eta))]+\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta)).\] Since \(\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta))\) is always non-negative, \(\mathcal{L}^{(2)}(\theta,\phi)\leq\mathcal{L}(\theta,\phi)\) up to the same additive constant, and it means that the two-step learning method with VAE [9, 47, 13] likewise minimizes a lower bound of (4). ### Step 2 [2] showed that \(\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta))\) can be minimized under the following conditions on \(q(\mathbf{z};\phi)\) (note that the parameter \(\phi\) is fixed during step 2): 1. \(\mathbf{z}\) comprises \(d\) continuous random variables, 2. \(q(\mathbf{z};\phi)\) is defined over \(\mathbf{z}\in\mathbb{R}^{d}\) with \(q(\mathbf{z};\phi)>0\) for all \(\mathbf{z}\in\mathbb{R}^{d}\). Since our posterior distribution \(q(\mathbf{z}|\mathbf{x};\phi)\) is assumed to be a multivariate Gaussian distribution, it can be easily shown that the above two conditions are satisfied. Therefore, we can minimize \(\mathcal{KL}(q(\mathbf{z};\phi)\|p(\mathbf{z};\eta))\) during the training process of step 2. Training details and relevant theorems can be found in [2]. It is also possible to model the prior distribution in step 2 using methodologies such as a GMM (Gaussian Mixture Model) [23, 40] or KDE (Kernel Density Estimation) to approximate the distribution of the aggregated posterior. ### Incorporating Causal Structure Information While not explored in this paper, if causal structure information among the observed variables is given, the need for the conditional independence assumption, which is hard to satisfy in high-dimensional settings, can be alleviated. Recently, many studies have emerged that cast the NP-hard problem of DAG learning as continuous optimization [55, 52, 7, 35, 29], enabling gradient-based optimization as in deep learning methods. Utilizing these methods, it is possible to find a DAG (causal structure) for a given dataset. Let \(Pa(\mathbf{x}_{j})\) represent the parent variables of \(\mathbf{x}_{j}\) for \(j=1,\cdots,p\). Additionally, let \(G\) denote the graph representing the causal structure among the observed variables, and let \(\mathbf{x}\) be a Bayesian network with respect to \(G\). Then, \(p(\mathbf{x})\) can be expressed as a product of individual density functions, conditional on their parent variables: \[p(\mathbf{x})=\prod_{j=1}^{p}p(\mathbf{x}_{j}|Pa(\mathbf{x}_{j})).
\tag{9}\] Based on (9), the lower bound on the log-likelihood of the single observation \(\mathbf{x}\) is written as: \[\log p(\mathbf{x})= \sum_{j=1}^{p}\log p(\mathbf{x}_{j}|Pa(\mathbf{x}_{j}))\] \[\geq \sum_{j=1}^{p}\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}[\log p( \mathbf{x}_{j}|\mathbf{z},Pa(\mathbf{x}_{j}))]\] \[- \mathcal{KL}\Big{(}q(\mathbf{z}|\mathbf{x})\|p(\mathbf{z}|Pa( \mathbf{x}_{j}))\Big{)},\] (for notational simplicity, we drop notations for parameters). In the above derivation, we assumed that the posterior distribution depends on all the observed variables, but it is also possible to define the posterior distribution for each \(\mathbf{x}_{j}\) separately using only the parent variables, as follows: \(q(\mathbf{z}|Pa(\mathbf{x}_{j}))\) for \(j=1,\cdots,p\). Hence, when incorporating causal structure information, it's possible to define the reconstruction loss for each observed variable \(\mathbf{x}_{j}\) individually (one-dimensional distributional learning), even without relying on the conditional independence assumption. ## 4 Experiments ### Toy Example: MNIST Dataset We use the MNIST dataset [30] to illustrate the impact of the entropy regularization term on synthetic data generation performance. To examine the effect of the entropy regularization term on the latent space, we consider a 2-dimensional latent space for ease of visualization. The values are scaled within the range of 0 to 1, and we binarize them using a threshold of 0.5. The FID (Frechet inception distance) score [19] in Table 1 is computed using the MNIST test dataset and 10,000 synthetic images. When examining the average 2-dimensional posterior variances in Table 1, it becomes evident that, during the step 1 training process without the entropy regularization term, the variance of each posterior distribution is significantly smaller compared to when the entropy regularization term is present. This implies that, in the absence of the entropy regularization term, the density of each posterior distribution is concentrated in specific regions of the latent space. This, in turn, leads to increased complexity in the aggregated posterior. Also, the range of values the latent variables can take is expanded without the entropy regularization term. It can be observed by comparing the visualizations of the aggregated posterior for each test dataset with and without the entropy regularization term in Figure 1-(a) and Figure 1-(b). Furthermore, when it comes to the quality of the generated MNIST images, it is evident from the FID score in Table 1 that training with the entropy regularization term leads to an increase in image quality (see Figure 1-(c) and Figure 1-(d)). Therefore, the entropy regularization term not only regulates the complexity of the aggregated posterior but also contributes to improving the quality of generated images. In short, the target distribution, represented by the aggregated posterior, becomes overly complex without the entropy regularization term, making it challenging to align the distributional learning of the aggregated posterior and the prior distribution in step 2. Consequently, a mismatch between the sampled latent variable from the prior and the aggregated posterior occurs, resulting in the generation of lower-quality synthetic data. \begin{table} \begin{tabular}{l r r} \hline \hline & without & with \\ \hline avg. \(\sigma^{2}\) & \((0.0003,0.0005)\) & \((1.784,1.940)\) \\ FID & 6.809 & 6.042 \\ \hline \hline \end{tabular} \end{table} Table 1: MNIST dataset. 
‘without’ denotes the model is trained without the entropy regularization term, and ‘with’ denotes the model is trained with the entropy regularization term. ‘avg. \(\sigma^{2}\)’ is the averaged 2-dimensional posterior variance over the test dataset. ### Tabular Datasets For all experiments, the synthetic dataset is generated to have the same number of samples as the real training dataset. We run all experiments on a GeForce RTX 3090 GPU, and our experimental code is implemented in PyTorch. We release the code at XXX. #### 4.2.1 Overview **Dataset.** We employed three real tabular datasets, all of which are characterized by high dimensionality and multiple categorical variables: survey, census, and income. survey dataset is obtained from the 2013 American Community Survey 1, which is a survey from the US Census Bureau. We sample observations corresponding to the California region (State Code: 06.California/CA) since it has the largest number of rows. income dataset (Census-Income (KDD)) 2 is obtained from the 1994 and 1995 population surveys conducted by the U.S. Census Bureau. census dataset (US Census Data (1990)) 3 consists of a one percent sample of the Public Use Microdata Samples (PUMS) person records drawn from the full 1990 census sample. Due to computational issues, we sample observations from census and income randomly. Footnote 1: [https://www.kaggle.com/datasets/census/2013-american-community-survey?datasetId=6&sortBy=voteCount](https://www.kaggle.com/datasets/census/2013-american-community-survey?datasetId=6&sortBy=voteCount) Footnote 2: [http://archive.ics.uci.edu/dataset/117/census+income+kdd](http://archive.ics.uci.edu/dataset/117/census+income+kdd) Footnote 3: [https://archive.ics.uci.edu/ml/datasets/US+Census+Data+(1990)](https://archive.ics.uci.edu/ml/datasets/US+Census+Data+(1990)) For each dataset, we tune the hyper-parameters \(\beta\) and \(\gamma\) differently (survey: \(\beta=0.05\), \(\gamma=0.5\); income: \(\beta=0.5\), \(\gamma=4\); census: \(\beta=0.1\), \(\gamma=3\)). Here, \(\beta\) is the weight parameter for the KL-divergence in step 2. The value ranges used for hyper-parameter tuning are \[\lambda \in \{100,500\}\] \[\gamma \in \{0.1,0.5,1,2,3,4,5\}\] \[\beta \in \{0.01,0.05,0.1,0.5\}.\] **Compared models.** We conduct a comparative analysis of our model with state-of-the-art synthesizers that are able to generate datasets with multiple categorical variables, including both one-step and two-step learning methods. Specifically, the one-step learning methods considered are MC-Gumbel [6], MC-WGAN-GP [6], and WGAN-GP-A [28, 27]. The two-step learning methods include medGAN [8], MC-medGAN [6], MC-ARAE [6], corGAN [46], and DAAE [31]. Note that one-step learning methods involve a single training step. All the models in this comparison have a similar number of model parameters. A comprehensive comparison of the model parameters is in Table 3. \begin{table} \begin{tabular}{l r r r r} \hline \hline dataset & \#train & \#test & \#column & one-hot \\ \hline survey & 60,000 & 2,593 & 54 & 231 \\ income & 45,000 & 5,000 & 23 & 416 \\ census & 30,000 & 5,000 & 68 & 394 \\ \hline \hline \end{tabular} \end{table} Table 2: The detailed tabular dataset descriptions. \#train and \#test denote the number of samples from the real training and test datasets, respectively. \#column indicates the number of columns, and one-hot denotes the total dimension size when each variable is transformed into one-hot vectors.
Figure 1: (a)-(b) The scatter plot of sampled latent variables given the test dataset with 2-dimensional latent space (the fitted latent space). (c)-(d) The plot of generated samples. #### 4.2.2 Metrics 1. Statistical Similarity To assess the statistical similarity between the real training dataset and the synthetic data, we utilize four metrics each to measure similarity from both marginal and joint distribution perspectives. **Marginal.** The marginal distributional similarity between the real training and synthetic datasets is evaluated using these four metrics: KL-divergence [15, 2], the two-sample Kolmogorov-Smirnov (KS) test [27], support coverage (category coverage) [15, 27], and the MSE of dimension-wise probability [8, 6, 46, 32]. The _KL-divergence_ and _KS test_ are computed independently for each variable, measuring the similarity between the real training and synthetic marginal probability mass functions (PMFs). These metrics quantify the discrepancy between the two PMFs, with both being zero when the distributions are identical and larger values indicating greater dissimilarity. The _support coverage_ (category coverage) metric assesses how well the synthetic data represents the support of variables in the real training data. It is computed as the mean, over all variables, of the ratios of the synthetic support cardinalities to the real training support cardinalities. This metric is calculated as \(\frac{1}{p}\sum_{j=1}^{p}\hat{T}_{j}/T_{j}^{*}\), where \(T_{j}^{*}\) and \(\hat{T}_{j}\) represent the support cardinality of the \(j\)th variable in the real training and synthetic data, respectively. When the support coverage is perfect, the metric equals 1, and higher values indicate a better representation of the real data’s support in the synthetic dataset. The _MSE of dimension-wise probability_ measures how effectively a synthesizer has learned the distribution of the real training dataset for each dimension. It is computed as the mean squared error between the dimension-wise probability vectors for each variable in the real training and synthetic datasets. **Joint.** The joint distributional similarity between the real training and synthetic datasets is evaluated using these four metrics: the pairwise correlation difference using the Pearson correlation [15, 54, 32, 2], the pairwise correlation difference using Kendall’s \(\tau\) correlation [27], log-cluster [15, 27], and the MSE of variable-wise prediction using the multi-class classification accuracy [49, 54, 48, 24, 38, 8, 14, 6, 15, 51, 31, 32]. The Pearson correlation coefficient and Kendall’s \(\tau\) correlation are employed to evaluate the level of correlation captured among the variables by various methods. The _Pairwise Correlation Difference_ (PCD) quantifies the difference in terms of the Frobenius norm between these correlation matrices calculated from the real training and synthetic datasets. A smaller PCD indicates that the synthetic data closely approximates the real data in terms of linear correlations among the variables. In essence, it assesses how effectively the method captures the linear relationships between variables present in the real dataset.

Figure 2: Visualization of sampled latent variables from the aggregated posterior using dimension reduction via PCA. Top row: ‘without’ denotes the model is trained without the entropy regularization term. Bottom row: ‘with’ denotes the model is trained with the entropy regularization term.
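As a concrete reading of two of the marginal metrics above, the following NumPy sketch (our own; the exact per-variable aggregation in the authors' code may differ) computes support coverage and the MSE of dimension-wise probability for label-encoded categorical data:

```python
import numpy as np

def support_coverage(real: np.ndarray, synth: np.ndarray) -> float:
    """Mean over columns of |support(synth_j)| / |support(real_j)|."""
    return float(np.mean([len(np.unique(synth[:, j])) / len(np.unique(real[:, j]))
                          for j in range(real.shape[1])]))

def dimwise_prob_mse(real: np.ndarray, synth: np.ndarray) -> float:
    """MSE between per-category marginal frequencies, averaged over columns."""
    errs = []
    for j in range(real.shape[1]):
        levels = np.unique(real[:, j])          # categories observed in real data
        p = np.array([(real[:, j] == l).mean() for l in levels])
        q = np.array([(synth[:, j] == l).mean() for l in levels])
        errs.append(((p - q) ** 2).mean())
    return float(np.mean(errs))
```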
The _log-cluster_ metric evaluates the similarity of the underlying structure of the real training and synthetic datasets, with particular attention to clustering patterns. To calculate this metric, we initially combine the real training and synthetic datasets into a unified dataset. Subsequently, we apply cluster analysis to this merged dataset with the \(K\)-means algorithm and a predefined number of clusters denoted as \(G\). The metric is computed as follows: \[\log\left(\frac{1}{G}\sum_{i=1}^{G}\left(\frac{n_{i}^{R}}{n_{i}}-c\right)^{2}\right),\] where \(n_{i}\) is the number of samples in the \(i\)th cluster, \(n_{i}^{R}\) is the number of samples from the real dataset in the \(i\)th cluster, and \(c=n^{R}/(n^{R}+n^{S})\). \(n^{R}\) and \(n^{S}\) denote the number of samples from the real training and synthetic dataset. In this paper, \(c\) is set to \(0.5\) because we have \(n^{R}=n^{S}\). Large values of the log-cluster metric indicate discrepancies in cluster memberships, suggesting differences between the real and synthetic data distributions. As in [15], the number of clusters is set to \(20\). To assess how effectively a synthetic dataset replicates the statistical dependence structures found in the real training dataset, we utilize the _MSE of variable-wise prediction_ using multi-class classification accuracy. This metric is determined by evaluating the predictive performance of a trained model on both the real training and synthetic datasets. For each variable, we train a classifier that performs classification using all variables except the one currently under consideration (one-vs-all classifier). Due to computational issues, we use a linear logistic regression model. Subsequently, we assess the classification performance of the excluded variable on a test dataset. Finally, we calculate the MSE between the dimension-wise predictive performance (accuracy) vectors for each variable based on the classifiers trained separately on the real training and synthetic datasets. #### 4.2.3 Metrics 2. Privacy The privacy-preserving capacity is measured using the privacy metric proposed by [50]. Denote \(\mathbf{x}_{i}^{(Tr)},\mathbf{x}_{i}^{(Te)},\mathbf{x}_{i}^{(S)},i=1,\cdots,n\) as the samples from the real training, test, and synthetic datasets, respectively. The _nearest neighbor Adversarial Accuracy_ (AA) between the real training and synthetic dataset is defined as: \[AA_{TrS} = \frac{1}{2}\Bigg{(}\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\left(D_{TrS}(i)>D_{TrTr}(i)\right)\] \[+\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\left(D_{STr}(i)>D_{SS}(i)\right)\Bigg{)},\] where \[D_{TrS}(i) = \min_{j=1,\cdots,n}d(\mathbf{x}_{i}^{(Tr)},\mathbf{x}_{j}^{(S)})\] \[D_{STr}(i) = \min_{j=1,\cdots,n}d(\mathbf{x}_{i}^{(S)},\mathbf{x}_{j}^{(Tr)})\] \[D_{TrTr}(i) = \min_{j=1,\cdots,n,j\neq i}d(\mathbf{x}_{i}^{(Tr)},\mathbf{x}_{j}^{(Tr)})\] \[D_{SS}(i) = \min_{j=1,\cdots,n,j\neq i}d(\mathbf{x}_{i}^{(S)},\mathbf{x}_{j}^{(S)}),\] and \(\mathbb{1}(\cdot)\) is an indicator function and \(d(\cdot)\) is the Hamming distance. The AA between the real test and synthetic dataset (\(AA_{TeS}\)) can be defined similarly. \(AA_{TrS}\) indicates the performance of an adversarial classifier responsible for distinguishing between the real training and the synthetic dataset. The ideal scenario is when \(AA_{TrS}\) equals \(0.5\), which means that it is impossible to differentiate between the real training and synthetic datasets.
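The adversarial accuracy above is straightforward to compute. Here is an illustrative NumPy sketch (ours), using the Hamming distance on label-encoded rows and excluding self-matches for the within-set minima; the \(O(n^{2}p)\) broadcasting is for clarity, not efficiency:

```python
import numpy as np

def adversarial_accuracy(train: np.ndarray, synth: np.ndarray) -> float:
    """Nearest-neighbor adversarial accuracy AA_TrS under Hamming distance."""
    def hamming(a, b):  # pairwise distance matrix, shape (len(a), len(b))
        return (a[:, None, :] != b[None, :, :]).mean(axis=2)

    d_ts = hamming(train, synth)          # rows give D_TrS, columns give D_STr
    d_tt = hamming(train, train)
    d_ss = hamming(synth, synth)
    np.fill_diagonal(d_tt, np.inf)        # exclude j = i for within-set minima
    np.fill_diagonal(d_ss, np.inf)
    leave_train = (d_ts.min(axis=1) > d_tt.min(axis=1)).mean()
    leave_synth = (d_ts.min(axis=0) > d_ss.min(axis=1)).mean()
    return 0.5 * (leave_train + leave_synth)
```

`AA_TeS` follows by passing the test set in place of the training set; values near 0.5 indicate the two sets are mutually indistinguishable by nearest neighbors.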
\begin{table} \begin{tabular}{l c c c} \hline \hline Model & survey & income & census \\ \hline MC-Gumbel & 96.0K & 119.8K & 117.1K \\ MC-WGAN-GP & 96.0K & 119.8K & 117.1K \\ WGAN-GP-A & 96.0K & 119.8K & 117.1K \\ \hline medGAN & 96.2K & 120.1K & 117.4K \\ MC-medGAN & 96.2K & 120.1K & 117.4K \\ MC-ARAE & 92.8K & 111.4K & 109.3K \\ corGAN & 96.2K & 120.1K & 117.4K \\ DAAE & 96.1K & 120.0K & 117.3K \\ \hline Ours & 95.6K & 119.4K & 116.6K \\ \hline \hline \end{tabular} \end{table} Table 3: The number of model parameters for each dataset. To simplify, when the generative model effectively replicates real data, the adversarial classifier finds it challenging to tell generated data apart from real data. This results in both the training and test adversarial accuracy (referred to as \(AA_{TrS}\) and \(AA_{TeS}\)) being approximately 0.5, and the privacy loss, which is defined as \(|AA_{TrS}-AA_{TeS}|\), becomes negligible. In essence, privacy is preserved. Conversely, if the generative model performs poorly and fails to mimic real data accurately, the adversarial classifier easily distinguishes between them. Consequently, both the training and test adversarial accuracy will be high, likely exceeding 0.5, and they will have similar values. Surprisingly, even in this case, privacy loss remains low. However, the usefulness of the generated synthetic data for practical purposes may be limited. Lastly, if the generator overfits the training data, the training adversarial accuracy will be high, indicating a good match with the training data. However, the test adversarial accuracy will hover around 0.5, indicating a poor resemblance to new, unseen data. In such a case, privacy is compromised, and the generative model struggles to generalize effectively to new data, resulting in a high privacy loss, nearing 0.5. For ease of interpretation, Table 7 reports the differences between \(AA_{TrS}\), \(AA_{TeS}\), and 0.5. And AA(train), AA(test), and AA(privacy) represent \(|AA_{TrS}-0.5|\), \(|AA_{TeS}-0.5|\), and \(|AA_{TrS}-AA_{TeS}|\), respectively. #### 4.2.4 Results **Metrics 1. Statistical Similarity.** In Tables 4, 5, and 6, we empirically assess how our proposed model performs in comparison to baseline models (both one-step and two-step learning methods) in terms of metrics that measure statistical similarity from both marginal and joint distribution perspectives. 
In the following result tables, we abbreviate the metrics as follows: KL-divergence as KL, the two-sample Kolmogorov-Smirnov test as KS, support coverage (category coverage) as Coverage, MSE of dimension-wise probability as DimProb, pairwise correlation difference using Pearson correlation as PCD(P), pairwise correlation difference using Kendall’s \(\tau\) correlation as PCD(K), log-cluster as log-cluster, and MSE of variable-wise prediction using multi-class classification accuracy as VarPred. Lastly, the Rank column represents the ranking of each model based on each metric, with the average rank computed for each model across all metrics.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{marginal} & \multicolumn{4}{c}{joint} \\ \cline{2-9} Model & KL \(\downarrow\) & KS \(\downarrow\) & Coverage \(\uparrow\) & DimProb \(\downarrow\) & PCD(P) \(\downarrow\) & PCD(K) \(\downarrow\) & log-cluster \(\downarrow\) & VarPred \(\downarrow\) & Rank \\ \hline MC-Gumbel & 0.130\(\pm\)0.120 & 0.106\(\pm\)0.049 & 0.997\(\pm\)0.002 & 1.585\(\pm\)0.753 & 7.513\(\pm\)1.397 & 7.360\(\pm\)1.387 & \(-\)2.781\(\pm\)0.643 & 3.154\(\pm\)0.082 & 7.5 \\ MC-WGAN-GP & 0.005\(\pm\)0.002 & 0.020\(\pm\)0.003 & **1.000**\(\pm\)0.000 & 0.278\(\pm\)0.051 & 4.585\(\pm\)0.049 & 4.544\(\pm\)0.044 & **\(-\)5.366**\(\pm\)0.357 & 3.054\(\pm\)0.039 & 3.3 \\ WGAN-GP-A & 0.006\(\pm\)0.001 & 0.017\(\pm\)0.003 & 0.875\(\pm\)0.008 & 0.237\(\pm\)0.050 & nan & nan & \(-\)3.312\(\pm\)0.238 & 3.058\(\pm\)0.025 & 6.8 \\ \hline medGAN & 0.652\(\pm\)0.037 & 0.302\(\pm\)0.013 & 0.999\(\pm\)0.011 & 3.647\(\pm\)0.109 & 7.384\(\pm\)0.136 & 7.310\(\pm\)1.59 & \(-\)1.883\(\pm\)0.044 & 1.577\(\pm\)0.140 & 8.1 \\ MC-medGAN & 0.542\(\pm\)0.037 & 0.254\(\pm\)0.012 & 1.000\(\pm\)0.020 & 2.619\(\pm\)0.100 & 7.985\(\pm\)0.136 & 7.826\(\pm\)0.148 & \(-\)1.842\(\pm\)0.055 & 3.081\(\pm\)0.071 & 8.8 \\ MC-ARAE & 0.185\(\pm\)0.059 & 0.128\(\pm\)0.028 & 0.716\(\pm\)0.157 & 1.769\(\pm\)0.360 & 20.140\(\pm\)0.060 & 20.050\(\pm\)0.060 & \(-\)1.878\(\pm\)0.134 & 3.061\(\pm\)0.137 & 9.2 \\ corGAN & 0.603\(\pm\)0.035 & 0.290\(\pm\)0.010 & 0.999\(\pm\)0.001 & 3.019\(\pm\)0.099 & 7.129\(\pm\)0.246 & 7.048\(\pm\)0.243 & \(-\)1.961\(\pm\)0.053 & **1.425**\(\pm\)0.193 & 7.2 \\ DAAE & 0.258\(\pm\)0.061 & 0.161\(\pm\)0.034 & 0.637\(\pm\)0.061 & 2.271\(\pm\)0.362 & nan & nan & \(-\)1.600\(\pm\)0.086 & 3.206\(\pm\)0.090 & 10.8 \\ \hline Ours(\(\lambda:0,\gamma:0\)) & 0.128\(\pm\)0.036 & 0.102\(\pm\)0.009 & 0.996\(\pm\)0.004 & 1.296\(\pm\)0.148 & 6.742\(\pm\)0.965 & 6.251\(\pm\)0.809 & \(-\)2.824\(\pm\)0.172 & 1.516\(\pm\)0.499 & 5.2 \\ Ours(\(H,\lambda:0,\gamma:0\)) & 0.008\(\pm\)0.000 & 0.017\(\pm\)0.002 & 0.965\(\pm\)0.003 & 0.254\(\pm\)0.017 & 5.355\(\pm\)0.200 & 5.316\(\pm\)0.264 & \(-\)2.916\(\pm\)0.119 & 2.496\(\pm\)0.729 & 4.8 \\ Ours(\(H,\lambda:100,\gamma:0\)) & 0.005\(\pm\)0.000 & **0.014**\(\pm\)0.001 & 0.973\(\pm\)0.002 & 0.199\(\pm\)0.013 & 4.629\(\pm\)0.034 & 4.604\(\pm\)0.035 & \(-\)3.883\(\pm\)0.176 & 3.022\(\pm\)0.012 & 3.6 \\ Ours(\(H,\lambda:100,\gamma:0.5\)) & **0.004**\(\pm\)0.000 & **0.014**\(\pm\)0.002 & 0.976\(\pm\)0.002 & **0.196**\(\pm\)0.015 & **4.418**\(\pm\)0.301 & **4.397**\(\pm\)0.296 & \(-\)4.037\(\pm\)0.282 & 2.986\(\pm\)0.107 & **2.6** \\ \hline \hline \end{tabular} \end{table} Table 4: Statistical similarity results from survey dataset. \(\uparrow\) denotes higher is better and \(\downarrow\) denotes lower is better. The best value is bolded, and the second best is underlined.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{marginal} & \multicolumn{4}{c}{joint} \\ \cline{2-9} Model & KL \(\downarrow\) & KS \(\downarrow\) & Coverage \(\uparrow\) & DimProb \(\downarrow\) & PCD(P) \(\downarrow\) & PCD(K) \(\downarrow\) & log-cluster \(\downarrow\) & VarPred \(\downarrow\) & Rank \\ \hline MC-Gumbel & 0.436\(\pm\)0.572 & 0.174\(\pm\)0.134 & 0.843\(\pm\)0.213 & 1.465\(\pm\)1.073 & 3.252\(\pm\)1.207 & 3.406\(\pm\)1.308 & \(-\)3.069\(\pm\)0.950 & 0.997\(\pm\)0.708 & 8.1 \\ MC-WGAN-GP & 0.011\(\pm\)0.002 & 0.021\(\pm\)0.005 & **1.000**\(\pm\)0.000 & 0.143\(\pm\)0.040 & **0.700**\(\pm\)0.089 & 0.732\(\pm\)0.123 & \(-\)5.692\(\pm\)0.340 & 0.129\(\pm\)0.009 & **1.9** \\ WGAN-GP-A & 0.031\(\pm\)0.003 & 0.027\(\pm\)0.007 & 0.836\(\pm\)0.015 & 0.201\(\pm\)0.053 & 0.757\(\pm\)0.227 & 0.749\(\pm\)0.245 & \(-\)5.353\(\pm\)0.273 & **0.127**\(\pm\)0.006 & 3.9 \\ \hline MC-medGAN & 0.704\(\pm\)0.006 & & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 5: Statistical similarity results from income dataset. \(\uparrow\) denotes higher is better and \(\downarrow\) denotes lower is better. The best value is bolded, and the second best is underlined.

Firstly, to assess the impact of regularization terms, we performed an ablation study by training the model with and without the entropy regularization term, Cramer-Wold distance, and classification loss regularization. In Tables 4, 5, and 6, \(H\) indicates the inclusion of the entropy regularization term, and \(\lambda\) and \(\gamma\) represent the weight parameters for the Cramer-Wold distance and the classification loss regularization used in (8). In other words, positive values of these weight parameters mean the incorporation of the respective regularization during training. Note that the training process of step 1 is equivalent to training a vanilla AutoEncoder without the entropy regularization term. Tables 4, 5, and 6 consistently demonstrate that as we add the entropy regularization term, the Cramer-Wold distance, and the classification loss regularization, the average rank progressively improves. This suggests that the entropy regularization term and the introduced regularization techniques in this paper are effective for both marginal and joint distributional learning, enhancing synthetic data generation performance. In particular, similar to the experimental results on the MNIST dataset in Section 4.1, it can be observed that using the entropy regularization term, which was not considered in many existing two-step learning methods, helps prevent the step 2 target distribution, the aggregated posterior, from becoming overly complex. This, in turn, contributes to improving synthetic data generation performance. The visualization results of sampled latent variables from the aggregated posterior for each dataset can be observed in Figure 2.
As shown in Figure 2, when the model is trained in step 1 without the entropy regularization term, we observe that the aggregated posterior exhibits a high level of complexity. Furthermore, in the absence of the entropy regularization term, the range of values that the latent variables can take is significantly expanded. In the visualization process, the dimension reduction technique PCA (Principal Component Analysis) is utilized to preserve the global structure of the sampled latent variables.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{marginal} & \multicolumn{4}{c}{joint} \\ \cline{2-9} Model & KL \(\downarrow\) & KS \(\downarrow\) & Coverage \(\uparrow\) & DimProb \(\downarrow\) & PCD(P) \(\downarrow\) & PCD(K) \(\downarrow\) & log-cluster \(\downarrow\) & VarPred \(\downarrow\) & Rank \\ \hline MC-Gumbel & 0.080\(\pm\)0.032 & 0.079\(\pm\)0.019 & 0.969\(\pm\)0.014 & 1.278\(\pm\)0.304 & 6.346\(\pm\)0.940 & 6.291\(\pm\)1.036 & \(-\)3.494\(\pm\)0.663 & 0.851\(\pm\)0.138 & 6.4 \\ MC-WGAN-GP & 0.009\(\pm\)0.002 & 0.024\(\pm\)0.004 & **1.000**\(\pm\)0.001 & 0.357\(\pm\)0.005 & 2.747\(\pm\)0.212 & **2.632**\(\pm\)0.190 & **\(-\)5.733**\(\pm\)0.405 & **0.239**\(\pm\)0.013 & **2.5** \\ WGAN-GP-A & 0.016\(\pm\)0.002 & 0.025\(\pm\)0.002 & 0.855\(\pm\)0.008 & 0.380\(\pm\)0.042 & nan & nan & \(-\)4.886\(\pm\)0.341 & 0.240\(\pm\)0.014 & 6.8 \\ \hline MC-medGAN & 0.64\(\pm\)0.004 & 0.302\(\pm\)0.013 & **1.000**\(\pm\)0.000 & 3.249\(\pm\)0.171 & 15.63\(\pm\)0.208 & 16.121\(\pm\)0.324 & \(-\)1.678\(\pm\)0.006 & 0.930\(\pm\)0.129 & 9.1 \\ medGAN & 0.534\(\pm\)0.016 & 0.305\(\pm\)0.007 & **1.000**\(\pm\)0.000 & 3.228\(\pm\)0.052 & 13.236\(\pm\)0.599 & 14.128\(\pm\)0.641 & \(-\)1.778\(\pm\)0.009 & 1.090\(\pm\)0.085 & 8.7 \\ MC-ARAE & 0.184\(\pm\)0.032 & 0.133\(\pm\)0.018 & 0.505\(\pm\)0.025 & 1.861\(\pm\)0.212 & nan & nan & \(-\)1.902\(\pm\)0.130 & 1.230\(\pm\)0.155 & 9.6 \\ corGAN & 0.471\(\pm\)0.038 & 0.285\(\pm\)0.014 & **1.000**\(\pm\)0.000 & 0.302\(\pm\)0.140 & 12.865\(\pm\)0.696 & 13.754\(\pm\)0.681 & \(-\)1.825\(\pm\)0.065 & 1.219\(\pm\)0.128 & 8.1 \\ DAAE & 0.409\(\pm\)0.102 & 0.229\(\pm\)0.042 & 0.541\(\pm\)0.058 & 3.323\(\pm\)0.578 & nan & nan & \(-\)1.631\(\pm\)0.068 & 1.983\(\pm\)0.449 & 10.9 \\ \hline Ours(\(\lambda:0,\gamma:0\)) & 0.057\(\pm\)0.009 & 0.079\(\pm\)0.008 & 0.098\(\pm\)0.001 & 1.040\(\pm\)0.118 & 6.485\(\pm\)0.633 & 6.776\(\pm\)0.763 & \(-\)3.608\(\pm\)0.299 & 0.330\(\pm\)0.029 & 5.8 \\ Ours(\(H,\lambda:0,\gamma:0\)) & 0.010\(\pm\)0.000 & 0.023\(\pm\)0.002 & 0.961\(\pm\)0.003 & 0.330\(\pm\)0.024 & 4.831\(\pm\)0.155 & 4.848\(\pm\)0.193 & \(-\)4.279\(\pm\)0.383 & 0.487\(\pm\)0.021 & 4.6 \\ Ours(\(H,\lambda:100,\gamma:0\)) & 0.006\(\pm\)0.000 & **0.020**\(\pm\)0.001 & 0.966\(\pm\)0.004 & **0.269**\(\pm\)0.013 & 3.211\(\pm\)0.154 & 3.099\(\pm\)0.144 & \(-\)5.323\(\pm\)0.178 & 0.325\(\pm\)0.018 & 3.0 \\ Ours(\(H,\lambda:100,\gamma:3\)) & **0.005**\(\pm\)0.001 & 0.021\(\pm\)0.002 & 0.953\(\pm\)0.005 & 0.275\(\pm\)0.021 & **2.744**\(\pm\)0.223 & 2.747\(\pm\)0.240 & \(-\)5.625\(\pm\)0.254 & **0.239**\(\pm\)0.010 & 2.6 \\ \hline \hline \end{tabular} \end{table} Table 6: Statistical similarity results from census dataset. \(\uparrow\) denotes higher is better and \(\downarrow\) denotes lower is better. The best value is bolded, and the second best is underlined.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{survey} & \multicolumn{3}{c}{income} & \multicolumn{3}{c}{census} \\ \cline{2-10} Model & AA(train) & AA(test) & AA(privacy) & AA(train) & AA(test) & AA(privacy) & AA(train) & AA(test) & AA(privacy) \\ \hline MC-Gumbel & 0.381\(\pm\)0.048 & 0.386\(\pm\)0.047 & 0.016\(\pm\)0.009 & 0.286\(\pm\)0.197 & 0.319\(\pm\)0.154 & 0.043\(\pm\)0.0\(\ldots\) & & & \\ \hline \hline \end{tabular} \end{table} Table 7: Privacy metric results. AA(train), AA(test), and AA(privacy) denote \(|AA_{TrS}-0.5|\), \(|AA_{TeS}-0.5|\), and \(|AA_{TrS}-AA_{TeS}|\), respectively.

Table 4 demonstrates that our proposed model attains the best average rank when evaluated on survey dataset. This indicates that our model generates synthetic data with the most effective performance. Furthermore, in comparison to the top-performing alternative model, MC-WGAN-GP, our model outperforms it by achieving the top scores on 5 out of 8 metrics, consistently showcasing superior performance (MC-WGAN-GP achieves the top scores on 2 out of 8 metrics). When analyzing Table 5, we observe that our proposed model, while having a slightly worse average rank compared to the top-performing alternative model, MC-WGAN-GP, excels when examining the metrics where the highest scores are achieved. MC-WGAN-GP outperforms in only 2 out of 8 metrics, whereas our proposed model secures the highest scores in 5 out of 8 metrics. This consistent performance across a range of metrics underscores the strong competitiveness of our approach to generating synthetic data. In the case of census dataset (Table 6), our proposed model falls very slightly behind the top-performing alternative model, MC-WGAN-GP, in average rank. Additionally, it achieves the highest metric score in 3 out of 8 metrics, which is slightly less than MC-WGAN-GP's 4 out of 8. However, our model secures either 1st or 2nd place in all metrics except the Coverage metric. Notably, in the marginal statistical similarity metrics, our model outperforms MC-WGAN-GP significantly. Thus, while our model's rank is worse in the Coverage metric, resulting in a worse average rank, it demonstrates competitive or superior performance in the other seven metrics. **Metrics 2. Privacy.** Comparing AA(train) in Table 7 against MC-WGAN-GP, the top-performing model among the statistical similarity metrics, our proposed model consistently shows competitive results, being the second-best performer in survey and income datasets. Notably, it outperforms MC-WGAN-GP by a significant margin in census dataset. This indicates that our model effectively replicates real data, making the generated data practically useful and challenging for the adversarial classifier to distinguish from real training data. In terms of AA(test) in Table 7, which represents the ability to reproduce results for unseen data, our model exhibits competitive performance across all datasets. Moreover, except for survey dataset, our model shows minimal differences between AA(train) and AA(test) compared to other models, indicating that it does not overfit the real training dataset.
Regarding privacy-preserving performance, as shown by AA(privacy) in Table 7, our proposed model demonstrates lower values than MC-WGAN-GP on the income and census datasets, and competitive values on the survey dataset. Hence, it can be concluded that our proposed model carries a lower risk of privacy leakage.

## 5 Conclusion and Limitations

This paper introduces a novel two-step learning framework designed to estimate both the marginal and joint distributions effectively. We achieve this by incorporating two regularization techniques: the Cramér-Wold distance and the classification loss. Moreover, we establish a theoretical distinction between our proposed approach and existing two-step learning methods. Specifically, we provide an accurate mathematical derivation of the objective function in step 1, introducing the previously overlooked entropy regularization term (the second term on the right-hand side of (4)). We demonstrate that this term plays a significant role in enhancing the synthetic data generation performance of our model, distinguishing it from conventional two-step learning.

For all datasets, the parameter \(\lambda\) consistently demonstrates good performance without the need for tuning, at a fixed value of 100. However, the weight parameter for the classification loss regularization and \(\beta\), the weight parameter for the KL-divergence in step 2, require dataset-specific tuning. Additionally, while our approach exhibits strong performance on most metrics measuring statistical similarity to the marginal distribution, it falls short of other models in terms of support coverage. Addressing this limitation will be part of our future work.

On the other hand, the introduction of Electronic Health Records (EHR) has led to the generation of vast amounts of data, and numerous studies have focused on the generation of synthetic EHRs to facilitate data-driven research while addressing privacy concerns and the risk of re-identification [8, 15, 37, 27, 51, 10, 32, 28]. Since many EHR datasets consist of discrete or categorical variables, we anticipate that our proposed methodology can be readily applied to synthetic data generation for EHR data. In this regard, our paper primarily focuses on improving synthetic data generation performance, and, due to the trade-off between the quality of synthetic data and privacy preservation, our approach provides a lower level of privacy protection than other models. Hence, our future work will involve refining the proposed method to enhance both synthetic data generation for EHR data and the model's privacy-preserving performance concurrently.
## Appendix A

### Mathematical Derivations

#### A.1.1 The forward KL-divergence

\[
\begin{aligned}
\mathcal{KL}(p(\mathbf{x})\,\|\,\hat{p}(\mathbf{x}))
&= \mathcal{KL}\left(p(\mathbf{x})\,\Big\|\int p(\mathbf{z})p(\mathbf{x}|\mathbf{z})d\mathbf{z}\right)\\
&= \int p(\mathbf{x})\log p(\mathbf{x})d\mathbf{x}-\mathbb{E}_{p(\mathbf{x})}\left[\log\int p(\mathbf{z})p(\mathbf{x}|\mathbf{z})d\mathbf{z}\right]\\
&= -H(p(\mathbf{x}))-\mathbb{E}_{p(\mathbf{x})}\left[\log\int p(\mathbf{z})p(\mathbf{x}|\mathbf{z})\frac{q(\mathbf{z}|\mathbf{x})}{q(\mathbf{z}|\mathbf{x})}d\mathbf{z}\right]\\
&\leq -H(p(\mathbf{x}))-\mathbb{E}_{p(\mathbf{x})}\left[\int q(\mathbf{z}|\mathbf{x})\log\left(p(\mathbf{x}|\mathbf{z})\frac{p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}\right)d\mathbf{z}\right]\\
&= \iint p(\mathbf{x})q(\mathbf{z}|\mathbf{x})\log p(\mathbf{x})\,d\mathbf{x}d\mathbf{z}-\iint p(\mathbf{x})q(\mathbf{z}|\mathbf{x})\log\frac{p(\mathbf{x}|\mathbf{z})p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}\,d\mathbf{x}d\mathbf{z}\\
&= \iint p(\mathbf{x})q(\mathbf{z}|\mathbf{x})\log\frac{p(\mathbf{x})q(\mathbf{z}|\mathbf{x})}{p(\mathbf{x}|\mathbf{z})q(\mathbf{z})}\,d\mathbf{x}d\mathbf{z}+\iint p(\mathbf{x})q(\mathbf{z}|\mathbf{x})\log\frac{q(\mathbf{z})}{p(\mathbf{z})}\,d\mathbf{x}d\mathbf{z}\\
&= \underbrace{\mathcal{KL}\Big(p(\mathbf{x})q(\mathbf{z}|\mathbf{x})\,\Big\|\,p(\mathbf{x}|\mathbf{z})q(\mathbf{z})\Big)}_{(i)}+\underbrace{\mathcal{KL}\Big(q(\mathbf{z})\,\Big\|\,p(\mathbf{z})\Big)}_{(ii)},
\end{aligned}
\]

where the inequality follows from Jensen's inequality applied to the concave \(\log\), \(q(\mathbf{z})=\int p(\mathbf{x})q(\mathbf{z}|\mathbf{x})d\mathbf{x}\) is the aggregated posterior, and \(H(\cdot)\) is the entropy function (for notational simplicity, we drop the notation for parameters).

#### A.1.2 The upper bound of the entropy regularization term

\[
\mathbb{E}_{p(\mathbf{x})}\big[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\,\|\,q(\mathbf{z};\phi))\big]
= \mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]-\mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z};\phi)],
\]
and, by Jensen's inequality,
\[
\begin{aligned}
\mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z};\phi)]
&= \int q(\mathbf{z};\phi)\log q(\mathbf{z};\phi)d\mathbf{z}\\
&= \int q(\mathbf{z};\phi)\log\left(\int p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)d\mathbf{x}\right)d\mathbf{z}\\
&\geq \iint p(\mathbf{x})q(\mathbf{z};\phi)\log q(\mathbf{z}|\mathbf{x};\phi)\,d\mathbf{x}d\mathbf{z}.
\end{aligned}
\]
Therefore,
\[
\mathbb{E}_{p(\mathbf{x})}\big[\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\,\|\,q(\mathbf{z};\phi))\big]\leq\mathbb{E}_{p(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)]-\mathbb{E}_{p(\mathbf{x})q(\mathbf{z};\phi)}[\log q(\mathbf{z}|\mathbf{x};\phi)].
\]
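As a numerical sanity check of the bound in A.1.2, the following minimal sketch uses a hypothetical one-dimensional Gaussian toy model (not the paper's encoder): with \(p(\mathbf{x})=\mathcal{N}(0,1)\) and \(q(\mathbf{z}|\mathbf{x};\phi)=\mathcal{N}(a\mathbf{x},\sigma^{2})\), the aggregated posterior is \(\mathcal{N}(0,a^{2}+\sigma^{2})\) and the left-hand side equals \(\tfrac{1}{2}\log(1+a^{2}/\sigma^{2})\) in closed form, which can be compared against a Monte Carlo estimate of the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, n = 1.0, 1.0, 200_000   # toy Gaussian encoder q(z|x) = N(a*x, sigma^2)

x = rng.normal(size=n)
z = a * x + sigma * rng.normal(size=n)            # z ~ q(z|x)
x_indep = rng.normal(size=n)
z_agg = a * x_indep + sigma * rng.normal(size=n)  # z ~ q(z), the aggregated posterior

def log_q_cond(z, x):
    # log q(z|x; phi) for the Gaussian encoder
    return -0.5 * np.log(2 * np.pi * sigma**2) - (z - a * x) ** 2 / (2 * sigma**2)

# Right-hand side of the bound: E[log q(z|x)] - E_{p(x)q(z)}[log q(z|x)]
bound = log_q_cond(z, x).mean() - log_q_cond(z_agg, x).mean()
exact = 0.5 * np.log(1 + a**2 / sigma**2)         # closed-form E_p[KL(q(z|x) || q(z))]
print(f"exact {exact:.3f} <= Monte Carlo bound {bound:.3f}")
```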
2308.14729
Laser Scheme for Doppler Cooling of the Hydroxyl Cation (OH$^+$)
We report on a cycling scheme for Doppler cooling of trapped OH$^+$ ions using transitions between the electronic ground state $X^3\Sigma^-$ and the first excited triplet state $A^3\Pi$. We have identified relevant transitions for photon cycling and repumping, have found that coupling into other electronic states is strongly suppressed, and have calculated the number of photon scatterings required to cool OH$^+$ to a temperature where Raman sideband cooling can take over. In contrast to the standard approach, where molecular ions are sympathetically cooled, our scheme does not require co-trapping of another species and opens the door to the creation of pure samples of cold molecular ions with potential applications in quantum information, quantum chemistry, and astrochemistry. The laser cooling scheme identified for OH$^+$ is efficient despite the absence of near-diagonal Franck-Condon factors, suggesting that broader classes of molecules and molecular ions are amenable to laser cooling than commonly assumed.
Niccolò Bigagli, Daniel W. Savin, Sebastian Will
2023-08-28T17:32:44Z
http://arxiv.org/abs/2308.14729v1
# Laser Scheme for Doppler Cooling of the Hydroxyl Cation (OH\({}^{+}\)) ###### Abstract We report on a cycling scheme for Doppler cooling of trapped OH\({}^{+}\) ions using transitions between the electronic ground state \(X^{3}\Sigma^{-}\) and the first excited triplet state \(A^{3}\Pi\). We have identified relevant transitions for photon cycling and repumping, have found that coupling into other electronic states is strongly suppressed, and have calculated the number of photon scatterings required to cool OH\({}^{+}\) to a temperature where Raman sideband cooling can take over. In contrast to the standard approach, where molecular ions are sympathetically cooled, our scheme does not require co-trapping of another species and opens the door to the creation of pure samples of cold molecular ions with potential applications in quantum information, quantum chemistry, and astrochemistry. The laser cooling scheme identified for OH\({}^{+}\) is efficient despite the absence of near-diagonal Franck-Condon factors, suggesting that broader classes of molecules and molecular ions are amenable to laser cooling than commonly assumed. ## I Introduction Laser cooling and quantum control of atoms and atomic ions has enabled a plethora of scientific investigations over the last decades [1; 2; 3; 4]. Over recent years, the field has expanded towards molecules [5; 6; 7; 8; 9], as their more complex quantum-state structure opens a broader scope of physics to be studied, including measurements of fundamental constants [10; 11; 12], investigations of quantum chemistry [13; 14; 15], applications in quantum information [6; 7; 16], and access to novel many-body quantum systems [17]. Identifying molecules that are relevant for scientific and technological applications and at the same time amenable to laser cooling is an active area of research but, so far, it has almost exclusively focused on neutral molecules [18; 19; 20; 21; 22; 23]. Molecular ions have broad scientific use cases [6]. Progress on laser cooling and quantum control schemes for molecular ions promises to open new scientific avenues, building on the enormous success of atomic ions in quantum science [1; 24]. For quantum information, the rich internal structure of molecular ions may allow the realization of efficient gate operations and long qubit storage times [16; 25] in the same physical system, akin to neutral molecules [26; 27; 28]. Molecular ions enable Coulomb-mediated two-qubit operations [29], and have been proposed as an alternative to neutral molecules [30; 31]. They also may allow the realization of qudits, multi-level systems that constitute a powerful extension of the qubit-based quantum information paradigm [32; 33]. In addition, cold molecular ions in well-defined quantum states can be employed in collisional studies in quantum chemistry and astrochemistry. For example, gas-phase chemistry in cold interstellar clouds is driven by ion-neutral reactions where the ions are in their lowest electronic, vibrational, and rotational levels due to the relaxation of internal excitations close to the 2.7 K cosmic microwave background [34; 35]. Fully quantum mechanical calculations for these reactions are beyond current computational capabilities for systems with four or more atoms. Therefore, laboratory measurements with molecules in quantum states similar to those in interstellar space would be helpful to elucidate the chemical kinetics [36; 37; 38; 39; 40]. 
Today, the standard approach to the preparation of trapped molecular ions relies on sympathetic cooling via a co-trapped ionic [28; 41] or neutral [42] atomic species. Direct laser cooling of molecular ions has not been explored extensively. Although cooling via co-trapped species has proven effective and useful [31], it is technically challenging. In addition, for sensitive applications in quantum information and precision measurements, where high fidelity and low dephasing are critical [43], the presence of a second species may eventually be limiting [30; 44]. All-optical cooling schemes should be highly attractive but, so far, only a few theoretical studies on the prospects of laser cooling of molecular ions exist [44], and most have not accounted for the full rovibrational structure of the ion under investigation [45; 46; 47].

In this article, we discuss a laser-cooling scheme for the hydroxyl cation, OH\({}^{+}\). We envision the scheme to be applied to a single ion or an ensemble of ions held in a deep and tightly confining ion trap, as schematically illustrated in Fig. 1 (a). In this setting, three-dimensional laser cooling can be achieved with cooling and repumping laser beams incident from a single direction that has a finite angle with all three Cartesian axes of the trap [48]. The proposed scheme can provide cooling in the Doppler regime, where the motional energy of the ion is significantly larger than the trap frequency [31]. To cool below the Doppler limit, a secondary Raman sideband cooling step may be employed [44]. However, in this work we solely focus on the description of the Doppler cooling scheme for OH\({}^{+}\).

The OH\({}^{+}\) ion is particularly relevant in the context of astrophysics and astrochemistry. The production of cold, pure, and trapped samples of OH\({}^{+}\) would enable reaction studies that can help shed light on processes such as the cosmic ray ionization rate of the interstellar medium [49] and the gas-phase pathway to the formation of water [50]. Additional uses of laser-cooled OH\({}^{+}\) ions may be in quantum information. Due to large rotational spacings (\(B\sim 500\) GHz [51]), coherent control of rotational qubits in OH\({}^{+}\) would be challenging but possible via a two-photon Raman transfer. Given that Doppler cooling schemes have not been widely discussed for molecular ions, this study also uses OH\({}^{+}\) as a proof-of-concept case that may encourage the development of laser cooling schemes for other molecular ions.

The relevant low-lying potential energy curves of OH\({}^{+}\) are shown in Fig. 1 (b) [52]. OH\({}^{+}\) has favorable characteristics for laser cooling. Its energy level structure is relatively simple, with its first few electronic states having either a triplet or a singlet nature. Furthermore, OH\({}^{+}\) is extremely tightly bound. As can be seen from the minima of the potential energy curves (see Fig. 1), the bond length is about one \(a_{0}\), where \(a_{0}\) is the Bohr radius. As a result, the vibrational and rotational energy spacings are large, of the order of 300 cm\({}^{-1}\) and 60 cm\({}^{-1}\), respectively. Due to the large vibrational spacing, only about a dozen vibrational states exist in the \(X^{3}\Sigma^{-}\) potential below the \(A^{3}\Pi\) potential, which limits the number of decay channels for potential transitions of a laser cooling scheme.
However, there is no transition in OH\({}^{+}\) with a near-diagonal Franck-Condon factor (FCF), a characteristic that is often believed to be necessary for a functional cooling scheme [44]. The highest branching ratio for a single transition is 0.56 [53]. Despite this complication, as we demonstrate below, OH\({}^{+}\) supports a laser cooling scheme that is not more complex than state-of-the-art cooling schemes for neutral molecules [54]. OH\({}^{+}\) also has a non-zero nuclear spin of \(I=1/2\) [55], which we do not explicitly take into account in this study. However, we do not expect the resulting hyperfine structure to fundamentally prevent the laser cooling from functioning, as has been seen in earlier demonstrations of laser cooling [56].

Figure 1: Laser cooling of OH\({}^{+}\) ions trapped in an ion trap. (a) Schematic of an OH\({}^{+}\) ion in a cylindrical quadrupole ion trap [57]. The left image shows a three-dimensional sketch of the ion trap; the right image a cross section, including field lines. The specific trap shown is for illustration purposes only; the cooling scheme is general and can be implemented in other types of ion traps. The cooling and repumping beams are incident collinearly from one direction that has a finite angle with all three Cartesian axes of the trap, providing cooling in three dimensions. (b) Potential energy curves of OH\({}^{+}\) [52]. Depicted are the low-lying electronic potentials relevant to this study, two with triplet spin character and two with singlet spin character. The inset shows a pictorial representation of an OH\({}^{+}\) molecular ion.

## II Laser Cycling Scheme

The proposed cycling scheme makes use of transitions between the electronic ground state of the molecule, \(X^{3}\Sigma^{-}\), and its first electronically excited triplet state, \(A^{3}\Pi\). The specific transitions of the scheme are shown in Fig. 2. We explain below how a sufficient degree of closure can be reached. The required spectroscopic data were extracted from the ExoMol database [58], which for OH\({}^{+}\) relies on the optical transitions published in Refs. [51; 59]. We utilize the provided energy levels, transition frequencies, and Einstein \(A\) coefficients.

The logic of the cooling scheme is as follows: We start from the absolute molecular ground state, \(X^{3}\Sigma^{-}\) \(|v=0,\,N=0,\,J=1\rangle\), following Hund's case (b) [60], where \(v\), \(N\), and \(J\) are the vibrational quantum number, the angular momentum excluding electron and nuclear spin, and the angular momentum excluding nuclear spin, respectively. Due to the \(\Delta J=\pm 1\) selection rule, excitation both to \(J=0\) and \(J=2\) states is possible. Excitation to \(J=0\) is favorable, as the only decay channel is to \(J=1\), reducing the number of needed repumper lasers by a factor of two. In addition, excitation to a low-lying vibrational state is favorable due to higher FCFs. Thus, we choose \(A^{3}\Pi\) \(|0,\,1,\,0\rangle\) as the first excited state of the scheme. The main decays of this state are to the eight \(X^{3}\Sigma^{-}\) states shown in Fig. 2, with only two other observed decays with negligible branching ratios (\(\sim 10^{-8}\)). Overall, the scattering rate of \(A^{3}\Pi\) \(|0,\,1,\,0\rangle\) is relatively small at \(3.5\times 10^{5}\) s\({}^{-1}\) and by itself would lead to long cooling times. By adding a second excited state to the cycling scheme, the \(A^{3}\Pi\) \(|1,\,1,\,0\rangle\) state, it is possible to speed up the cooling by about a factor of two, as we discuss below.

The excited \(A^{3}\Pi\) \(|0,\,1,\,0\rangle\) and \(A^{3}\Pi\) \(|1,\,1,\,0\rangle\) states have 6 and 8 decay channels, respectively, with branching ratios above \(10^{-3}\), as shown in Fig. 3. This is a practical threshold for the relevance of a transition in a typical laser cooling scheme [61]. Branching ratios are calculated using the Einstein coefficients \(A_{i}\) for spontaneous decay to a state \(i\) [58] through the relation \(\mathrm{BR}_{i}=A_{i}/\sum_{j}A_{j}\), where the sum runs over all possible radiative decay paths to a lower energy state \(j\) for a given \(A^{3}\Pi\) state.
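As a minimal illustration of this relation (the Einstein \(A\) coefficients below are placeholders, not values from the ExoMol line list):

```python
import numpy as np

# Hypothetical Einstein A coefficients (s^-1) for the decay channels of one
# A^3Pi level; the actual values come from the ExoMol database.
A = np.array([2.0e5, 1.0e5, 4.0e4, 8.0e3, 1.5e3, 4.0e2])

BR = A / A.sum()                 # BR_i = A_i / sum_j A_j
relevant = BR > 1e-3             # practical relevance threshold used in the text
print(np.round(BR, 4), "channels above 1e-3:", int(relevant.sum()))
```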
We have investigated the question of whether loss to other states could potentially hamper the \(X^{3}\Sigma^{-}\leftrightarrow A^{3}\Pi\) cooling scheme. We find that the effects of state mixing between \(A^{3}\Pi\) and the singlet states \(b^{\,1}\Sigma^{+}\) and \(a^{\,1}\Delta\) should be minimal. For the nearby \(b^{\,1}\Sigma^{+}\) state, spin-orbit coupling with a strength of about 75 cm\({}^{-1}\) [62] leads to an admixture of 0.5% \(b^{\,1}\Sigma^{+}\) \(|v=0\rangle\) to the \(A^{3}\Pi\) \(|v=0,1\rangle\) states, calculated from the diagonalization of the Hamiltonian of these states with an off-diagonal spin-orbit contribution. Although this is a non-negligible admixture, it is not expected to lead to relevant losses out of the cycling scheme (see Appendix). This is due to the unusual structure of OH\({}^{+}\), where decay from \(b^{\,1}\Sigma^{+}\) to \(a^{\,1}\Delta\) is suppressed by the \(\Delta\Lambda=0,\pm 1\) selection rules [63]. Furthermore, direct coupling between \(b^{\,1}\Sigma^{+}\) and vibrationally excited \(a^{\,1}\Delta\) states should not be an issue, as the only loss channel via mixing of \(a^{\,1}\Delta\) can be due to relaxation into lower vibrational states of \(a^{\,1}\Delta\). However, spontaneous decay between vibrational states is suppressed to first order [63], and driving of such transitions by black-body radiation would be slow compared to the timescale of the cooling scheme. In line with these arguments, to our knowledge, such decays in OH\({}^{+}\) have not been reported. Finally, we note that loss from predissociation [64] is fully suppressed for the \(X^{3}\Sigma^{-}\leftrightarrow A^{3}\Pi\) cooling scheme.

## III Results & Discussion

Using the branching ratios from Fig. 3, we quantify the closure that can be achieved by adding an increasing number of repumping transitions. We calculate the number of scatterings that will lead to a probability of 10% (\(n_{10\%}\)) and 90% (\(n_{90\%}\)) for retaining an ion in the cycling scheme [65]. These calculations are made assuming the use of two excited states, as shown in Fig. 2. We employ a Monte Carlo model in which an ion is initialized in the ground state and each scattering event is simulated by updating the probability of populating each state in the cooling scheme after each step. Table 1 shows the results of these calculations.

| Driven transitions | \(p\) | \(n_{10\%}\) | \(t_{10\%}\) (ms) | \(n_{90\%}\) | \(t_{90\%}\) (ms) |
| --- | --- | --- | --- | --- | --- |
| F1 | 0.631 | 5 | 0.02 | 1 | - |
| F1-F2 | 0.873 | 17 | 0.1 | 1 | - |
| F1-F3 | 0.950 | 45 | 0.5 | 3 | 0.02 |
| F1-F4 | 0.994 | 361 | 5 | 17 | 0.2 |
| F1-F4, S4 | 0.997 | 918 | 17 | 43 | 0.8 |
| F1-F4, S4, S6 | 0.9986 | 15,941 | 800 | 731 | 36 |
| F1-F4, S4, S6-S7 | 0.99995 | 44,287 | \(2.6\times 10^{3}\) | 2028 | 100 |
| F1-F4, S4, S6-S8 | 0.9999997 | 2,563,381 | \(1.9\times 10^{6}\) | 117,295 | \(9.0\times 10^{4}\) |

Table 1: Closure, number of scatterings, and scattering time for an increasing number of repumping transitions.

The quantity \(p\) is the closure of the scheme, calculated via \(n_{x\%}=\ln(x/100)/\ln(p)\) [65]. A conservative estimate of the time to complete \(n_{x\%}\) scattering events, \(t_{x\%}\), is also provided. To calculate this quantity, we use the relation \(t_{x\%}=n_{x\%}/R\), where \(R=\Gamma/(G+1+2\sum_{i} I_{\mathrm{sat},i}/I_{l})\) is the scattering rate [66; 67].
Here, \(\Gamma\) is the excited state linewidth; \(G\) is the number of driven transitions; \(I_{\mathrm{sat},i}=\pi hc\Gamma/3\lambda_{i}^{3}\) is the saturation intensity [67] of the \(i^{\rm th}\) transition, where \(c\) is the speed of light and \(\lambda_{i}\) the wavelength of the addressed transition; and \(I_{l}\) is the intensity of the laser addressing said transition, set to \(I_{l}=10^{3}\,{\rm mW\,cm^{-2}}\), an intensity that is easily achievable in an experiment at the given wavelengths. This results in an overestimate of \(t_{x\%}\), as the expression for \(R\) assumes a single excited state, only \(A^{3}\Pi\ |0,1,0\rangle\). In an experiment, where several repumping transitions cycle through a second excited state, here \(A^{3}\Pi\ |1,1,0\rangle\), the scattering rate is boosted and the cooling time lowered, as has been experimentally demonstrated in laser cooling schemes for neutral molecules, e.g., in Ref. [22].

Figure 2: Laser cooling scheme. The green (purple) arrows represent the cooling and repumping beams from the \(X^{3}\Sigma^{-}\) ground electronic state to the \(A^{3}\Pi\) \(|0,1,0\rangle\) (\(|1,1,0\rangle\)) state. The numbers on the left represent the quantum numbers \(|v,N,J\rangle\). Next to each arrow, we indicate the wavelength of each transition. Transitions from left to right are ordered from smaller-to-larger transition wavelength and larger-to-smaller transition strength. The labels above each transition for the first (F) and second (S) legs of the cooling scheme are introduced for easier comparison to Fig. 3.

Figure 3: Branching ratios for decay from the excited states (a) \(A^{3}\Pi\) \(|0,1,0\rangle\) and (b) \(A^{3}\Pi\) \(|1,1,0\rangle\). The decay paths for each scheme are numbered in order of decreasing branching ratios, and the labels and color coding refer to the transitions shown in Fig. 2. All decays are to the same states in the \(X^{3}\Sigma^{-}\) manifold illustrated in Fig. 2.

We calculate the number of scatterings, \(n_{\rm cool}\), required to bring the sample to close-to-zero motional temperature from room (300 K) and cryogenic (4 K) temperatures. In addition, we calculate the time \(t_{\rm cool}\) necessary for this process. For the experimental setup, we assume that the ion is electrostatically trapped and that cooling and repumping laser beams come from a single direction that has a finite angle with all three Cartesian trap axes, as illustrated in Fig. 1 (a). Using a single incoming direction for the lasers enables cooling in all spatial directions thanks to the confinement provided by the ion trap, as suggested in Ref. [48], and greatly simplifies the laser setup. Table 2 summarizes the results.

| | \(n_{\rm cool}\) | \(t_{\rm cool}\) (ms) |
| --- | --- | --- |
| \(T=4\) K | \(2.3\times 10^{3}\) | 45 |
| \(T=300\) K | \(2.0\times 10^{4}\) | 400 |

Table 2: Results of the photon-scattering analysis starting at cryogenic and room temperature.

Cooling OH\({}^{+}\) ions down from room temperature will require \(2\times 10^{4}\) scattering events. Comparing this requirement to the results in Table 1, this can be achieved with 90% efficiency within \(<500\) ms using 8 transitions (F1-F4, S4, and S6-S8). Cooling OH\({}^{+}\) ions from cryogenic temperatures will require \(2.3\times 10^{3}\) scattering events, which can be achieved with 90% efficiency within \(<50\) ms using 6 transitions (F1-F4, S4, and S6). We note that OH\({}^{+}\) can be easily produced at cryogenic temperatures, as demonstrated in earlier work via electron impact on water vapor [38; 40]. It is important to note that in practical experiments, less stringent requirements on the probability of retention should be acceptable. For a 10% probability of retention, cooling from room temperature can be achieved with 6 to 7 lasers (F1-F4, S4, S6-S7) within about 1 s, and from cryogenic temperatures with 5 to 6 lasers (F1-F4, S4, S6) within about 50 ms. Hence, on average it would take 10 s and 0.5 s, respectively, to successfully initialize the system, a time that is short compared to the storage time of a molecular ion once it is trapped and cooled.
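To make the quantities above concrete, the following minimal sketch evaluates the closure relation \(n_{x\%}=\ln(x/100)/\ln(p)\), the saturation intensities, and the resulting scattering rate and time; the transition wavelengths are placeholders (the actual values are listed in Fig. 2), so the printed numbers are illustrative only:

```python
import numpy as np

h, c = 6.626e-34, 2.998e8   # Planck constant (J s), speed of light (m/s)
Gamma = 3.5e5               # scattering rate of A^3Pi |0,1,0> from the text (s^-1)
I_l = 1e4                   # laser intensity: 10^3 mW/cm^2 = 1e4 W/m^2

def n_scatterings(p, retention):
    """Scatterings until the retention probability drops to `retention`, given closure p."""
    return np.log(retention) / np.log(p)

def scattering_rate(wavelengths_m):
    """R = Gamma / (G + 1 + 2 sum_i I_sat,i / I_l), with I_sat,i = pi*h*c*Gamma / (3 lambda_i^3)."""
    I_sat = np.pi * h * c * Gamma / (3 * np.asarray(wavelengths_m) ** 3)
    return Gamma / (len(wavelengths_m) + 1 + 2 * np.sum(I_sat / I_l))

lams = np.array([3.7e-7, 3.8e-7, 3.9e-7, 4.0e-7])  # placeholder wavelengths (m)
p = 0.994                                          # closure with F1-F4 driven (Table 1)
n90 = n_scatterings(p, 0.9)
R = scattering_rate(lams)
print(f"n_90% = {n90:.0f}, t_90% = {1e3 * n90 / R:.2f} ms")  # ~17 scatterings, ~0.2 ms
```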
## Conclusions

In summary, we have presented a direct laser cooling scheme for OH\({}^{+}\). The scheme is expected to work both for a single trapped ion and for trapped ensembles. In the case of an ensemble, the direction of the laser cooling beams could even be arbitrary, as the Coulomb interactions in the ensemble are expected to couple all directions of motion [31]. While the scheme requires some technical effort, laser cooling of OH\({}^{+}\) would come with several benefits: cold samples can be prepared without the need for sympathetic cooling via neutral or ionic atoms, and temperatures can efficiently be reached where Raman sideband cooling can take over to cool ions to the vibrational ground state (of the trap). Rapid progress in laser technology [68] promises to drastically reduce the technical complications involved in a cooling scheme with multiple lasers. Also, due to the confined trapping region of an ion trap, the lasers can be tightly focused, and we expect that only very moderate laser powers are necessary. Once cooled, OH\({}^{+}\) will enable studies relevant for the gas-phase astrochemistry of interstellar clouds. Furthermore, quantum control of OH\({}^{+}\) will be an enabling step for its use as a qubit or qudit in quantum information experiments. Finally, this work, similar to the case of C\({}_{2}\), which we discussed in earlier work [61], suggests that molecules with non-diagonal FCFs can also be amenable to laser cooling. We hope this work will inspire further investigations into the application of laser cooling techniques to more complex molecules that are scientifically relevant but have so far been deemed unsuitable for laser cooling.

## Acknowledgements

We thank Peter F. Bernath, Tim de Jongh, Abel Kalosi, Ian Stevenson, and Sergey Yurchenko for stimulating discussions, and Octavio Roncore for help with the potential energy curves. This work was supported by a Columbia University Research Initiative in Science and Engineering (RISE) award. D. W. Savin was additionally supported by the NASA Astrophysics Research and Analysis program under 80NSSC19K0969. S. W. acknowledges additional support from the Alfred P. Sloan Foundation.

## Appendix

As noted in the main text, a loss channel for the \(X^{3}\Sigma^{-}\leftrightarrow A^{3}\Pi\) cooling scheme could arise from spin-orbit coupling between \(A^{3}\Pi\) and nearby singlet states. Here, we provide more detail to show that such processes should be sufficiently suppressed for practical purposes.
As mentioned in the main text, the 0.5% admixture of \(b^{\,1}\Sigma^{+}\ket{v=0}\) to the \(A^{3}\Pi\ket{v=0,1}\) states does not open up a loss channel to first order due to prohibitive selection rules. To second order, decay could happen via the admixture of \({}^{1}\Pi\) character to the \(b^{\,1}\Sigma^{+}\) potential, with the closest \({}^{1}\Pi\) state lying about 15,000 cm\({}^{-1}\) above the \(b^{\,1}\Sigma^{+}\) potential [52]. With a coupling constant of 75 cm\({}^{-1}\) (assuming a coupling similar to that between \(b^{\,1}\Sigma^{+}\) and \(A^{3}\Pi\); to our knowledge there is no reported literature value), this leads to a \(10^{-5}\) admixture of \({}^{1}\Pi\) character to the \(b^{\,1}\Sigma^{+}\) state. Taken together, this amounts to a loss channel at the \(5\times 10^{-8}\) level. Loss channels allowed through higher-order processes should contribute significantly less. Direct coupling between \(A^{3}\Pi\) and \(a^{\,1}\Delta\) is also not expected to lead to significant loss: the only decay path out of the cycling scheme would be spontaneous decay into lower-lying vibrational states of \(a^{\,1}\Delta\), which is forbidden [63]. Based on these arguments, it is highly probable that the proposed cooling scheme will be sufficiently closed, especially taking into account that only closure to the \(10^{-4}\) level will be needed for the scheme to be practically useful (see main text and Table 2).
2307.02916
The impact of an employee's psychological contract breach on compliance with information security policies: intrinsic and extrinsic motivation
Despite the rapid rise in social engineering attacks, not all employees are as compliant with information security policies (ISPs) to the extent that organisations expect them to be. ISP non-compliance is caused by a variety of psychological motivations. This study investigates the effect of psychological contract breach (PCB) of employees on ISP compliance intention (ICI) by dividing them into intrinsic and extrinsic motivation using the theory of planned behaviour (TPB) and the general deterrence theory (GDT). Data analysis from UK employees (\textit{n=206}) showed that the higher the PCB, the lower the ICI. The study also found that PCBs significantly reduced intrinsic motivation (attitude and perceived fairness) for ICI, whereas PCBs did not moderate the relationship between extrinsic motivation (sanction severity and sanction certainty) and ICI. As a result, this study successfully addresses the risks of PCBs in the field of IS security and proposes effective solutions for employees with high PCBs.
Daeun Lee, Harjinder Singh Lallie, Nadine Michaelides
2023-07-06T11:07:39Z
http://arxiv.org/abs/2307.02916v1
# The Impact of an Employee's Psychological Contract Breach on Compliance with Information Security Policies: Intrinsic and Extrinsic Motivation

###### Abstract

Despite the rapid rise in social engineering attacks, not all employees are as compliant with information security policies (ISPs) to the extent that organisations expect them to be. ISP non-compliance is caused by a variety of psychological motivations. This study investigates the effect of psychological contract breach (PCB) of employees on ISP compliance intention (ICI) by dividing them into intrinsic and extrinsic motivation using the theory of planned behaviour (TPB) and the general deterrence theory (GDT). Data analysis from UK employees (_n=206_) showed that the higher the PCB, the lower the ICI. The study also found that PCBs significantly reduced intrinsic motivation (attitude and perceived fairness) for ICI, whereas PCBs did not moderate the relationship between extrinsic motivation (sanction severity and sanction certainty) and ICI. As a result, this study successfully addresses the risks of PCBs in the field of IS security and proposes effective solutions for employees with high PCBs.

_Keywords:_ psychological contract; psychological contract breach; cybersecurity behaviour; information system security; information security policies

**Competing interests.** The authors declare that we have no competing financial or non-financial interests that are directly or indirectly related to the work submitted for publication.

## 1 Introduction

Organisational information security breaches can largely be explained by human error and omission (ISF, 2020). In other words, if employees deliberately or unintentionally fail to keep information safe, technical countermeasures alone are insufficient to protect it. Accordingly, various psychological factors motivating employees' failure to comply with ISPs have been raised in the cyber security literature. Among them, the psychological contract (PC) has been presented as one of the significant human factors provoking employees' cybersecurity behaviours (Ertan et al., 2018; Leach, 2003). The PC is a set of beliefs about reciprocal obligations between an employee and an employer (Robinson and Wolfe Morrison, 2000). According to existing research, psychological contract breaches (PCBs) provoke poor organisational citizenship behaviours (Mai et al., 2016a) and even poor work performance (Bal et al., 2013). These results imply that employees' PCBs are likely to reduce their ISP compliance intentions. However, empirical studies concerning the direct correlation between PCB and ISP compliance intentions have not been sufficiently conducted to date.

This research aims to evaluate the impact of PCB, a new potential psychological factor, on deficient ISP compliance intentions. The research also measures the impact of PCB on intrinsic and extrinsic motivation towards ISP compliance intentions in order to examine the risks of PCB from multiple angles. Consequently, the study can address the important role of PCB in IS security and provide a set of suggestions for employees with high PCBs. The rest of this paper is structured as follows. Section 2 analyses the existing literature on PCB and ISP compliance intention to develop the research hypotheses. Section 3 presents the data analysis and results based on the research framework. Section 4 interprets and analyses the results to answer the research questions. Finally, Section 5 describes the conclusions, recommendations, and limitations of this study.
## 2 Background

### Psychological Contract

The psychological contract has emerged as one of the most crucial factors in workforce management. Unlike a documented contract, the psychological contract is unwritten and refers to an individual's beliefs about the mutual obligations between an employee and an organisation (Rousseau, 1989). When an employee perceives that the organisation is obliged to reciprocate his or her contributions, a psychological contract is created. The contract is constituted by paid-for promises (e.g., a high salary, promotion, long-term job security, or career development) made in exchange for some implied or stated consideration, such as hard work, accepting training, or transfers. Thus, psychological contracts are viewed as unwritten promises, not as expectations. This leads employees to feel disappointed when psychological contracts are breached (Robinson and Rousseau, 1994).

The consequences of psychological contract breaches have been found to negatively impact perceived obligations towards an employer, citizenship behaviour, commitment, satisfaction, intentions to remain, and even work performance (Robinson, 1996; Robinson and Rousseau, 1994; Robinson et al., 1994). For example, employees who have experienced PCB do not tend to contribute to their organisation, since they have no expectation of the future benefits that are the organisation's obligation. Moreover, extreme cases of psychological contract breach could result in retaliation, sabotage, identity theft, and aggressive behaviour (Morrison and Robinson, 1997). Recent empirical studies have found PCB to negatively impact organisational behaviour (AL-Abrow et al., 2019; Mai et al., 2016), job satisfaction, commitment and intention to leave (Trybou and Gemmel, 2016), user resistance to information system implementation (Lin et al., 2018), trust in the organisation (Abela and Debono, 2019), and productive work behaviour (Ma et al., 2019). PCB could also lead to cybercrime conducted as a result of insider threats brought about by the breach. However, this has not been thoroughly investigated.

### The Relationship Between Psychological Contracts and Intention to Comply with Information Security Policies

ISP (information security policy) refers to any document that covers security programs, system controls, and user behaviour within an organisation to realise security objectives (Landoll, 2017). ISPs can be categorised into four levels: organisational-level policies, security program-level policies, user-level policies, and system and control-level policies. Among these, the present study focuses on user-level policies in order to identify the psychological factors that influence an employee's behaviour and intentions. According to _ISO (International Standards Organisation) 27001/2_, user-level policies consist of eight elements: a security responsibility agreement, acceptable use of assets, a security awareness program, removable media disposal procedures, a document control plan, a mobile device security policy, a telework security policy, and a disciplinary process (Landoll, 2017).

As cybercrime increases and becomes more severe and sophisticated, organisations put greater effort into information security risk management by implementing security measures and policies. Nonetheless, not only is the establishment of an ISP within the organisation required, but employees must also actively comply with the ISP, playing a key role in substantially protecting against cyber threats.
Especially these days, when social engineering is prevalent, the importance of encouraging employees to conform to the ISP is increasingly emphasised (Flores and Ekstedt, 2016). Therefore, it is expected that not only the information systems but also the users are obliged to adhere to the ISP statements. However, if employees do not understand the importance of ISP compliance and are not willing to comply with it, all the technical measures and strategies that organisations have put in place will be in vain (Herath and Rao, 2009). Hence, the human factors affecting ISP compliance intentions need to be understood in order to encourage employees' motivation.

The PCB has been proposed as one of the most important factors influencing employees to perform security behaviours and to comply with security procedures. Leach (2003) stated that employees are psychologically pressured to act in accordance with the expectations of the organisation by voluntarily limiting and maintaining their behaviours within the range of accepted practices. Therefore, if employees feel that the company has breached their psychological contract, they could feel exasperated and compelled to get even with the company. In addition, Abraham (2011) proposed PCB as one of the most influential factors associated with psychological ownership, organisational commitment, and trust, as well as procedural justice. While the necessity of investigating the impact of PCB in IS security has increased, relevant empirical studies have not been sufficiently conducted. To the best of our knowledge, there has been only one relevant empirical study: Han et al. (2017) examined the mediating role of PCF (psychological contract fulfilment) between perceived costs and ISP compliance intentions. The study conducted quantitative research separated into supervisor and supervisee groups. As a result, it was found that PCF mitigates the negative impact of perceived costs on ISP compliance intentions only in the supervisor group. However, in this study, the perceived cost had no significant influence on ISP compliance intentions in either the supervisor or the supervisee group. Accordingly, the study presents the hypothesis below.

_**H1**: High Psychological Contract Breach has a strong negative effect on ISP compliance intentions._

### Motivational Factors for ISP Compliance Intentions

Extensive research has been done to examine the human factors which influence employee compliance with ISPs. Many behavioural theories (e.g., TPB (Theory of Planned Behaviour), GDT (General Deterrence Theory), PMT (Protection Motivation Theory), SCT (Social Cognitive Theory)) in the IS literature have addressed motivators affecting ISP compliance. According to systematic literature reviews on behavioural theories, the most frequently used theory in IS security was TPB, followed by GDT (Alias et al., 2019; Lebek et al., 2013, 2014). The TPB suggests that an individual's behavioural intentions are determined by self-direction along with efforts to perform a target behaviour, or by motivation in terms of a conscious plan and decision (Conner, 2020). The TPB is mainly composed of attitudes, self-efficacy, and subjective norms. Attitudes are an individual's overall assessments of a target behaviour, and self-efficacy is an individual's expectation of how well they can control the target behaviour. Additionally, subjective norm is a function of normative beliefs, i.e., an individual's perceptions of whether those around them believe they should engage in the target behaviour (Conner, 2020).
These three components are the most important psychological factors in motivating and predicting ISP compliance behaviours and intentions (Lebek et al., 2014; Nasir et al., 2017). On the other hand, the GDT explains that criminal behaviour is deterred only when people perceive that legal sanctions are clear, expeditious, and harsh (Williams and Hawkins, 1986). The GDT primarily consists of sanction severity and sanction certainty: sanction severity refers to an individual's perception that the penalties for non-compliance are severe, and sanction certainty indicates an individual's perception that the risk of delinquent behaviour being detected is high (Williams and Hawkins, 1986; Safa et al., 2019).

### Intrinsic and Extrinsic Motivation

People are motivated both internally and externally to take certain actions. Organisations typically seek to establish external measures such as sanctions and penalties for deviant cybersecurity behaviours, rather than increasing employees' internal motivations. Extrinsic motivation is defined as decision-making based on external factors such as a reward, surveillance, or punishment (Benabou and Tirole, 2003), as opposed to intrinsic motivation, which is an inherent desire to undertake the work even without specific rewards (Benabou and Tirole, 2003; Makki and Abid, 2017). However, intrinsic and extrinsic motivation sometimes conflict with each other. According to Benabou and Tirole (2003), some researchers insist that extrinsic motivators such as sanctions and rewards are often counterproductive, since they impede intrinsic motivation. This is because extrinsic motivators have a limited effect on current employee engagement and reduce the motivation to perform the same task later without compensation. Therefore, many social psychology studies emphasise the necessity of increasing employee self-esteem rather than increasing extrinsic motivation (Benabou and Tirole, 2003). Accordingly, the study compares the effects of PCB on intrinsic and extrinsic motivation for ISP compliance intentions to identify how to motivate people who have experienced PCB to adhere to the ISP.

#### 2.4.1 Intrinsic Motivations

A psychological contract breach is known to induce negative emotional responses, which in turn reduce intrinsic motivation at work (de Lange et al., 2011; Morrison and Robinson, 1997). Conversely, it has been shown that psychological contract fulfilment increases motivation towards organisational commitment (Berman and West, 2003). Therefore, the present study suggests that PCB negatively influences intrinsic motivation towards ISP compliance intentions. The study adopted the attitudes and self-efficacy of the TPB as intrinsic motivators of ISP compliance intentions. This is because attitude has been studied as the most significant intrinsic motivator (Bulgurcu et al., 2011), and intrinsic motivation consists of autonomy and competence, which are aligned with self-efficacy (Alzahrani et al., 2018). Additionally, employee psychological contract violations have been found to provoke negative organisational attitudes (e.g., on job satisfaction, affective commitment, and turnover intentions) (Pate et al., 2003; Zhao et al., 2007). On the other hand, the correlation between perceived contract violation and low job satisfaction was found to be weaker as work-related self-efficacy increased (De Clercq et al., 2019). Therefore, it is necessary to study the mitigating role of self-efficacy on the negative effects of PCB.
Employees who have experienced a psychological contract breach may think that following the ISP is important but unfair, which may unwittingly lead to inadequate cybersecurity. Perceived fairness can be defined as an individual's perception of the fairness of an organisation's ISP requirements, which exists within the internal context of ISP compliance (Bulgurcu et al., 2011). Perceived fairness has been found to positively affect attitudes towards ISP compliance (Bulgurcu et al., 2009, 2011). In terms of the relationship between perceived fairness and PCB, some research has found that employees' beliefs about unfairness in the organisation's regulations and treatment can be directly linked to psychological contract violation (Harrington and Lee, 2015; Morrison and Robinson, 1997). Moreover, psychological contract fulfilment has been found to raise employees' perception of performance appraisal fairness (Harrington and Lee, 2015). It was also found that higher perceived fairness mitigated the negative influence of PCB on those with violated feelings (Lin et al., 2018). Hence, the study additionally measures an employee's perceived fairness towards ISP compliance as an intrinsic motivational factor. Accordingly, the study proposes the following hypotheses:

_**H2**: Higher intrinsic motivation (Attitudes, Self-efficacy, and Perceived Fairness) has a stronger positive effect on ISP compliance intentions._

_**H3a**: There is a negative effect of Psychological Contract Breach on Attitudes towards ISP compliance intentions._

_**H3b**: There is a negative effect of Psychological Contract Breach on Self-efficacy towards ISP compliance intentions._

_**H3c**: There is a negative effect of Psychological Contract Breach on Perceived Fairness towards ISP compliance intentions._

#### 2.4.2 Extrinsic Motivations

Employees are sometimes compelled to follow organisational policies, even if they are unwilling to do so, to avoid disadvantages such as penalties and reputational damage. The study suggests the subjective norm of the TPB as well as the sanction severity and sanction certainty of the GDT as extrinsic motivators that influence employee compliance with ISPs. Although some researchers have regarded subjective norms as somewhat voluntary behaviours, they have been considered an extrinsic motivator in IS studies, since intrinsic motivations are based on the employee's desire to perform the task for himself or herself (Herath and Rao, 2009a). Therefore, subjective norm, sanction severity, and sanction certainty are classified as extrinsic motivation factors in this study.

However, those extrinsic factors motivating employees to adhere to the policies can conflict with intrinsic motivation. According to a systematic literature review on IS behaviour theories, the extrinsic motivators of the GDT - sanction severity and sanction certainty - have been found not to significantly influence IS deviant behaviours compared to the TPB (Nasir et al., 2017; Safa et al., 2019). This implies that intrinsic motivators, including PCB, can either reduce the positive correlation between extrinsic motivation and ISP compliance intentions or reverse its direction. For instance, higher extrinsic motivation has a less positive effect, or no distinct effect, on ISP compliance intentions when an employee's PCB is high. Conversely, when an employee's PCB is low, higher extrinsic motivation has a more positive effect on ISP compliance intentions.
Therefore, the study proposes the following hypotheses:

_**H4**: High extrinsic motivation (Subjective norms, Sanction severity, and Sanction certainty) has a stronger positive effect on ISP compliance intentions._

_**H5a**: Psychological contract breach moderates the relationship between Subjective Norms and ISP compliance intentions._

_**H5b**: Psychological contract breach moderates the relationship between Sanction Severity and ISP compliance intentions._

_**H5c**: Psychological contract breach moderates the relationship between Sanction Certainty and ISP compliance intentions._

Lastly, since PCB can be classified as intrinsic motivation, it is assumed that PCB has a greater negative effect on people with intrinsic motivation than on those with extrinsic motivation. Therefore, people who follow ISPs due to external factors may be relatively unaffected by PCB, since external factors are not changed by PCB. However, those who intrinsically seek to follow ISPs may be greatly affected by PCB. Accordingly, the study suggests the following hypothesis:

_**H6**: The effect of PCB on Intrinsic Motivation is stronger than the moderating effect of PCB between Extrinsic Motivation and ISP compliance intention._

Consequently, the study combines two behaviour theories, the TPB and the GDT, classified into intrinsic and extrinsic motivation based on the main research question: the negative impact of PCB on ISP compliance intentions. Accordingly, the study can differentiate the impact of PCB on intrinsic motivation from its impact on extrinsic motivation towards ISP compliance intentions. The proposed theoretical framework is presented in **Figure 1**.

Figure 1: Proposed theoretical framework of the study

### Research Contribution

In an era where most cyber-attack strategies target human weaknesses, it has become imperative for organisations to understand which human factors impact their employees' security behaviour and to foster their willingness to abide by the security regulations. However, enhancing employee intention requires more than providing a security awareness program. In order to properly comply with an ISP, employees should, firstly, be able to understand and practically apply the given information. Secondly, they should have the attitudes and intentions to willingly comply with the policies (Bada et al., 2019). However, the attitudes and intentions to comply with an ISP are accompanied by multifaceted psychological factors: employees' evaluation of their capabilities to obey the ISP (self-efficacy), the disadvantages of not complying (sanctions), and employees' perceived expectations of coworkers (subjective norms) (Topa and Karyda, 2015). Many behavioural theories have been researched in the field of IS security to date, grouping the relevant psychological factors. Among the various theories, this study focuses on the social factors of the TPB and the GDT, dividing them into intrinsic and extrinsic motivational factors for ISP compliance intentions.

Meanwhile, although much research has found that many psychological factors affect ISP compliance behaviour and intentions, there are still potential factors that have not yet been properly studied. Likewise, no research has yet focused on the direct relation between PCB and ISP compliance intentions, although some theoretical studies (Ertan et al., 2018; Leach, 2003) have implied an important role of PCB in compliance with security policies.
Conversely, one research study explored the mediating role of PCF (psychological contract fulfilment) between the perceived costs of rational choice theory and ISP compliance intention. As a result of the study, the impact of PCF was influential in the supervisor group, but not prominent in the supervisee group (Han et al., 2017a). However, because the research was conducted with only limited factors, the influence of the PC in the non-administrator group was not thoroughly examined. Accordingly, this research examines the research question: "How does an employee's Psychological Contract Breach affect Information Security Policies Compliance Intentions?"

## 3 Methodology

### Data collection

We used an online survey and recruited an FTSE 250 UK industrial goods and services company as a partner company for the survey. A single specific company was selected because it was important to ensure that participants were members of a company which had an appropriate ISP and that employees were aware of the ISP. Therefore, rather than distributing the survey to any employees, we decided to partner with a large corporation that provides a dedicated ISP. The survey was distributed only to employees working in the UK to facilitate communication and scheduling. A total sample size of 1,000 employees was selected through simple random sampling from a population of 3,021 employees of the partner company in the UK. As a result, 265 survey responses were received, and only 208 responses were fully completed. Accordingly, the survey response rate was roughly 26.5% and the survey completion rate was over 78.4%. As a result of screening the data, there were two invalid responses among the 208 completed survey responses. Therefore, 206 completed responses remained valid for data analysis.

### Measures

The questionnaire for this study was developed by combining and adapting instruments from reliable existing studies to collect quantitative data. The questionnaire is divided into a first part on personal characteristics and a second part on the factors for the substantive analysis. Part 1 asks about an employee's demographic characteristics, which are primarily identified as control variables in relevant empirical studies; accordingly, the following five variables have been included in the questionnaire. Part 2 presents the substantive constructs of this study, consisting of 8 factors - Psychological Contract Breach (PCB), Attitudes (ATT), Self-efficacy (SE), Perceived Fairness (PF), Subjective Norms (SN), Sanction Severity (SS), Sanction Certainty (SC), and ISP Compliance Intention (ICI) - and 35 indicators. The full questionnaire is shown in **Appendix A**.

### Analysis and Results

The data analysis was divided into 1) descriptive statistics for identifying personal characteristics, 2) measurement model analysis for construct validity and reliability, 3) structural model analysis for hypothesis testing, and 4) bivariate analysis for investigating the correlations between variables. The study employed IBM SPSS for descriptive statistics and SmartPLS 3.0 for confirmatory factor analysis (CFA).

#### 3.3.1 Descriptive Statistics

The personal characteristics collected in Part 1 of the survey are shown in **Table 1**. Respondents aged 20 and over are distributed almost evenly across the age groups, except for the oldest age group. Similarly, responses were received almost evenly from female and male respondents. By position, there were about twice as many non-managers as managers.
Additionally, more than 40% of respondents have worked for this organisation for one to five years, and the other tenure groups each account for roughly 9% to 21% of respondents. Lastly, employment type has been divided into temporary and permanent, with approximately 90% of respondents being regular (permanent) workers.

The normality test results for Part 2 are shown in **Table 7** in **Appendix B**. The mean value ranged from 1.42 (PCB8) to 2.23 (PCB4) for PCB and from 3.35 (SS2) to 4.83 (ICI1) for the other constructs. These statistics indicate that most respondents had moderately positive responses for the constructs of the study. The skewness value ranged from -2.862 (ATT2) to 2.354 (PCB8), excluding ATT1 and ICI 1-3. Similarly, the kurtosis value ranged from -0.817 (PCB4) to 9.21 (ATT2), except for ATT1 and ICI 1-4. ATT1 and ICI 1-4 failed the normality test, since ATT1 and ICI 1-3 had absolute skewness values greater than or equal to 3.0, and ATT1 and ICI 1-4 had absolute kurtosis values greater than or equal to 10.0 (Brown, 2015). Therefore, a linear regression model, which is a non-parametric method that does not require normally distributed data, was additionally used in this study for the variables that failed the normality test (Fathian et al., 2014).

Additionally, the descriptive statistics, including the mean, minimum, and maximum values of PCB and ICI according to personal characteristics, are described in **Table 2**. Firstly, younger groups tend to have higher PCB. Additionally, the older groups were more likely to comply with the ISP overall, while the 20-29-year age group (4.78) had almost as high an ICI as the 40-59-year age groups (4.76). Secondly, managers (4.78) are more willing to comply with the ISP than non-managers (4.73), although they have higher PCB. By tenure, the group with the shortest tenure had the lowest PCB (1.23) and ICI (4.71). Lastly, non-regular workers (1.45) had a much lower PCB level than regular workers (1.81), and their intention to comply with the ISP (4.80) was higher than that of regular workers (4.74). Comparatively, there was no significant difference by gender in the sample.
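A minimal sketch of the normality screen described above, applying the stated thresholds of |skewness| >= 3.0 and |kurtosis| >= 10.0 per item (the item names and responses below are hypothetical, not the survey data):

```python
import pandas as pd
from scipy import stats

def normality_screen(df: pd.DataFrame) -> pd.DataFrame:
    """Flag items with |skew| >= 3.0 or |kurtosis| >= 10.0 (Brown, 2015)."""
    out = pd.DataFrame({
        "mean": df.mean(),
        "skewness": df.apply(stats.skew),
        "kurtosis": df.apply(stats.kurtosis),  # Fisher (excess) kurtosis
    })
    out["fails_normality"] = (out["skewness"].abs() >= 3.0) | (out["kurtosis"].abs() >= 10.0)
    return out

# Hypothetical 5-point Likert responses for two items
df = pd.DataFrame({"ATT1": [5, 5, 5, 4, 5, 5, 5, 5], "PCB4": [2, 3, 1, 2, 4, 2, 3, 2]})
print(normality_screen(df))
```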
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Personal characteristics & Value & Frequency & Percent & Cumulative Percent \\ \hline \multirow{6}{*}{Age} & Under 20 & 0 & 0 & 0 \\ \cline{2-5} & 20-29 & 30 & 14.6 & 14.6 \\ \cline{2-5} & 30-39 & 51 & 24.8 & 39.3 \\ \cline{2-5} & 40-49 & 51 & 24.8 & 64.1 \\ \cline{2-5} & 50-59 & 58 & 28.2 & 92.2 \\ \cline{2-5} & 60 and above & 16 & 7.8 & 100 \\ \hline \multirow{2}{*}{Gender} & Female & 99 & 48.1 & 48.1 \\ \cline{2-5} & Male & 107 & 51.9 & 100 \\ \hline \multirow{2}{*}{Job position} & Manager & 67 & 32.5 & 32.5 \\ \cline{2-5} & Non-manager & 139 & 67.5 & 100 \\ \hline \multirow{5}{*}{Tenure} & Less than 1 year & 18 & 8.7 & 8.7 \\ \cline{2-5} & 1-5 years & 86 & 41.7 & 50.5 \\ \cline{2-5} & 6-10 years & 30 & 14.6 & 65 \\ \cline{2-5} & 10-15 years & 28 & 13.6 & 78.6 \\ \cline{2-5} & More than 15 years & 44 & 21.4 & 100 \\ \hline \multirow{2}{*}{Employment type} & Temporary & 21 & 10.2 & 10.2 \\ \cline{2-5} & Permanent & 185 & 89.8 & 100 \\ \hline \end{tabular} \end{table} Table 1: Personal characteristics of the survey respondents \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Personal characteristics} & \multirow{2}{*}{Value} & \multicolumn{3}{l|}{PCB} & \multicolumn{3}{l|}{ICI} \\ \cline{3-8} & & Mean & Min. & Max. & Mean & Min. & Max. \\ \hline \multirow{6}{*}{Age} & Under 20 & N/A & N/A & N/A & N/A & N/A & N/A \\ \cline{2-8} & 20-29 & 1.96 & 1.00 & 4.33 & 4.78 & 3.25 & 5.00 \\ \cline{2-8} & 30-39 & 1.89 & 1.00 & 4.67 & 4.60 & 2.00 & 5.00 \\ \cline{2-8} & 40-49 & 1.76 & 1.00 & 4.78 & 4.76 & 2.00 & 5.00 \\ \cline{2-8} & 50-59 & 1.69 & 1.00 & 4.22 & 4.76 & 1.00 & 5.00 \\ \cline{2-8} & 60 and above & 1.50 & 1.00 & 3.44 & 4.91 & 4.50 & 5.00 \\ \hline \multirow{2}{*}{Gender} & Female & 1.75 & 1.00 & 4.33 & 4.74 & 1.00 & 5.00 \\ \cline{2-8} & Male & 1.80 & 1.00 & 4.78 & 4.74 & 2.00 & 5.00 \\ \hline \multirow{2}{*}{Job position} & Manager & 1.86 & 1.00 & 4.78 & 4.78 & 2.00 & 5.00 \\ \cline{2-8} & Non-manager & 1.74 & 1.00 & 4.67 & 4.73 & 1.00 & 5.00 \\ \hline \multirow{5}{*}{Tenure} & Less than 1 year & 1.23 & 1.00 & 2.00 & 4.71 & 3.00 & 5.00 \\ \cline{2-8} & 1-5 years & 1.85 & 1.00 & 4.67 & 4.75 & 3.00 & 5.00 \\ \cline{2-8} & 6-10 years & 1.95 & 1.00 & 4.78 & 4.80 & 2.00 & 5.00 \\ \cline{2-8} & 10-15 years & 1.89 & 1.00 & 3.89 & 4.85 & 4.00 & 5.00 \\ \cline{2-8} & More than 15 years & 1.68 & 1.00 & 3.56 & 4.63 & 1.00 & 5.00 \\ \hline \multirow{2}{*}{Employment type} & Temporary & 1.45 & 1.00 & 3.33 & 4.80 & 3.25 & 5.00 \\ \cline{2-8} & Permanent & 1.81 & 1.00 & 4.78 & 4.74 & 1.00 & 5.00 \\ \hline \end{tabular} \end{table} Table 2: Descriptive statistics for PCB and ICI according to personal characteristics #### 3.3.2 Inferential Statistics To verify construct validity and reliability, confirmatory factor analysis (CFA) was performed in this study. Factor loadings of 0.8 or higher are considered ideal (Wiktorowicz et al., 2016), but loadings of 0.4 or above have also been regarded as significant (MRC, 2013); by these criteria, the construct validity and reliability of all items in this study are good or at least moderate, see **Table 8** in **Appendix C**. In **Table 9** of **Appendix C**, the constructs were additionally verified through multiple measurement-model statistics, including Cronbach's alpha, rho_A, composite reliability (CR), and average variance extracted (AVE).
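As a reference for the reliability statistics mentioned above, Cronbach's alpha can be computed directly from the item scores. The sketch below is illustrative only; the item names and generated data are hypothetical, not the survey data of this study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Toy demo with four correlated 5-point items (names are illustrative).
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
df = pd.DataFrame({
    f"ICI{i}": np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=300)), 1, 5)
    for i in range(1, 5)
})
print(round(cronbach_alpha(df), 3))
```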
Subsequently, the study analysed the motivational process behind ISP compliance intention by organising the constructs into three structural models. The first model consists of PCB and ICI, to investigate the direct relationship between them. In the second model, the intrinsic motivation factors ATT, SE, and PF were added to the constructs of the first model. The third model examined the moderating effect of PCB on the relationships between the extrinsic motivators (SN, SS, SC) and ICI. The structural models were examined using t-tests, path coefficients, and p-values to investigate the relationships between factors from multiple angles. **Table 3** shows the result of the structural analysis for the hypotheses of this study. The direct relationship between PCB and ICI (**H1**) was verified to be significant, with a p-value of 0.002; the path coefficient of -0.195 indicates a weak negative effect of PCB on ICI. On the other hand, the relationship between intrinsic motivation and ICI (**H2**, **H3**) was only partially significant, because among the three intrinsic motivators only ATT was significantly related to ICI. The ATT-ICI and PCB-ATT relationships had p-values of 0.008 and 0.000, respectively, and the indirect PCB-ATT-ICI relationship had a p-value of 0.028. The impact of PCB was strongest on PF, as judged by the t-test, path coefficient, and p-value; however, since the PF-ICI relationship was not supported, an indirect impact of PCB on ICI via PF was not established in the structural analysis. Therefore, while SE and PF were not found to be significant, the impact of PCB on ATT, and through ATT on ISP compliance intention, was shown to be very strong. Lastly, among the three extrinsic motivators (**H4**, **H5**), the impacts of both SN and SC on ICI were very strong, with p-values of 0.000 and 0.001 respectively, while SS showed no significant relationship with ICI. The moderating effect of PCB was not significant for SN, SS, or SC. **Figure 2** illustrates the results of the statistical analysis based on the theoretical framework of this study. #### 3.3.3 Bivariate Analysis **Figure 3** shows scatter plots with simple linear regression analyses. PCB is negatively correlated with all intrinsic motivators (ATT, SE, PF) as well as with ICI, supporting Hypotheses 1, 3, and 5. On the other hand, Hypotheses 2 and 4 were supported by the positive correlations between ICI and all constructs except PCB (ATT, SE, PF, SN, SS, SC). ## 4 Discussion As a result of the hypothesis tests, it was found that the intention to comply with the ISP was significantly affected by PCB, ATT, SN, and SC. Firstly, it was shown that the higher an employee's PCB, the less likely they are to comply with the ISP. Of the three intrinsic motivators (ATT, SE, PF), only the ATT-ICI relationship was found to be significant, while neither SE nor PF appeared to affect ICI. Figure 2: Structural statistics for the theoretical framework of the study \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & & PCB & ATT & SE & PF & ICI \\ \hline \multirow{2}{*}{PCB} & Pearson Correlation & 1 & -.219** & -0.078 & -.331** & -.158* \\ \cline{2-7} & Sig. (2-tailed) & & 0.002 & 0.268 & 0 & 0.023 \\ \hline \multirow{2}{*}{ATT} & Pearson Correlation & -.219** & 1 & .300** & .496** & .520** \\ \cline{2-7} & Sig. (2-tailed) & 0.002 & & 0 & 0 & 0 \\ \hline \multirow{2}{*}{SE} & Pearson Correlation & -0.078 & .300** & 1 & .193** & .230** \\ \cline{2-7} & Sig. (2-tailed) & 0.268 & 0 & & 0.005 & 0.001 \\
\hline \multirow{2}{*}{PF} & Pearson Correlation & -.331** & .496** & .193** & 1 & .407** \\ \cline{2-7} & Sig. (2-tailed) & 0 & 0 & 0.005 & & 0 \\ \hline \multirow{2}{*}{ICI} & Pearson Correlation & -.158* & .520** & .230** & .407** & 1 \\ \cline{2-7} & Sig. (2-tailed) & 0.023 & 0 & 0.001 & 0 & \\ \hline \multicolumn{7}{l}{** Significant at the 0.01 level (2-tailed).} \\ \multicolumn{7}{l}{* Significant at the 0.05 level (2-tailed).} \\ \end{tabular} \end{table} Table 3: Pearson correlation coefficient analysis between PCB, ATT, SE, PF, and ICI Figure 3: Linear regression analysis for the impact of PCB (top) and the predictors of ICI (bottom) In addition, PCB had a strong negative effect on ATT, and the indirect PCB-ATT-ICI relationship was also found to be significant. Thus, PCB was found to have a negative impact on attitudes towards ISP compliance intention. On the other hand, the PF-ICI relationship was too weak to support the hypothesis, although PCB had a negative impact on PF. Accordingly, only the impact of PCB on ATT towards ICI was supported in the second model. Among the three extrinsic motivators (SN, SC, SS), SN and SC showed a positive relationship with ICI, as expected from the existing theories. In contrast, the effect of SS on ICI was not significant. Additionally, the moderating role of PCB between these three factors and ICI was not significant at all, suggesting that PCB does not moderate the strongly positive SN-ICI and SC-ICI relationships. Consequently, **H6** was supported. Among the intrinsic motivators, PCB negatively influenced ATT, which in turn significantly affected ICI. On the other hand, while SN and SC were found to affect ICI positively, these relationships were not moderated by PCB. This result can be interpreted as indicating that PCB reduces positive intrinsic motivation for ISP compliance while leaving extrinsic motivation unaffected. Therefore, the effect of PCB on intrinsic motivation is stronger than the moderating effect of psychological contract breach between extrinsic motivation and ISP compliance intention. The Pearson correlation coefficients showed that all relationships in the theoretical framework of the study are significantly correlated, except for the PCB-SE relationship. Additionally, contrary to the structural analysis results, the PF-ICI and SS-ICI relationships showed a significant positive correlation. Furthermore, in the simple linear regression analysis, PCB showed a negative correlation with intrinsic motivation and ICI, whereas all motivation factors except PCB showed a positive correlation with ICI. To sum up the results, it was confirmed that the negative correlation and causal relationship between PCB and ICI were significant, verifying Hypothesis 1. These results can contribute to expanding existing research on the negative effects of PCB in organisations. Second, the study aimed to investigate how psychological factors such as intrinsic and extrinsic motivators for ICI could be negatively affected by PCB. As a result, ATT towards ICI was significantly negatively affected by PCB, suggesting that PCB can decrease positive attitudes towards ISP compliance. Lastly, it was shown that PCB did not moderate the positive correlation between extrinsic motivation and ICI.
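For readers who wish to reproduce the kind of bivariate analysis reported in Section 3.3.3, the sketch below shows how Pearson correlations with two-tailed significance (as in Tables 3 and 4) and a simple linear regression (as in Figure 3) can be computed with SciPy. The construct scores here are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Per-respondent construct scores (illustrative placeholders; in practice these
# would be the mean of each respondent's PCB and ICI item responses).
rng = np.random.default_rng(0)
pcb = rng.uniform(1.0, 5.0, size=206)
ici = np.clip(5.2 - 0.2 * pcb + rng.normal(0.0, 0.4, size=206), 1.0, 5.0)

# Pearson correlation with a two-tailed p-value.
r, p = stats.pearsonr(pcb, ici)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Simple linear regression of ICI on PCB, as in the scatter plots.
res = stats.linregress(pcb, ici)
print(f"slope = {res.slope:.3f}, R^2 = {res.rvalue**2:.3f}, p = {res.pvalue:.3f}")
```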
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & & SN & SS & SC & ICI \\ \hline \multirow{2}{*}{SN} & Pearson Correlation & 1 & .254** & .346** & .433** \\ \cline{2-6} & Sig. (2-tailed) & & 0 & 0 & 0 \\ \hline \multirow{2}{*}{SS} & Pearson Correlation & .254** & 1 & .530** & .203** \\ \cline{2-6} & Sig. (2-tailed) & 0 & & 0 & 0.003 \\ \hline \multirow{2}{*}{SC} & Pearson Correlation & .346** & .530** & 1 & .411** \\ \cline{2-6} & Sig. (2-tailed) & 0 & 0 & & 0 \\ \hline \multirow{2}{*}{ICI} & Pearson Correlation & .433** & .203** & .411** & 1 \\ \cline{2-6} & Sig. (2-tailed) & 0 & 0.003 & 0 & \\ \hline \multicolumn{6}{l}{** Significant at the 0.01 level (2-tailed).} \\ \end{tabular} \end{table} Table 4: Pearson correlation coefficient analysis between SN, SS, SC, and ICI ## 5 Conclusions Most cyber threat actors today leverage human factors, through what are known as social engineering or "people hacking" attacks, which makes employee ISP compliance all the more important. Nevertheless, not all employees are as willing to comply with the ISP as the organisation expects them to be. Although most employees claim that they do not have enough time to comply with all ISPs during work, ISP noncompliance is rather driven by a variety of psychological motivations. Psychological contract breach has emerged as a major issue in the business environment because it fosters negative employee beliefs towards the organisation. This study therefore empirically investigated the effect of PCB on ICI. The psychological factors of the Theory of Planned Behaviour and General Deterrence Theory were additionally applied, classified as intrinsic and extrinsic motivation respectively. Data analysis primarily revealed that high PCB significantly led to low ISP compliance intention. It was found that PCB greatly reduced intrinsic motivation (attitudes and perceived fairness) for ICI but did not moderate the relationship between extrinsic motivation (subjective norms and sanction certainty) and ICI. Overall, this study showed that an employee's PCB played a significant role in influencing ISP compliance intention. ### Recommendations Based on the findings, the study proposes that increasing intrinsic motivation and establishing extrinsic factors can prevent employees experiencing PCB from engaging in inadequate cybersecurity behaviour. In particular, organisations should pay attention to fulfilling their employees' psychological contracts and strive to improve their attitudes towards ISP compliance. Additionally, to address the risk of psychological contract breaches, organisations can encourage employee extrinsic motivation by building a cybersecurity culture and establishing certain sanctions for ISP compliance breaches. Beyond merely establishing an ISP, employee ISP compliance is essential to counter the threats of people hacking and social engineering.
Therefore, reducing PCB is important not only for employee engagement and work performance but also for information security risk management. The most important way to address the risks of employee PCB is to make promises clear from the beginning. PCB can also be mitigated by open communication, trust in the supervisor, and specific obligations (e.g. job content, career development, organisational policies, leadership and social contacts, work-life balance, job security, rewards) (van Gilst et al., 2020). Moreover, the relationship between PCB and work performance has been found to be moderated in employees with high social interaction, perceived organisational support, and trust (Bal et al., 2010). Second, organisations should strive to increase positive attitudes and perceived fairness. In addition to fulfilling the employee psychological contract, a manager's persuasive strategy can increase employee attitudes and intrinsic motivation more effectively than an assertive strategy (Chiu, 2018). Organisations should also identify why employees perceive ISP compliance requirements as unfair. Third, the study suggests that an organisation's cybersecurity culture can mitigate undesirable security behaviours that may be caused by high PCB. Organisations must make significant investments in implementing transformational change to build a cybersecurity culture that goes beyond simply offering a SETA program (Alshaikh, 2020). In addition, despite PCB, employees remain inclined to comply with the ISP to avoid being caught misbehaving. Thus, the final proposal of this study is to pay attention to employee behaviour and ISP compliance; organisations can establish security measures that monitor for breaches of security compliance and alert employees. ### Limitations and Directions for Future Research This study conducted a cross-sectional survey, which measures only a partial and static phenomenon given the time frame of the study (Bravo et al., 2019). The path coefficients were therefore analysed in order to examine the causal relationships between PCB, the motivators, and ICI. Nevertheless, the study was unable to identify whether PCB arose before the other psychological factors; accordingly, a longitudinal design is proposed for future research. In addition, the average value of PCB collected from the partner company was very low (1.78), while the average ICI was very high (4.74), which might have affected the significance of the impact of PCB on ICI. Out of the 206 valid responses, only 30 employees had a PCB of 3.0 or higher while 176 had a PCB below 3.0, so the study was unable to divide the sample into breached and non-breached groups. Furthermore, only 3 employees had an ICI below 3, while 203 employees had an ICI of 3 or higher. Such skewed data could have affected the significance of the relationships between factors. Thus, future research should recruit multiple companies to diversify the range of PCB and ICI. The structural model analysis found that SE, PF, and SS were not significant for ICI. SE also showed no significant correlation with ICI in the correlation coefficient analysis, although SE has long been found to correlate strongly with ICI in IS studies (Lebek et al., 2014; Nasir et al., 2017). By contrast, PF and SS did show significant correlations with ICI in the correlation coefficient analysis.
This suggests that the relationships between PCB and these three factors were not fully investigated here. These relationships should therefore be examined further in future studies, especially longitudinal ones.
2310.01608
Neural Network Emulation of Spontaneous Fission
Large-scale computations of fission properties are an important ingredient for nuclear reaction network calculations simulating rapid neutron-capture process (the r process) nucleosynthesis. Due to the large number of fissioning nuclei contributing to the r process, a microscopic description of fission based on nuclear density functional theory (DFT) is computationally challenging. We explore the use of neural networks (NNs) to construct DFT emulators capable of predicting potential energy surfaces and collective inertia tensors across the whole nuclear chart. We use constrained Hartree-Fock-Bogoliubov (HFB) calculations to predict the potential energy and collective inertia tensor in the axial quadrupole and octupole collective coordinates, for a set of nuclei in the r-process region. We then employ NNs to emulate the HFB energy and collective inertia tensor across the considered region of the nuclear chart. Least-action pathways characterizing spontaneous fission half-lives and fragment yields are obtained using the nudged elastic band method. The potential energy predicted by NNs agrees with the DFT value to within a root-mean-square error of 500 keV, and the collective inertia components agree to within an order of magnitude. The exit points on the outer turning line are found to be well emulated. For the spontaneous fission half-lives the NN emulation provides values that are found to agree with the DFT predictions within a factor of $10^3$ across more than 70 orders of magnitude. Neural networks are able to emulate the potential energy and collective inertia well enough to reasonably predict physical observables. Future directions of study, such as the inclusion of additional collective degrees of freedom and active learning, will improve the predictive power of microscopic theory and further enable large-scale fission studies.
Daniel Lay, Eric Flynn, Samuel A. Giuliani, Witold Nazarewicz, Leó Neufcourt
2023-10-02T19:59:38Z
http://arxiv.org/abs/2310.01608v2
# Neural Network Emulation of Spontaneous Fission ###### Abstract **Background:** Large-scale computations of fission properties are an important ingredient for nuclear reaction network calculations simulating rapid neutron-capture process (the \(r\) process) nucleosynthesis. Due to the large number of fissioning nuclei potentially contributing to the \(r\) process, a microscopic description of fission based on nuclear density functional theory (DFT) is computationally challenging. **Purpose:** We explore the use of neural networks (NNs) to construct DFT emulators capable of predicting potential energy surfaces and collective inertia tensors across the whole nuclear chart, starting from a minimal set of DFT calculations. **Methods:** We use constrained Hartree-Fock-Bogoliubov (HFB) calculations to predict the potential energy and collective inertia tensor in the axial quadrupole and octupole collective coordinates, for a set of nuclei in the \(r\)-process region. We then employ NNs to emulate the HFB energy and collective inertia tensor across the considered region of the nuclear chart. Least-action pathways characterizing spontaneous fission half-lives and fragment yields are then obtained by means of the nudged elastic band method. **Results:** The potential energy predicted by NNs agrees with the DFT value to within a root-mean-square error of 500 keV, and the collective inertia components agree to within an order of magnitude. These results are largely independent of the NN architecture. The exit points on the outer turning line are found to be well emulated. For the spontaneous fission half-lives the NN emulation provides values that are found to agree with the DFT predictions within a factor of \(10^{3}\) across more than 70 orders of magnitude. **Conclusions:** Neural networks are able to emulate the potential energy and collective inertia well enough to reasonably predict physical observables. Future directions of study, such as the inclusion of additional collective degrees of freedom and active learning, will improve the predictive power of microscopic theory and further enable large-scale fission studies. ## I Introduction Large-scale calculations of fission properties are an essential ingredient for the modelling of the rapid neutron-capture process (\(r\) process), responsible for the production of roughly half of the nuclei heavier than iron found in nature [1; 2]. Fission determines the range of the heaviest nuclei that can be synthesized during the \(r\) process, recycles the material during the neutron irradiation phase, and shapes the final abundances [3; 4; 5]. Given the large amount of energy released in this decay, the presence of fissioning nuclei can leave fingerprints in the electromagnetic counterpart produced in neutron star mergers [6; 7]. However, as most of the fissioning nuclei produced during the \(r\) process cannot be measured, theoretical predictions are indispensable for performing accurate nuclear reaction network calculations. Over the last decades, several efforts have been devoted to the systematic estimation of fission barriers [8; 9; 10; 11; 12; 13], spontaneous fission half-lives [14; 15; 13], and fragment distributions [16; 17; 18; 19; 20] of \(r\)-process nuclei. However, due to the inherent complexities characterizing the theoretical description of the fission process [21], most of the available calculations resort to phenomenological approaches based on simplified assumptions.
This limitation can be overcome by employing nuclear DFT [22; 23; 24], a quantum many-body method based on effective nucleon-nucleon interactions applicable across the whole nuclear landscape. Given its computational cost, however, using DFT for fission is a daunting task for large-scale studies of \(r\)-process nuclei [25; 21; 26]. As such, DFT emulators can be an invaluable tool to extend the current reach of microscopic fission calculations. Machine learning has been used with great success in many areas of nuclear physics (see [27] for a recent review on this topic). In particular, machine learning has been used in many DFT studies to emulate potential energy surfaces (PESs), in both quantum chemistry [28; 29; 30; 31] and nuclear physics [32; 33]. However, these have generally focused on emulating individual potential energy surfaces, rather than many nuclei across a portion of the nuclear chart (or many related chemical systems in the quantum chemistry case). In an important study, Ref. [34] succeeded in emulating PESs and other quantities using committees of multilayer neural networks. In this study, we use fully connected, feedforward NNs to emulate the PES and collective inertia tensor, parameterized by the axial quadrupole and octupole moments \(Q_{20}\) and \(Q_{30}\), across nuclei in the \(r\)-process region of the nuclear chart. The paper is organized as follows: Section II reviews the theoretical approach to spontaneous fission used in this work. Section III describes the characteristics of the employed NNs. Section IV demonstrates the performance of the NNs on the HFB energy and collective inertia tensor, and Sec. V compares the exit points and spontaneous fission half-lives obtained using the DFT inputs and the emulated NN inputs. Finally, conclusions are summarized in Sec. VI. ## II Spontaneous fission within the nuclear density functional theory Spontaneous fission (SF) is a dynamical process where the nucleus evolves from the ground state into a split configuration. In the adiabatic approximation, SF is modeled using a finite set of collective variables \(\{q_{i}\}\) usually describing the nuclear shape. The SF half-life can be computed within this approach as \(t_{1/2}=\ln 2/(nP_{\text{fs}})\), where \(n\) is the number of assaults on the fission barrier, and \(P_{\text{fs}}\) the fission probability, i.e., the probability of the nucleus tunneling through the fission barrier, which can be estimated using the semiclassical Wentzel-Kramers-Brillouin (WKB) approach [35]: \[P_{\text{fs}}=\frac{1}{1+\exp{(2S(L))}}, \tag{1}\] where \(S(L)\) is the collective action computed along the stationary trajectory \(L[s]\) that minimizes \(S\) in the multi-dimensional space defined by the collective coordinates: \[S(L[s])=\frac{1}{\hbar}\int_{s_{\text{in}}}^{s_{\text{out}}}\sqrt{2\mathcal{M}_{\text{eff}}(s)(V(s)-E_{0})}\;ds\,, \tag{2}\] with \(V\) and \(\mathcal{M}_{\text{eff}}\) being the potential energy and inertia tensor, respectively, computed along the fission path \(L[s]\). The integration limits \(s_{\text{in}}\) and \(s_{\text{out}}\) correspond to the classical inner and outer turning points, respectively, defined by the condition \(V=E_{0}\), where \(E_{0}\) is the collective ground-state zero-point energy stemming from quantum fluctuations in the collective coordinates. While the latter can be estimated from, e.g., the curvature of \(V\) around the ground state (g.s.)
configuration, in many SF studies \(E_{0}\) is taken as a fixed positive constant ranging between 0.5 and 2.0 MeV above the ground-state energy [15; 13]. For simplicity, we follow the latter approach and fix \(E_{0}=E_{\text{g.s.}}\). Throughout this work, we will refer to the collective coordinates at \(s_{\text{out}}\) (in this work, \((Q_{20},Q_{30})\)) as the exit point [36]. From Eq. (2) it can be deduced that the main ingredients required for the estimation of the SF half-lives are the effective potential energy \(V\) and the collective inertia \(\mathcal{M}_{\text{eff}}\). In this work, we compute these quantities by employing the self-consistent mean-field method [22; 24] summarized in the following. Nuclear configurations are obtained by means of the HFB method, where the many-body wave function \(|\Psi\rangle\), described as a generalized quasiparticle product state, is given by the minimization of the mean value of the Routhian: \[\widehat{\mathcal{H}}^{\prime}=\widehat{\mathcal{H}}_{\text{HFB}}-\sum_{\tau=n,p}\lambda_{\tau}\widehat{N}_{\tau}-\sum_{\mu=1,2,3}\lambda_{\mu}\widehat{Q}_{\mu 0}\,. \tag{3}\] In Eq. (3), \(\widehat{\mathcal{H}}_{\text{HFB}}\) is the HFB Hamiltonian, and \(\lambda_{\tau}\) (\(\tau=n,p\)) are the Lagrange multipliers fixing the average numbers of neutrons and protons. The shape of the nucleus is enforced by constraining the moment operator \(\widehat{Q}_{\mu\nu}\) with multipolarity \(\mu\) and magnetic quantum number \(\nu\). In this work, we explore the evolution of the total energy and collective inertia tensor as a function of the elongation of the nucleus and its mass asymmetry, which are described by the axial quadrupole \(Q_{20}\) and octupole \(Q_{30}\) moment operators, respectively: \[\widehat{Q}_{20} =\hat{z}^{2}-\frac{1}{2}(\hat{x}^{2}+\hat{y}^{2})\,; \tag{4a}\] \[\widehat{Q}_{30} =\hat{z}^{3}-\frac{3}{2}(\hat{y}^{2}+\hat{x}^{2})\hat{z}\,. \tag{4b}\] In order to reduce the computational cost, axial symmetry is enforced in all the calculations (\(\langle\widehat{Q}_{\mu\nu}\rangle=0\) for all \(\nu\neq 0\)), and the additional constraint \(\langle\widehat{Q}_{10}\rangle=0\) is imposed to remove the spurious center-of-mass motion. Finally, the nuclear HFB Hamiltonian \(\widehat{\mathcal{H}}_{\text{HFB}}\) is given by the finite-range density-dependent Gogny nucleon-nucleon interaction. We employ the D1S parametrization [37], which has been widely used in nuclear structure studies across the whole nuclear chart, including the description of fission properties of heavy and superheavy nuclei [38]. The effective potential is then given by \(V=E-E_{\text{rot}}\), where \(E\) is the energy obtained from the HFB equations for the Routhian (3), and \(E_{\text{rot}}\) is the energy correction related to the restoration of rotational symmetry, computed using the approach of Ref. [39]. Calculations are carried out by employing the HFB solver HFBaxial, which solves the HFB equations by means of a gradient method with an approximate second-order derivative [40]. The quasiparticle wave functions are expanded in an axially-symmetric deformed harmonic-oscillator single-particle basis, containing states with \(J_{z}\) quantum number up to 35/2 and up to 26 quanta in the \(z\)-direction. The basis quantum numbers are restricted by the condition \(2n_{\perp}+|m|+n_{z}/q\leq N_{z}^{\text{max}}\), where \(q=1.5\) and \(N_{z}^{\text{max}}=17\). This choice of the basis parameters allows for a proper description of the elongated prolate shapes characteristic of the fission process [41].
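As a concrete illustration of how Eqs. (1)-(2) are used in practice, the following sketch evaluates the WKB action and half-life for a discretized one-dimensional path. It is illustrative only: the toy barrier, the constant inertia value, the unit convention for \(\mathcal{M}_{\text{eff}}\), and the number of assaults \(n\) are assumptions, not values taken from the calculations described here.

```python
import numpy as np

def sf_half_life(s, V, M_eff, E0, n=1e20):
    """WKB half-life from Eqs. (1)-(2) along a discretized fission path.

    s     : arc length along the path between the turning points
    V     : potential energy along the path (MeV)
    M_eff : effective inertia, assumed here in hbar^2 MeV^-1 per unit of s^2,
            so that the action S below is dimensionless (in units of hbar)
    E0    : collective ground-state energy (MeV)
    n     : assaults on the barrier per second (illustrative value)
    """
    integrand = np.sqrt(2.0 * M_eff * np.clip(V - E0, 0.0, None))
    S = np.trapz(integrand, s)              # Eq. (2)
    P_fs = 1.0 / (1.0 + np.exp(2.0 * S))    # Eq. (1); for very large S, work with log P instead
    return np.log(2.0) / (n * P_fs)         # t_1/2 in seconds

# Toy barrier between s_in = 0 and s_out = 1 (all numbers illustrative):
s = np.linspace(0.0, 1.0, 500)
V = 8.0 * np.sin(np.pi * s) ** 2            # MeV
print(sf_half_life(s, V, M_eff=np.full_like(s, 50.0), E0=0.0))
```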
The collective inertia tensor \(\mathcal{M}_{\mu\nu}\) is computed within the Adiabatic Time-Dependent HFB (ATDHFB) approximation using the non-perturbative scheme [42; 43; 44]: \[\mathcal{M}_{\mu\nu}=\frac{\hbar^{2}}{2\dot{q}_{\mu}\dot{q}_{\nu}}\sum_{\alpha\beta}\frac{F_{\alpha\beta}^{\mu*}F_{\alpha\beta}^{\nu}+F_{\alpha\beta}^{\mu}F_{\alpha\beta}^{\nu*}}{E_{\alpha}+E_{\beta}}\,, \tag{5}\] where \(q_{i}\) are the collective coordinates and \[\frac{F^{\mu}}{\dot{q}_{\mu}}=A^{\dagger}\frac{\partial\rho}{\partial q_{\mu}}B^{*}+A^{\dagger}\frac{\partial\kappa}{\partial q_{\mu}}A^{*}-B^{\dagger}\frac{\partial\rho^{*}}{\partial q_{\mu}}A^{*}-B^{\dagger}\frac{\partial\kappa^{*}}{\partial q_{\mu}}B^{*} \tag{6}\] is given in terms of the matrices of the Bogoliubov transformation \(A\) and \(B\), and the corresponding particle \(\rho\) and pairing \(\kappa\) densities. Then, the effective inertia tensor is given as \[\mathcal{M}_{\rm eff}=\sum_{\mu\nu}\mathcal{M}_{\mu\nu}\frac{dq_{\mu}}{ds}\frac{dq_{\nu}}{ds}\,. \tag{7}\] It is important to remark that the \(\mathcal{M}_{\mu\nu}\) components can suffer from rapid oscillations in the presence of single-particle level crossings near the Fermi surface. Such abrupt changes of occupied single-particle configurations produce variations in the derivatives of the densities in Eq. (6), resulting in pronounced peaks of \(\mathcal{M}_{\rm eff}\) along the fission path [43; 44; 45]. The least-action paths are computed using the nudged elastic band method (NEB) [36]. Due to the large number of paths that must be explored, the NEB parameters cannot be tuned by hand. Instead, multiple NEB runs are started, with initial paths ending at various points along the outer turning line. The NEB algorithm depends on two parameters, \(k\) and \(\kappa\), which adjust the spring and harmonic restoring forces, respectively. Not varying \(k\) and \(\kappa\) will, on occasion, miss some LAPs, akin to skipping over a narrow minimum in an optimization routine. Different runs are started for \(k\) and \(\kappa\) in the range \(0.05-10\), for each initial path. These runs converge to a number of different stationary paths. Typically, there is some component of the path that travels along the outer turning line. To select the final tunneling path, the paths are interpolated using 500 points, and truncated when near the outer turning line and within an energy tolerance of 0.5 MeV. The unique paths are chosen based on the clustering of the exit points using the mean shift algorithm as implemented in scikit-learn [46], and the path corresponding to a given exit point with the least action is chosen as the LAP.
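The exit-point selection step just described is straightforward to prototype. Below is a minimal sketch using the mean shift clustering from scikit-learn cited above, with made-up exit points and actions standing in for the converged NEB runs; the bandwidth is an illustrative choice, not the value used in this work.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Exit points (Q20 [b], Q30 [b^(3/2)]) of converged NEB paths and their
# actions (illustrative numbers, not taken from the calculations above).
exit_points = np.array([[182.0, 18.5], [181.2, 19.1], [183.4, 18.0],
                        [230.6, 0.4], [229.8, 0.9]])
actions = np.array([21.3, 20.8, 21.9, 24.6, 24.1])

# Each exit-point cluster corresponds to one candidate fission mode.
labels = MeanShift(bandwidth=5.0).fit_predict(exit_points)

# Within each cluster, keep the path with the least action as the LAP.
for lab in np.unique(labels):
    idx = np.where(labels == lab)[0]
    best = idx[np.argmin(actions[idx])]
    print(f"cluster {lab}: exit point {exit_points[best]}, S = {actions[best]}")
```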
## III Neural networks In this work, we use feedforward NNs as our emulators. We train separate NNs on the potential energy \(V\) and the components of \(\mathcal{M}\). Each NN takes as input \((A,Z,Q_{20},Q_{30})\), specifying the nucleus and deformation, and outputs the value (either \(V\) or one of the \(\mathcal{M}_{\mu\nu}\)) at that point. As discussed in Sec. IV.1, to further improve NN performance, we rescale the NN inputs to lie between zero and one. We train NNs with a number of hidden layers varying between 2 and 7, with 200 hidden nodes in the first layer and a decreasing number of nodes in each subsequent layer. We use the ReLU activation function, and train to minimize the root-mean-square error in the desired quantity. For each variant of the NN depth, we train multiple NNs, forming a committee of NNs. We then combine the predictions from each NN in the committee in a weighted average, to further reduce the error on the prediction. To train the NNs, we have computed PESs and the collective inertia for 194 nuclei, each on a regular grid with a spacing of 4 b for \(0\leq Q_{20}\leq 248\) b and 6 b\({}^{3/2}\) for \(0\leq Q_{30}\leq 60\) b\({}^{3/2}\). These nuclei are then labeled as either training, combining, or validation, with the latter two sampled randomly across the chart. These different datasets are indicated in Fig. 1. For each nucleus, the entire grid is used in the training/combining/validation. The nuclei in the training set are used to train individual NNs, the nuclei in the combining dataset are used to combine predictions from the committee members in a weighted average, and the nuclei in the validation set are used for validation of the NN predictions. The weights for each committee member are chosen to minimize the root-mean-square error on the nuclei in the combining dataset. As can be seen, most of the nuclei (about 70%) are used for training, with the remaining 30% split equally between the combining and validation datasets. In general, the NN performance is not sensitive to the distribution of training data, provided the NN does not attempt to extrapolate across the nuclear chart. No detailed optimization of the choice of training nuclei was carried out. As mentioned in Sec. II, it is known that the collective inertia tensor can develop discontinuities and rapid variations due to level crossings. This makes emulation of the tensor challenging, since the tensor components can span many orders of magnitude as a function of deformation. If the NN is trained on the inertia tensor components by themselves, the network predictions are poor. However, while these problems are features of the approximations used to calculate the inertia, the NN can still learn certain features of the inertia tensor by carrying out the eigenvalue decomposition of the inertia tensor, \[\mathcal{M}=U\Sigma U^{T} \tag{8}\] where \(U\) is the \(2\times 2\) matrix of eigenvectors and \(\Sigma\) is the diagonal matrix of eigenvalues. Since \(U\) is an orthogonal matrix, we can represent \(U\) as an element of SO(2) parameterized by an Euler angle \(\theta\). In this representation, \(\mathcal{M}\) is completely parameterized by its eigenvalues and the Euler angle \(\theta\). Thus, the NN is trained on \(\theta\) and the log of the eigenvalues at each point \((Q_{20},Q_{30})\). Training on this representation of the tensor is similar to normalizing the network inputs, as both put the NN inputs/outputs on a similar scale. Additionally, this forces the tensor predictions to be positive semi-definite. We also transform \(\theta\) to the range \((-\pi/2,\pi/2)\), so that the angles are mostly clustered near zero (on the interval \((0,\pi)\), there are two clusters, one at 0 and one at \(\pi\), which the NN has difficulty learning). Once the NNs are trained, PESs and inertias are computed for the same grid of deformations as the original DFT calculations. While the NNs can be evaluated at arbitrary \((Q_{20},Q_{30})\), it is less computationally expensive to use a standard cubic spline interpolator on the grid predicted by the NN. Moreover, the LAPs computed using the NN evaluations and the spline interpolator agree well with each other.
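A minimal sketch of the eigendecomposition-based encoding described above is given below: a symmetric \(2\times 2\) inertia tensor is mapped to its log-eigenvalues and an Euler angle restricted to \((-\pi/2,\pi/2]\), and the inverse map rebuilds a positive-definite tensor from NN outputs. The toy tensor values are illustrative.

```python
import numpy as np

def encode_inertia(M):
    """Encode a symmetric 2x2 inertia tensor as (log eigenvalues, Euler angle).

    M = U diag(lam) U^T with U a rotation; theta is mapped into (-pi/2, pi/2],
    which leaves M unchanged (eigenvectors carry a sign freedom).
    """
    lam, U = np.linalg.eigh(M)            # ascending eigenvalues, orthonormal columns
    theta = np.arctan2(U[1, 0], U[0, 0])  # angle of the first eigenvector
    if theta > np.pi / 2:
        theta -= np.pi
    elif theta <= -np.pi / 2:
        theta += np.pi
    return np.log(lam), theta

def decode_inertia(log_lam, theta):
    """Rebuild M from NN outputs; positive semi-definite by construction."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, -s], [s, c]])
    return U @ np.diag(np.exp(log_lam)) @ U.T

# Round trip on a toy tensor:
M = np.array([[3.0e-2, 4.0e-4], [4.0e-4, 5.0e-3]])
log_lam, theta = encode_inertia(M)
assert np.allclose(decode_inertia(log_lam, theta), M)
```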
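Likewise, the grid-plus-spline evaluation strategy can be sketched in a few lines with SciPy; the grid spacings below match those quoted above, while the surface values are placeholders rather than actual NN predictions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# NN-predicted PES tabulated on the regular training grid (toy values):
q20 = np.arange(0.0, 249.0, 4.0)   # b, 63 points
q30 = np.arange(0.0, 61.0, 6.0)    # b^(3/2), 11 points
V_nn = np.add.outer(0.001 * (q20 - 60.0) ** 2, 0.02 * q30)  # placeholder surface

# Bicubic spline over the grid; cheap to evaluate at arbitrary (Q20, Q30).
spline = RectBivariateSpline(q20, q30, V_nn, kx=3, ky=3)
print(spline(130.0, 15.0)[0, 0])
```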
Due to the relatively large number of LAP calculations, we report the LAPs computed using the spline-interpolated NN predictions, rather than using the NN predictions themselves. ## IV Neural Network Quality Here, we examine the quality of the NNs, on both the PES and the collective inertia. In general, we observe that the NN is able to reproduce both the PES and the collective inertia for most of the nuclei under consideration. Moreover, the quality of the NN is relatively stable across the different architectures considered. Throughout this section, we will refer to the PES and collective inertia computed using DFT as the reference data, and the PES and inertia computed using the NN as the NN reconstruction. ### Potential energy surfaces For a single nucleus, we define the root-mean-square error (RMSE) \(\Delta V(A,Z)\) in energy over the collective domain considered as \[\Delta V(A,Z)^{2}=\frac{1}{n}\sum_{Q_{20},Q_{30}}[V^{\rm DFT}(Q_{ 20},Q_{30},A,Z)\\ -V^{\rm NN}(Q_{20},Q_{30},A,Z)]^{2}, \tag{9}\] where \(n=693\) is the number of gridpoints evaluated in the PES. A similar quantity can be defined for the components of \(\mathcal{M}_{\rm eff}\), although there \(n\) varies slightly from nucleus to nucleus. Figure 1 shows \(\Delta V(A,Z)\) across the region of the nuclear chart considered, for the deepest NN (7 hidden layers, with 200-175-150-125-100-75-50 hidden units), with rescaled inputs. As can be seen, for most nuclei, \(\Delta V(A,Z)\lesssim 0.5\) MeV. Exceptions occur, with most remaining below 1.5 MeV. For some nuclei, such as \({}^{308}\)Cf, \({}^{314}\)Fm, and \({}^{318}\)No, relatively poor performance may be expected: these nuclei are on the outer edge of the region of the nuclear chart considered, and hence the NN is extrapolating from the training region to reach them. For other nuclei, such as \({}^{232}\)Th and \({}^{280}\)Cm, poor performance is unexpected: these nuclei are surrounded by training nuclei, and so should be emulated fairly well. As such, it seems unlikely that poor performance is due solely to the location of the nucleus on the nuclear chart relative to the training data. To understand the reduced performance, we examine the nuclei in question. Figure 2 shows both the reference PES and its NN reconstruction for \({}^{280}\)Cm. This nucleus is chosen because it has \(\Delta V=2.15\) MeV, which is the largest of all nuclei in the validation set. While the PESs are not identical, features such as the ground state and outer turning line locations, as well as the general shape of the PES, agree quite well. Large discrepancies tend to be limited to the high-energy region of \(Q_{20}\approx 50\) b, \(Q_{30}\gtrsim 30\) b\({}^{3/2}\), which is hardly relevant to fission. We conclude therefore that even for nuclei with larger RMSE, NNs could provide a very reasonable description of the fission path. This aspect will be examined further in Sec. V. To assess the sensitivity of our results with respect to the NN architecture, we repeated our calculations employing different NN sizes and rescaling the inputs. Figure 3 shows the RMSE (now averaged across all nuclei in a given dataset) across the different datasets for a variety of NN depths. As is generally expected, the training dataset has a monotonically decreasing RMSE as the NN depth increases; this is simply due to the increasing number of tunable parameters in the NN. On the other hand, the RMSE for the combining and validation sets is fairly stable with respect to the number of hidden layers of the NN. 
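Per nucleus, the error measure of Eq. (9) is simply a root-mean-square difference over the \(63\times 11\) grid (693 points); a minimal sketch with synthetic stand-in surfaces:

```python
import numpy as np

def delta_V(V_dft, V_nn):
    """Per-nucleus RMSE of Eq. (9) over the (Q20, Q30) grid (n = 693 points)."""
    return np.sqrt(np.mean((V_dft - V_nn) ** 2))

# Illustrative check with a fake 63 x 11 grid and 0.5 MeV Gaussian noise:
rng = np.random.default_rng(1)
V_ref = rng.uniform(0.0, 30.0, size=(63, 11))
print(delta_V(V_ref, V_ref + rng.normal(0.0, 0.5, size=V_ref.shape)))  # ~0.5 MeV
```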
Figure 1: \(\Delta V(A,Z)\) (in MeV) for the deepest NN. The different shapes indicate which dataset each nucleus belongs to. Figure 2: The reference PES (left) and the NN reconstruction (right) for \({}^{280}\)Cm, in MeV. The ground state is marked with a \(\times\) symbol. A general improvement is observed when normalizing the inputs \((A,Z,Q_{20},Q_{30})\) to lie between 0 and 1. This is due to two factors. First, the weight initialization is not scale-invariant: an input much larger than 1 is equivalent to a large initial weight with a normalized input. Because the final NN weights are expected to be small (hence initializations following, e.g., the Xavier initialization [47], as in this work), the initial weights are far from the final values, and convergence slows. Second, the optimization method itself is not scale-invariant: non-normalized inputs correspond to an ill-conditioned Hessian matrix, in which case gradient descent (and related methods) converge slowly [48; 49]. We conclude that the NN performance in predicting the PES is relatively stable with respect to the NN architecture; Sec. V will demonstrate that performance on this level is adequate for predicting SF observables. ### Collective Inertia Since the components of \(\mathcal{M}\) vary across multiple orders of magnitude and the network is trained on the log of the eigenvalue decomposition, a loss function such as the root-mean-square error is not an adequate measure of the performance of the NN. Instead, Fig. 4 shows the reference inertia components, plotted against the NN reconstructions, for all nuclei considered. The NN used is the 7-hidden-layer NN with rescaled inputs, with the number of hidden units as described in Sec. IV.1. The diagonal components \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\) are predicted fairly well, as the distributions align roughly along the diagonal. It is worth noting that the distributions are slightly misaligned in all data sets considered, indicating that the NN tends to underpredict relatively large values and overpredict relatively small values. This, in turn, shows that the NN is slightly biased towards the mean value of the inertia. However, the off-diagonal component \(|\mathcal{M}_{23}|\) is not aligned along the diagonal, except for large values. This is because this component varies across almost 10 orders of magnitude (compared to the 4 orders of magnitude for \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\)), and so the NN is biased towards predicting the larger values more accurately, resulting in a general overprediction of \(\mathcal{M}_{23}\). In terms of the angle \(\theta\) that is actually determined by the NN, it is difficult to predict both small and large angles, and because \(\theta\) is allowed to be negative, a logarithm transform is not possible. Nevertheless, one obtains a reasonable-looking distribution above \(|\mathcal{M}_{23}|\gtrsim 10^{-4}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\), indicating that some learning has indeed taken place. The poorly-learned values below \(10^{-4}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\) are truncated at values \(10^{-6}-10^{-2}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\). When changing the depth of the NN, performance is similar. For shallow networks, predictions on the training dataset show a larger bias: the distribution of points on the inertia plot is less aligned with the diagonal for the \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\) components. In other words, the larger reference values are underestimated, and the smaller reference values are overestimated.
The validation dataset is aligned similarly to the deepest network, shown in Fig. 4. As the depth of the network is increased, the training data points align closer with the diagonal. This is indicative of the NN tending to overfit on the training data as the number of variational parameters increases. Figure 3: The RMSE for a variety of different NN sizes. The dashed line shows the same depth NN, but with input variables normalized to the range \([0,1]\). The distribution of \(\mathcal{M}_{23}\) values remains approximately the same when increasing NN depth, with a slight improvement on the truncated \(\mathcal{M}_{23}\) values. In general, the NN performance on the validation dataset is mostly stable when varying the NN depth. The overarching question is whether this performance is sufficient for predicting observable quantities of interest. As with the PES, this question can be directly answered by looking at NN predictions of physical observables. ## V Impact on Observable Quantities While encouraging, the results discussed in Sec. IV do not give a perfectly clear estimation of the performance of the NNs. For instance, the NN reconstruction of the PES for \({}^{280}\)Cm may be adequate for reproducing fission observables - especially SF fragment yields and half-lives, despite the poor RMSE - since the largest deviations occur at deformations that will not be explored by LAPs. Similarly, the NN commonly fails to reproduce the off-diagonal component of the collective inertia, \(\mathcal{M}_{23}\), but primarily for small values of \(\mathcal{M}_{23}\). Here, we examine the performance of the NN on the lifetime-weighted exit point, as a proxy for the fragment yield [20, 50], and on the half-life of the nucleus. For both quantities, we compare three sets of data: the quantity computed using (i) the reconstructed PES and the identity inertia; (ii) the reference PES and the reconstructed inertia; and (iii) the reconstructed PES and inertia. In this way, we can isolate the impact of the PES and inertia emulations separately, and combine them to assess the overall error of the emulator. In this section, we use the 7-hidden-layer NN with rescaled inputs, with the number of hidden units described in Sec. IV.1. Based on the relative insensitivity to the depth of the NN shown in Sec. IV, the overall performance is expected to be similar for different NN depths. Similar to Sec. IV, exit points and SF half-lives computed using only DFT inputs will be referred to as reference quantities; those with any NN input will be referred to as reconstructed quantities. ### Exit Points As demonstrated in [20, 50], the location of the exit points is sufficient for roughly estimating the fission fragment yields. For this reason, we can consider exit points as reasonable proxies for the fragment yields. When multiple fission channels exist, the combined fragment yields are obtained by adding the yields of each channel, weighted by the probability of populating that channel. Thus, agreement of the lifetime-weighted exit point indicates strong agreement in the fission fragment yields (and, by necessity, indicates that the dominant fission mode is also in agreement between the reference data and the NN reconstruction). Figure 5 shows the difference in the octupole moment of the lifetime-weighted exit point, for configuration (iii) mentioned above. The octupole moment is chosen because it is critical for explaining multimodality in SF.
Figure 5: The \(Q_{30}\) component of the reconstructed lifetime-weighted exit point, minus the reference \(Q_{30}\) component, in \(\mathrm{b}^{3/2}\). These results were computed using configuration (iii), i.e. the NN was used to reconstruct both the PES and the collective inertia. The \(Q_{30}\) error is similar for the other configurations, and the quadrupole moment is typically within \(\pm 1\,\mathrm{b}\) for all configurations. The agreement is good between the reference exit point and the NN reconstruction: at \(\pm 1\)\(\mathrm{b}^{3/2}\), we expect the fragment yields to agree well (within the hybrid method of Refs. [20, 50]). This agreement is mainly due to the accurate PES reconstruction, as previous studies have shown that the exit point location is fairly robust with respect to variations in the collective inertia [50, 51, 52, 45]. This agreement holds even for nuclei whose PES reconstruction has a large error, such as \({}^{280}\)Cm, indicating that the qualitative features shown in Fig. 2 are reconstructed well enough to describe multimodality in SF. Notice, however, that the exit point locations are not reproduced perfectly for some nuclei, especially in the thorium (\(Z=90\)) chain, where the difference can be as much as \(5\,\mathrm{b}^{3/2}\). This is not due to the PES reconstruction: Figure 1 shows that the thorium isotopes have RMSE \(\Delta V(Z=90)\lesssim 100\) keV, and the exit point reconstruction when considering configuration (i) is within \(1\,\mathrm{b}^{3/2}\) of the reference value. Additionally, a side-by-side comparison of the collective inertia components does not show a systematic deviation between the reference inertia and the NN reconstruction. Nevertheless, the error is due to the inaccurate collective inertia reconstruction. However, it is not a systematic error. Rather, random error is present for every deformation considered, and it is the accumulation of this random error that causes the discrepancy. While the location of any individual exit point is not sensitive to the collective inertia, the probability of tunneling to a particular point depends on the probability given in Eq. (1). Because the probability is exponentially dependent on the action (and therefore exponentially dependent on the collective inertia reconstruction), comparatively small errors can add up and actually switch the dominant exit point, from asymmetric to symmetric and vice versa.
As can be seen, the \(t_{1/2}^{\text{sf}}\) predictions agree well, typically within 3 orders of magnitude across the approximately 80 orders of magnitude under consideration. Figure 6(a) demonstrates that the PES reconstruction is sufficient to predict \(t_{1/2}^{\text{sf}}\) values that agree well with the reference values. As with the SF fragment yields, this is true even for nuclei with a large \(\Delta V\), e.g. \({}^{280}\)Cm, once again demonstrating that the PES emulation quality is indeed sufficient to reproduce SF observables. Figure 6(b) includes the collective inertia emulation. As can be seen, the reproduced \(t_{1/2}^{\text{sf}}\) values agree less well with the reference values, although the disagreement is still within 3 orders of magnitude for most nuclei. This is not unexpected: Sec. V.1 shows that the collective inertia emulation, while sufficient for most nuclei, is not accurate enough for all nuclei. Similar to Sec. V.1, the reason for the disagreement in \(t_{1/2}^{\text{sf}}\) is the accumulation of random errors when the fission pathway goes across the fission barrier. Now, rather than changing the dominant fission mode, \(t_{1/2}^{\text{sf}}\) is simply changed from the reference value in a more-or-less random manner. The effect is most prominent for long-lived nuclei, where errors in the collective inertia add up to a fairly large value as the pathway traverses a wider fission barrier. While it may be desirable in principle to improve the emulation, the nuclei whose \(t_{1/2}^{\text{sf}}\) values are reproduced with a large error are those predicted to be stable to SF, within the \((Q_{20},Q_{30})\) collective space. As such, errors in the SF observables have little effect on results that are further dependent on \(t_{1/2}^{\text{sf}}\), such as \(r\)-process network calculations. The inset panels in Fig. 6 magnify the range \(10^{-5}-10^{10}\) s, to highlight the relevant \(r\)-process range. As can be seen, almost all nuclei within this range are reproduced nicely within three orders of magnitude. Therefore, we conclude that NNs are able to reproduce both the PES and the collective inertia well enough that \(t_{1/2}^{\text{sf}}\) is reproduced within 3 orders of magnitude for nuclei for which SF is relevant in the \(r\)-process region. ## VI Conclusions In this work, we have shown that fully connected feed-forward NNs are able to emulate both the potential energy and the collective inertia across a region of the nuclear chart, in the collective space consisting of the axial quadrupole and octupole moments. In general, the emulation error on the potential energy is about 500 keV, and the largest discrepancies are found in high-energy regions far from the fission path. The inertia tensor is reproduced within roughly an order of magnitude. We find that the NN performance is stable with respect to changes in the architecture, while the rescaling of input variables produces a general improvement overall. Most of the exit points predicted by the NN agree with the DFT predictions within a \((\Delta Q_{20},\Delta Q_{30})=(2\,\text{b},1\,\text{b}^{3/2})\) range. The SF half-lives are usually reproduced within a factor \(10^{3}\) over a span of more than 70 orders of magnitude. We find that the largest source of discrepancies is the emulation of the collective inertia tensor, due to the rapid changes of the inertia tensor in regions where single-particle level crossings are present. 
For some very long-lived nuclei, the associated error accumulates along the wider fission barrier. Conversely, in nuclei where fission can be a major decay mode, the emulations are in good agreement with the reference DFT results. Figure 6: The half-life predicted using the DFT reference data, \(t_{1/2}^{\text{sf-DFT}}\), plotted against the half-life computed using the NN reconstruction, \(t_{1/2}^{\text{sf-NN}}\). Panel (a) shows configuration (i), in which only the PES is emulated; panel (b) shows configuration (iii), in which both the PES and the collective inertia are emulated. The black line marks the diagonal: \(t_{1/2}^{\text{sf-DFT}}=t_{1/2}^{\text{sf-NN}}\). Gray bars are drawn at \(t_{1/2}^{\text{sf-DFT}}\times 10^{\pm 3}\), i.e. 3 orders of magnitude above and below the diagonal. Insets show the range \(10^{-5}-10^{10}\,\text{s}\), to highlight the relevant \(r\)-process range.
2310.19047
Transversity PDFs of the proton from lattice QCD with physical quark masses
We present a lattice QCD calculation of the transversity isovector- and isoscalar-quark parton distribution functions (PDFs) of the proton utilizing a perturbative matching at next-to-leading-order (NLO) accuracy. Additionally, we determine the isovector and isoscalar tensor charges for the proton. In both calculations, the disconnected contributions to the isoscalar matrix elements have been ignored. The calculations are performed using a single ensemble of $N_f = 2+1$ highly-improved staggered quarks simulated with physical-mass quarks and a lattice spacing of $a = 0.076$ fm. The Wilson-clover action, with physical quark masses and smeared gauge links obtained from one iteration of hypercubic smearing, is used in the valence sector. Using the NLO operator product expansion, we extract the lowest four to six Mellin moments and, via a neural network, the PDFs from the matrix elements in the pseudo-PDF approach. In addition, we calculate the PDFs in the quasi-PDF approach with hybrid-scheme renormalization and the recently developed leading-renormalon resummation technique, at NLO with the resummation of leading small-$x$ logarithms.
Xiang Gao, Andrew D. Hanlon, Swagato Mukherjee, Peter Petreczky, Qi Shi, Sergey Syritsyn, Yong Zhao
2023-10-29T15:48:21Z
http://arxiv.org/abs/2310.19047v2
# Transversity PDFs of the proton from lattice QCD with physical quark masses ###### Abstract We present a lattice QCD calculation of the transversity isovector- and isoscalar-quark parton distribution functions (PDFs) of the proton utilizing a perturbative matching at next-to-leading-order (NLO) accuracy. Additionally, we determine the isovector and isoscalar tensor charges for the proton. In both calculations, the disconnected contributions to the isoscalar matrix elements have been ignored. The calculations are performed using a single ensemble of \(N_{f}=2+1\) highly-improved staggered quarks simulated with physical-mass quarks and a lattice spacing of \(a=0.076\) fm. The Wilson-clover action, with physical quark masses and smeared gauge links obtained from one iteration of hypercubic (HYP) smearing, is used in the valence sector. Using the NLO operator product expansion, we extract the lowest four to six Mellin moments and the PDFs from the matrix elements via a neural network. In addition, we calculate the \(x\)-dependence of the PDFs with hybrid-scheme renormalization and the recently developed leading-renormalon resummation technique, at NLO with the resummation of leading small-\(x\) logarithms. ## I Introduction A significant goal of hadron physics is the determination of the full structure of nucleons. There has been much progress towards this end, both experimentally and theoretically. Experimentally, the leading-twist parton distribution functions (PDFs) for both the unpolarized and longitudinally polarized proton have been determined with high precision through global analyses [1; 2; 3; 4] of experimental data collected, for example, at HERA, the Tevatron, and the LHC. In addition, obtaining the full collinear structure at leading twist requires the transversity PDF, which gives the difference in the probability to find a parton aligned and anti-aligned with the transversely polarized hadron. The transversity PDF is less well constrained experimentally; however, measurements of the single transverse-spin asymmetries from semi-inclusive deep-inelastic scattering (SIDIS) processes by COMPASS [5] and HERMES [6], as well as dihadron production in SIDIS by COMPASS [7; 8] and HERMES [9], and \(pp\) collisions at RHIC [10; 11; 12], have led to a series of extractions of the transversity PDF [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Even so, the uncertainties can still be as large as 40% or more in the valence region [30]. One of the major goals of the JLab 12 GeV upgrade and the upcoming Electron-Ion Collider (EIC) is to gain more information on the spin structure of the nucleon, including the transversity PDF [31; 32]. On the theoretical side, there has been significant development in the first-principles calculations of PDFs through lattice QCD (see reviews in Refs. [33; 34; 35; 36; 37; 38; 39]). Among them, the two most widely used approaches utilize either the quasi-PDF within the framework of large-momentum effective theory (LaMET) [40; 36; 41] or the pseudo-PDF [42; 43]. Both the quasi-PDF and pseudo-PDF are defined from the matrix elements of a gauge-invariant equal-time bilinear operator in a boosted hadron state [40], which can be directly simulated on the lattice. In the LaMET approach, the PDF can be calculated from the quasi-PDF through a power expansion and effective theory matching at large hadron momentum, with controlled precision for a range of moderate \(x\).
On the other hand, the pseudo-PDF method relies on a short-distance factorization in coordinate space [44; 45; 46], which allows for a model-independent extraction of the lowest Mellin moments [46] or a model-dependent extraction of the \(x\)-dependent PDF. Both methods require larger hadron momenta to extract more information on the PDF, and they can complement each other in practical calculations [47; 48]. Over the past decade, there have been a few calculations of the transversity PDF from lattice QCD using both the quasi- [49; 50; 51; 52; 53; 54] and pseudo-PDF [55] approaches, all of which were carried out with a next-to-leading-order (NLO) perturbative matching correction. Among them, the first physical pion mass calculations [50; 51; 52] were accomplished with the regularization-independent momentum subtraction (RI/MOM) scheme [56; 57; 58; 59] for the lattice renormalization, which is flawed by the introduction of non-perturbative effects at large quark-bilinear separation. To overcome this problem, the hybrid scheme [60] was proposed to subtract the Wilson-line mass with a matching to the \(\overline{\rm MS}\) scheme at large quark-bilinear separation, which was used in the recent calculation of Ref. [54] with continuum and physical pion mass extrapolations. More recently, a systematic way to remove the renormalon ambiguity in the Wilson-line mass matching, called leading-renormalon resummation (LRR), was proposed in Refs. [48; 61]. In this work we carry out a lattice QCD calculation of the proton isovector and isoscalar quark transversity PDFs at physical quark masses, where the latter have been calculated without the inclusion of disconnected diagrams. This is an extension of our previous calculation of the proton isovector unpolarized PDF [62]. Here we utilize both methods, which helps us to understand the significance of the different systematics within them. In particular, for the quasi-PDF method, we adopt the hybrid scheme with LRR and work at NLO with leading-logarithmic (LL) resummation that accounts for PDF evolution, which gives us a reliable estimate of the systematic uncertainty in the small-\(x\) region [61]. The rest of the paper is organized as follows. First, in Sec. II, we review the setup of our lattice calculation. Then in Sec. III we describe our analysis strategy to extract the ground-state matrix elements, which includes an estimate for the tensor charge. In Sec. IV we use the ground-state matrix elements to extract the lowest few Mellin moments from the leading-twist OPE. We then move on to the determination of the transversity PDF with the pseudo-PDF method in Sec. V and the quasi-PDF method in Sec. VI. Finally, we conclude in Sec. VII. ## II Lattice details Our setup is nearly identical to that used in our previous work on the unpolarized proton PDF [62], and is also similar to our work on the pion valence PDF [63; 64]. There are only two differences here: i) the specific correlators needed for the transversity PDF, which were, in fact, computed at the same time as the correlators needed for the unpolarized PDF; and ii) an increase in statistics for the \(P_{z}=\frac{2\pi\cdot 6}{L}\), \(t_{\rm sep}=12a\) data. Therefore, we only repeat the most pertinent details here. The calculations are performed on a \(64^{3}\times 64\) ensemble of \(N_{f}=2+1\) highly-improved staggered quarks (HISQ) [65] with masses tuned to their physical values and a lattice spacing of \(a=0.076\) fm, which was generated by the HotQCD collaboration [66].
For the valence quarks, the tree-level tadpole-improved Wilson-clover action is used with physical quark masses and a single iteration of HYP smearing [67]. In order to build a nucleon operator with good overlap onto a highly-boosted nucleon state, the quark fields are smeared using Coulomb-gauge momentum smearing [68], as described in App. A of Ref. [69]. Within this method, for a given desired momentum \(P_{z}\equiv\frac{2\pi n_{z}}{L}\) of the nucleon, the momentum smearing assumes a quark boost of \(\frac{2\pi k_{z}}{L}\), where \(n_{z},k_{z}\in\mathbb{Z}\). For an optimal signal, \(k_{z}\) should be about half of \(n_{z}\). We use the Qlua software suite [70] for calculating the quark propagators and subsequently constructing the final correlators. The needed inversions are performed using the multigrid solver in QUDA [71; 72] and utilize all-mode averaging (AMA) [73] to reduce the total computational cost. The residual used in our solver is \(10^{-10}\) and \(10^{-4}\) for exact and sloppy solves, respectively. The most important details, including the total statistics achieved, can be found in Tab. 1. ### Correlation functions We use the standard interpolating operator for the nucleon, given by \[N_{\alpha}(x,t)=\varepsilon_{abc}u_{a\alpha}(x,t)(u_{b}(x,t)^{T}C\gamma_{5}d_{c}(x,t)), \tag{1}\] where \(C=\gamma_{t}\gamma_{y}\) is the charge-conjugation matrix. These are then used to construct the two-point correlation functions as follows \[C_{\mathcal{P}}^{\rm 2pt}(\vec{p},t_{\rm sep};\vec{x},t_{0})=\sum_{\vec{y}}e^{-i\vec{p}\cdot(\vec{y}-\vec{x})}\mathcal{P}_{\alpha\beta}^{\rm 2pt}\left\langle N_{\alpha}^{(s)}(\vec{y},t_{\rm sep}+t_{0})\overline{N}_{\beta}^{(s^{\prime})}(\vec{x},t_{0})\right\rangle, \tag{2}\] where the superscripts on the nucleon operators specify whether the quarks are smeared (\(s=S\)) or not (\(s=P\)), and \(\mathcal{P}^{\rm 2pt}\) is a projection operator. As described in Ref. [62], we always use smeared quarks at the source time, but consider both smeared and unsmeared quarks at the sink time, which helps to more reliably extract the spectrum by looking for agreement between the independent analyses of both correlators. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Ensembles & \(m_{\pi}\) & \(N_{\rm cfg}\) & \(n_{z}\) & \(k_{z}\) & \(t_{\rm sep}/a\) & (\#ex, \#sl) \\ \(a,L_{t}\times L_{s}^{3}\) & (GeV) & & & & & \\ \hline \(a=0.076\) fm & 0.14 & 350 & 0 & 0 & 6 & (1, 16) \\ \(64\times 64^{3}\) & & & 0 & 0 & 8,10 & (1, 32) \\ & & & 0 & 0 & 12 & (2, 64) \\ & & & 1 & 0 & 6,8,10,12 & (1, 32) \\ & & & 4 & 2 & 6 & (1, 32) \\ & & & 4 & 2 & 8,10,12 & (4, 128) \\ & & & 6 & 3 & 6 & (1, 20) \\ & & & 6 & 3 & 8 & (4, 100) \\ & & & 6 & 3 & 10 & (5, 140) \\ & & & 6 & 3 & 12 & (13, 416)* \\ \hline \hline \end{tabular} \end{table} Table 1: The most important details on the ensemble and the statistics gathered for our calculation. The integer momentum \(n_{z}\) of the nucleon and the corresponding integer boost momentum \(k_{z}\) of the quarks are given. The sink-source separations used are given by \(t_{\rm sep}\). Finally, the number of samples used for the exact and sloppy solves is given by \#ex and \#sl, respectively. The asterisk indicates where extra samples were generated as compared to our previous work in Ref. [62].
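For orientation, the conversion between the integer momenta \(n_{z}\) of Tab. 1 and physical units can be sketched in a few lines of Python. This is an illustration we add here, not part of the original analysis code; it reproduces the \(P_{z}\) values quoted below from the stated lattice parameters \(a=0.076\) fm and \(L=64\).

```python
import numpy as np

HBARC = 0.197327       # GeV * fm, conversion constant
a, L = 0.076, 64       # lattice spacing (fm) and spatial extent (sites)

def momentum_gev(n_z):
    """P_z = 2*pi*n_z / (L*a), converted from fm^-1 to GeV."""
    return 2 * np.pi * n_z / (L * a) * HBARC

for n_z in (0, 1, 4, 6):
    print(f"n_z = {n_z}: P_z = {momentum_gev(n_z):.2f} GeV")
# prints 0.00, 0.25, 1.02, 1.53 GeV
```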
The three-point correlators computed are given by \[C^{\rm 3pt}_{\mathcal{P},\Gamma,f}(\vec{p},\vec{q},t_{\rm sep},t_{\rm ins},z;\vec{x},t_{0})=\sum_{\vec{y},\vec{z}_{0}}e^{-i\vec{p}\cdot(\vec{y}-\vec{x})}e^{-i\vec{q}\cdot(\vec{x}-\vec{z}_{0})}\mathcal{P}^{\rm 3pt}_{\alpha\beta}\times\left\langle N_{\alpha}(\vec{y},t_{\rm sep}+t_{0})\mathcal{O}^{f}_{\Gamma}(\vec{z}_{0}+z\hat{z},t_{\rm ins}+t_{0})\overline{N}_{\beta}(\vec{x},t_{0})\right\rangle, \tag{3}\] where \(\mathcal{P}^{\rm 3pt}\) is a projection operator, \(\vec{p}\) is the momentum of the sink nucleon, \(\vec{q}\) is the momentum transfer, and the inserted operator is \[\mathcal{O}^{f}_{\Gamma}(\vec{z}_{0}+z\hat{z},t_{\rm ins}+t_{0})=\overline{\psi}^{f}(\vec{z}_{0},t_{\rm ins}+t_{0})\Gamma\times W(\vec{z}_{0},t_{\rm ins}+t_{0};\vec{z}_{0}+z\hat{z},t_{\rm ins}+t_{0})\psi^{f}(\vec{z}_{0}+z\hat{z},t_{\rm ins}+t_{0}), \tag{4}\] where \(\Gamma\) is a product of gamma matrices, \(\psi^{f}(\vec{z},t)\) is a quark field of flavor \(f\), and \(W\) is a straight Wilson line of length \(z\) connecting the quark fields. For the three-point functions, we only consider nucleon operators built from smeared quarks. The Wilson line is formed from products of the HYP-smeared gauge links and is needed to construct a gauge-invariant operator. In this work, we consider the light quark flavors \(f=u,d\) separately, allowing us to access the isovector \((u-d)\) and isoscalar \((u+d)\) combinations. Note, however, that all disconnected contributions are ignored, leading to uncontrolled errors due to their neglect in the isoscalar combination. This approximation is expected to be reasonable given that estimates from PNDME for the disconnected contributions to the tensor charge have indicated they are smaller than the statistical error on the connected contributions [74; 75]. In what follows, we only consider zero momentum transfer \(\vec{q}=0\), and the sink momenta are always in the \(z\)-direction, \(\vec{p}=\frac{2\pi n_{z}}{L}\hat{z}\equiv P_{z}\hat{z}\). We use four different values for the sink momenta, \(n_{z}\in\{0,1,4,6\}\), which in physical units corresponds to \(P_{z}=\{0,0.25,1.02,1.53\}\) GeV. The statistics gathered and quark boosts used for each \(n_{z}\) are given in Tab. 1. In this work, we are interested in the tensor charge and the transversity PDF, which can be accessed with \(\Gamma\propto\sigma^{zj}\) (with \(j\) being either \(x\) or \(y\)) and \[\mathcal{P}^{\rm 3pt}=\frac{1}{2}(1+\gamma_{t})(1-i\gamma_{5}\hat{s}\cdot\vec{\gamma}), \tag{5}\] which projects the nucleon to positive parity and its spin to be aligned along the direction given by \(\hat{s}\). Here we use \(\Gamma=-i\sigma^{zy}=-i\gamma_{z}\gamma_{y}\) and \(\hat{s}=\hat{x}\). Throughout the remainder of the text, we use \(\delta\) to denote the specific operator and polarization used, which is motivated by the standard usage of \(\delta q(x)\) in the literature for the transversity PDF. In order to guarantee the cancellation of amplitudes that appear in the spectral decompositions of the three- and two-point functions, we set \(\mathcal{P}^{\rm 2pt}=\mathcal{P}^{\rm 3pt}\equiv\mathcal{P}\), and we denote this in the two-point functions by \(C^{\rm 2pt}_{S_{x}}\). ## III Ground-state matrix elements In this section, we extract the ground-state bare matrix elements from the three-point correlation functions. Our analysis strategy is nearly identical to that used in our previous work of Ref.
[62], and we repeat the most important details here for convenience. The only difference in the strategy is the choice of our preferred fit ranges: in this work, the quality of our data has increased, giving us more confidence in our fits, and therefore we end up excluding fewer insertion times from our final fits. Our approach first extracts the spectrum and ratios of amplitudes from the two-point correlation functions in order to use these as priors on the parameters shared in our fits to the ratio of three-point to two-point functions. Although the two-point correlation functions differ slightly from those used in our previous work in Ref. [62], because \(\mathcal{P}^{\rm 2pt}\) is different, we do not include any discussion here, as the analysis strategy is identical and the results only change by a slight increase in the error. The increase in error can be understood from the fact that the change in \(\mathcal{P}^{\rm 2pt}\) amounts to only using a single spin polarization, as opposed to averaging over both spin polarizations as done previously. ### Analysis strategy We follow the standard approach for extracting the bare matrix elements, which begins by forming an appropriate ratio of the three-point to two-point correlation functions given by \[R^{f}_{\delta}(P_{z},t_{\rm sep},t_{\rm ins},z)\equiv\frac{C^{\rm 3pt}_{\delta,f}(\vec{p}=P_{z}\hat{z},\vec{q}=0,t_{\rm sep},t_{\rm ins},z)}{C^{\rm 2pt}_{S_{x}}(\vec{p},t_{\rm sep})}. \tag{6}\] The main reason for this choice is that it can be shown that \[\lim_{t_{\rm ins},t_{\rm sep}\to\infty}R^{f}_{\delta}(P_{z},t_{\rm sep},t_{\rm ins},z)=h^{f}_{\delta;0,0}(z,P_{z}), \tag{7}\] where \(h^{f}_{\delta;0,0}(z,P_{z})\) is the desired bare ground-state matrix element. Since the values of \(t_{\rm sep}\) considered here are not likely in the asymptotic region, we include the effects from the lowest \(N\) states by substituting the spectral decompositions of the three- and two-point functions truncated at the \(N\)th state. After some algebra, we find \[R^{f}_{\delta}(P_{z},t_{\rm sep},t_{\rm ins},z;N)=\frac{\sum_{m,n=0}^{N-1}h^{\prime f}_{\delta;m,n}\prod_{l=1}^{m}e^{-\Delta_{l,l-1}t_{\rm sep}}\prod_{k=1}^{m}e^{\Delta_{k,k-1}t_{\rm ins}}\prod_{r=1}^{n}e^{-\Delta_{r,r-1}t_{\rm ins}}}{1+\sum_{i=1}^{N-1}r_{i}\prod_{j=1}^{i}e^{-\Delta_{j,j-1}t_{\rm sep}}}, \tag{8}\] where \(\Delta_{i,j}\equiv E_{i}-E_{j}\), \(r_{i}\equiv|A^{(i)}_{\alpha}(P_{z})|^{2}/|A^{(0)}_{\alpha}(P_{z})|^{2}\), \(A^{(n)}_{\alpha}(P_{z})\equiv\bra{\Omega}N_{\beta}\mathcal{P}_{\beta\alpha}\ket{n,P_{z}}\) (\(\ket{\Omega}\) is the vacuum state and \(\ket{n,P_{z}}\) is the \(n\)th nucleon state with momentum \(P_{z}\)), and \[h^{\prime f}_{\delta;m,n}\equiv\frac{A^{(m)}_{\alpha}(P_{z})A^{(n)}_{\alpha}(P_{z})^{*}h^{f}_{\delta;m,n}(z,P_{z})}{A^{(0)}_{\alpha}(P_{z})A^{(0)}_{\alpha}(P_{z})^{*}}. \tag{9}\] The parameters \(h^{\prime f}_{\delta;m,n}\) depend on \(z\) and \(P_{z}\), but this dependence is suppressed to save space. For convenience, we typically suppress the indices on the matrix elements when referring to the ground-state matrix element (i.e. \(h^{f}_{\delta}(z,P_{z})\equiv h^{f}_{\delta;0,0}(z,P_{z})\)). As the excited-state matrix elements are never used, this should not cause any confusion.
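To make the fit model concrete, the following minimal sketch (our illustrative notation, not the original analysis code) evaluates the \(N=2\) truncation of Eq. (8):

```python
import numpy as np

def ratio_model(t_sep, t_ins, h00, h01, h10, h11, dE10, r1):
    """Two-state (N=2) ratio model of Eq. (8).

    dE10 = E_1 - E_0 is the energy gap, r1 the excited/ground amplitude
    ratio, and h_mn the rescaled matrix elements h'_{delta;m,n} of
    Eq. (9); h00 is the desired ground-state matrix element.
    """
    num = (h00
           + h01 * np.exp(-dE10 * t_ins)             # (m, n) = (0, 1)
           + h10 * np.exp(-dE10 * (t_sep - t_ins))   # (m, n) = (1, 0)
           + h11 * np.exp(-dE10 * t_sep))            # (m, n) = (1, 1)
    den = 1.0 + r1 * np.exp(-dE10 * t_sep)
    return num / den
```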
Figure 1: The Wilson-line length dependence of the (upper) real and (lower) imaginary parts of the isovector ground-state bare matrix elements from two-state and summation fits with \(n_{\rm exc}=2,3\) for the three nonzero values of momentum considered (one for each column). The results shown are averaged with the negative-\(z\) fits.

Figure 2: The same as Fig. 1, but for the isoscalar matrix elements.

We fit the ratio of data in Eq. (6) to \(R_{\delta}^{f}(P_{z},t_{\rm sep},t_{\rm ins},z;N)\), where \(h_{\delta;m,n}^{\prime f}\), \(\Delta_{i,j}\), and \(r_{i}\) are the fit parameters. The parameters \(\Delta_{i,j}\) and \(r_{i}\) are priored using the fit results from the two-point functions (see Ref. [62] for details). In this work, we only consider \(N=1,2\), as our limited data tends to lead to unreliable fits when \(N>2\). In order to reduce the effects from unaccounted-for excited states as much as possible, we remove some of the data points nearest the sink and source times. We do this in a symmetric way, i.e. for each \(t_{\rm ins}\) not included in the fit, we also do not include \(t_{\rm sep}-t_{\rm ins}-1\). We define \(n_{\rm exc}\) to be the number of insertion times removed on each side of the middle point for each \(t_{\rm sep}\). Therefore, for each \(t_{\rm sep}\), the insertion times included in the fit are \(t_{\rm ins}\in[n_{\rm exc}+1,t_{\rm sep}-n_{\rm exc}-1]\). However, making \(n_{\rm exc}\) too large can leave too little data, and therefore we only consider \(n_{\rm exc}\leq 3\). As described in our previous work of Ref. [62], the two-point function fits show contributions from three states for \(t_{\rm sep}\leq n_{\rm exc}+1\), requiring the use of an effective value for the prior on the gap \(\Delta_{1,0}\) that takes into account effects from higher states. The specific value used for the prior on the gap comes from the two-state fit to the two-point function with the lower fit range \(t_{\rm min}=n_{\rm exc}+1\). As an additional consistency check on our fit results, we also make use of the summation method, which involves first summing \(R_{\delta}^{f}(P_{z},t_{\rm sep},t_{\rm ins},z)\) over the subset \(t_{\rm ins}\in[n_{\rm exc}+1,t_{\rm sep}-n_{\rm exc}-1]\) for each \(t_{\rm sep}\), \[S_{\delta}^{f}(P_{z},t_{\rm sep},z;n_{\rm exc})\equiv\sum_{t_{\rm ins}=n_{\rm exc}+1}^{t_{\rm sep}-n_{\rm exc}-1}R_{\delta}^{f}(P_{z},t_{\rm sep},t_{\rm ins},z), \tag{10}\] which reduces the leading contamination from excited states. The bare ground-state matrix element can then be extracted from a linear fit to the sum as \[S_{\delta}^{f}(P_{z},t_{\rm sep},z;n_{\rm exc})=B_{0}+t_{\rm sep}h_{\delta}^{f}(z,P_{z}). \tag{11}\] In Figs. 1 and 2, we show comparisons of the two-state and summation fit results for the isovector and isoscalar combinations, respectively, as a function of the Wilson-line length for both \(n_{\rm exc}=2\) and \(3\). We see generally good agreement across these different fits, and, with the better data quality as compared to the unpolarized case, we choose as our preferred fit the two-state fit to the ratio of Eq. (6) with \(n_{\rm exc}=2\). Several representative fits, all using our preferred fit strategy, are shown in App. B. There we include the fits to the zero-momentum local matrix elements, relevant for the tensor charge, in Fig. 18 for the isovector and isoscalar combinations. We also include various fits to the non-local matrix elements, relevant for information on the PDFs, in Figs. 19 and 20 for the isovector combination and in Figs. 21 and 22 for the isoscalar combination.
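As a sketch of the summation-method cross-check of Eqs. (10) and (11), assuming the ratio data of Eq. (6) are stored as gvar-valued arrays keyed by \(t_{\rm sep}\) (the analysis in this work uses the lsqfit and gvar packages; the function names and priors here are ours, for illustration only):

```python
import numpy as np
import lsqfit
import gvar as gv

def summed_ratio(R, t_seps, n_exc):
    """Eq. (10): sum R[t_sep][t_ins] over t_ins in [n_exc+1, t_sep-n_exc-1]."""
    return np.array([sum(R[ts][ti] for ti in range(n_exc + 1, ts - n_exc))
                     for ts in t_seps])

def summation_fit(R, t_seps, n_exc=2):
    """Eq. (11): fit S(t_sep) = B0 + t_sep * h; the slope h is the
    bare ground-state matrix element."""
    S = summed_ratio(R, t_seps, n_exc)

    def linear(t, p):
        return p['B0'] + t * p['h']

    prior = {'B0': gv.gvar(0, 10), 'h': gv.gvar(0, 10)}  # broad priors
    fit = lsqfit.nonlinear_fit(data=(np.array(t_seps), S),
                               fcn=linear, prior=prior)
    return fit.p['h']
```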
### Tensor charge \(g_{T}\) Although the focus of this work is on the non-local matrix elements, we first turn our attention to the local ones, which give us access to the nucleon tensor charge \(g_{T}\). The bare matrix elements must be renormalized in a standard scheme (like \(\overline{\rm MS}\)) in order to make comparisons with phenomenological results and other lattice determinations. The matrix elements are multiplicatively renormalizable, and we first determine the ratio of renormalization constants \(Z_{T}/Z_{V}\) in the RI-MOM scheme, which is then converted to \(\overline{\rm MS}\) at the scale \(\mu=2\,\rm GeV\) (see App. A for details). Then, using the ratio of bare charges \(g_{T}^{\rm bare}/g_{V}^{\rm bare}\) along with the expectation that \(Z_{V}g_{V}^{\rm bare}=1\), the renormalized tensor charge \(g_{T}=Z_{T}g_{T}^{\rm bare}\) can be determined. Using our estimate for \(Z_{T}/Z_{V}\), we find \[g_{T}^{u-d}=1.05(2),\ \ \overline{\rm MS}(\mu=2\,\rm GeV),\qquad g_{T}^{u+d}=0.64(2),\ \ \overline{\rm MS}(\mu=2\,\rm GeV). \tag{12}\] In Tab. 2, we show a comparison of our results to the other \(N_{f}=2+1\) results given in the FLAG review 2021 [76]. \begin{table} \begin{tabular}{c c c c} \hline \hline & \(g_{T}^{u-d}\) & \(g_{T}^{u}\) & \(g_{T}^{d}\) \\ \hline This work & 1.05(2) & 0.84(2) & \(-0.21(1)\) \\ NME [77] & 0.95(5)(2) & & \\ RBC/UKQCD [78] & 1.04(5) & & \\ Mainz [79; 80] & 0.965(38)(13)(\({}^{+1}_{-1}\)) & 0.77(4)(6) & \(-0.19(4)(6)\) \\ LHPC [81] & 0.972(41) & & \\ JLQCD [82] & 1.08(3)(3)(9) & 0.85(3)(2)(7) & \(-0.24(2)(0)(2)\) \\ LHPC [83] & 1.038(11)(12) & & \\ RBC/UKQCD [84] & 0.9(2) & & \\ \hline \end{tabular} \end{table} Table 2: Comparison of our extracted tensor charges with those in the FLAG review 2021 [76] with \(N_{f}=2+1\). The results are ordered by year. ## IV Mellin moments from the leading-twist OPE We now move on to the extraction of the lowest few Mellin moments using the leading-twist OPE approximation. Here we avoid the need for the renormalization factors, which depend on the Wilson-line length \(z\) and the lattice spacing \(a\), by forming the renormalization-group-invariant ratio [85] \[\mathcal{M}_{\delta}^{ff^{\prime}}(\lambda,z^{2};P_{z}^{0})=\frac{h_{\delta}^{f}(z,P_{z})}{h_{\delta}^{f^{\prime}}(z,P_{z}^{0})}\Big{/}\frac{h_{\delta}^{f}(0,P_{z})}{h_{\delta}^{f^{\prime}}(0,P_{z}^{0})}, \tag{13}\] where \(\lambda\equiv zP_{z}\) is known as the Ioffe time. In the literature, this ratio is referred to as the Ioffe-time pseudo-distribution (pseudo-ITD). In order to cancel the renormalization factors, the \(z=0\) matrix elements are not necessary, but this choice is favorable in that it enforces a normalization and cancels correlations. In this work, we only consider the case with \(P_{z}^{0}=0\), commonly referred to as the reduced pseudo-ITD [43; 86; 87; 88; 89; 90; 91]. Additionally, since there are no gluons involved in the case of the transversity distributions, the leading-twist OPE expansion of the pseudo-ITD does not depend on the flavor combination \(f^{\prime}\), even if \(f\neq f^{\prime}\), and we therefore opt to omit the \(f^{\prime}\) from our notation in order to not be overly cumbersome. In what follows, when extracting the isovector flavor combination, \(f=f^{\prime}=u-d\), and for the isoscalar flavor combination, \(f=u+d\) and \(f^{\prime}=u-d\).
Then, using the leading-twist OPE approximation, we can write the reduced pseudo-ITD as an expansion in Mellin moments, \[\mathcal{M}_{\delta}^{f}(\lambda,z^{2},P_{z}^{0}=0)=\sum_{n=0}^{\infty}\frac{C_{n}^{\delta}(\mu^{2}z^{2})}{C_{0}^{\delta}(\mu^{2}z^{2})}\frac{(-i\lambda)^{n}}{n!}\frac{\langle x^{n}\rangle_{\delta}^{f}\left(\mu\right)}{\langle x^{0}\rangle_{\delta}^{f}\left(\mu\right)}+\mathcal{O}(\Lambda_{\text{QCD}}^{2}z^{2}), \tag{14}\] where \(C_{n}^{\delta}(\mu^{2}z^{2})\) are the Wilson coefficients for the transversity computed in the ratio scheme up to NLO in the strong coupling \(\alpha_{s}(\mu)\), which at fixed order are given by [49; 55] \[C_{n,\text{NLO}}^{\delta}(\mu^{2}z^{2})=1+\frac{\alpha_{s}(\mu)C_{F}}{2\pi}\Bigg{[}2\ln\!\left(\frac{\mu^{2}z^{2}e^{2\gamma_{E}+1}}{4}\right)\sum_{j=2}^{n+1}\frac{1}{j}-2\Bigg{(}\sum_{j=1}^{n}\frac{1}{j}\Bigg{)}^{2}-2\sum_{j=1}^{n}\frac{1}{j^{2}}\Bigg{]}, \tag{15}\] with \(C_{F}=4/3\), and \(\langle x^{n}\rangle_{\delta}^{f}\left(\mu\right)\) is the \(n\)th Mellin moment of the transversity PDF of flavor \(f\) defined at the factorization scale \(\mu\), i.e. \[\langle x^{n}\rangle_{\delta}^{f}\left(\mu\right)=\int_{-1}^{1}\mathrm{d}x\,x^{n}\delta q^{f}(x,\mu), \tag{16}\] where \(\delta q^{f}(x,\mu)\) is the transversity PDF of a quark with flavor \(f\) for \(x\geq 0\) and of its antiquark for \(x<0\). Estimates for the strong coupling itself are determined from Ref. [92], and we exclusively work at the scale \(\mu=2\) GeV, resulting in \(\alpha_{s}(\mu=2\text{ GeV})=0.2930\). Further, we also consider the effects from target-mass corrections (TMCs), which can be incorporated with the following substitution: \[\langle x^{n}\rangle_{\delta}^{f}\rightarrow\langle x^{n}\rangle_{\delta}^{f}\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(n-k)!}{k!(n-2k)!}\bigg{(}\frac{m_{N}^{2}}{4P_{z}^{2}}\bigg{)}^{k}. \tag{17}\] As the Wilson coefficients are all real, it is clear from Eq. (14) that the real and imaginary parts of the reduced pseudo-ITD, \(\mathcal{M}_{\delta}^{f}(\lambda,z^{2},0)\), can be written solely in terms of the even and odd moments, respectively. Therefore, we choose to separately fit the real and imaginary parts of the reduced pseudo-ITD to \[\operatorname{Re}\mathcal{M}_{\delta}^{f}(\lambda,z^{2},P_{z}^{0}=0)=\sum_{n=0}^{\lfloor N_{\text{max}}/2\rfloor}\frac{C_{2n}^{\delta}(\mu^{2}z^{2})}{C_{0}^{\delta}(\mu^{2}z^{2})}\frac{(-i\lambda)^{2n}}{(2n)!}\,\langle x^{2n}\rangle_{\delta}^{f},\qquad\operatorname{Im}\mathcal{M}_{\delta}^{f}(\lambda,z^{2},P_{z}^{0}=0)=\sum_{n=1}^{\lfloor N_{\text{max}}/2\rfloor}\frac{C_{2n-1}^{\delta}(\mu^{2}z^{2})}{C_{0}^{\delta}(\mu^{2}z^{2})}\frac{(-i\lambda)^{2n-1}}{(2n-1)!}\,\langle x^{2n-1}\rangle_{\delta}^{f}, \tag{18}\] respectively, where the reduced moments \(\langle x^{n}\rangle_{\delta}^{f}\equiv\langle x^{n}\rangle_{\delta}^{f}/\langle x^{0}\rangle_{\delta}^{f}\) with \(n>0\) are the fit parameters. The \(n=0\) reduced moment is identically one, which is enforced explicitly in the fit. Additionally, \(g_{T}^{f}\equiv\langle x^{0}\rangle_{\delta}^{f}\), which implies that the reduced moments are the original moments in units of the tensor charge, and we express all results as such. We start the analysis as before in Ref. [62] by first assessing the validity of the leading-twist approximation (i.e. how important are the \(\mathcal{O}(\Lambda_{\text{QCD}}^{2}z^{2})\) corrections which are ignored).
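For concreteness, Eqs. (14)-(17) can be sketched as follows. This is an illustration we add here, under the convention that \(\mu z\) is dimensionless (i.e. \(z\) converted to \(\mathrm{GeV}^{-1}\)); the moment values are free inputs, not results of this work.

```python
from math import factorial, log, exp, pi

CF, GAMMA_E = 4.0 / 3.0, 0.5772156649015329

def c_nlo(n, z, mu, alpha_s):
    """Fixed-order NLO ratio-scheme Wilson coefficient of Eq. (15);
    mu in GeV and z in GeV^-1, so mu*z is dimensionless."""
    s1 = sum(1.0 / j for j in range(1, n + 1))          # harmonic sum
    s2 = sum(1.0 / j ** 2 for j in range(1, n + 1))
    s1p = sum(1.0 / j for j in range(2, n + 2))
    big_log = log(mu ** 2 * z ** 2 * exp(2 * GAMMA_E + 1) / 4.0)
    return 1.0 + alpha_s * CF / (2 * pi) * (2 * big_log * s1p
                                            - 2 * s1 ** 2 - 2 * s2)

def tmc(n, mN, Pz):
    """Target-mass-correction factor of Eq. (17)."""
    return sum(factorial(n - k) / (factorial(k) * factorial(n - 2 * k))
               * (mN ** 2 / (4 * Pz ** 2)) ** k for k in range(n // 2 + 1))

def reduced_itd(lam, z, mu, alpha_s, moments, mN, Pz):
    """Truncated leading-twist expansion of Eq. (14) with TMCs applied;
    moments[n] = <x^n>/<x^0>, so moments[0] = 1. Complex arithmetic
    keeps the (-i*lam)^n signs of the real/imaginary parts automatic."""
    return sum(c_nlo(n, z, mu, alpha_s) / c_nlo(0, z, mu, alpha_s)
               * (-1j * lam) ** n / factorial(n) * m_n * tmc(n, mN, Pz)
               for n, m_n in enumerate(moments))
```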
To this end, we perform fits to Eq. (18) at only a single value for \(z^{2}\) (referred to as a fixed-\(z^{2}\) analysis) and look for any dependence of the extracted moments on the specific value of \(z^{2}\). Observing little or no dependence on \(z^{2}\) would suggest that the higher-twist contributions are negligible within our statistics and that the leading-twist approximation is valid. As \(z\) increases, the higher-moment terms in Eq. (14) begin to become important. We can determine when these higher-moment terms are expected to be non-negligible by using the leading-twist OPE with the moments extracted from the global analysis of JAM3D-22 [28]. We found that including two \(n\neq 0\) moments in the OPE for both the real and imaginary parts is necessary for \(z>4a\sim 0.304\) fm, and that a third \(n\neq 0\) moment becomes necessary for \(z>8a\sim 0.608\) fm. However, using only one value of \(z^{2}\) allows for up to two moments to be fit to both the real and imaginary data, as the number of non-zero \(P_{z}\) considered is three. Therefore, for the fixed-\(z^{2}\) analysis, the largest value of \(z\) used is \(z=8a\), and we use two moments in the fits when \(z>4a\). Our results for the fixed-\(z^{2}\) analysis for both the isovector and isoscalar combinations are shown in Fig. 3. All results shown always include TMCs. Initially, we only considered the LO and fixed-order NLO Wilson coefficients in the analysis. However, the fixed-order NLO results show significant \(z\) dependence for \(\langle x\rangle\) at small values of \(z\). This is not completely unexpected, as discretization errors [93] and large logs can be significant for small values of \(z\); see Appendix B in Ref. [64]. Note, however, that in that work the analysis was done for the pion PDF, where the large logs become important at somewhat smaller \(z\) compared to the range of \(z\) where we see strong dependence here. To better understand the effects of large logs for the transversity PDF of the proton, we also use the NLO Wilson coefficients combined with renormalization group resummation (RGR) at next-to-leading-logarithm (NLL) accuracy, given by \[C^{\delta}_{n,\text{NLO+RGR}}(\mu^{2}z^{2})=C^{\delta}_{n,\text{NLO}}(\mu_{0}^{2}z^{2})\times e^{-\frac{\gamma_{n}^{(1)}}{\beta_{0}}\ln\frac{a_{s}(\mu)}{a_{s}(\mu_{0})}-\frac{-\beta_{1}\gamma_{n}^{(1)}+\beta_{0}\gamma_{n}^{(2)}}{\beta_{0}\beta_{1}}\ln\frac{\beta_{0}+\beta_{1}a_{s}(\mu)}{\beta_{0}+\beta_{1}a_{s}(\mu_{0})}}, \tag{19}\] where \(a_{s}=\alpha_{s}/(2\pi)\), \(\beta_{n}\) is the \(n\)th-order coefficient of the \(\beta\) function, and \(\gamma_{n}^{(1)}\) and \(\gamma_{n}^{(2)}\) are the anomalous dimensions of the \(n\)th moments [94; 95]. The RGR evolves the running coupling \(\alpha_{s}\) from the physical scale \(\mu_{0}=2e^{-\gamma_{E}}/z\) to the factorization scale \(\mu\). As can be seen from Fig. 3, the use of the NLO+RGR Wilson coefficients produces results mostly consistent with the NLO case at small \(z\). This suggests the significant \(z\) dependence at small \(z\) is mainly a discretization effect rather than due to large logs. Next, we move on to including a range of values of \(z\) in our fits, considering various ranges \(z\in[z_{\text{min}},z_{\text{max}}]\). With the extra data, we can include an extra moment in the fits for \(z>8a\). The results for both the isovector and isoscalar moments are shown in Fig. 4.
Given the small effect from the RGR, which also becomes unstable when \(a_{s}(\mu_{0})\) runs close to the Landau pole, we opt to use the fixed-order NLO Wilson coefficients, and we also always include TMCs, for the final results.

Figure 3: Results for the lowest four Mellin moments of the (upper) isovector and (lower) isoscalar PDF as a function of \(z\) from fits of the reduced pseudo-ITD at fixed \(z\) with \(n_{z}\in\{1,4,6\}\) to the leading-twist OPE using LO, fixed-order NLO, and NLO+RGR Wilson coefficients evaluated at \(\mu=2\) GeV, all of which include TMCs. Only the first two moments are extracted for \(z\leq 4a\). The horizontal dashed lines and bands correspond to the central values and errors, respectively, of the moments extracted from the global analysis of JAM3D-22 [28] defined at the scale \(Q=2\) GeV.

Figure 4: Results for the lowest four Mellin moments of the (upper) isovector and (lower) isoscalar PDF from uncorrelated fits of the reduced pseudo-ITD to the leading-twist OPE as a function of \(z_{\text{max}}\), with \(z\in[z_{\text{min}},z_{\text{max}}]\) and \(n_{z}\in\{1,4,6\}\). The results use the fixed-order NLO Wilson coefficients evaluated at \(\mu=2\) GeV and include TMCs. Only the first two moments are considered for \(z_{\text{max}}\leq 4a\). The next two moments, \(\langle x^{3}\rangle\) and \(\langle x^{4}\rangle\), are included for \(4a<z_{\text{max}}\leq 8a\), and two more moments, \(\langle x^{5}\rangle\) and \(\langle x^{6}\rangle\), are included for \(z_{\text{max}}>8a\). The horizontal dashed lines are the same as in Fig. 3.

Figure 5: The (left) real and (right) imaginary parts of the (upper) isovector and (lower) isoscalar reduced pseudo-ITD for the three momenta used in this work. The data come from our preferred fit strategy described in Sec. III.1. The fit ranges used are \(z\in[3a,10a]\). The shaded bands correspond to the fits using the leading-twist OPE with fixed-order NLO Wilson coefficients and TMCs evaluated at \(\mu=2\) GeV and including three moments in both the real and imaginary parts.

Figure 6: Summary plots of our extracted (left) isovector and (right) isoscalar moments from the leading-twist OPE approximation evaluated at \(\mu=2\) GeV for various fitting strategies, compared to the results from JAM3D-22 [28] defined at the scale \(Q=2\) GeV. Two \(z_{\rm max}\) are considered, as well as Wilson coefficients at LO and fixed-order NLO.

The extracted moments show some dependence on the choice of \(z_{\text{min}}\), as expected from the fixed-\(z^{2}\) analysis results; however, it is rather mild. Our preferred fit range is \(z\in[3a,10a]\), which removes most of the effects from discretization errors and large logs at small \(z\) and keeps \(z_{\text{max}}\) small enough to likely keep higher-twist contributions negligible. The results of these preferred fits are shown in Fig. 5. Finally, we show a summary of the results from different strategies and their comparison to JAM3D-22 [28] in Fig. 6. It is interesting to note the rather good agreement with the global analysis from JAM3D-22, especially for the lowest two moments, whereas we found tension for the lowest non-trivial moment in the unpolarized case [62]. However, comparing the matrix elements presented here with the unpolarized ones, there is some hint of smaller excited-state contamination in the matrix elements of this work, which may be responsible for the better agreement.
## V PDF from leading-twist OPE: DNN reconstruction ### Method It has been shown that we can extract the Mellin moments of the transversity PDFs, model-independently, by applying the OPE formula to the ratio-scheme renormalized matrix elements. Limited by the finite \(\lambda=zP_{z}\), the lattice data is only sensitive to the first few moments, while the higher ones are factorially suppressed. As a result, to predict the \(x\) dependence of the PDFs, one needs to introduce additional prior knowledge or a reasonable choice of model. Commonly used models are usually of the form \[q(x)=Ax^{\alpha}(1-x)^{\beta}(1+\text{sub-leading terms}), \tag{20}\] which is inspired by the end-point behavior of the PDFs. However, the sub-leading terms may play an important role, particularly in moderate regions of \(x\), and one may find several reasonable models for the sub-leading terms that give acceptable fits to the data. But, unless the data is precise, the model could introduce an uncontrolled bias. The use of a deep neural network (DNN), which is capable of approximating any functional form given a sufficiently complex network structure, is a flexible way to maximally avoid any model bias -- though it cannot remove the bias entirely, as a neural network is still a model. As proposed in Ref. [62], we parametrize the PDFs by \[q(x;\alpha,\beta,\mathbf{\theta})\equiv Ax^{\alpha}(1-x)^{\beta}[1+\epsilon(x)\sin(f_{\text{DNN}}(x,\mathbf{\theta}))], \tag{21}\] where \(f_{\text{DNN}}(x,\mathbf{\theta})\) is a DNN: a multistep iterative function, constructed layer by layer. The initial layer consists of a single node, denoted as \(a_{1}^{1}\), which represents the input variable \(x\). Subsequently, in the hidden layers, a linear transformation is performed using the equation \[z_{i}^{(l)}=b_{i}^{(l)}+\sum_{j}W_{ij}^{(l)}a_{j}^{(l-1)}. \tag{22}\] Here, \(z_{i}^{(l)}\) is the intermediate result obtained by adding the bias term \(b_{i}^{(l)}\) to the sum of the weighted inputs from the previous layer, represented by \(W_{ij}^{(l)}a_{j}^{(l-1)}\). Following this linear transformation, a nonlinear activation function \(\sigma^{(l)}(z_{i}^{(l)})\) is applied, and the resulting output serves as the input to the next layer, represented by \(a_{i}^{(l)}\). We specifically employed the exponential linear unit activation function \(\sigma_{\rm elu}(z)=\theta(-z)(e^{z}-1)+\theta(z)z\). Lastly, the final layer generates the output \(f_{\text{DNN}}(x,\mathbf{\theta})\), which is subsequently utilized to evaluate \(q(x;\alpha,\beta,\mathbf{\theta})\). The lower indices \(i=1,...,n^{(l)}\) are used to identify specific nodes within the \(l\)th layer, where \(n^{(l)}\) denotes the number of nodes in the \(l\)th layer. The upper indices, enclosed in parentheses, \(l=1,...,N\), are employed to indicate the individual layers, where \(N\) corresponds to the number of layers, representing the depth of the DNN. The parameters of the DNN, namely the biases \(b_{i}^{(l)}\) and weights \(W_{ij}^{(l)}\), collectively represented by \(\mathbf{\theta}\), are subject to optimization (training) by minimizing the loss function defined as \[J(\mathbf{\theta})\equiv\frac{\eta}{2}\mathbf{\theta}\cdot\mathbf{\theta}+\frac{1}{2}\chi^{2}(\mathbf{\theta},\alpha,\beta,...). \tag{23}\] The first term in the loss function serves the purpose of preventing overfitting and ensuring that the function represented by the DNN remains well-behaved and smooth. The details of the \(\chi^{2}\) function can be found in the appendix of Ref. [62].
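A minimal sketch of the parametrization of Eqs. (21) and (22), with the \(\{1,16,16,1\}\) structure and elu activation described above, is given below. The initialization, parameter values, and the linear output layer are our own illustrative assumptions, not details taken from the original analysis.

```python
import numpy as np

def elu(z):
    """Exponential linear unit used in the text."""
    return np.where(z < 0, np.exp(z) - 1.0, z)

def f_dnn(x, weights, biases):
    """Feed-forward pass of Eq. (22) for a {1, 16, 16, 1} network.
    The output layer is taken as linear (an assumption; the text does
    not specify the output activation)."""
    a = np.atleast_2d(x)              # input layer: a^(1) = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = elu(W @ a + b)            # hidden layers: z = W a + b, a = elu(z)
    W, b = weights[-1], biases[-1]
    return (W @ a + b).ravel()

def q_model(x, A, alpha, beta, weights, biases, eps=0.1):
    """PDF parametrization of Eq. (21) with constant epsilon(x) = eps."""
    return (A * x ** alpha * (1 - x) ** beta
            * (1 + eps * np.sin(f_dnn(x, weights, biases))))

# Hypothetical initialization for the {1, 16, 16, 1} structure
rng = np.random.default_rng(0)
shapes = [(16, 1), (16, 16), (1, 16)]
weights = [0.1 * rng.standard_normal(s) for s in shapes]
biases = [np.zeros((s[0], 1)) for s in shapes]
x = np.linspace(1e-3, 1.0 - 1e-3, 50)
q = q_model(x, A=1.0, alpha=-0.2, beta=3.0, weights=weights, biases=biases)
```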
Due to the limited statistics, a simple network structure such as \(\{1,16,16,1\}\) (indicating the number of nodes in each layer) is sufficient to provide a smooth approximation of the sub-leading contribution. In practice, we experimented with different values of \(\eta\) ranging from \(10^{0}\) to \(10^{-2}\) and considered network structures of sizes \(\{1,16,16,1\}\), \(\{1,16,16,16,1\}\), and \(\{1,32,32,1\}\). However, the results remained consistent across these variations. Therefore, we opted for \(\eta=0.1\) and selected a DNN structure with four layers, including the input and output layers, specified as \(\{1,16,16,1\}\). To balance the model bias and data precision, the contribution of the DNN is limited by \(|\epsilon(x)\sin(f_{\text{DNN}})|\lesssim\epsilon(x)\), and it can be fully removed by setting \(\epsilon(x)=0\). It is also possible to control the size of the DNN-parametrized sub-leading contribution at each specific \(x\). However, in this work, given the limited statistics, we simply fix \(\epsilon(x)\) to be a small constant, e.g. 0.1. ### DNN represented PDF To train the PDFs, we re-write the short-distance factorization as \[\tilde{h}_{\delta}^{f}(z,P_{z},\mu)=\int_{-1}^{1}d\alpha\,\mathcal{C}^{\delta}(\alpha,\mu^{2}z^{2})\,\int_{-1}^{1}dy\,e^{-iy\alpha\lambda}\delta q^{f}(y,\mu), \tag{24}\] where the renormalized matrix elements \(\tilde{h}_{\delta}^{f}(z,P_{z},\mu)\) are directly connected to the \(x\)-dependent PDFs \(\delta q^{f}(x,\mu)\), and \(\mathcal{C}^{\delta}(\alpha,\mu^{2}z^{2})\) can be determined from the Wilson coefficients \(C_{n}^{\delta}(\mu^{2}z^{2})\) [46; 95]. In this section we use the NLO fixed-order Wilson coefficients. In our case, the real and imaginary parts of the reduced pseudo-ITD \(\mathcal{M}_{\delta}^{f}(\lambda,z^{2},P_{z}^{0}=0)\) are related to \(\delta q^{f,-}(x)\) and \(\delta q^{f,+}(x)\), defined as \[\delta q^{f,-}(x)\equiv\delta q^{f}(x)-\delta q^{\bar{f}}(x),\qquad\delta q^{f,+}(x)\equiv\delta q^{f}(x)+\delta q^{\bar{f}}(x), \tag{25}\] in the region \(x\in[0,1]\), where \(\delta q^{f}(x)\) and \(\delta q^{\bar{f}}(x)\) are the quark and anti-quark transversity distributions of flavor \(f\), respectively. However, as observed in the literature [54; 55], with the current lattice accuracy, the anti-quark distributions are mostly consistent with zero. We therefore ignore the anti-quark contribution and fit the real and imaginary parts together to \(\delta q^{f}(x;\alpha,\beta,\mathbf{\theta})=\delta q^{f,-}(x)=\delta q^{f,+}(x)\).

Figure 7: The DNN training results using the (left) isovector and (right) isoscalar reduced pseudo-ITD matrix elements in the range \(z\in[2a,10a]\) for the (upper) real part and (lower) imaginary part are shown. The results using \(\epsilon=0\) and \(\epsilon=0.1\) are shown as the solid and dotted curves, respectively.

Figure 8: The DNN represented PDFs using the matrix elements in the range \(z\in[2a,10a]\) for the (upper) isovector and (lower) isoscalar cases are shown. The results with \(\epsilon=0\) and \(\epsilon=0.1\) are shown as the red and blue bands, respectively.

Figure 9: The DNN represented PDFs using the matrix elements in the range \(z\in[2a,z_{\rm max}]\) for the (upper) isovector and (lower) isoscalar cases are shown. For comparison, we also show the global analysis results from JAM3D-22 [28].
We use the matrix elements in the range \(z\in[2a,z_{\rm max}]\) for the parameter training, skipping \(z=a\) in order to avoid the most serious discretization effects. In Fig. 7, we show the fit results for \(z_{\rm max}=10a\) with \(\epsilon=0\) and \(\epsilon=0.1\), which both lead to a good description of the data. The corresponding PDFs are shown in Fig. 8, and the results from \(\epsilon=0.1\) exhibit slightly larger errors but mostly overlap with the \(\epsilon=0\) case. It is evident that the effects of the DNN were minimal, which is likely a result of the limited statistics. We anticipate the DNN playing a more significant role when more precise data become available. In what follows, we use the results with \(\epsilon=0.1\). The short-distance factorization could suffer from power corrections at large values of \(z^{2}\). To check this, we vary the \(z_{\rm max}\) used to train the PDFs to investigate such systematic errors. As shown in Fig. 9, upon slightly increasing \(z_{\rm max}\), the results do not change significantly within the large errors, suggesting that higher-twist effects are less important compared to the statistics of our data. For comparison, we also show the most recent global analysis results from JAM3D-22 [28], and overall agreement is observed. ## VI \(x\)-space matching We now move on to our final method for extracting information on the transversity PDF. This method utilizes LaMET to match the quasi-PDF -- determined from the Fourier transform of hybrid-renormalized matrix elements -- to the light-cone PDF. ### Hybrid renormalization It is well known that the bare matrix elements can be multiplicatively renormalized by removing the linear divergence originating from the Wilson-line self-energy and the overall logarithmic divergence, \[h_{\delta}^{f}(z,P_{z})=Z_{T}(a)e^{-\delta m(a)z}e^{-\bar{m}_{0}z}\tilde{h}_{\delta}^{f}(z,P_{z}), \tag{26}\] where \(\tilde{h}_{\delta}^{f}\) is the renormalized matrix element, \(\delta m(a)\) contains the Wilson-line self-energy linear UV divergences, \(Z_{T}(a)\) contains the logarithmic UV divergences, and \(\bar{m}_{0}\) is used to fix the scheme dependence present in \(\delta m(a)\). The Wilson-line self-energy divergence term \(\delta m(a)\) can be extracted from physical matrix elements, like those involving Wilson loops. Here we use the value \(a\delta m(a)=0.1597(16)\) determined from the static quark-antiquark potential taken from Refs. [96; 97; 98; 99; 100]. The scheme dependence in \(\delta m(a)\) can be attributed to a renormalon ambiguity, but can be fixed to a particular scheme by an appropriate determination of \(\bar{m}_{0}\) [61; 63], and here we choose the \(\overline{\rm MS}\) scheme. Our strategy for determining \(\bar{m}_{0}\) is to compare the \(P_{z}=0\) bare matrix elements \(h_{\delta}^{f}(z,P_{z}=0)\) to the Wilson coefficient \(C_{0}^{\delta}(\mu^{2}z^{2})\) computed in the \(\overline{\rm MS}\) scheme, \[h_{\delta}^{f}(z,P_{z}=0)=Z_{T}(a)e^{-\delta m(a)z}e^{-\bar{m}_{0}z}C_{0}^{\delta}(\mu^{2}z^{2}). \tag{27}\] In order to remove \(Z_{T}(a)\) and hopefully cancel some of the discretization effects, we next divide Eq. (27) by itself with \(z\) shifted by one unit of the lattice spacing. Then, after rearranging, we arrive at \[e^{a\delta m(a)}\frac{h_{\delta}^{f}(z,P_{z}=0,a)}{h_{\delta}^{f}(z-a,P_{z}=0,a)}=e^{-a\bar{m}_{0}}\frac{C_{0}^{\delta}(\mu^{2}z^{2})}{C_{0}^{\delta}(\mu^{2}(z-a)^{2})}. \tag{28}\] Before proceeding, we must first discuss the specifics of the Wilson coefficients used.
The renormalon ambiguity, by definition, is an artifact that arises from the summation prescription of the perturbative series in the QCD coupling \(\alpha_{s}\). Therefore, we use the Wilson coefficients after leading-renormalon resummation (LRR), given in Ref. [61] under the large-\(\beta_{0}\) approximation by \[C_{0,{\rm LRR}}^{\delta}(\alpha_{s}(\mu),z^{2}\mu^{2})=\int_{0,{\rm PV}}^{\infty}d\omega\,e^{-\frac{4\pi\omega}{9\alpha_{s}(\mu)}}\frac{2C_{F}}{\beta_{0}}\frac{1}{\omega}\times\left[\frac{\Gamma(1-\omega)e^{\frac{5}{3}\omega}(z^{2}\mu^{2}/4)^{\omega}}{(1-2\omega)\Gamma(1+\omega)}-1\right]. \tag{29}\] To be consistent with the known fixed-order Wilson coefficients at NLO, in practice we use \[C_{0}^{\delta\prime}(\alpha_{s}(\mu),z^{2}\mu^{2})=C_{0,{\rm LRR}}^{\delta}(\alpha_{s}(\mu),z^{2}\mu^{2})+\left[C_{0,{\rm NLO}}^{\delta}(\alpha_{s}(\mu),z^{2}\mu^{2})-C_{0,{\rm LRR},{\rm NLO}}^{\delta}(\alpha_{s}(\mu),z^{2}\mu^{2})\right], \tag{30}\] where \(C^{\delta}_{0,{\rm LRR},{\rm NLO}}\) is the NLO expansion of \(C^{\delta}_{0,{\rm LRR}}\), and the fixed-order NLO Wilson coefficient is given by \[C^{\delta}_{0,{\rm NLO}}(\alpha_{s}(\mu),z^{2}\mu^{2})=1+\frac{\alpha_{s}(\mu)}{2\pi}C_{F}\left[2\ln\!\left(\frac{\mu^{2}z^{2}e^{2\gamma_{E}}}{4}\right)+2\right]. \tag{31}\] In addition, we can also resum the large logarithms \(\ln\!\left(\mu^{2}z^{2}e^{2\gamma_{E}}/4\right)\) by the renormalization group resummation (RGR) [101]. Using these coefficients, the values of \(\bar{m}_{0}\) determined using Eq. (28) are shown in Fig. 10 as a function of \(z\). The bands of NLO+LRR come from the scale variation of \(\mu\) in the Wilson coefficients by a factor of \(\sqrt{2}\). When using RGR, the running coupling is evolved from the physical scale \(\mu_{0}=2ke^{-\gamma_{E}}/z\) to the factorization scale \(\mu\) [48; 101], and we vary \(k\in\{1/\sqrt{2},1,\sqrt{2}\}\) to estimate the scale uncertainty. It can be observed that the scale uncertainties in the RGR case are smaller at small \(z\), benefiting from the resummation, while they become larger at large \(z\) as the scale approaches the Landau pole. In addition, plateaus can be observed for \(z\geq 3a\sim 0.228\) fm, where the discretization effects become negligible, though the uncertainty bands for the NLO+LRR+RGR case are larger with the running coupling when \(z>0.25\) fm. To avoid discretization effects at small \(z\) and the Landau pole at large \(z\), we choose values at \(z=3a\), which give \(\bar{m}_{0}=28(2)\) MeV and \(129(2)\) MeV for the NLO+LRR and NLO+LRR+RGR cases, respectively.

Figure 10: The \(\bar{m}_{0}\) determined using NLO+LRR and NLO+LRR+RGR Wilson coefficients are shown. The bands come from the scale variation.

In Fig. 11, we show the data points defined on the left-hand side of Eq. (28) using the computed matrix elements and \(\delta m(a)\), along with the ratios defined on the right-hand side of Eq. (28) using the \(\bar{m}_{0}\) chosen above and Wilson coefficients at NLO+LRR (orange bands) and NLO+LRR+RGR (red bands).

Figure 11: The ratio of \(P_{z}=0\) matrix elements (black points) defined in Eq. (28) are shown. The bands are inferred from the NLO+LRR and NLO+LRR+RGR Wilson coefficients, respectively, with scale variation.

The hybrid-scheme renormalized matrix elements are given by \[\tilde{h}^{f}_{\delta}(\lambda,\lambda_{s},P_{z},\mu)=\theta(z_{s}-z)\frac{h^{f}_{\delta}(z,P_{z},a)}{h^{f}_{\delta}(z,0,a)}+\theta(z-z_{s})\frac{h^{f}_{\delta}(z,P_{z},a)}{h^{f}_{\delta}(z_{s},0,a)}e^{(\delta m(a)+\bar{m}_{0})(z-z_{s})}, \tag{32}\] with \(z_{s}=3a\).
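The piecewise structure of Eq. (32) is simple to sketch. The following is an illustration we add here, not the analysis code; the units of \(\delta m\), \(\bar{m}_{0}\), and \(z\) must be kept mutually consistent (e.g. fm\({}^{-1}\) and fm).

```python
import numpy as np

def hybrid_renormalize(z, zs, h_Pz, h_0, dm, m0bar):
    """Hybrid-scheme renormalization, Eq. (32).

    h_Pz(z), h_0(z): bare matrix elements h(z, P_z) and h(z, 0);
    dm, m0bar: delta m(a) and m0bar in units of 1/[z].
    """
    if z <= zs:
        # short distances: plain ratio scheme
        return h_Pz(z) / h_0(z)
    # long distances: fixed denominator h(z_s, 0) plus the restored exponential
    return h_Pz(z) / h_0(zs) * np.exp((dm + m0bar) * (z - zs))
```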
In Fig. 12 we show the hybrid renormalized matrix elements for the isovector case (left panels) and isoscalar case (right panels) for momenta \(n_{z}=4,6\). It can be seen that the large-momentum matrix elements show a slow \(P_{z}\) evolution and a good scaling in \(\lambda\) within the statistical errors, suggesting we have good convergence in momentum. ### Extrapolation to large \(\lambda\) Due to the finite extent of the lattice, one can only calculate the matrix elements up to some maximum \(\lambda_{\rm max}\equiv z_{\rm max}P_{z}^{\rm max}\). Further, the signal deteriorates as \(\lambda\) is increased. This poses a problem, as the matrix elements need to be Fourier transformed to obtain the quasi-PDF, and truncating the integral will lead to unphysical oscillations in the resulting quasi-PDF. Therefore, we choose to perform an extrapolation of the data to infinity before performing the Fourier transform. In practice, we estimate the Fourier transform with a discrete sum up to some value \(\lambda_{L}=z_{L}P_{z}\), at which point an integral of the extrapolated function takes over. There are a few considerations when deciding upon an appropriate value for \(\lambda_{L}\). In this work, we choose a value in the region where either the signal is no longer reliable or the values of the matrix elements are nearly consistent with zero. As in our previous work in Ref. [62], the extrapolation itself is done by performing a fit in this region using the exponential decay model \[\frac{Ae^{-m_{\rm eff}\lambda/P_{z}}}{|\lambda|^{d}}, \tag{33}\] where the fit parameters are constrained by \(m_{\rm eff}>0.1\) GeV, \(A>0\), and \(d>0\). Using this constraint on \(m_{\rm eff}\) helps to ensure the extrapolation falls off at a reasonable rate and does not significantly change the results in the regions of \(x\) for which we trust the LaMET procedure. A detailed derivation which motivates the use of this model can be found in App. B of Ref. [63]. Results of the extrapolation fits for the largest two momenta are shown in Fig. 13.
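As an illustration of this extrapolation step, a fit of the model in Eq. (33) with the stated parameter constraints could be set up as follows. The tail data here are invented placeholders, not the lattice data of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

PZ = 1.53  # GeV; a hypothetical choice of momentum for this sketch

def extrap_model(lam, A, m_eff, d):
    """Eq. (33): A * exp(-m_eff * lam / P_z) / |lam|^d.
    lam = z*P_z is dimensionless, so lam/P_z = z is in GeV^-1
    and m_eff carries units of GeV."""
    return A * np.exp(-m_eff * lam / PZ) / np.abs(lam) ** d

# placeholder tail data standing in for the large-lambda matrix elements
lam = np.linspace(8.0, 14.0, 7)
h_tail = extrap_model(lam, 0.5, 0.3, 1.0)
h_tail += 0.005 * np.random.default_rng(1).standard_normal(lam.size)

# bounds enforce A > 0, m_eff > 0.1 GeV, and d > 0, as in the text
popt, pcov = curve_fit(extrap_model, lam, h_tail, p0=[0.5, 0.3, 1.0],
                       bounds=([0.0, 0.1, 0.0], np.inf))
```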
### The quasi-PDF from a Fourier transform The quasi-PDF is defined as the Fourier transform of the renormalized matrix elements, \[\delta\tilde{q}^{f}(y,z_{s},P_{z},\mu)=\int\frac{dzP_{z}}{2\pi}e^{iyP_{z}z}\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu), \tag{34}\] and is the LO approximation to the light-cone PDF within the LaMET framework. To perform this integral, we first exploit the symmetry of the renormalized matrix elements about \(z=0\), i.e. \(\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)=\tilde{h}^{f}_{\delta}(-z,z_{s},P_{z},\mu)^{*}\), to rewrite the integral only over positive \(z\), \[\delta\tilde{q}^{f}(y,z_{s},P_{z},\mu)=\int_{0}^{\infty}\frac{dzP_{z}}{\pi}\operatorname{Re}\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\cos(zP_{z}y)-\int_{0}^{\infty}\frac{dzP_{z}}{\pi}\operatorname{Im}\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\sin(zP_{z}y). \tag{35}\] Finally, we split the integrals up into two regions: i) \(0\leq z\leq z_{L}\), where the integral is performed via a sum over the lattice data for \(\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\); and ii) \(z_{L}<z<\infty\), where the integral is performed using the resulting extrapolation for \(\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\), \[\delta\tilde{q}^{f}(y,z_{s},P_{z},\mu)=\left[\,\sum_{z=0}^{z_{L}^{\rm re}/a}\frac{z_{L}^{\rm re}P_{z}}{\pi N^{\rm re}}+\int_{z_{L}^{\rm re}}^{\infty}\frac{dzP_{z}}{\pi}\right]\operatorname{Re}\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\cos(zP_{z}y)-\left[\,\sum_{z=0}^{z_{L}^{\rm im}/a}\frac{z_{L}^{\rm im}P_{z}}{\pi N^{\rm im}}+\int_{z_{L}^{\rm im}}^{\infty}\frac{dzP_{z}}{\pi}\right]\operatorname{Im}\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\sin(zP_{z}y), \tag{36}\] where \(z_{L}^{\rm re}\) and \(z_{L}^{\rm im}\) are the values of \(z\) at which the extrapolation integral takes over for the real and imaginary parts of \(\tilde{h}^{f}_{\delta}(z,z_{s},P_{z},\mu)\), respectively, and \(N^{\rm re/im}\equiv z_{L}^{\rm re/im}/a+1\).

Figure 12: The (upper) real and (lower) imaginary parts of the renormalized matrix elements in the hybrid scheme for the (left) isovector and (right) isoscalar combinations.

Figure 13: The (upper) isovector and (lower) isoscalar hybrid renormalized matrix elements with (left) \(P_{z}=4\frac{2\pi}{L}\) and (right) \(P_{z}=6\frac{2\pi}{L}\). The hatches show the range of data used for the fit to the extrapolation model and the bands are the result of that fit starting from \(\lambda_{L}\). The hybrid renormalized data makes use of the NLO+LRR+RGR Wilson coefficients computed at \(\mu_{0}=2e^{-\gamma_{E}}/z\) (i.e. \(k=1\)) and subsequently evolved to \(\mu=2\) GeV.
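A sketch of the discrete-sum-plus-tail Fourier transform of Eq. (36), simplified to a common \(z_{L}\) for the real and imaginary parts and with \(z\) in \(\mathrm{GeV}^{-1}\) (our illustration, not the analysis code):

```python
import numpy as np
from scipy.integrate import quad

def quasi_pdf(y, z_pts, h_re, h_im, tail_re, tail_im, Pz):
    """Eq. (36) with a common z_L for the real and imaginary parts.

    z_pts: separations 0, a, ..., z_L in GeV^-1; h_re, h_im: renormalized
    matrix elements on those points; tail_re, tail_im: extrapolation
    functions of z fitted to the large-lambda data (cf. Eq. (33)).
    """
    zL, N = z_pts[-1], len(z_pts)   # N = z_L/a + 1
    # region i): discrete sum over the lattice data
    res = zL * Pz / (np.pi * N) * np.sum(h_re * np.cos(z_pts * Pz * y))
    res -= zL * Pz / (np.pi * N) * np.sum(h_im * np.sin(z_pts * Pz * y))
    # region ii): integrals of the extrapolated tails
    res += Pz / np.pi * quad(lambda z: tail_re(z) * np.cos(z * Pz * y),
                             zL, np.inf)[0]
    res -= Pz / np.pi * quad(lambda z: tail_im(z) * np.sin(z * Pz * y),
                             zL, np.inf)[0]
    return res
```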
### Matching to the light-cone PDF The final step in obtaining the light-cone PDF from the quasi-PDF is to match them perturbatively in \(\alpha_{s}(\mu)\) as \[\delta q^{f}(x,\mu)=\int_{-\infty}^{\infty}\frac{dy}{|y|}\mathcal{C}_{\delta}^{-1}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)\delta\tilde{q}^{f}(y,z_{s},P_{z},\mu)+\mathcal{O}\left(\frac{\Lambda_{\rm QCD}^{2}}{x^{2}P_{z}^{2}},\frac{\Lambda_{\rm QCD}^{2}}{(1-x)^{2}P_{z}^{2}}\right)\equiv\mathcal{C}_{\delta}^{-1}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)\otimes\delta\tilde{q}^{f}(y,z_{s},P_{z},\mu)+\mathcal{O}\left(\frac{\Lambda_{\rm QCD}^{2}}{x^{2}P_{z}^{2}},\frac{\Lambda_{\rm QCD}^{2}}{(1-x)^{2}P_{z}^{2}}\right), \tag{37}\] where \(\mathcal{C}_{\delta}^{-1}(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s})\) is the inverse of the perturbative matching kernel for the transversity distribution, and the notation \(\otimes\) is used as short-hand for the integral. One caveat of this method is that the leading power corrections to the matching can be seen to be enhanced when \(x\) is near 0 or 1. Therefore, we must be careful to estimate the range in \(x\) in which these power corrections become significant and hence spoil the matching procedure. To see how we obtain the inverse matching kernel, we start with the perturbative expansion of the matching kernel itself, \[\mathcal{C}_{\delta}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)=\delta\left(\frac{x}{y}-1\right)+\sum_{n=1}^{\infty}\alpha_{s}^{n}\mathcal{C}_{\delta}^{(n)}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right), \tag{38}\] where only the NLO Wilson coefficients are known, i.e. we have only \(\mathcal{C}_{\delta}^{(1)}(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s})\) [50; 51; 95; 102]. Next, by imposing the definition of the inverse matching kernel, \[\mathcal{C}_{\delta}^{-1}\left(\frac{x}{z},\frac{\mu}{zP_{z}},|z|\lambda_{s}\right)\otimes\mathcal{C}_{\delta}\left(\frac{z}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)=\delta\left(\frac{x}{y}-1\right), \tag{39}\] we find \[\mathcal{C}_{\delta}^{-1}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)=\delta\left(\frac{x}{y}-1\right)-\alpha_{s}\mathcal{C}_{\delta}^{(1)}\left(\frac{x}{y},\frac{\mu}{yP_{z}},|y|\lambda_{s}\right)+\mathcal{O}(\alpha_{s}^{2}). \tag{40}\] As done in our previous work, Ref. [62], we approximate the integration by defining the integral on a finite-length grid, which can be represented via matrix multiplication with a matching matrix \(C_{xy}^{\delta}\), to obtain the light-cone PDF at NLO as \[\delta q^{f}(x,\mu)=\delta\tilde{q}^{f}(x,\mu)-\delta y\sum_{y}C_{xy}^{\delta,\text{NLO}}\delta\tilde{q}^{f}(y,\mu), \tag{41}\] where \(\delta y=0.001\) is the grid size used for the integration. For the matching coefficients themselves, we also implement LRR and RGR, where the RGR involves running the coupling from the physical scale \(\mu_{0}\), with three choices of \(k\in\{1/\sqrt{2},1,\sqrt{2}\}\), to \(\mu=2\) GeV, which allows for assessing the systematics due to scale variation.
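The NLO inversion and subtraction of Eq. (41) amount to a single matrix operation. A sketch is given below; the kernel is left as a placeholder, since the actual NLO matching kernel of Refs. [50; 51; 95; 102], including its plus-prescription pieces, is lengthy.

```python
import numpy as np

def match_to_lightcone(q_quasi, x_grid, kernel_nlo, dy=0.001):
    """NLO matching of Eq. (41):
    delta q(x) = q~(x) - dy * sum_y C^{NLO}_{xy} q~(y).

    kernel_nlo(x, y) stands in for the NLO matching matrix evaluated
    on the grid; a concrete kernel would be inserted in practice.
    """
    C = np.array([[kernel_nlo(x, y) for y in x_grid] for x in x_grid])
    return np.asarray(q_quasi) - dy * C @ np.asarray(q_quasi)
```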
However, the RGR results also break down at small \(x\), as seen by the onset of oscillations near \(x\sim 0.2\), giving a natural boundary below which we no longer trust the results.

## VII Conclusions

In this paper we have presented various extractions of the transversity isovector and isoscalar quark PDFs, and their lowest few moments, of the proton from lattice QCD using a physical pion mass. This work is a continuation towards the ultimate goal of uncovering the full structure of the proton from first principles. Additionally, the matrix elements needed in this work also allow an estimate of the tensor charge \(g_{T}\) to be extracted, and our results show reasonable agreement with other lattice extractions, as shown in Tab. 2. However, our calculations are performed at a single value of the lattice spacing, and for the isoscalar case we neglected the disconnected diagrams.

Regarding the transversity isovector and isosinglet PDFs, in our first method, we utilized the leading-twist OPE expansion of the reduced pseudo-ITD to extract the first few Mellin moments. We found excellent agreement with the global analysis from JAM3D-22 for the lowest two moments and minor tensions for the next two moments. Higher moments could not be reliably extracted. Next, we used the pseudo-PDF approach, based on short-distance factorization, to extract an \(x\)-dependent PDF and utilized a deep neural network to overcome the inverse problem while remaining as unbiased as possible. We saw some mild tension with the results from JAM3D-22 for a few small ranges of \(x\) but otherwise mostly saw agreement. Finally, we used the quasi-PDF approach, based on LaMET, to calculate the \(x\)-dependent PDF from hybrid-scheme renormalized matrix elements. For this we found reasonably good agreement with JAM3D-22 in the moderate region of \(x\), but there is significant tension with the results from Radici, Bacchetta.

A number of systematics are ignored here and are left for future work. These include the use of a single lattice spacing, NNLO corrections in \(\alpha_{s}\), power corrections from the use of finite momentum, and isoscalar disconnected diagrams. We can, however, comment on the expected significance of these systematics, and we have good reason to expect their effects to be rather small. Regarding the NNLO corrections, we saw in Fig. 15 that the NLO corrections were relatively mild in the middle \(x\) regions, suggesting that NNLO corrections will be quite small, as seen for the unpolarized proton distribution in our previous work [62]. And, for the disconnected diagrams, we discussed earlier the expectation that the effects of these diagrams for local operator matrix elements would be smaller than the statistical error, based on the study done in Refs. [74; 75]. Further, in our study of the unpolarized proton distribution [62], we saw no evidence of convergence in the momentum used. However, as seen in Fig. 14, the convergence in momentum is much more convincing for the transversity distribution.

###### Acknowledgements.

ADH acknowledges useful discussions with Fernando Romero-Lopez. We would also like to thank Rui Zhang for discussions on our results. This material is based upon work supported by The U.S. Department of Energy, Office of Science, Office of Nuclear Physics through _Contract No. DE-SC0012704, Contract No.
DE-AC02-06CH11357_, and within the frameworks of Scientific Discovery through Advanced Computing (SciDAC) award _Fundamental Nuclear Physics at the Exascale and Beyond_ and the Topical Collaboration in Nuclear Theory _3D quark-gluon structure of hadrons: mass, spin, and tomography_. SS is supported by the National Science Foundation under CAREER Award PHY-1847893 and by the RHIC Physics Fellow Program of the RIKEN BNL Research Center. YZ is partially supported by the 2023 Physical Sciences and Engineering (PSE) Early Investigator Named Award program at Argonne National Laboratory. This research used awards of computer time provided by: The INCITE program at Argonne Leadership Computing Facility, a DOE Office of Science User Facility operated under Contract No. DE-AC02-06CH11357; the ALCC program at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725; the Delta system at the National Center for Supercomputing Applications through allocation PHY210071 from the ACCESS program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. Computations for this work were carried out in part on facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. Part of the data analysis was carried out on Swing, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. The computation of the correlators was carried out with the Qlua software suite [70], which utilized the multigrid solver in QUDA [71; 72]. The analysis of the correlation functions was done with the use of lsqfit [103] and gvar [104]. Several of the plots were created with Matplotlib [105].

## Appendix A Renormalization constant \(Z_{T}\) in RI-MOM scheme

Here we discuss the extraction of the renormalization constant \(Z_{T}\) in the RI-MOM scheme and its subsequent conversion to the \(\overline{\text{MS}}\) scheme at the scale \(\mu=2\) GeV. The method starts by calculating matrix elements between off-shell quark states with lattice momenta

\[ap_{\mu}=\frac{2\pi}{L_{\mu}}(n_{\mu}+\frac{1}{2}\delta_{\mu 0}), \tag{10}\]

where \(L_{\mu}\) is the size of the lattice in the \(\mu\)th direction, \(n_{\mu}\in\mathbb{Z}\), and \(\mu=0\) is the temporal direction. These matrix elements are computed in the Landau gauge. The renormalization point is given by \((ap_{R})^{2}\equiv\sum_{\mu=0}^{3}\sin^{2}(ap_{\mu})\), which is inspired by the lattice dispersion relation and helps to reduce discretization errors. Our results for \(Z_{T}\) are shown in Fig. 16. There is a significant dependence on \(p_{R}^{2}\) caused by nonperturbative effects associated with condensates and by discretization errors, as can be clearly seen from the "fishbone" structure at large \(p_{R}^{2}\). We had difficulty appropriately modeling these effects and instead chose to form the ratio \(Z_{T}/Z_{V}\) in an attempt to cancel them as much as possible, similar to what was done in Ref. [106]. We then use the conversion factor from RI-MOM to \(\overline{\text{MS}}\) for the tensor current computed in Ref. [107] to three loops. The resulting renormalization factors are then in the \(\overline{\text{MS}}\) scheme at the scale \(\mu^{2}=p_{R}^{2}\), and thus we evolve them to the same scale using the evolution function computed at two loops in Ref. [108].
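For concreteness, the kinematic definitions above can be sketched as follows; the lattice extents and mode numbers here are arbitrary toy values, not our ensemble:

```python
import numpy as np

# Sketch of the RI-MOM kinematics in Eq. (10); L and n are toy values.
L = np.array([64, 48, 48, 48])   # lattice extents (L_0 temporal, L_1..L_3 spatial)
n = np.array([3, 2, 2, 2])       # integer mode numbers n_mu

# a p_mu = (2 pi / L_mu) (n_mu + delta_{mu 0}/2); the 1/2 applies only to mu = 0
ap = 2.0 * np.pi / L * (n + 0.5 * (np.arange(4) == 0))

# renormalization point (a p_R)^2 = sum_mu sin^2(a p_mu)
apR_sq = float(np.sum(np.sin(ap) ** 2))
```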
We evolve \(Z_{T}\) to the scale \(\mu=2\) GeV, as this is a commonly used scale for reporting results of nucleon charges. The resulting ratio \(Z_{T}/Z_{V}\) after conversion to \(\overline{\text{MS}}\) at \(\mu=2\) GeV is then fit to

\[Z_{T}/Z_{V}+B/p_{R}^{2}+D_{1}p_{R}+D_{2}p_{R}^{2}, \tag{11}\]

where the last two terms incorporate discretization effects. In order to remove bias from our choice of fit, we consider six different variations of this fit form, corresponding to setting various terms to zero. Specifically, we consider a linear form (i.e. \(D_{2}=0\)), a quadratic form (i.e. \(D_{1}=0\)), and a linear+quadratic form (i.e. \(D_{1}\neq 0\) and \(D_{2}\neq 0\)). Then for each of these three, we also consider fits in which \(B\) is zero and non-zero. To further give variation to our fits, we use three ranges of the data. The first includes all but the smallest values of \(p_{R}^{2}\), which are always left out. Then we consider removing more of the small \(p_{R}^{2}\) data, and finally removing the largest \(p_{R}^{2}\) data. This gives a total of 18 fits we consider. To give a final estimate, we simply take an AIC average over all fits, giving

\[Z_{T}/Z_{V}=1.050(17),\ \ \overline{\text{MS}}(\mu=2\text{ GeV}). \tag{12}\]

The results of all these fits are shown in Fig. 17. This rather conservative method for estimating the systematic error is justified for this observable, which is likely affected by large systematics.

## Appendix B Three-point function fits

Here we show a handful of fits to the ratios of three-point to two-point functions used in the main text. All of these fits utilize our preferred fit strategy, i.e. the two-state fit to the ratio \(R_{\delta}\) in (6) with \(n_{\text{exc}}=2\), where \(n_{\text{exc}}\) is the number of data points nearest both the source and sink that are not included in the fit.

Figure 16: The tensor current renormalization factor \(Z_{T}\) as a function of the RI-MOM momentum \(p_{R}\).

Figure 17: Ratio of the tensor to vector current renormalization factors \(Z_{T}/Z_{V}\) as a function of the RI-MOM momentum \(p_{R}\). The bands show the 18 different fits considered, all overlaid on top of one another. The AIC-averaged final result for the ratio is given in the bottom right corner.

The fits included here are to the local zero-momentum three- to two-point function ratios, shown in Fig. 18, and several non-local three- to two-point function ratios, shown in Figs. 19 to 22. These include both isovector and isoscalar combinations.
2302.00707
Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective
Investigations into using visualization to improve Bayesian reasoning and advance risk communication have produced mixed results, suggesting that cognitive ability might affect how users perform with different presentation formats. Our work examines the cognitive load elicited when solving Bayesian problems using icon arrays, text, and a juxtaposition of text and icon arrays. We used a three-pronged approach to capture a nuanced picture of cognitive demand and measure differences in working memory capacity, performance under divided attention using a dual-task paradigm, and subjective ratings of self-reported effort. We found that individuals with low working memory capacity made fewer errors and experienced less subjective workload when the problem contained an icon array compared to text alone, showing that visualization improves accuracy while exerting less cognitive demand. We believe these findings can considerably impact accessible risk communication, especially for individuals with low working memory capacity.
Melanie Bancilhon, AJ Wright, Sunwoo Ha, Jordan Crouser, Alvitta Ottley
2023-02-01T19:02:26Z
http://arxiv.org/abs/2302.00707v1
# Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective

###### Abstract

Investigations into using visualization to improve Bayesian reasoning and advance risk communication have produced mixed results, suggesting that cognitive ability might affect how users perform with different presentation formats. Our work examines the cognitive load elicited when solving Bayesian problems using icon arrays, text, and a juxtaposition of text and icon arrays. We used a three-pronged approach to capture a nuanced picture of cognitive demand and measure differences in working memory capacity, performance under divided attention using a dual-task paradigm, and subjective ratings of self-reported effort. We found that individuals with low working memory capacity made fewer errors and experienced less subjective workload when the problem contained an icon array compared to text alone, showing that visualization improves accuracy while exerting less cognitive demand. We believe these findings can considerably impact accessible risk communication, especially for individuals with low working memory capacity.

Keywords: Decision-making, Bayesian reasoning, Perception and Cognitive Load

ACM Reference Format: Melanie Bancilhon, AJ Wright, Sunwoo Ha, Jordan Crouser, and Alvitta Ottley. 2023. Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23)_. ACM, New York, NY, USA, 15 pages. [https://doi.org/10.1145/nnnnnnnnnnnnnn](https://doi.org/10.1145/nnnnnnnnnnnnnn)

## 1. Introduction

Scholars have long studied the impact of multimedia formats on comprehension and performance in various settings. In psychology, for example, studies suggest that combining a diagram and text description provides more learning benefits than showing one or the other separately (e.g., (Han et al., 2017; Chen et al., 2018)). Similarly, in education, scholars advocate for multimedia representations over singular formats (Sandel et al., 2017). However, the guidelines are not as clear-cut for visualization, even though combining text and visualization is ubiquitous in mass media storytelling, education, and health communication. One area in visualization research where the efficacy of combining text and visualization is fraught with uncertainty is the communication of conditional probabilities. Reasoning with conditional probabilities, or Bayesian reasoning, is necessary to communicate crucial statistical information to a broad audience, especially in medical decision-making. In particular, health officials need to express how often a test reports that a person has a virus when they do not (false positive). Additionally, patients need to understand their chance of having the disease given a positive test (true positive) to make informed decisions about risks and potential treatment. Still, extensive research shows that understanding conditional probabilities is challenging for novices and experts alike, even with multimedia representations (Sandel et al., 2017; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). One of the most important guidelines proposed to improve Bayesian reasoning accuracy is to show information in the form of natural frequency formats (e.g., _8 out of 10_) instead of percentages (e.g., _80%_) (Sandel et al., 2017; Chen et al., 2018; Chen et al., 2018). However, further investigations examining whether including visualization can improve Bayesian reasoning have produced mixed results.
Early studies found that adding visualizations such as icon arrays to text formats can prompt faster and more accurate responses than text-only formats (e.g., (Han et al., 2017)). More recent crowdsourced studies found that supplementing textual information with Euler diagrams increased accuracy only when numerical data were removed from the textual description (Sandel et al., 2017), suggesting a potential conflict when presenting numbers and visualization together. Researchers have examined interaction techniques that link the text to the visualization but found no measurable benefit compared to static multimedia formats (Sandel et al., 2017). Other studies have shown that spatial ability is a mediating factor for accuracy and advocate for considering individual differences in visualization evaluation (Sandel et al., 2017). The research on Bayesian reasoning presentation extends beyond the visualization community and is more expansive than the few papers we have highlighted here. Yet, despite the extensive research, our knowledge is limited, partly due to over-reliance on coarse performance measures such as reasoning accuracy. We propose that other factors, such as the cognitive load elicited by different presentation formats, might provide an additional window into the mechanisms underlying how people use text and visualization to support Bayesian reasoning. Cognitive load is a measure of the effect that a particular task has upon the user's cognitive system (Sandel et al., 2017). It can impact user experience under various conditions, such as making decisions under stress or emotional burden (Sandel et al., 2017), under divided attention (Han et al., 2017; Chen et al., 2018; Chen et al., 2018), or with limited mental resources (Chen et al., 2018; Chen et al., 2018). We evaluate the cognitive load elicited by the icon array (_visualization-only_), text (_text-only_), and a combination of icon array and text (_combined_) using three different methods: a working memory capacity test, a dual task, and self-reported effort. We posit that measuring working memory capacity will provide insight into individual differences in users' cognitive abilities. Additionally, by burdening cognitive resources, the dual-task paradigm is a more direct method of measuring the impact of format on cognitive load and simulates real-world conditions where attention is divided. Finally, we captured perceived effort via a nasa-tlx questionnaire. These three methods together provide a comprehensive view of cognitive load. By observing individual differences in working memory capacity, we found that individuals with low working memory made significantly fewer errors when using _visualization-only_ compared to _text-only_ formats. Furthermore, nasa-tlx scores show that users with low working memory capacity reported experiencing less temporal and physical demand using _visualization-only_ and _combined_ formats compared to text alone. Low working memory users also reported feeling less frustrated when using _combined_ compared to _text-only_. Together, these provide supportive evidence that visualization elicits less cognitive load compared to text alone. In summary, this paper documents the following contributions to the study of visualization-supported Bayesian reasoning: 1. Using cognitive load, our findings offer a new perspective on the role of visualization for Bayesian reasoning. In particular, we found that **showing repeated information across text and visualization in combined formats could be beneficial**. 
We provide suggestive evidence that this enables people to select which formats better fit their mental model.

2. We demonstrate that **individual differences in working memory capacity affect Bayesian reasoning** with different formats. This has implications for the use of visualization across a broad population (e.g. in medical decision-making) and adds a new dimension of complexity to the process of visualization recommendation.

3. We demonstrate **how to use varying measures of cognitive load for visualization evaluation**, adding to the literature that calls for the diversification of evaluation measures by expanding beyond traditional performance metrics such as accuracy.

## 2. Background

People are notoriously bad at reasoning with conditional probabilities (Sandel et al., 2017; Chen et al., 2018). Consider, for example, the following scenario from (Sandel et al., 2017): _The probability of breast cancer in the population is 1% for a woman aged 40 who participates in a routine screening. If the woman has breast cancer, the probability is 80% that she will have a positive mammography. If a woman does not have breast cancer, the probability is 9.5% that she will also have a positive mammography. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?_ According to Bayes' theorem,

\[P(H|D)=\frac{P(D|H)\times P(H)}{P(D)} \tag{1}\]

where, in our scenario, \(D\) is the positive mammography and \(H\) is the hypothesis that the woman in question has breast cancer. It is common for people, including experts, to be subject to _base-rate neglect_, ignoring the base rate \(P(H)\) when reasoning about the true positive rate (Wasel and others, 2018). For decades, there have been efforts across various fields to devise ways to improve Bayesian reasoning by mitigating the base rate fallacy. Several studies have shown that frequency formats (e.g., 8 out of 10 instead of 80%) can facilitate Bayesian reasoning and significantly improve accuracy (Han and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018). Additionally, many researchers have investigated the effect of visualization on Bayesian reasoning (e.g., (Han and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018)), with the most prevalent designs being Euler diagrams and icon arrays (Han and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018). These designs represent two dominant theories behind Bayesian facilitation. Euler diagrams align with the _nested set theory_; they are useful to help the viewer reason about how subsets relate to each other (Han and others, 2018; Datta and others, 2018; Datta and others, 2018), while icon arrays, showing natural frequencies (i.e., 8 out of 10), align with the _ecological rationality framework_, which, based on evolutionary theories, posits that humans are better at reasoning with countable objects (Datta and others, 2018; Datta and others, 2018). Our work uses icon arrays because of their popularity and the well-documented success of natural frequency formats for Bayesian reasoning (e.g., (Han and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018)).
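For reference, plugging the scenario's numbers into Eq. (1) makes the size of the base-rate effect concrete; a short Python check (the variable names are ours):

```python
# Posterior for the mammography scenario above, via Bayes' theorem (Eq. 1).
p_cancer = 0.01              # P(H): base rate
p_pos_given_cancer = 0.80    # P(D|H): true-positive rate
p_pos_given_healthy = 0.095  # P(D|not H): false-positive rate

# P(D) by the law of total probability
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"{p_cancer_given_pos:.3f}")  # 0.078, far below the 80% many people guess
```

That is, fewer than 8 in 100 women with a positive mammography in this scenario actually have breast cancer, which is exactly the kind of base-rate-sensitive result that the presentation formats studied here aim to communicate.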
To investigate the potential benefit of visualization in Bayesian reasoning, researchers have typically compared responses to Bayesian problems presented in text format to formats that combine visualization and text. However, these studies have produced mixed findings. For example, Micallef et al. (Micallef et al., 2018) found no measurable difference in accuracy between text alone and a combination of text and visualization. Still, their follow-up study demonstrated that removing the numbers from the text significantly improved Bayesian accuracy. Ottley et al. (Ottley et al., 2018; Ottley et al., 2018) replicated this first study result and found no overall reliable differences in accuracy between the text alone versus a combined format. However, they found that participants with high spatial ability performed reliably better with visualization alone compared to text alone (Ottley et al., 2018). In another study, Ottley et al. (Ottley et al., 2018) used eye-tracking to examine how people extract information from text-only, visual, and combined formats in Bayesian reasoning problems. They found that users easily identify information with visualization but extract information more easily from the text. Additionally, their analysis found no differences in how the study participants used each format when they saw the combined presentation. Finally, Mosca et al. (Mosa et al., 2018) investigated the effect of linking the text and visualization via interaction. They found that adding interaction did not improve accuracy in Bayesian reasoning compared to static formats. We posit that the outstanding questions on whether visual designs can improve Bayesian reasoning could be due to a lack of understanding of underlying cognitive mechanisms. Investigations by Lesage et al. (Lesage et al., 2018) showed that performance in Bayesian reasoning is reliant upon available mental resources, regardless of presentation format. Although visualization researchers often seek to improve speed and accuracy measures, we know little about the impact of visualization on cognitive load. Moreover, speed and accuracy do not always correlate with cognitive load when reasoning about visualizations (Ottley et al., 2018; Ottley et al., 2018). Thus, there is a need to understand the processes that govern Bayesian reasoning with different presentation formats. In this paper, we expand the evaluation of Bayesian communication techniques by measuring cognitive load through individual differences in working memory capacity, a dual-task paradigm, and perceived cognitive load. We aim to develop a more nuanced understanding of the potential effect of presentation formats on Bayesian facilitation and provide more comprehensive visualization design guidelines. ### Measuring Cognitive Load Working memory consists of multiple components that can store a limited amount of information for a limited amount of time and is an essential resource in the reasoning process (Lesage et al., 2018). Cognitive load, typically defined as the amount of working memory required to process a task, is an important usability factor that indicates how easy or how hard it is to process information (Ottley et al., 2018). There exist numerous techniques to measure cognitive load, including self-reported measures (e.g. nasa-tlx), performance-based measures (e.g. dual-task paradigm, operation span tests) and physiological measures (e.g. 
pupillometry, fnirs) (Han and others, 2018; Datta and others, 2018; Ottley et al., 2018; Ottley et al., 2018; Ottley et al., 2018; Ottley et al., 2018). Several researchers have leveraged these techniques to investigate the effect of visualization design on cognitive load (Han and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018; Datta and others, 2018), sometimes reexamining long-standing beliefs. For example, Matthews et al. highlight the importance of using several methods to cross-examine the effect of workload (Micha et al., 2018). In their work, Borgo et al. challenged traditional notions about chart junk and showed using a dual-task paradigm that visual embellishments do not prompt higher cognitive load compared to other visualizations (Datta and others, 2018). Peck et al. used fnirs as well as nasa-tlx to evaluate visualization interfaces and found no difference in the cognitive load elicited by bar graphs and pie charts, contrary to popular belief (Peck et al., 2018). While physiological measures have proven to be effective techniques for measuring cognitive load, their high intrusiveness makes them unsuitable for real-life implementation (Peck et al., 2018). Other measures are more accessible, facilitate longitudinal studies, and allow us to survey a diverse population. In our work, we chose to investigate the effect of presentation formats on cognitive load for Bayesian reasoning using three different methods: an operation span task to observe individual differences in working memory capacity, a dual-task paradigm, and self-reported scores through a nasa-tlx questionnaire.

#### 2.1.1. Individual Differences Approach to Cognitive Load

Individual differences can impact how we reason with different formats (see (Micha et al., 2018) for a comprehensive review of individual differences in visualization), and there is strong evidence that cognitive traits can influence statistical reasoning (Lesage et al., 2018; Ottley et al., 2018; Ottley et al., 2018; Ottley et al., 2018). Some researchers showed evidence that when information was presented in the form of natural frequencies, participants with high working memory capacity performed significantly better than participants with low working memory capacity (Castro et al., 2017; Castro et al., 2018). Castro et al. (2018) have shown that visualization designs can elicit different levels of cognitive load when reasoning about uncertainty visualizations. A test that has shown high correlations with measures of working memory capacity is the Cognitive Reflection Test (CRT) (Castro et al., 2017). The CRT test measures one's ability to overcome heuristics and biases and trigger analytical thinking (Castro et al., 2017). A more direct way of measuring individual differences in working memory capacity is by using an operation span task (ospan) (Castro et al., 2018). In a typical ospan task, participants must simultaneously try to remember presented words in their correct order while solving simple math equations sequentially. In this paper, we use Castro et al.'s adapted online ospan test to measure working memory capacity (Castro et al., 2018)1. To complement this method, we use a dual-task paradigm, which, according to Lesage et al. (Castro et al., 2017), can be used to infer a causal role for cognitive resources in the performance of Bayesian reasoning tasks.

Footnote 1: Link to ospan test used in this work (developed by (Castro et al., 2018)): [https://bit.ly/2QHErlv](https://bit.ly/2QHErlv)

#### 2.1.2. Dual-Task Paradigm
Although it has not been prominently featured in visualization research, the _dual-task methodology_ is an effective way to assess the dependency of a task on cognitive resources and has been used to evaluate workload in psychology for decades (Castro et al., 2018; Castro et al., 2018). In a dual-task paradigm, the user conducts two tasks simultaneously, a primary task and a secondary task. This creates _divided attention_ and increases cognitive load, producing a decline in performance compared to the primary task alone. This decline is often referred to as the _dual-task cost_ (Castro et al., 2018), which can be used to infer the cognitive load elicited by the task. Several researchers have investigated the impact of formats on cognitive load using a dual-task paradigm (Castro et al., 2018; Castro et al., 2018; Castro et al., 2018), one reason being that it is helpful to simulate real-life conditions where attention is often divided (Castro et al., 2018; Castro et al., 2018; Castro et al., 2018). Castro et al. (2018) have used a dual-task method to investigate how display dimensions and screen size of mobile devices influence attention. In their study, participants controlled the movements of a blue ball by tilting the mobile device on displays of different sizes (primary task) while performing a change detection task which consisted of vocally reporting which of 4 arrows changed directions on a fixed display (secondary task). Using this methodology, they found that larger displays are more mentally demanding under divided attention. Tintarev et al. (Tintarev et al., 2018) investigated the effect of presentational choices for _planning_ on cognitive load using a dual-task paradigm. Participants had to keep information about a list of words in memory while answering some questions about a plan, then had to recall the list of words in the correct order. The authors found no reliable differences in performance across different formats of the plan. In our work, we quantify differences in elicited cognitive load across presentation formats using a dual-task methodology inspired by (Castro et al., 2017), consisting of remembering a pattern of four dots on a grid while conducting the primary task.

## 3. Research Goals

We designed two complementary studies to investigate whether cognitive load can shed light on the conflicting and sometimes puzzling findings around Bayesian reasoning and visualization. These findings collectively point to a potential relationship between cognitive resources and Bayesian facilitation -- adding visualization and interaction to an already cognitively challenging task might not produce the desired effects. There is a gap in our understanding of how cognitive load affects Bayesian reasoning across different formats. Motivated by this, the current work focuses on examining the potential differences in cognitive load elicited by visualization-only, text-only, and a combination of text and visualization formats in the context of Bayesian reasoning. We use the icon array for our visualization condition because it is prominently used to communicate Bayesian information, especially in the context of medical risk, supporting ecological validity. When considering options for the experiment design, we weighed trade-offs between (1) controlling the framing and learning effects, (2) minimizing noise from individual variability, and (3) minimizing the overall length of the study. Unfortunately, no single experiment strikes the perfect balance.
Thus, we present the results of two controlled user experiments. The first adopts a between-subject, 3 (_presentation format_) \(\times\) 2 (_load condition_), experiment design to mitigate the learning effects that a within-subject study would introduce. The second utilizes a mixed design, with a 3 (_presentation format_) between-subject \(\times\) 2 (_load condition_) within-subject protocol, to better control for individual variability. Together, they tell a cohesive story about the relationship between cognitive load, Bayesian reasoning, and visualization.

## 4. Experiment 1: Between-Subject Study Design

We assigned each participant randomly to one of three presentation conditions -- icon array (_vis_), text (_text_), and a combination of icon array and text (_vistext_) -- making the comparison of presentation between subjects. We also assigned each user randomly to either a Single or Dual task, making the comparison of these tasks also between subjects. We chose a between-subject design to keep the Bayesian problem consistent across all conditions. Prior work has shown that different Bayesian scenarios can lead to different levels of accuracy (Sandel, 2017)2.

Footnote 2: Link to Experiment 1 surveys, data, and analyses. [https://bit.ly/3BFwwkx](https://bit.ly/3BFwwkx)

### Presentation Formats and Bayesian Task

We replicated Mosca et al.'s (Mosca et al., 2018) grouped icon array design, which had the highest accuracy among their tested visualization formats. The authors designed the icon array according to Bertin's (Bertin, 2017) guidelines, where background color was used to differentiate between members of the population who have the disease versus do not have the disease, and icon color was used to differentiate between members of the population who test positive versus test negative. Participants in our _text_ condition saw the same data in textual format, and those in the _vistext_ condition saw both the textual format and the icon array, vertically stacked. We showed participants data about the prevalence of a disease in a population, as well as the test results, in the form of either _vis_, _text_ or _vistext_. We asked them to estimate i) the number of people who will test positive and ii) of those people, how many actually have the disease. This technique of prompting the user for the positive count followed by the true positive count is called _probing_. _Probing_ is a valid technique that evaluates Bayesian comprehension independently of mathematical skills through the retrieval of nested data (using the words "of those"). It has been shown to elicit more accurate responses compared to non-probed questions (Henderson et al., 2017; Goyal et al., 2018; Goyal et al., 2018).

### Load Conditions and Dual-Task Methodology

Participants either saw the Bayesian probability estimation task alone or along with a secondary task. Participants who were randomly assigned the Dual condition were shown a pattern consisting of four dots on a grid for 850 ms and were asked to complete the Bayesian probability estimation task while keeping the pattern in memory. Participants were then asked to reproduce the dot pattern as accurately as possible by selecting the appropriate cells on an empty grid. Figure 1 illustrates the dual task setup, inspired by Lesage et al.'s (Lesage et al., 2017) study of text-only formats and originally developed by Bethell et al. (Bethell et al., 2017).
This task is appropriate as it taxes visuospatial working memory, which would possibly interfere with the primary task and cause the desired increase in cognitive load.

### Measures of Abilities and Surveys

The survey also contained a NASA-TLX questionnaire, a spatial ability test, and a Cognitive Reflection Test (CRT). Participants then completed a working memory capacity questionnaire from (Henderson et al., 2017).

**NASA-TLX.** We used the nasa-tlx (Sanderson, 2017; Lesage et al., 2017) to examine participants' subjective workload. Participants reported on the workload they believed the Bayesian task elicited on six subscales: mental demand, temporal demand, frustration, physical demand, performance, and effort.

**Working Memory Capacity Test (OSPAN).** We asked participants to remember a series of objects sequentially while answering simple True or False math problems. The test consisted of 6 sequences of 4-, 5- or 6-spans, shown two times each in a randomized order (the term \(n\)-span refers to the sequence occurring \(n\) times). In each span, participants were shown an image for 1 second and were asked to keep it in memory while answering a simple math question in under 5 seconds. This sequence is repeated \(n\) times, and at the end of the span, participants have to recall the images shown in the correct order. This version of the OSPAN was designed by Castro et al. (Castro et al., 2018).

**Cognitive Reflection Test.** The Cognitive Reflection Test (CRT) has been shown to be a valid measure of cognitive load (Lesage et al., 2017). In our work, we use a version of the CRT test that contains 3 questions. It tests for the ability to switch from Type 1 (intuitive) to Type 2 (strategic) reasoning. Since the latter requires using working memory (Sanderson, 2017), researchers posit that someone who is able to perform the switch has a high working memory capacity (Sanderson, 2018).

**Spatial Ability Test.** A spatial ability test measures an individual's capacity to process visual and spatial information. In this study, we used the paper folding test (VZ-2) from Ekstrom, French, and Harman (Ekstrom et al., 2017), consisting of two sessions of 3 minutes and 10 questions each. This test has been used as a standard technique to evaluate Bayesian reasoning performance across spatial ability in other studies (Sanderson, 2018; Goyal et al., 2018).

**VisText Usage Report.** We asked participants in the _vistext_ condition what percentage of the visualization and the text they utilized to answer the Bayesian questions. They reported their preferred method by selecting the appropriate value on a scale ranging from _only text_ to _only visualization_.

Figure 2: An overview of the Bayesian survey for the Dual condition with the _vistext_ format: 1) users were shown for 850 ms a pattern consisting of four dots on a 3x3 grid that they were asked to memorize; 2) an example of the Bayesian task for the _vistext_ condition, where users were asked to read the problem and then press a button when they were ready to answer questions; 3) once users submitted their answers to the Bayesian questions, they were asked to replicate the dot pattern on an empty 3x3 grid.

### Hypotheses

* **H1:** We hypothesize that performance on the Bayesian reasoning task depends on available cognitive resources. Therefore, the Single condition will result in more accurate reasoning than the Dual condition.
* **H2:** Since available cognitive resources are mediated by working memory capacity, we expect that individuals with high working memory capacity will be more accurate than their low working memory counterparts, especially in the Dual condition.
* **H3:** Prior work that examined the impact of text-only, icon array, and the juxtaposition of text and icon array on Bayesian reasoning found no significant difference in accuracy between the three presentation formats (Wang et al., 2019; Wang et al., 2019). Therefore, we anticipate no significant difference in Bayesian reasoning accuracy across _vis_, _text_, and _vistext_ in the Single condition.

The detailed analysis for pre-registered hypotheses H4a-H4d can be found in the supplementary material.

### Participants

We recruited users via Amazon's Mechanical Turk who were from the United States, were English-speaking, and had a HIT acceptance rate of 100%.

**Payment.** All participants were paid in accordance with minimum wage laws, on average receiving $4.84 and taking 25.2 minutes to complete both surveys. In the Bayesian task, participants won a bonus of $0.50 for each question answered. Participants in the Dual condition were assigned an additional task, increasing the amount of time spent on the task, and thus received an additional $0.25 for each dot correctly remembered (i.e. up to $1 additional bonus compared to the _single_ condition). The allocated bonus per dot remembered also served as an incentive to remember the pattern.

We conducted a statistical power analysis using the software G*Power on a mixed ANOVA and determined that the target sample size needed for a statistical power of 95% is 251. We recruited 450 participants due to the typically high number of exclusions in Mechanical Turk studies. Users were asked to complete two separate surveys: the Bayesian survey and the OSPAN survey. Our pre-registered exclusion criteria 3, determined based on prior work, required that users i) take the surveys only once, ii) complete both surveys, iii) score above chance in the math portion of the OSPAN test, iv) score above 10% in the memory portion of the OSPAN test, and v) score within 2 standard deviations of the mean in the dot pattern task. After excluding data that did not fit the exclusion criteria, 316 participants remained. After preliminary data analysis, we noticed some additional fraudulent and invalid responses that we had not anticipated prior to the pre-registration. We decided to exclude users who entered more than 4 dots in the dot pattern recall test, thus biasing their odds of getting the correct pattern (n=13). We also excluded participants whose answers to the Bayesian questions were less than or equal to 0 (n=4), which shows a lack of attention and leads to an invalid error value upon data processing (see section 4.6). We conducted a post hoc sensitivity analysis that showed that the addition of the two exclusion criteria did not affect the study results (see supplementary material). After these non-pre-registered exclusions, 299 participants remained, of which 104 were assigned _text_, 100 were assigned _vis_, and 95 saw _vistext_ (129 in the Dual condition and 170 in the Single condition).
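A sketch of how such an exclusion pipeline might look in code follows; all file and column names here are hypothetical, invented for illustration, and the exact criteria are those of the pre-registration:

```python
import pandas as pd

# Hypothetical sketch of the pre-registered exclusion pipeline.
df = pd.read_csv("bayesian_survey.csv")  # assumed merged survey data

df = df[df["n_attempts"] == 1]           # i) took the surveys only once
df = df[df["completed_both"]]            # ii) completed both surveys (bool column)
df = df[df["ospan_math_acc"] > 0.5]      # iii) above chance on OSPAN math
df = df[df["ospan_memory_acc"] > 0.10]   # iv) above 10% on OSPAN memory
mu, sd = df["dot_score"].mean(), df["dot_score"].std()
df = df[(df["dot_score"] - mu).abs() <= 2 * sd]  # v) within 2 SD on dot pattern
```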
Footnote 3: Link to Experiment 1 pre-registration: [https://bit.ly/3xtC1zX](https://bit.ly/3xtC1zX)

### Data Collection

The independent variables for this experiment are:

* **3 presentation formats:** {_text_, _vis_, _vistext_}
* **2 load conditions:** {Single, Dual}

To measure Bayesian performance we calculated the true positive rate from the participant's response as described in subsection 4.1. Our dependent variables were:

* **exact** \(\in\{0,1\}\), binary value for whether the response was exact.
* **bias** is the \(\log_{10}\) ratio of the response and the ground truth.
* **error** is the absolute value of bias.

While exact evaluates verbatim comprehension, bias and error are proxies for gist (approximate) comprehension, which is more prominently used for reasoning and decision-making (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). The covariates and other computed measures were:

* **ospan** \(\in[0..30]\), measures general cognitive capacity.
* **wmc** \(\in\{low,high\}\), based on a median split of ospan scores.
* **nasa-tlx** \(\in[0..20]\), measures combined subjective workload.
* **Spatial Score** \(\in[-4..20]\), is the spatial ability test score.
* **Spatial Level** \(\in\{low,high\}\), from a median split of spatial scores.
* **crt** \(\in\{0,1,2,3\}\), is the cognitive reflection test score.
* **Text-Vis Usage** \(\in[0..20]\), maps 0 to using primarily text and 20 to mostly visualization for those in the _vistext_ condition.

### Attrition Analysis

There has been a growing body of work about the issue of high attrition rates in online studies (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). According to research by Zhou et al. (Zhou et al., 2019), studies that are cognitively taxing should be concerned if dropout rates are 20% or above. The authors also highlight the importance of checking for selective attrition by making sure the dropout rates are not significantly different across experimental conditions. To provide transparency and encourage practices that improve internal validity, we conducted an attrition rate analysis as recommended by Zhou et al. (2017). Our experiment consists of two surveys, a Bayesian Task implemented by the authors and an OSPAN test from (Zhou et al., 2017) on Qualtrics. We conducted an attrition analysis for the Bayesian task, where participants were assigned either a single task (Single) or a dual task (Dual). We adapted our methodology from Zhou et al. (2017) and only took into account participants who consented to the study, and discarded fraudulent responses where participants took the survey more than once, identified by using their recorded IP addresses.

\begin{table} \begin{tabular}{l r} \hline \hline Load condition & Dropout rates \\ \hline Single: Participants conducted the Bayesian Task & 3.98\% \\ Dual: Participants conducted the Bayesian Task and a secondary recall task & 7.18\% \\ \hline \hline \end{tabular} \end{table}

Table 1. Experiment 1 condition-wise dropout rates for the Bayesian Task

Table 1 shows the condition-wise dropout rates, computed according to (Zhou et al., 2017) by dividing the number of participants who were assigned to a given condition and completed the entirety of their task 4 by the number of those assigned to the same condition who at least gave their consent and only took the task once. We observe low dropout rates for both conditions, with no significant difference between them (\(\chi^{2}(2)=1.88,p=0.1704,d=0.1406\)).
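Given the dependent measures defined in the Data Collection section above, computing them for a single response is straightforward; a minimal sketch (illustrative only, not our analysis scripts):

```python
import numpy as np

def bayesian_measures(response, truth):
    # exact: verbatim comprehension; bias/error: gist (approximate) comprehension
    exact = int(response == truth)
    bias = np.log10(response / truth)  # undefined for responses <= 0, hence the exclusions
    error = abs(bias)
    return exact, bias, error

# e.g., a participant answering 12 when the true positive count is 8:
print(bayesian_measures(12, 8))  # (0, 0.176, 0.176), up to rounding
```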
Footnote 4: Our server recorded an end-of-experiment timestamp when a participant completed the entire survey.

### Findings

Out of 299 participants, 104 were assigned _text_, 100 were assigned _vis_, and 95 saw the _vistext_. Each participant completed a single Bayesian problem depicting the _disease_ scenario in subsection 4.1. Further, 170 were assigned to the single task (Single) condition and 129 were assigned the dual-task condition with an added load (Dual).

#### 4.8.1. **Single Task: Establishing a baseline**

We begin our analysis by inspecting how participants performed under the single task (Single) condition and testing whether format influences performance. The existing literature has produced mixed results on the effect of visualization on reasoning accuracy (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017), and our **H3** posits no significant difference in Bayesian reasoning accuracy.

Bias. We conducted an exploratory analysis by examining how much participants' responses deviated from the exact answer and the effect of format on their discrepancy. We observe an overall median bias of 0.10 for the single task condition, with varying median bias of 0.22 for _text_, 0.00 for _vis_, and 0.00 for _vistext_. From Figure 3, we can observe that participants' bias is not normally distributed. Thus, we use non-parametric tests for our analysis. Additionally, participants in the _vis_ and _vistext_ conditions were marginally more likely to produce the exact answer (bias \(=0\)) than those who used _text_. When we ran a 3-way Kruskal-Wallis test with presentation format as a between-subject factor we found a significant difference in bias across the three conditions (\(H(2)=12.87,p=.0016,\eta^{2}(H)=0.065\)). Follow-up Mann-Whitney Wilcoxon tests with an adjusted alpha \(\alpha=0.0167\) revealed significant differences in bias between _vis_ and _text_ (\(W=2379.5,p=0.0006,\eta^{2}(H)=0.092\)) as well as _vistext_ and _text_ (\(W=1772.5,p=0.0087,\eta^{2}(H)=0.057\)).

Figure 3: Single task exact (95% CI), error and bias across presentation formats. \(]\) indicates a significant difference between the two formats (\(\alpha=0.0167\)). We found significant differences in error between _vis_ and _text_, and _vistext_ and _text_.

Exact. We examined our first measure of accuracy, exact, to investigate whether the presentation format influences the proportion of correct answers. Overall, 40.9% of participants correctly answered both Bayesian questions for the single task condition, with _text_, _vis_, and _vistext_ yielding 31.5%, 43.08%, and 49.02% exact answers respectively. Our omnibus proportion z-test shows no significant effect of presentation format on accuracy (\(\chi^{2}(2)=3.4899,p=.1795\)). Thus, _the proportion of successful exact reasoning did not depend on presentation format_.

Error. For a more fine-grained measure of accuracy, we examined error to assess how far participants' responses deviated from the exact answer and whether presentation format mediated this effect. The median error was 0.097 overall, and 0.097 in the _vis_ condition, 0.22 in the _text_ condition, and 0.021 in the _vistext_ condition. A Kruskal-Wallis non-parametric test revealed significant differences in error between conditions (\(H(2)=8.43,p=0.0148,\eta^{2}(H)=0.037\)). Post-hoc Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=.0167\)) revealed significant differences between _vis & text_ (\(W=2227.5,p=.0093,\eta^{2}(H)=0.0047\)) and _vistext & text_ (\(W=1738.5,p=.0164,\eta^{2}(H)=0.046\)). We found no significant difference between _vis_ and _vistext_ (\(W=1610.5,p=0.675\)). These findings suggest that _reasoning with text-only led to significantly higher errors compared to other formats_.

Altogether, these findings show evidence that presentation format can impact reasoning errors. However, the observed effects
Altogether, these findings show evidence that presentation format can impact reasoning errors. However, the observed effects Figure 3. Single task exact (95% CI), error and bias across presentation formats. ) indicates a significant difference between the two formats (\(\alpha=0.0167\)). We found significant differences in error between _vis_and _text_, and _vistext_ and _text_. were small and there was no significant impact on exact response rates. Thus, **our results only partially support H3**. More specifically, they suggest that visualization, even when combined with text, can have benefits on Bayesian accuracy compared to text alone. It is noteworthy that these results also partially contradict the visualization literature that compared Bayesian formats. On one hand, our findings are similar to Ottley et al. (2019); Ottley et al. (2019) who found no difference in exact between _text_, _vis_, and _vistext_, but did not examine error. On the other hand, our results differ from Micallef et al. (2019) who examined _text_ and _vistext_ and found no measurable effect of these formats on exact or error. However, the discrepancies between our results and prior work could be attributed to the type of visualization used and differences in the experiment design. #### 4.8.2. **Individual Differences in Working Memory Capacity** A primary goal of this project is to examine whether cognitive resources can explain Bayesian reasoning results. Specifically, with **H2**, we hypothesized that accuracy in Bayesian reasoning will depend on available cognitive resources. To this end, we examine the effect of working memory capacity on accuracy in the Single task. To examine whether working memory mediates accuracy in participants' exact and error measures. We first performed a binary logistic regression to test for the effect of organ on exact and found that correctly answering the Bayesian questions is 1.49 times more likely to occur for every 5-point increase in the working memory test (95% CI [.04,.12]). Analyzing error, a generalized linear model also revealed a significant impact of organ on error (\(t(169)=-3.326,p=0.00108\)). Thus, _the higher their working memory capacity, the more accurate participants were in their answers_. Following prior work (Han et al., 2019), we split participants into Low and High working memory groups based on a median split of their organ scores. Figure 4 summarizes the accuracy of each working memory group across presentation formats, showing their respective proportions of exact answers and error distribution. Overall, in the Single task, 52.29% of those in the High group produced exact answers compared to 34.59% in the Low group. Additionally, Low had a median error of 0.176 and High had a median error of 0. Consistent with the regression analysis, we show a statistically significant difference between the Low and High groups when we compared exact (\(\chi^{2}(1)=6.4762,p=0.0109,d=0.3980\)) and error (Kruskal-Wallis, \(H(1)=6.8535,p=0.0088,\eta^{2}(H)=0.0348\)). Together, these results support **H2**, showing _suggestive evidence that participants' working memory mediated Bayesian reasoning accuracy_. #### 4.8.3. **Working Memory Capacity & Presentation Formats** In light of our previous finding that successful reasoning might depend on cognitive resources, we conducted further analysis to examine the effect of presentation format on reasoning accuracy within the Low and High groups. 
Specifically, we ran separate 3-way proportion tests to compare the frequencies of exact answers and found no difference between presentation formats for both Low (\(\chi^{2}(2)=4.8401,p=0.0889\)) and High (\(\chi^{2}(2)=0.6504,p=0.7224\)) groups. Further, a Kruskal-Wallis test comparing error for _text_, _vis_, and _vistext_ within the Low group revealed a statistically significant difference between the three presentation formats (\(H(2)=10.086,p=0.006453,\eta^{2}(H)=0.0817\)). We ran pairwise Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=0.0167\)) and found significant differences in error between _text_ and _vis_ (\(W=874,p=.0017,\eta^{2}(H)=0.1291\)), but failed to reject the null hypothesis for the _text_ & _vistext_ and _vis_ & _vistext_ comparisons. Examining the High group, a Kruskal-Wallis test found no overall significant differences between presentation formats (\(H(2)=1.7301,p=0.4210\)). These analyses suggest that _presentation choices can impact users with low working memory capacity_, with _text_ eliciting significantly higher error rates compared to _vis_. However, the _high working memory capacity participants were less impacted by the format they used_. Our final analysis here investigates how Low and High groups performed within each presentation condition. Our analysis revealed that the Low and High groups had similar proportions of exact (\(\chi^{2}(1)=1.2013,p=0.2731\)) answers and error (\(W=428.5,p=0.4381\)) rates when reasoning with _vis_. The two working memory groups also did not differ in exact (\(\chi^{2}(1)=1.5875,p=0.2077\)) and error when using _vistext_ (\(W=230,p=0.1000\)). However, we observed a statistically significant difference in exact (\(\chi^{2}(1)=5.889,p=0.01524,d=0.6997\)) and error with the _text_ condition (\(W=235.5,p=0.02481,\eta^{2}(H)=0.0776\)). Thus, _text_ is marginally more likely to elicit a deviation in accuracy between Low and High compared to _vis_ or _vistext_. Figure 4. Single task exact (95% CI) and error across presentation format and working memory group. \({}^{**}\) indicates a significant difference between groups. We found significant differences in exact and error between Low and High working memory groups (\(\alpha=0.05\)) in the condition. Among Low working memory capacity individuals, error was significantly higher in _text_ compared to _vis_ (\(\alpha=0.0167\)) #### 4.8.4. **Dual Task: Reasoning Under Divided Attention** In **H1**, we posit that if we can experimentally manipulate executive capacity by adding a secondary task, we will incur a decline in performance, known as the **dual-task cost**. As a result, formats that require high cognitive resources will have a significant dual-task cost. ExactWe observed a near-identical proportion of exact answers for the Single and Dual conditions. Participants in the Dual condition produced the exact answer 41.86% of the time, compared to 41.17% in the Single condition. We compared the proportion of exact answers in the Single and Dual task and found no overall significant differences (\(\chi^{2}(2)=0.0141,p=0.9054\)). The analysis revealed 34% of exact answers for _text_, 57.14% for _vis_, and 38.64% for _vis_. 
A 3-sample proportion test found no significant difference in exact between the presentation formats in the Dual group (\(\chi^{2}(2)=4.816,p=.09\)). _Thus, manipulating load had no significant effect on our participants' exact responses._

Bias. A Kruskal-Wallis test found no significant difference in bias between presentation formats in the Dual group (\(H(2)=4.44,p=0.1084\)). We compared overall bias for the Single and Dual conditions using a Mann-Whitney Wilcoxon test and found no significant difference between the two conditions (\(W=11130,p=0.8175\)).

Error. Finally, we also observed an identical overall median error of 0.097 for both the Single and Dual task. An overall comparison with a Mann-Whitney Wilcoxon test found no significant difference in error between Single and Dual (\(W=11232,p=0.709\)). The median error was 0.27 for _text_, 0 for _vis_, and 0.097 for _vistext_ in the Dual task. Similar to the Single condition, an omnibus Kruskal-Wallis test revealed an overall effect of presentation format on error in the Dual condition (\(H(2)=7.7344,p=.0207,\eta^{2}(H)=0.0455\)). Follow-up Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=.0167\)) revealed significant differences in error between _text_ and _vis_ (\(W=1162.5,p=.0073,\eta^{2}(H)=0.0747\)). We found no significant difference between _text_ and _vistext_ (\(W=1277.5,p=0.1674\)) and _vis_ and _vistext_ (\(W=612.5,p=0.1002\)).

**Considering differences in working memory capacity.** In section 4.8.2, we showed evidence that working memory capacity impacts Bayesian reasoning. Here, we examine the difference in performance between the Single and Dual conditions by taking into account individual differences in working memory capacity. For individuals in the High group, we found no significant difference between those in the Single and Dual task conditions when examining exact (\(\chi^{2}(2)=0.4258,p=0.5141\)) and error (\(W=3104,p=0.3264\)). Similarly, we found no measurable difference between the Single and Dual conditions for participants in the Low group when examining exact (\(\chi^{2}(2)=0.0701,p=0.7912\)) and error (\(W=2467,p=0.5253\)). Taken together, the secondary task did not elicit the expected results and the evidence for **H1** is inconclusive.

Although in section 4.7 we found no significant difference in attrition rate between the Single and Dual tasks, we conducted a Kruskal-Wallis test to investigate whether the distribution of ospan scores varied between the two tasks after all data quality exclusions. We found a significant difference in ospan scores between Single and Dual (\(W=13688,p=0.0002,\eta^{2}(H)=0.0422\)), with higher ospan scores in the Dual condition. This could be due to selective attrition or bias in our sample, and could also explain why we did not observe a significant decline in performance between the two tasks. We will consider this confounding factor in our interpretation of Experiment 1's results.

Figure 5: exact (95% CI) and error for the Single and Dual tasks.

#### 4.8.5. **nasa-tlx Self-Reported Effort**

When looking at self-reported effort in the Single task, we found an overall significant difference in perceived **frustration** across presentation formats (Kruskal-Wallis, \(H(2)=11.72,p=0.003,\eta^{2}(H)=0.0582\)).
We conducted separate Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=0.0167\)) for pairwise comparisons, which revealed a significant difference in **frustration** between _vistext_ and _text_ (\(W=1918.5,p=0.0005,\eta^{2}(H)=0.1078\)). Since working memory capacity is likely to affect reported NASA-TLX scores, we also examined differences between presentation formats within each working memory group separately. We found no significant difference between presentation formats across any of the nasa-tlx subscales in the High working memory group. Within the Low group, we conducted separate Kruskal-Wallis tests and found significant differences between presentation formats in the following subscales:

* _temporal demand:_ (\(H(2)=10.305,p=0.006,\eta^{2}(H)=0.0839\))
* _physical demand:_ (\(H(2)=8.95,p=0.0114,\eta^{2}(H)=-0.0045\))
* _frustration:_ (\(H(2)=10.825,p=0.004,\eta^{2}(H)=0.089\))

As a follow-up, we conducted Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=0.0167\)) within the Low group and found significant differences between _vistext_ and _text_ in the following:

* _temporal demand_ (\(W=644,p=0.004,\eta^{2}(H)=0.1262\))
* _physical demand_ (\(W=638,p=0.005,\eta^{2}(H)=0.1175\))
* _frustration_ (\(W=672.5,p=0.0009,\eta^{2}(H)=0.1712\))

Within the Low group, we also found differences between _vis_ and _vistext_ in the following:

* _temporal demand_ (\(W=894.5,p=0.006,\eta^{2}(H)=0.0904\))
* _physical demand_ (\(W=878,p=0.011,\eta^{2}(H)=0.08296\))

We investigated differences in reported nasa-tlx scores across High and Low groups for each presentation format. In the _vis_ condition, we found differences in the following:

* _temporal demand_ (\(W=307,p=0.01525,\eta^{2}(H)=0.0673\))
* _frustration_ (\(W=332.5,p=0.0379,\eta^{2}(H)=0.0525\))

Finally, we found no differences in reported scores between working memory groups in the _text_ and _vistext_ conditions.

Figure 5. exact (95% CI) and error for Single and Dual task

#### 4.8.6. **Additional Analyses**

Spatial Ability. We conducted a generalized linear model with a logit link and found that spatial ability score had a significant impact on exact (\(z(298)=4.670,p=3.00e-06\)). We also examined the effect of spatial ability score on error through a generalized linear model and found significant effects (\(t(298)=-4.003,p=7.91e-05\)). These findings replicate prior work showing that spatial ability mediates Bayesian reasoning (Srivastava et al., 2017; Wang et al., 2018).

Completion Time. Kruskal-Wallis tests revealed no significant effect of presentation format (\(H(2)=1.2454,p=0.5365\)) or load condition (\(H(2)=2.03,p=0.1546\)) on the completion time of the Bayesian task. Moreover, we found no significant difference in completion time between the Low and High working memory groups (\(H(2)=0.0797,p=0.7778\)).

Cognitive Reflection Test. Our crt results largely replicated the ospan findings. We found an overall significant impact of crt score on exact (\(\chi^{2}(3)=20.502,p=0.0001,\eta^{2}(H)=0.5246\)). Further, the Kruskal-Wallis test shows a statistically significant effect of crt on bias (\(H(3)=20.566,p=0.0001,\eta^{2}(H)=0.0595\)) and error (\(H(3)=24.986,p=1.555e-05,\eta^{2}(H)=0.0745\)), showing evidence that _individuals with a higher crt score were significantly more likely to enter the exact answers and made smaller reasoning errors_.
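The generalized linear models above are standard; a minimal sketch of that step, assuming a hypothetical DataFrame with `exact` (0/1), `error`, and `spatial` columns, could look like this with the statsmodels formula API:

```python
# Sketch of the spatial-ability models (hypothetical column names).
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_spatial_models(df):
    # Logistic regression (logit link): does spatial ability predict exact answers?
    logit = smf.glm("exact ~ spatial", data=df, family=sm.families.Binomial()).fit()
    # Linear model: does spatial ability predict the magnitude of error?
    linear = smf.ols("error ~ spatial", data=df).fit()
    return logit, linear
```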
## 5. Experiment 2: Mixed Design Study

Experiment 1 used a between-subject design to control for learning effects and ensured consistency by comparing responses to the same Bayesian problem. However, we found no significant effect of the dual task on accuracy. This could be due to the differences in working memory capacity between the two groups, or to high individual variability due to the study design. We conducted a second mixed design study5 to 1) control for individual variability in the single and dual tasks and 2) test whether the lack of replication is due to population or methodological differences. Footnote 5: Link to Experiment 2 surveys, data, and analyses: [https://bit.ly/3IfgXCZ](https://bit.ly/3IfgXCZ)

We made the following changes to the experiment design to reduce the overall difficulty of the task and better control for individual variability.

* **Improve Study Preparation with a Practice Round**: We added a pre-study trial to familiarize participants with the task and study structure. Participants saw and attempted a sample Bayesian reasoning task before continuing to the main task.
* **Control Individual Variability**: We used a mixed factorial design with the load condition (Single, Dual) as a within-subject factor and presentation format (_vis, text, vistext_) as a between-subject factor.
* **Remove CRT Test**: We removed the Cognitive Reflection Test from the survey because we found in Experiment 1 that it is positively correlated with the OSPAN test (\(r(297)=0.27,p=2.051e-06\)), which is more widely recognized (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). This shortened the survey.

### Task & Procedures

Our tasks were similar to Experiment 1, except that in the Bayesian survey the users completed both a single and a dual task, in no particular order, where the Bayesian problems were presented using two scenarios, a _cab_ and a _class_ scenario. After solving the Bayesian problems, users completed a NASA-TLX and a spatial ability test. In this experiment, we did not conduct the Cognitive Reflection Test. Similarly to Experiment 1, users also completed an OSPAN test.

Figure 6. Distribution of nasa-tlx scores in the Single task for the Low and High wmc groups. \({}^{**}\) indicates a statistically significant difference between working memory groups (\(\alpha<0.05\)) and ] indicates differences across formats for the corresponding wmc group (\(\alpha<0.0167\)).

**Practice Round.** In the practice round, users practiced the dot pattern recall task, the Single Bayesian task, as well as both tasks together as part of the Dual condition.

**Payment.** Participants received a base pay of $2 and could win a total bonus of up to $2.5, comprising $0.5 for each correct Bayesian question and $0.5 for a correctly reproduced dot pattern. Participants received an average bonus of $1.51 and completed the Bayesian and OSPAN surveys in an average time of 26.8 minutes.

### Experimental Design

Similarly to Experiment 1, we assigned each user randomly to one of three presentation conditions (_vis_, _text_ or _vistext_), making the comparison of presentation formats between subjects. Each user completed both the Single and Dual tasks and saw either the _cab_ or the _class_ scenario in each, making load condition a within-subject factor.

### Presentation Conditions

Our presentation formats remained a between-subjects factor and were the same as in Experiment 1: _text_, _vis_, and _vistext_.
We utilized the _disease_ scenario for the pre-task tutorial, and each participant saw two Bayesian problems narrating two different scenarios: _cab_ and _class_ (Shen et al., 2017; Wang et al., 2018; Wang et al., 2018). The _cab_ scenario involves eye-witness testimonies of a hit-and-run accident, while the _class_ scenario presents the career prospects of college students. We randomly assigned one scenario to the Single task and the other to the Dual condition, and the order of the conditions was counterbalanced.

### Participants

As per our pre-registration6, we conducted a power analysis based on a three-way mixed ANOVA and determined the ideal sample size to be 168. We recruited 240 participants via Amazon's Mechanical Turk to account for a 30-40% exclusion rate. Participants were English-speaking from the United States and had a HIT acceptance rate of 100%. After excluding 88 participants based on the same pre-registered criteria determined in Experiment 1 (see section 4.5), 152 participants remained (_text_=46, _vis_=55, _vistext_=51). Footnote 6: Link to Experiment 2 pre-registration: [https://bit.by/3qli2/lta](https://bit.by/3qli2/lta)

#### 5.4.1. Attrition Rate

Using the same methodology as Experiment 1, we conducted an attrition rate analysis for Experiment 2. Table 3 shows the condition-wise dropout rates for participants who saw the Dual task first versus the Single task first. We found no significant difference in dropout rate between the two conditions (\(\chi^{2}(2)=1.6824,p=0.1946,d=0.1293\)). When looking at performance in the ospan test after exclusions, we found no significant difference in scores, suggesting that the population who completed the experiment was consistent across both conditions (\(H(2)=0.34157,p=0.5589,\eta^{2}(H)=-0.0048\)).

### Results

In this experiment, our aim is to uncover differences in dual-task costs elicited by each presentation format through a mixed-design study. First, we establish a baseline for accuracy in the single task and compare our findings to Experiment 1. Then, we examine and compare the decline in performance elicited by the dual task (**dual-task cost**) between presentation formats.

#### 5.5.1. Single Task

Exact. Overall, 38.2% of participants correctly answered both Bayesian questions in the Single task, with _text_, _vis_, and _vistext_ yielding 21.7%, 47.3%, and 42.3% exact answers, respectively. Contrary to Experiment 1, our analysis shows a significant effect of presentation format on exact (\(\chi^{2}(2)=7.58,p=.023,d=0.4584\)). Follow-up pairwise 2-sample proportion tests with an adjusted alpha (\(\alpha=0.0167\)) revealed a significant difference in exact between _vis_ and _text_ (\(\chi^{2}(2)=7.1195,p=0.0076,d=0.5537\)).

Bias. We found no significant difference in bias between the presentation formats (Kruskal-Wallis, \(H(2)=1.0826,p=0.582,\eta^{2}(H)=-0.0063\)).

\begin{table} \begin{tabular}{c p{284.5pt}} \hline \hline Scenario & Description \\ \hline _cab_ & There is a total of 100 witnesses to the car accident. Out of the 100 witnesses, 15 claimed that the car which caused the accident was a cab. Out of these 15 witnesses, 12 claimed the car was blue and 3 claimed the car was green. On the other hand, 85 witnesses claimed that the car which caused the accident was not a cab. Out of these 85 witnesses, 3 claimed the car was blue and 82 claimed the car was green. \\ _class_ & There is a total of 100 college freshmen in the population.
Out of these 100 freshmen, 30 are enrolled in an introductory entrepreneurship course. Out of these 30 freshmen, 20 plan on going into business after graduation, and 10 do not. On the other hand, 70 freshmen are not enrolled in an introductory entrepreneurship course. Out of these 70 freshmen, 10 plan on going into business after graduation, and 60 do not. \\ \hline \hline \end{tabular} \end{table} Table 2. Scenarios used in the Bayesian task in Experiment 2

\begin{table} \begin{tabular}{l c} \hline \hline Task Order & Dropout rates \\ \hline Single, Dual: Participants saw the single task followed by the dual task & 11.23\% \\ Dual, Single: Participants saw the dual task followed by the single task & 15.67\% \\ \hline \hline \end{tabular} \end{table} Table 3. Experiment 2 condition-wise dropout rates for the Bayesian task

Error. The median error was 0.097 overall: 0.097 in the _vis_ condition, 0.194 in the _text_ condition, and 0.076 in the _vistext_ condition. A Kruskal-Wallis non-parametric test found a significant difference in error between the presentation formats (\(H(2)=7.61,p=0.0223,\eta^{2}(H)=0.0376\)). Post-hoc Mann-Whitney Wilcoxon tests with an adjusted alpha (\(\alpha=0.0167\)) revealed a significant difference between _vis_ and _text_ (\(W=1619.5,p=0.01329,\eta^{2}(H)=0.05182\)). Overall, the general trends are in line with Experiment 1 and demonstrate that _participants were the least accurate with text compared to visualization_. However, the differences are more pronounced in Experiment 2. Prior work has shown that different Bayesian scenarios can have a different impact on accuracy (Zhou et al., 2017). We found no significant difference in accuracy between the _class_ and _cab_ scenarios when looking at exact (\(\chi^{2}(2)=0.0251,p=0.8741,d=0.0257\)), bias (\(W=2870.5,p=0.9727,\eta^{2}(H)=-0.0067\)) or error (\(W=2952.5,p=0.7844,\eta^{2}(H)=-0.00616\)).

#### 5.5.2. Single vs Dual Task

Dual task. We found that in the Dual task, the mean number of dots recalled was 3.56 (\(\sigma=0.77\)). 40.8% of participants correctly answered both Bayesian questions, with 26.1% exact answers for _text_, 41.8% for _vis_, and 51.9% for _vistext_. Overall differences in exact between presentation formats were significant (\(\chi^{2}(2)=6.8197,p=0.0331,d=0.4335\)). Follow-up pairwise comparisons revealed a significant difference in exact between _text_ and _vistext_ (\(\chi^{2}(2)=6.8003,p=0.0091,d=0.5461\)), _suggesting that the combination of visualization and text leads to fewer errors than text alone under divided attention._ We found no significant difference in bias (Kruskal-Wallis, \(H(2)=2.3795,p=0.3043\)) or error (Kruskal-Wallis, \(H(2)=4.451,p=0.108\)) between presentation formats.

Dual-task cost. For each participant, we measured the decline in performance in the Dual task relative to the Single task by computing the difference in error, a measure known as the **dual-task cost**. By comparing dual-task costs across presentation formats, we can infer differences in cognitive load. We conducted a Kruskal-Wallis test and found no overall difference in dual-task costs between presentation formats (\(H(2)=1.0314,p=0.5971,\eta^{2}(H)=-0.0065\)).
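As a concrete illustration of this measure, the sketch below computes per-participant dual-task costs and compares them across formats. It assumes a hypothetical long-format DataFrame with `participant`, `format`, `load` (Single/Dual), and `error` columns; these names are ours, not the study's actual data schema.

```python
# Sketch: per-participant dual-task cost = error(Dual) - error(Single).
from scipy.stats import kruskal

def dual_task_cost(df):
    wide = df.pivot_table(index=["participant", "format"],
                          columns="load", values="error").reset_index()
    wide["cost"] = wide["Dual"] - wide["Single"]
    # Compare dual-task costs across the three presentation formats.
    groups = [g["cost"].values for _, g in wide.groupby("format")]
    return wide, kruskal(*groups)
```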
Calibrating dual-task cost. In section 4.8.2, we showed evidence that working memory capacity impacts Bayesian reasoning. To this end, we examined the dual-task cost separately for the High and Low working memory capacity groups. We found that dual-task costs were not significantly different between presentation formats within the High (Kruskal-Wallis, \(H(2)=1.4324,p=.4886,\eta^{2}(H)=-0.0089\)) or Low (Kruskal-Wallis, \(H(2)=.14215,p=.9314,\eta^{2}(H)=-0.0226\)) group. _Therefore, we conclude that even when considering individual differences in working memory capacity, the effect of the dual-task was consistent across presentation formats._

## 6. Discussion

Our work leveraged cognitive theory to understand the conflicting findings about the effect of combining text and visualization in the context of Bayesian communication. Analyzing general trends in accuracy alone seldom paints the complete picture in an evaluation study, as there is ample research on the impact of individual differences on Bayesian reasoning and beyond (Kruskal-Wallis, 2016; Kruskal-Wallis, 2016; Kruskal-Wallis, 2016). Our results suggest that combining visualization and text does not increase cognitive load and in some cases improves subjective workload. We present our main takeaways from these studies. We analyzed accuracy to compare our findings to the prior work and to provide context for our cognitive load results. At a high level, our results are similar to prior studies in the visualization community (Zhou et al., 2017; Kruskal-Wallis, 2016; Kruskal-Wallis, 2016; Kruskal-Wallis, 2016). _Presentation format alone had little to no effect on Bayesian reasoning, but the inclusion of visualization improved Bayesian reasoning._ In Experiment 1, we found that users' proportion of correct answers in the baseline Single condition was not significantly different across the three formats, replicating Ottley et al.'s findings (Kruskal-Wallis, 2016). While user error rates were significantly lower using visualization-only compared to text-only, the effect size was small for the statistical test. In Experiment 2, we saw a significantly greater proportion of correct answers with the combined presentation format than with text alone, with a small effect. Still, although our accuracy analysis does a good job of uncovering differences, it does not explain the phenomena.

Figure 7. Experiment 2: Single task exact and error across presentation formats. \({}^{**}\) indicates a significant difference between the two formats (\(\alpha=0.0167\)).

Figure 8. Dual-task cost across presentation formats.

### Implications of Cognitive Load

We leveraged three different but complementary techniques for evaluating cognitive load: a working memory capacity test, self-reported effort, and a dual-task. Our investigations into working memory capacity were influenced by prior work, primarily focusing on text-only formats, which showed a positive correlation between working memory capacity and reasoning performance (Sutton et al., 2017; Sutton et al., 2018). In our work, we found that the effect of working memory capacity held, with high working memory individuals generally outperforming their low working memory counterparts. This effect was especially salient in the text-only condition. These findings help contextualize our accuracy results on the Bayesian task, which suggest that visualization and multimedia formats may be superior to text-only. We also saw that participants with low working memory capacity performed better when using visualization alone than text alone. In line with Castro et al.'s work (Castro et al., 2018), this difference in performance in the low working memory group is indirect evidence that text-only elicited more cognitive load than visualization-only.
Expanding this argument, we can deduce that the combination of text and visualization did not elicit more cognitive load than visualization-only. Given the prior findings that removing numbers from the text in the combined presentation positively affects reasoning performance (Sutton et al., 2018), we expected to find evidence that combining text and visualization increases cognitive load, but our data does not support this notion. These findings have practical implications for visualization recommendation and accessibility. Visualizations can benefit populations with lower cognitive abilities and be beneficial in situations of high cognitive burden. Notably, in Experiment 1, participants with low working memory capacity reported experiencing significantly lower frustration and temporal demand when using the combination format compared to text alone. This finding further supports the use of the multimedia format. However, it is noteworthy that Experiment 1 also showed no significant difference in the accuracy rates between the combination format and text for individuals with low working memory, highlighting the deficiency of analyzing accuracy alone. In general, our findings somewhat support the notion of a multimedia effect. In particular, there may be some benefit to having both text and visualization available to facilitate reasoning, especially for people with low working memory capacity. One potential explanation might be that visualization allows the viewer to offload items from memory, but the text is familiar and easy to process. This hypothesis corroborates the results of prior work that captured eye-gaze data as people solved Bayesian tasks (Sutton et al., 2018). Their results suggest that visualization makes it easy to identify relevant information, but the text may be easier to process compared to the visual format (Sutton et al., 2018). Another plausible explanation for our results is that participants with low working memory might prefer the flexibility of the combined format, which enables them to choose the format that best aligns with their mental model or preference. Further investigation is needed to better understand this phenomenon.

### On The Failure of the Dual-Task Paradigm

The dual-task paradigm did not reveal differences in cognitive load across formats, even when accounting for individual differences in working memory capacity. Specifically, asking participants to hold a dot pattern in memory did not influence their reasoning performance. We hypothesized in Experiment 1 that this effect could be due to individual variability in the between-subject design. However, the within-subject Experiment 2 revealed similar findings, which contradicts **H1** and prior work (Sutton et al., 2018), possibly due to differences in experimental design. For example, Lesage et al. (Lesage et al., 2018) performed a laboratory experiment with 179 first-year psychology students who participated in the study for course credit. Our study used a more diverse crowdsourced study population. Another possible explanation is that the secondary task was too easy, or our study participants may have written down the pattern instead of holding it in memory. Alternatively, the observed disparity may also be due to differences in the demographic makeup of our study populations. Several researchers have developed guidelines for choosing an adequate secondary task, which include considerations of task difficulty and similarity (Sutton et al., 2018; Sutton et al., 2018).
However, it can be challenging to strike the right balance between the primary and secondary tasks, as the latter has to be hard enough to increase cognitive load but not to the point of cognitive overload. Although the exact reason for the failed replication is unknown, we encourage researchers to consider modifying the dual-task methodology when conducting crowdsourced evaluations. One alternative study design could be to calibrate the secondary task's difficulty based on participants' abilities. For example, Castro et al. (Castro et al., 2018) used calibration in a study investigating the impact of divided attention on driving. Their participants performed a pre-test to identify the level of difficulty that elicited 75% accuracy on the secondary task. For our choice of secondary task, one option would be to calibrate the size of the dot pattern to memorize based on participants' performance. Alternatively, Borgo et al.'s study on the impact of visual embellishment on engagement and working memory used a word selection secondary task (Borgo et al., 2018), where users identified fruits among a crawling list of words. Researchers could calibrate the secondary task by tailoring the crawl speed of the words to each participant. Further, Borgo et al.'s (Borgo et al., 2018) dual-task setup would be less susceptible to violations of the study protocol since it does not involve a recall task.

## 7. Conclusion

Our work expands the understanding of the relationship between working memory capacity and Bayesian reasoning by examining and comparing three presentation formats. By examining more granular accuracy measures, we showed that visualization-only and combination formats lead to less error in Bayesian reasoning than text-only formats. Moreover, we showed that working memory capacity mediates Bayesian reasoning accuracy, particularly in the text format. Finally, we showed that users with low working memory capacity are more accurate when using visualization alone compared to text alone. We discuss how these findings can impact visualization design guidelines, especially for low working memory capacity users. To this end, we argue for more diversified evaluation metrics and encourage the visualization community to leverage and apply existing research in cognitive science and related fields to better understand how people perceive and reason with visualizations.

## Acknowledgments

The authors wish to thank Lace Padilla for her insights into the use of cognitive methods and Shayan Monadjemi for his valuable feedback on the manuscript. This material is based upon work supported by the National Science Foundation under grant number 2142977.
2308.15437
Existence of Pauli-like stabilizers for every quantum error-correcting code
The Pauli stabilizer formalism is perhaps the most thoroughly studied means of procuring quantum error-correcting codes, whereby the code is obtained through commutative Pauli operators and ``stabilized'' by them. In this work we will show that every quantum error-correcting code, including Pauli stabilizer codes and subsystem codes, has a similar structure, in that the code can be stabilized by commutative ``Paulian'' operators which share many features with Pauli operators and which form a \textbf{Paulian stabilizer group}. By facilitating a controlled gate we can measure these Paulian operators to acquire the error syndrome. Examples concerning codeword stabilized codes and bosonic codes will be presented; specifically, one of the examples has been demonstrated experimentally and the observable for detecting the error turns out to be Paulian, thereby showing the potential utility of this approach. This work provides a possible approach to implement error-correcting codes and to find new codes.
Jhih-Yuan Kao, Hsi-Sheng Goan
2023-08-29T17:01:17Z
http://arxiv.org/abs/2308.15437v1
# Existence of Pauli-like stabilizers for every quantum error-correcting code ###### Abstract The Pauli stabilizer formalism is perhaps the most thoroughly studied means of procuring quantum error-correcting codes, whereby the code is obtained through commutative Pauli operators and "stabilized" by them. In this work we will show that every quantum error-correcting code, including Pauli stabilizer codes and subsystem codes, has a similar structure, in that the code can be stabilized by commutative "Paulian" operators which share many features with Pauli operators and which form a **Paulian stabilizer group**. By facilitating a controlled gate we can measure these Paulian operators to acquire the error syndrome. Examples concerning codeword stabilized codes and bosonic codes will be presented; specifically, one of the examples has been demonstrated experimentally and the observable for detecting the error turns out to be Paulian, thereby showing the potential utility of this approach. This work provides a possible approach to implement error-correcting codes and to find new codes. ## I Introduction Quantum information is stored as quantum states. Due to defects in the devices or executions, and the inevitable interaction of the quantum system with the environment, the state of the quantum system can be changed in a nondeterministic manner, which is an error; consequently, error correction is vital for the information to stay hygienic. Using quantum error-correcting codes, states are prepared in specific subspaces such that if certain errors occur, we can detect and correct them [1; 2; 3; 4; 5]. Even though quantum devices without error correction may serve certain purposes such as simulating physical systems [6; 7], a universal quantum computer that is scalable still requires error correction [8; 9]. Pauli stabilizer codes [1; 10; 11] are an extremely important class of quantum error-correcting codes. Some of the most promising codes, such as topological codes [12; 13; 14; 15; 16], which include surface codes [17; 18; 19; 20; 21; 22; 23; 24; 25; 26], and quantum low-density parity-check (LDPC) codes [27; 28; 29; 30], are based on Pauli stabilizer codes. An advantage of the Pauli stabilizer formalism is that it informs us of which measurements to implement to detect the errors, namely the stabilizer generators. There are several ways of generalizing the Pauli stabilizer formalism, for example, by generalizing Pauli groups, or nice error bases, to nonbinary cases [31; 32; 33; 34; 35], or by considering noncommutative groups on binary codes [36]. In this work, instead of defining a certain group and constructing an error-correcting code from it, we will do the opposite: We investigate the structure of any error-correcting code, including subsystem codes [37; 38; 39; 40; 41; 42; 43], to show that every code can be stabilized by a "Paulian" stabilizer group (Proposition 1 and Corollary 1), the exact meaning of being Paulian to be explained in Sec. II.2. Identifying the Paulian stabilizer group of an error-correcting code may give us a guideline on how to implement such a code: The error syndrome can be obtained by measuring these Paulian operators, which can be conducted via controlled operations (Sec. III.4). We will also show how to obtain the Paulian stabilizer group for a concatenated binary code (Sec. IV) [1; 2; 44], and in Sec. V we will demonstrate some examples. For conciseness, details of some topics can be found in the appendixes.
## II Preliminaries \(\mathbb{A}\subseteq\mathbb{B}\) means \(\mathbb{A}\) is a subset of \(\mathbb{B}\), while \(\subset\) indicates it is a proper subset. The restriction of a map \(f:\mathbb{X}\to\mathbb{Y}\) to \(\mathbb{X}^{\prime}\subseteq\mathbb{X}\), denoted by \(f|_{\mathbb{X}^{\prime}}\), is a map from \(\mathbb{X}^{\prime}\) to \(\mathbb{Y}\) with \(f|_{\mathbb{X}^{\prime}}(x)=f(x)\ \forall x\in\mathbb{X}^{\prime}\)[45; 46; 47], for which we will often shrink the codomain to the image \(f|_{\mathbb{X}^{\prime}}(\mathbb{X}^{\prime})=f(\mathbb{X}^{\prime})\). The _span_ of a set of vectors is the set of all linear combinations thereof, which is a subspace. We will use shorthand to label sets obtained from others in a sensible way, e.g. \(\mathcal{H}^{\otimes 3}\) is \(\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{H}\). The subscript beside an identity operator, denoted by \(I\), or orthogonal projection, denoted by \(\Pi\), indicates the (sub)space the operator acts on or projects onto; e.g. \(\Pi_{\mathrm{C}}\) projects onto \(\mathcal{H}_{\mathrm{C}}\). The code space \(\mathcal{H}_{\mathrm{C}}\) of a quantum error-correcting code is a subspace of the entire space \(\mathcal{H}\) where the encoded state is stored [1; 48; 4]; sometimes we simply refer to the code space as the code. With \(\mathbb{C}^{n}\) denoting a generic \(n\)-dimensional complex vector space, a code is called an \([[n,k]]\)-code if \(\mathcal{H}\cong\mathbb{C}^{2^{n}}\) and \(\mathcal{H}_{\mathrm{C}}\cong\mathbb{C}^{2^{k}}\) for some integers \(n\) and \(k\), where \(A\cong B\) indicates that \(A\) and \(B\) are isomorphic; such codes are said to be **binary**--We use the term binary codes in a stricter sense than, e.g., Ref. [49], as we require the code space to be binary too. Also, an \(((n,k,d))\)-code has \(n\) qubits, a code space of dimension \(k\) and distance \(d\)[50]. For a qubit system, \(|\pm 1\rangle\) instead of \(\ket{0}\) and \(\ket{1}\) will denote the \(\pm 1\)-eigenstates of Pauli \(Z\). An operator is said to _stabilize_ a subspace \(\mathcal{H}^{\prime}\) if \(\mathcal{H}^{\prime}\) is a subspace of the operator's \(1\)-eigenspace. We will refer to the subspace spanned by all simultaneous eigenvectors with the same simultaneous eigenvalues as a _simultaneous eigenspace_. \(\mathsf{P}^{n}\) will denote the Pauli group on \((\mathbb{C}^{2})^{\otimes n}\cong\mathbb{C}^{2^{n}}\), and its members will be called _Pauli operators_[1; 31; 32]; in this work we will use \(X_{i}\), \(Y_{i}\) and \(Z_{i}\) to denote Pauli \(X\), \(Y\), \(Z\) operators on the \(i\)-th site. If the code space of a code is the \((1,\ldots,1)\)-simultaneous eigenspace of commutative Pauli operators, the code is called a _Pauli stabilizer code_, and the abelian group generated by these operators is the _stabilizer group_[1; 11; 10]. A representation of a group \(\mathsf{G}\) on a space \(\mathcal{V}\) is a homomorphism \(\Phi\) from \(\mathsf{G}\) to the general linear group of \(\mathcal{V}\), and it is said to be faithful if \(\Phi\) is one-to-one [51]. Abusing the language, we will call the image \(\Phi(\mathsf{G})\) "a representation." Two representations \(\mathsf{G}_{1}\) on \(\mathcal{H}_{1}\) and \(\mathsf{G}_{2}\) on \(\mathcal{H}_{2}\) of \(\mathsf{G}\) are said to be unitarily equivalent if there exists a unitary map \(V:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that \(\mathsf{G}_{1}=V^{-1}\mathsf{G}_{2}V\)[52; 53; 54].
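As a worked illustration of these definitions (our own toy example): on \((\mathbb{C}^{2})^{\otimes 3}\), the commuting Pauli operators \(Z_{1}Z_{2}\) and \(Z_{2}Z_{3}\) act as the identity on both \(\left|000\right\rangle\) and \(\left|111\right\rangle\), so \[\mathcal{H}_{\mathrm{C}}=\mathrm{span}\left\{\left|000\right\rangle,\left|111\right\rangle\right\}\] is their \((1,1)\)-simultaneous eigenspace; both operators stabilize \(\mathcal{H}_{\mathrm{C}}\), which is thus the code space of a \([[3,1]]\) Pauli stabilizer code with stabilizer group \(\langle Z_{1}Z_{2},Z_{2}Z_{3}\rangle\).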
### Involutions An operator is said to be an involution if it is its own inverse, i.e., if it squares to \(I\)[45]; for instance, Pauli \(X\), \(Y\), \(Z\) are all involutions. By definition, the spectrum of an involution can only contain \(\pm 1\), which by the spectral theorem leads to **Lemma 1**.: _An involution on a Hilbert space is normal if and only if it is self-adjoint and if and only if it is unitary._ Self-adjoint involutions are of great physical interest, because they correspond to both physical observables (self-adjoint) and evolution of a system (unitary). A Pauli group is composed of unitary involutions and operators that square to \(-I\), which we call counterinvolutions. We can easily see that a counterinvolution is an involution multiplied by \(i\), and vice versa. If a pair of involutions or counterinvolutions \(A\) and \(B\) anticommute, then for an \(a\)-eigenvector \(\ket{v}\) of \(A\), \(BA\ket{v}=aB\ket{v}=-AB\ket{v}\), and since they are by definition automorphisms, \(B\ket{v}\neq 0\) for all nonzero \(\ket{v}\); therefore, \(B\) maps the \(a\)-eigenspace of \(A\) to the \(-a\)-eigenspace, and the \(\pm a\)-eigenspaces are thus isomorphic. ### Paulian operators An operator will be called **Paulian** if 1. it is either an involution or a counterinvolution; 2. it is unitary; and 3. it has two isomorphic eigenspaces unless it has a single eigenspace. Accordingly, all Pauli operators are Paulian. A Paulian operator is self-adjoint if and only if it is an involution, and it is skew-self-adjoint if and only if it is a counterinvolution. When the space is finite-dimensional, we could simply require Paulian operators, except for those proportional to \(I\), to be traceless. As the eigenvalues have opposite signs, the two eigenspaces have the same dimension and hence are isomorphic. However, a unitary operator on an infinite-dimensional space is not trace class [55; 54] and in general it does not have a well-defined trace, so we simply demand the eigenspaces be isomorphic. Having isomorphic eigenspaces, the unitary map between them will play the role of Pauli \(Z\) [cf. (12) and the proof for Proposition 1 (Sec. III.1)]; besides, this makes it possible to find anticommuting Paulian operators, cf. the previous subsection. To appreciate the significance of Paulian operators in physics, we remark 1. By Lemma 1, a Paulian involution is unitary and self-adjoint at the same time, so it can not only describe the evolution but also be an observable. 2. Because a Paulian operator (except for those that are proportional to \(I\)) is traceless or has two isomorphic eigenspaces, very roughly speaking, if an observable has two possible outcomes, and if both outcomes are equally likely on average with all states considered, then it is Paulian. Finally, in this work when we refer to an operator as Paulian, it may not necessarily be Paulian on the entire domain, but only Paulian to the restriction of a specific subspace; which subspace it is has to do with the errors the operator can detect or correct. This will be explained in more detail later.
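As a quick numerical sanity check of this definition (our own toy example, not drawn from the text): the Hadamard gate \(H=(X+Z)/\sqrt{2}\) is a Paulian involution.

```python
# Toy check: the Hadamard gate is a Paulian involution.
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = (X + Z) / np.sqrt(2)

assert np.allclose(H @ H, np.eye(2))       # involution: H^2 = I
assert np.allclose(H.T @ H, np.eye(2))     # unitary (real, so adjoint = transpose)
assert np.allclose(H, H.T)                 # self-adjoint, cf. Lemma 1
assert np.isclose(np.trace(H), 0.0)        # traceless: isomorphic +1/-1 eigenspaces
print(np.linalg.eigvalsh(H))               # -> [-1.  1.]
```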
### Condition for error correction The necessary and sufficient condition for a set of errors \(\mathbb{E}\) to be correctable is [31; 32] \[\Pi_{\mathrm{C}}E^{\dagger}F\Pi_{\mathrm{C}}\propto\Pi_{\mathrm{C}}\;\forall E,F\in\mathbb{E}. \tag{1}\] There are other expressions for this condition, for example, \(\Pi_{\mathrm{C}}E^{\dagger}F\Pi_{\mathrm{C}}=\alpha_{E,F}\Pi_{\mathrm{C}}\) where \(\alpha\) is a Hermitian matrix [1; 2; 4; 48], or in terms of inner products and bases [1; 56; 2]. It is worth mentioning that the common requirement that \(\alpha\) is Hermitian is somewhat superfluous: If two operators \(A\) and \(B\) satisfy \[\Pi A^{\dagger}B\Pi=c\Pi\] for some constant \(c\) and orthogonal projection \(\Pi\), then it must be true that \[\Pi B^{\dagger}A\Pi=\left(\Pi A^{\dagger}B\Pi\right)^{\dagger}=c^{*}\Pi.\] Hence the matrix \(\alpha\) above is naturally Hermitian. If a code can correct \(\mathbb{E}\), it can correct any error in the span of \(\mathbb{E}\). From [4; 48], we can find a maximal subset \(\mathbb{F}\) of \(\mathrm{span}\,\mathbb{E}\) whose elements obey \[\Pi_{\mathrm{C}}E^{\dagger}F\Pi_{\mathrm{C}}=\begin{cases}0,&E\neq F\\ \Pi_{\mathrm{C}},&E=F\end{cases}\;\forall E,F\in\mathbb{F}, \tag{2}\] and we call correctable errors in \(\mathbb{F}\)_orthonormal_; the set \(\mathbb{F}\) is maximal in the sense that \[\sum_{E\in\mathbb{E}}E\mathcal{H}_{\mathrm{C}}=\bigoplus_{F\in\mathbb{F}}F\mathcal{H}_{\mathrm{C}}, \tag{3}\] where \(\oplus\) denotes an orthogonal direct sum, is satisfied. Note \[F\mathcal{H}_{\mathrm{C}}\cong\mathcal{H}_{\mathrm{C}}\;\forall F\in\mathbb{F}, \tag{4}\] so \(\bigoplus_{F\in\mathbb{F}}F\mathcal{H}_{\mathrm{C}}\) is an orthogonal direct sum of isomorphic spaces. On the other hand, if we have a set of errors or operators such that the operators in it are "orthogonal" but not necessarily "normalized," i.e., \(\Pi_{\mathrm{C}}E^{\dagger}E\Pi_{\mathrm{C}}=c_{E}\Pi_{\mathrm{C}}\) for some scalar \(c_{E}\) that is not necessarily \(1\), then the set is referred to as _orthogonal_.
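To make condition (2) concrete, here is a small numerical check (our own example) that \(\{I,X_{1},X_{2},X_{3}\}\) is an orthonormal set of correctable errors for the three-qubit repetition code with \(\mathcal{H}_{\mathrm{C}}=\mathrm{span}\{\left|000\right\rangle,\left|111\right\rangle\}\):

```python
# Verify Eq. (2) for the 3-qubit repetition code and errors {I, X1, X2, X3}.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
kron = lambda *ops: reduce(np.kron, ops)

e8 = np.eye(8)
Pi_C = np.outer(e8[0], e8[0]) + np.outer(e8[7], e8[7])   # projector onto H_C

errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]
for i, E in enumerate(errors):
    for j, F in enumerate(errors):
        lhs = Pi_C @ E.T @ F @ Pi_C                      # Pi_C E^dagger F Pi_C
        rhs = Pi_C if i == j else np.zeros((8, 8))
        assert np.allclose(lhs, rhs)                     # = delta_{EF} Pi_C
print("{I, X1, X2, X3} satisfies (2) for the repetition code.")
```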
## III Paulian stabilizer group Here is the main result of this work, which will be explained in detail soon after; \(\dim\) below refers to the dimension of a vector space: **Proposition 1**.: _Consider an error-correcting code, with the code space \(\mathcal{H}_{\mathrm{C}}\) belonging to \(\mathcal{H}\). There exist operators which stabilize \(\mathcal{H}_{\mathrm{C}}\) and satisfy the following properties:_ 1. _To the restriction of a \(2^{m}k^{\prime}\)-dimensional subspace \(\mathcal{H}^{\prime}\) for some positive integer \(m\), with \(\mathcal{H}_{\mathrm{C}}\subseteq\mathcal{H}^{\prime}\subseteq\mathcal{H}\) and \(k^{\prime}\geq\dim\mathcal{H}_{\mathrm{C}}\), these operators are mutually commutative Paulian operators, forming an abelian group \(\mathsf{S}\) called the_ _Paulian stabilizer group__, which is generated by \(m\) operators. If \(\mathcal{H}\) is infinite-dimensional, \(\mathcal{H}^{\prime}\) can be as well._ 2. _\(\mathsf{S}\) is an abelian subgroup of a group of Paulian operators \(\mathsf{P}_{\mathrm{S}}^{m}\), which is a faithful representation of \(\mathsf{P}^{m}\)._ 3. _A subset of all correctable errors can be detected by measuring these operators and corrected by applying proper inverses._ ### The minimal stabilizer group First, we will prove a "minimal" version of this proposition, which yields a "minimal" Paulian stabilizer group. The reader may skim over the proof and come back later when necessary. Proof.: With \(\mathbb{F}\) defined in (2), we choose a subset \(\mathbb{F}^{\prime}\subseteq\mathbb{F}\) whose cardinality is a positive integral power of \(2\), namely \(2^{m}\), with \(I\in\mathbb{F}^{\prime}\). As long as the code is nontrivial, such a subset always exists. We want \(\mathbb{F}^{\prime}\) to be as large as possible, so we choose \[m=\lfloor\log_{2}\left|\mathbb{F}\right|\rfloor, \tag{5}\] where \(\lfloor\cdot\rfloor\) is the floor function; we thus have \(\left|\mathbb{F}^{\prime}\right|=2^{m}\). Let \(\mathbb{T}\) be the set of all tuples of \(\pm 1\) with length \(m\) and \((t)\) be the symbol for elements in \(\mathbb{T}\), which we will use for indexing. For each \(F\in\mathbb{F}^{\prime}\), choose a unique tuple \((t)\in\mathbb{T}\); to put it another way, we define a bijective "syndrome map" \(f_{\mathrm{sym}}:\mathbb{F}^{\prime}\to\mathbb{T}\) such that \(f_{\mathrm{sym}}(F)\in\mathbb{T}\) is the tuple corresponding to \(F\), which, as we will see, is the syndrome of \(F\). \(F_{(t)}\) will denote the error \((t)\in\mathbb{T}\) refers to: \[F_{(t)}:=f_{\mathrm{sym}}^{-1}\left((t)\right), \tag{6}\] and likewise1 \[\mathcal{H}_{(t)}=F_{(t)}\mathcal{H}_{\mathrm{C}}. \tag{7}\] Footnote 1: Later \(\mathcal{H}_{(t)}\) will be defined as the \((t)\)-simultaneous eigenspace of the stabilizers. Hence (7) is not the definition of \(\mathcal{H}_{(t)}\), but it is true here. Among all such binary tuples \((t)\), \[(I):=(1,\ldots,1) \tag{8}\] will serve as a convenient abbreviation; in particular we require \[F_{(I)}=I, \tag{9}\] namely \(f_{\mathrm{sym}}(I)\) is selected to be \((I)=(1,\ldots,1)\). We also define \[\overline{\mathcal{H}}:=\bigoplus_{F\in\mathbb{F}^{\prime}}F\mathcal{H}_{\mathrm{C}}=\bigoplus_{(t)\in\mathbb{T}}\mathcal{H}_{(t)}\subseteq\mathcal{H}. \tag{10}\] Here let \(\mathcal{H}^{\prime}\) of this proposition be \(\overline{\mathcal{H}}\). With \[\dim\overline{\mathcal{H}}=2^{m}\dim\mathcal{H}_{\mathrm{C}}, \tag{11}\] \(k^{\prime}\) here is \(\dim\mathcal{H}_{\mathrm{C}}\). We have the following isomorphism: \[\mathcal{H}_{\mathrm{B}}:=\mathcal{H}_{\mathrm{C}}\otimes\bigotimes_{i=1}^{m}\mathbb{C}_{i}^{2}\cong\overline{\mathcal{H}}, \tag{12}\] where the subscript \(i\) of \(\mathbb{C}_{i}^{2}\) is for indexing. Let's construct a unitary map \(U:\overline{\mathcal{H}}\to\mathcal{H}_{\mathrm{B}}\) as follows: Since the \(\mathcal{H}_{(t)}\)'s are isomorphic, there exist unitary maps \[V_{(t)}:\mathcal{H}_{(t)}\to\mathcal{H}_{\mathrm{C}}\;\forall(t)\in\mathbb{T}, \tag{13}\] among which we let \(V_{(I)}:\mathcal{H}_{\mathrm{C}}\rightarrow\mathcal{H}_{\mathrm{C}}\) be \(I_{\mathrm{C}}\). Let's also choose an orthonormal basis \(\left\{\left|\pm 1\right\rangle_{i}\right\}\) for each \(\mathbb{C}_{i}^{2}\). For any \((t)=(i_{1},\ldots,i_{m})\in\mathbb{T}\) and any \(\left|v\right\rangle\in\mathcal{H}_{(t)}\), let \[U\left|v\right\rangle:=\left(V_{(t)}\left|v\right\rangle\right)\otimes\left|(t)\right\rangle, \tag{14}\] where \[\left|(t)\right\rangle:=\left|i_{1}\right\rangle_{1}\otimes\cdots\otimes\left|i_{m}\right\rangle_{m}\in\bigotimes_{i=1}^{m}\mathbb{C}_{i}^{2}. \tag{15}\] By definition (10), \(\overline{\mathcal{H}}\) is the direct sum of the \(\mathcal{H}_{(t)}\)'s, so \(U\) of (14) is defined on the entirety of \(\overline{\mathcal{H}}\). \(U\) is unitary because the \(V_{(t)}\)'s are unitary and the \(\left\{\left|\pm 1\right\rangle_{i}\right\}\)'s are orthonormal bases. Now, for every \(i=1,\ldots,m\) let \(X_{i}\) and \(Z_{i}\) denote the operators on \(\mathcal{H}_{\mathrm{B}}\) that apply Pauli \(X\) and \(Z\) on \(\mathbb{C}_{i}^{2}\) and act trivially on the other subsystems including \(\mathcal{H}_{\mathrm{C}}\).
Their counterparts on \(\overline{\mathcal{H}}\) via \(U\) are \[Z_{i}^{\mathrm{S}}:=U^{-1}Z_{i}U,\;X_{i}^{\mathrm{S}}:=U^{-1}X_{i}U; \tag{16}\] that is, \(X_{i}\) and \(X_{i}^{\mathrm{S}}\), and \(Z_{i}\) and \(Z_{i}^{\mathrm{S}}\), are unitarily similar, and in the language of group theory, this is conjugation by \(U\)[57]. The group generated by the \(X_{i}\)'s and \(Z_{i}\)'s is \(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\), which is a faithful representation of \(\mathsf{P}^{m}\), so the group generated by the \(Z_{i}^{\mathrm{S}}\)'s and \(X_{i}^{\mathrm{S}}\)'s, denoted by \(\mathsf{P}_{\mathrm{S}}^{m}\), is also a faithful representation of \(\mathsf{P}^{m}\): \(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\) and \(\mathsf{P}_{\mathrm{S}}^{m}\) are unitarily equivalent representations, i.e., \[\mathsf{P}_{\mathrm{S}}^{m}:=U^{-1}\left(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\right)U. \tag{17}\] With these observations, the proposition is proved: 1. \(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\) is a group of Paulian operators, and so is \(\mathsf{P}_{\mathrm{S}}^{m}\)--Note that unless \(\mathcal{H}_{\mathrm{C}}\) is (isomorphic to) \(\mathbb{C}^{2^{p}}\) for some integer \(p\), \(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\) is not a group of Pauli operators. Besides, the \(Z_{i}\)'s, \(m\) in total, generate a maximal linearly independent and abelian subgroup2 of \(I_{\mathrm{C}}\otimes\mathsf{P}^{m}\); by unitary equivalence, the \(Z_{i}^{\mathrm{S}}\)'s, \(m\) in total, also generate a maximal linearly independent and abelian subgroup of \(\mathsf{P}_{\mathrm{S}}^{m}\). Footnote 2: Please see the discussion near the end of Sec. C. 2. This subgroup is the Paulian stabilizer group: \[\mathsf{S}:=\langle Z_{1}^{\mathrm{S}},\ldots,Z_{m}^{\mathrm{S}}\rangle. \tag{18}\] 3. Because the \(\mathcal{H}_{\mathrm{C}}\otimes\left|(t)\right\rangle\) are the \((t)\)-simultaneous eigenspaces of the \(Z_{i}\)'s, the \(\mathcal{H}_{(t)}\) are the \((t)\)-simultaneous eigenspaces of the \(Z_{i}^{\mathrm{S}}\)'s. \(\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{(I)}\) is hence stabilized by the \(Z_{i}^{\mathrm{S}}\)'s. 4. For any \(\left|\psi\right\rangle\in\mathcal{H}_{\mathrm{C}}\), if \(F_{(t)}\in\mathbb{F}^{\prime}\) occurs, \(\left|\psi\right\rangle\) becomes \(F_{(t)}\left|\psi\right\rangle\in\mathcal{H}_{(t)}\), a \((t)\)-simultaneous eigenvector of the \(Z_{i}^{\mathrm{S}}\)'s; performing the syndrome measurement by measuring the \(Z_{i}^{\mathrm{S}}\)'s we obtain the simultaneous eigenvalues \((t)\), which are the **error syndrome**[2], and we can correct the error by inverting \(F_{(t)}\). Hence, any correctable error \(E\) for which \[E\mathcal{H}_{\mathrm{C}}\subseteq\overline{\mathcal{H}} \tag{19}\] can be detected and corrected by measuring the \(Z_{i}^{\mathrm{S}}\)'s. In a nutshell, via the isomorphism (12) and \(U\) of (14), we borrow the structure from \(\mathcal{H}_{\mathrm{B}}\) and apply it to \(\mathcal{H}^{\prime}=\overline{\mathcal{H}}\subseteq\mathcal{H}\): \(Z_{i}^{\mathrm{S}}\) and \(X_{i}^{\mathrm{S}}\) are essentially Pauli \(Z\) and \(X\) on different subsystems or sites, and such a structure can be established for any quantum error-correcting code. Treating \(\overline{\mathcal{H}}\) and \(\mathcal{H}_{\mathrm{B}}\) as identical, the \(\mathbb{C}_{i}^{2}\)'s of \(\mathcal{H}_{\mathrm{B}}\) are the stabilizer qubits [38]. For Pauli stabilizer codes, if we consider \(\overline{\mathcal{H}}=\mathcal{H}\) and \(\mathcal{H}_{\mathrm{B}}\) as the same space, the unitary map \(U\), which becomes an operator now, is in the Clifford group [2, 11, 38, 58].
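The construction above is easy to replay numerically. The following sketch (our own toy example) does so for the three-qubit repetition code with \(m=2\) and \(\mathbb{F}^{\prime}=\{I,X_{1},X_{2},X_{3}\}\): it builds \(U\) of (14) explicitly, forms \(Z_{i}^{\mathrm{S}}=U^{-1}Z_{i}U\) as in (16), and checks that the resulting Paulian stabilizers commute and stabilize \(\mathcal{H}_{\mathrm{C}}\).

```python
# Toy replay of Sec. III.1 for the 3-qubit repetition code.
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1., -1.])
X = np.array([[0., 1.], [1., 0.]])
kron = lambda *ops: reduce(np.kron, ops)

e8, e2 = np.eye(8), np.eye(2)
codewords = [e8[0], e8[7]]                      # |000>, |111> span H_C
errors = [kron(I2, I2, I2), kron(X, I2, I2),    # syndrome map: I -> (+1,+1),
          kron(I2, X, I2), kron(I2, I2, X)]     # X1 -> (+1,-1), X2 -> (-1,+1), ...
tuples = [(0, 0), (0, 1), (1, 0), (1, 1)]       # index 0 encodes +1, index 1 encodes -1

# U|v> = (V_(t)|v>) (x) |(t)> for |v> in F_(t) H_C, choosing V_(t) = F_(t)^{-1}.
U = sum(np.outer(kron(e2[j], e2[a], e2[b]), E @ k)
        for E, (a, b) in zip(errors, tuples) for j, k in enumerate(codewords))

Z1S = U.T @ kron(I2, Z, I2) @ U                 # Eq. (16), with U^{-1} = U^T here
Z2S = U.T @ kron(I2, I2, Z) @ U
assert np.allclose(Z1S @ Z2S, Z2S @ Z1S)        # the stabilizers commute
for k in codewords:
    assert np.allclose(Z1S @ k, k) and np.allclose(Z2S @ k, k)  # stabilize H_C
# For this choice of U one recovers the familiar generators: Z1S = Z2 Z3, Z2S = Z1 Z3.
```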
We will call members of \(\mathsf{S}\)**Paulian stabilizers**. Like Pauli stabilizer codes, we can choose any generating set of \(\mathsf{S}\) for syndrome measurements. From now on, rather than (7), \(\mathcal{H}_{(t)}\) will refer to the \((t)\)-simultaneous eigenspace of the \(Z_{i}^{\mathrm{S}}\)'s, and we will call it a \((t)\)**-syndrome space**. Defining them this way will help us extend the Paulian stabilizers later. ### A larger stabilizer group The stabilizers depicted in Sec. III.1 are the minimal version of Proposition 1 with \(\mathcal{H}^{\prime}=\overline{\mathcal{H}}\), as the procedures laid out above are applicable to every code; however, when \(\log_{2}\left|\mathbb{F}\right|\) is not an integer, \(\mathbb{F}^{\prime}\subset\mathbb{F}\), and there are correctable errors that cannot be detected by the \(Z_{i}^{\mathrm{S}}\)'s. Now suppose the code obeys \[2^{\left\lceil\log_{2}\left|\mathbb{F}\right|\right\rceil}\dim\mathcal{H}_{\mathrm{C}}\leq\dim\mathcal{H}, \tag{20}\] where \(\left\lceil\cdot\right\rceil\) is the ceiling function. Let \[m=\left\lceil\log_{2}\left|\mathbb{F}\right|\right\rceil, \tag{21}\] and we can consider a larger family of orthogonal operators \(\mathbb{F}^{\prime\prime}\) such that \(\mathbb{F}\subseteq\mathbb{F}^{\prime\prime}\), and that in addition to errors in \(\mathbb{F}\) obeying (2) we require \[E\mathcal{H}_{\mathrm{C}}\cong\mathcal{H}_{\mathrm{C}}\text{ and }E\mathcal{H}_{\mathrm{C}}\perp F\mathcal{H}_{\mathrm{C}}\;\forall E\neq F\in\mathbb{F}^{\prime\prime}. \tag{22}\] Like before, with each element in \(\mathbb{F}^{\prime\prime}\) we associate a unique binary tuple of length \(m\), i.e., a bijection between \(\mathbb{F}^{\prime\prime}\) and \(\mathbb{T}\); cf. Sec. III.1. This way, the Paulian stabilizers associated with \(\mathbb{F}^{\prime\prime}\) cover all errors in \(\mathbb{F}\). The operators in \(\mathbb{F}^{\prime\prime}\setminus\mathbb{F}\) may be uncorrectable as they may not satisfy (2), but they are instrumental in constructing a larger Paulian stabilizer group. In short, we would like the Paulian stabilizers to cover all correctable errors, hence choosing \(m=\left\lceil\log_{2}\left|\mathbb{F}\right|\right\rceil\) if possible; if (20) cannot be satisfied, we resort to \(m=\left\lfloor\log_{2}\left|\mathbb{F}\right|\right\rfloor\). In particular, given a code with distance \(d\), we have \[\left|\mathbb{F}\right|\leq\sum_{j=0}^{\left\lfloor(d-1)/2\right\rfloor}\binom{n}{j}3^{j}, \tag{23}\] which serves as an upper bound for \(\left|\mathbb{F}\right|\) and is exact when the code is nondegenerate [1; 2], cf. the quantum Hamming bound [2; 48]. Expression (23) combined with (20) is a sufficient condition to judge whether it is possible to find Paulian stabilizers to correct all the errors for this code; explicitly, \[2^{\left\lceil\log_{2}\sum_{j=0}^{\lfloor(d-1)/2\rfloor}\binom{n}{j}3^{j}\right\rceil}\dim\mathcal{H}_{\mathrm{C}}\leq\dim\mathcal{H}, \tag{24}\] which is necessary and sufficient if the code is nondegenerate.
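As a worked instance of (24) (our own check), take the well-known \([[5,1,3]]\) code: with \(n=5\) and \(d=3\), \[\sum_{j=0}^{1}\binom{5}{j}3^{j}=1+15=16,\qquad 2^{\lceil\log_{2}16\rceil}\dim\mathcal{H}_{\mathrm{C}}=2^{4}\cdot 2=32=2^{5}=\dim\mathcal{H},\] so (24) holds with equality; as this code is nondegenerate, its \(2^{4}=16\) syndrome spaces fill the whole space, consistent with it being a perfect code.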
### Extending the domain If \(2^{m}\dim\mathcal{H}_{\mathrm{C}}<\dim\mathcal{H}\), we may extend the domains of the \(Z_{i}^{\mathrm{S}}\)'s, and they should remain self-adjoint so that they are measurable. The following fact may be utilized [59]: **Theorem 1**.: _For a self-adjoint/unitary operator \(A\) with an invariant subspace \(\mathcal{H}^{\prime}\), \(A|_{\mathcal{H}^{\prime}}\) and \(A|_{\mathcal{H}^{\prime\perp}}\) are both self-adjoint/unitary operators._ Hence, to extend a self-adjoint operator, we can define another self-adjoint operator on the space orthogonal to the domain and add them. In particular, let's extend the \(Z_{i}^{\mathrm{S}}\)'s as follows: We enlarge all syndrome spaces \(\mathcal{H}_{(t)}\) while keeping them isomorphic and orthogonal to each other, and let their dimension be \(k^{\prime}\), which would be no smaller than \(\dim\mathcal{H}_{\mathrm{C}}\). Following how the \(Z_{i}^{\mathrm{S}}\)'s were originally constructed on \(\overline{\mathcal{H}}\) in Sec. III.1, we reach the final form of Proposition 1, where \(\mathcal{H}^{\prime}\) is the direct sum of all syndrome spaces and the \(Z_{i}^{\mathrm{S}}\)'s are commutative Paulian operators to the restriction of \(\mathcal{H}^{\prime}\). Thus, the proof in Sec. III.1 and the discussion in Sec. III.2 and this subsection together illustrate the complete picture of Proposition 1. We remark 1. \(F_{(t)}\mathcal{H}_{\mathrm{C}}\) is a subspace of the corresponding syndrome space \(\mathcal{H}_{(t)}\), in particular \[\mathcal{H}_{\mathrm{C}}\subseteq\mathcal{H}_{(I)}: \tag{25}\] The code space is stabilized by the \(Z_{i}^{\mathrm{S}}\)'s, but it is not necessarily the \((I)\)-syndrome space, rather a subspace thereof. 2. If \(\mathcal{H}\) is infinite-dimensional, syndrome spaces can be made infinite-dimensional while keeping them isomorphic and mutually orthogonal; an example will be given in Sec. V.2. 3. When \(\dim\mathcal{H}/\dim\mathcal{H}_{\mathrm{C}}\) is an integral power of 2, it is always possible to construct a Paulian stabilizer group that uses the space to its full capacity for error correction; specifically, this is true for binary codes. ### Measuring Paulian operators Suppose the state is currently in \(\mathcal{H}^{\prime}\). To measure a \(Z_{i}^{\mathrm{S}}\), we can make use of a generalized CNOT: Consider an ancilla qubit \(\mathcal{H}_{\mathrm{A}}\cong\mathbb{C}^{2}\) initialized at \(\left|1\right\rangle_{\mathrm{A}}\). To the restriction of \(\mathcal{H}^{\prime}\otimes\mathcal{H}_{\mathrm{A}}\), define \[\mathrm{GCNOT}|_{\mathcal{H}^{\prime}\otimes\mathcal{H}_{\mathrm{A}}}:=\Pi_{-}\otimes X_{\mathrm{A}}+\Pi_{+}\otimes I_{\mathrm{A}} \tag{26}\] \[=I_{\mathcal{H}^{\prime}}\otimes\left|+\right\rangle_{\mathrm{A}}\left\langle+\right|_{\mathrm{A}}+Z_{i}^{\mathrm{S}}\otimes\left|-\right\rangle_{\mathrm{A}}\left\langle-\right|_{\mathrm{A}}, \tag{27}\] where \(\Pi_{\pm}\) project onto the \(\pm 1\)-eigenspaces of \(Z_{i}^{\mathrm{S}}\) and \(\left|\pm\right\rangle_{\mathrm{A}}\) are the \(\pm 1\)-eigenstates of \(X_{\mathrm{A}}\); in Appendix A it will be explained why (26) and (27) are equal and how unique the generalized CNOT is. On the entire space \(\mathcal{H}\otimes\mathcal{H}_{\mathrm{A}}\) it is thus \[\mathrm{GCNOT}=\Pi_{-}\otimes X_{\mathrm{A}}+\Pi_{+}\otimes I_{\mathrm{A}}+\Pi_{\mathcal{H}^{\prime\perp}}\otimes U_{\mathrm{A}}, \tag{28}\] where \(U_{\mathrm{A}}\) can be any unitary operator; \(\Pi_{\mathcal{H}^{\prime\perp}}\otimes U_{\mathrm{A}}\) is there for GCNOT to be unitary, cf. Theorem 1.
If the system is in a \(-1\)-eigenstate of \(Z_{i}^{\mathrm{S}}\), the state of the qubit will be mapped to \(\left|-1\right\rangle_{\mathrm{A}}\), else it remains at \(\left|1\right\rangle_{\mathrm{A}}\), so measuring \(Z_{\mathrm{A}}\) on the ancilla afterwards is equivalent to measuring \(Z_{i}^{\mathrm{S}}\). This operator derives from the generalized CNOT or controlled-\(X\) in Refs. [60; 61; 62; 63], but we do not require that the system and ancilla have the same dimension. From now on for simplicity we will ignore the restriction. Many implementations of Pauli measurements involve these controlled operations implicitly [3; 13; 20]: For example, to measure \(Z_{1}\cdots Z_{j}\), with the (regular) CNOT on the \(i\)-th (data) qubit with the ancilla as the target denoted by \(\mathrm{CNOT}_{i}\), it can be found that \[\prod_{i=1}^{j}\mathrm{CNOT}_{i}=\Pi_{-}\otimes X_{\mathrm{A}}+\Pi_{+}\otimes I_{\mathrm{A}}: \tag{29}\] \(\Pi_{\pm}\) here project onto the \(\pm 1\)-eigenspaces of \(Z_{1}\cdots Z_{j}\), so the composition is a generalized CNOT. Using a non-qubit system as the control may not be as intuitive, so let's instead consider the controlled-\(Z_{i}^{\mathrm{S}}\): \[CZ_{i}^{\mathrm{S}}=Z_{i}^{\mathrm{S}}\otimes\left|-1\right\rangle_{\mathrm{A}}\left\langle-1\right|_{\mathrm{A}}+I_{\mathcal{H}^{\prime}}\otimes\left|1\right\rangle_{\mathrm{A}}\left\langle 1\right|_{\mathrm{A}}, \tag{30}\] which is a controlled-\(U\) operation with \(U\) being the Paulian operator \(Z_{i}^{\mathrm{S}}\)[4]. Compared with (27), we have \[\mathrm{GCNOT}=\left(I_{\mathcal{H}^{\prime}}\otimes H_{\mathrm{A}}\right)CZ_{i}^{\mathrm{S}}\left(I_{\mathcal{H}^{\prime}}\otimes H_{\mathrm{A}}\right), \tag{31}\] where \(H_{\mathrm{A}}\) is the Hadamard gate. In other words, if the ancilla is initialized at \(\left|1\right\rangle_{\mathrm{A}}\), we can perform an inverse Hadamard gate to map it to \(\left|+\right\rangle_{\mathrm{A}}\), and apply the \(CZ_{i}^{\mathrm{S}}\) gate. Measuring \(X_{\mathrm{A}}\) on the ancilla, if the result is \(\pm 1\), then it means the system was in a \(\pm 1\)-eigenstate of \(Z_{i}^{\mathrm{S}}\), so the overall effect is identical to measuring \(Z_{i}^{\mathrm{S}}\). Equation (31) is in the same vein as exchanging the target and control qubits of a (regular) CNOT by composing with Hadamard gates or a change of basis [4]: Indeed, (27) can be understood as using the \(\left|\pm\right\rangle_{\mathrm{A}}\) states of the ancilla qubit to determine whether to perform \(Z_{i}^{\mathrm{S}}\), so the ancilla qubit is the control in this sense; with (31) we simply change the "control states" from \(\ket{\pm}_{\mathrm{A}}\) to \(\ket{\pm 1}_{\mathrm{A}}\). Regarding the ancilla qubit as the control as in (27) or (30) also brings the following benefit: Suppose the system is composed of qubits, and that we have a quantum circuit for \(Z_{i}^{\mathrm{S}}\) using fundamental gates, comprising single-qubit gates and CNOT or two-qubit controlled gates; let \(Z_{i}^{\mathrm{S}}=\prod_{j}U_{j}\), and we have \[CZ_{i}^{\mathrm{S}}=\prod_{j}CU_{j}. \tag{32}\] If \(U_{j}\) is a single-qubit gate, then we can again decompose \(CU_{j}\) as single-qubit gates and CNOT's; if \(U_{j}\) is a CNOT or a controlled-\(V_{j}\) for some \(V_{j}\), then \(CU_{j}\) is a Toffoli gate or \(C^{2}(V_{j})\) and we can again decompose it as fundamental gates [4; 64; 65]. Thus, if we are able to carry out \(Z_{i}^{\mathrm{S}}\) as an operation then we are also able to measure it.
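The measurement scheme of (26)-(31) is easy to simulate. The sketch below (our own toy example, with a generic Paulian involution \(Z^{\mathrm{S}}=Z\otimes Z\) standing in for \(Z_{i}^{\mathrm{S}}\)) builds GCNOT from the projectors, verifies the identity (31), and shows that the ancilla readout reproduces a \(Z^{\mathrm{S}}\) measurement.

```python
# Simulate measuring a Paulian operator via an ancilla, per Eqs. (26)-(31).
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = (X + Z) / np.sqrt(2)

ZS = np.kron(Z, Z)                              # toy Paulian operator on H' = C^4
Pp = (np.eye(4) + ZS) / 2                       # projector onto the +1-eigenspace
Pm = (np.eye(4) - ZS) / 2                       # projector onto the -1-eigenspace

GCNOT = np.kron(Pm, X) + np.kron(Pp, I2)        # Eq. (26)
CZS = np.kron(ZS, np.diag([0., 1.])) + np.kron(np.eye(4), np.diag([1., 0.]))  # Eq. (30)
assert np.allclose(GCNOT, np.kron(np.eye(4), H) @ CZS @ np.kron(np.eye(4), H))  # Eq. (31)

ket1A = np.array([1., 0.])                      # ancilla |1>_A (+1-eigenstate of Z)
psi = np.zeros(4); psi[1] = 1.0                 # |01>, a -1-eigenstate of Z (x) Z
out = GCNOT @ np.kron(psi, ket1A)
# The ancilla flips to |-1>_A, so measuring Z_A yields the syndrome -1.
assert np.allclose(out, np.kron(psi, np.array([0., 1.])))
```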
Equation (29) can also be better comprehended with the ancilla as the control. We discussed extending Paulian-ness to a larger space \(\mathcal{H}^{\prime}\) in Sec. III.3. From a measurement point of view, how the Paulian stabilizers should be extended depends on whether the corresponding controlled operations are natural, that is, whether we can couple the system with the ancilla via the controlled operations relatively easily. We can tweak (27) to use an ancilla qudit (which may be composed of several qubits) for the setup to be less error-prone [1; 2; 58]: Omitting \(\Pi_{\mathcal{H}^{\prime\perp}}\otimes U_{\mathrm{A}}\) again, let the system be coupled with the qudit initialized at \(\ket{1}_{\mathrm{A}}\) through this generalized CNOT \[\sum_{i_{-}}\ket{i_{-}}\bra{i_{-}}\otimes X_{i_{-}}+\sum_{i_{+}}\ket{i_{+}}\bra{i_{+}}\otimes X_{i_{+}}, \tag{33}\] where 1. \(\{\ket{i_{\pm}}\}\) are orthonormal bases of the \(\pm 1\)-eigenspaces of \(Z_{i}^{\mathrm{S}}\), 2. each \(X_{i_{\pm}}\) is an \(X\) operator between the states \(\ket{1}_{\mathrm{A}}\) and \(\ket{i_{\pm}}_{\mathrm{A}}\) of the qudit [60], and 3. we have an observable \(Z_{\mathrm{A}}^{\prime}\) on the qudit, with \(\ket{i_{\pm}}_{\mathrm{A}}\) being \(\pm 1\)-eigenstates of \(Z_{\mathrm{A}}^{\prime}\); the \(\ket{i_{-}}_{\mathrm{A}}\)'s may not be orthogonal to, or even different from, each other, and likewise for the \(\ket{i_{+}}_{\mathrm{A}}\)'s. \(\ket{1}_{\mathrm{A}}\) is a \(+1\)-eigenstate of \(Z_{\mathrm{A}}^{\prime}\), and some of the \(\ket{i_{+}}_{\mathrm{A}}\)'s may be \(\ket{1}_{\mathrm{A}}\). Afterwards, we measure \(Z_{\mathrm{A}}^{\prime}\), and all this combined is equivalent to measuring the Paulian operator. Expression (33) is again an adaptation of the generalized CNOT from Refs. [60; 61; 62; 63]. ### Subsystem codes Subsystem codes can be considered a generalization of regular error-correcting codes [37; 38; 39; 40; 41; 42; 43]: The code space becomes \[\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{\mathrm{L}}\otimes\mathcal{H}_{\mathrm{G}}\subseteq\mathcal{H}. \tag{34}\] The information is stored in the logical subsystem \(\mathcal{H}_{\mathrm{L}}\) and the state of the gauge subsystem \(\mathcal{H}_{\mathrm{G}}\) does not matter. Proposition 1 is also applicable to subsystem codes, explicitly: **Corollary 1**.: _The code space of every subsystem code can be stabilized by operators with properties identical to those listed in Proposition 1. Hence, after obtaining the Paulian stabilizer group \(\mathsf{S}\), for any nonzero \(A\in\mathcal{L}\left(\mathcal{H}_{\mathrm{G}}\right)\), all elements in \((I_{\mathrm{L}}\otimes A)\mathsf{S}\) leave the encoded state intact._ Let's provide a simple argument as to why this is true: According to Ref. [66], with \(\mathbb{E}\) denoting the set of correctable errors, for a subsystem code it is possible to find a set \(\mathbb{F}\) of "orthogonal" correctable errors that obeys (3), just like an ordinary error-correcting code. Utilizing \(\mathbb{F}\) and the corresponding syndrome spaces, we can obtain Paulian stabilizers by following the steps in Sec. III.1. ## IV Concatenation of binary codes Binary codes with appropriate parameters can be concatenated, and we will illustrate how to acquire the Paulian stabilizer group of the new code, in a similar fashion to Pauli stabilizer codes [48].
A symbol with sub- or superscript in, out, or \(+\) (\(+\) for "adding") indicates it belongs to the inner, outer, or concatenated code respectively, and the sub- or superscript \(w\) means it can be any one of those three. Let them be \([[n_{w},k_{w}]]\)-codes, and let the inner and outer codes have Paulian stabilizers \(Z_{i}^{w}\)'s. To concatenate them, \[q:=n_{\mathrm{out}}/k_{\mathrm{in}} \tag{35}\] should be an integer. Let \(\mathcal{H}_{w}\) be the space each code belongs in, and \[\mathcal{H}_{+}=\mathcal{H}_{\mathrm{in}}^{\otimes q}. \tag{36}\] Define the following operators on \(\mathcal{H}_{+}\): \[Z_{i,j}^{+}:=\underbrace{I_{\mathrm{in}}\otimes\cdots\otimes I_{\mathrm{in}}}_{j-1\text{ subsystems}}\otimes\underbrace{Z_{i}^{\mathrm{in}}}_{j\text{-th }\mathcal{H}_{\mathrm{in}}}\otimes I_{\mathrm{in}}\otimes\cdots\otimes I_{\mathrm{in}}, \tag{37}\] which are independent and commute with each other, and which stabilize \[\mathcal{H}_{\mathrm{in,C}}^{\otimes q}\subseteq\mathcal{H}_{+}, \tag{38}\] where \(\mathcal{H}_{\mathrm{in,C}}\) is the code space of the inner code. Next, let the logical Pauli operators on the inner code commute with all \(Z_{i}^{\mathrm{in}}\)'s, and expand every \(Z_{i}^{\mathrm{out}}\) in terms of Pauli operators. Since \(\mathcal{H}_{\mathrm{in,C}}\) and \(\mathcal{H}_{\mathrm{out}}\) are composed of \(k_{\mathrm{in}}\) and \(qk_{\mathrm{in}}\) qubits respectively, regarding every \(k_{\mathrm{in}}\) qubits of \(\mathcal{H}_{\mathrm{out}}\) as \(\mathcal{H}_{\mathrm{in,C}}\) we can replace each Pauli operator in every \(Z_{i}^{\mathrm{out}}\) by the corresponding logical Pauli operator on the inner code, and the resultant operator will be denoted by \(Z_{i}^{+}\). The \(Z_{i}^{+}\)'s together with the \(Z_{j,k}^{+}\)'s are mutually commutative and independent Paulian operators, composing the stabilizer generators of the concatenated code. How to get the \(Z_{i}^{+}\)'s may be a little hard to comprehend, so here's a quick demonstration (see also the programmatic sketch below): Suppose \(n_{\mathrm{out}}=4\) and \(k_{\mathrm{in}}=2\). As \(q=4/2=2\), \[\mathcal{H}_{+}=\mathcal{H}_{\mathrm{in}}\otimes\mathcal{H}_{\mathrm{in}}. \tag{39}\] If \(Z_{1}^{\mathrm{out}}=(X\otimes X\otimes Z\otimes Y+Z\otimes Z\otimes X\otimes I_{2})/2\), then \[Z_{1}^{+}=\left(\overline{X}_{1}\overline{X}_{2}\otimes\overline{Z}_{1}\overline{Y}_{2}+\overline{Z}_{1}\overline{Z}_{2}\otimes\overline{X}_{1}I_{\mathrm{in}}\right)/2, \tag{40}\] where \(\overline{L}_{i}\) denotes the logical \(L\) operator of the \(i\)-th logical qubit. In (40), \(\overline{X}_{1}\overline{X}_{2}\) and \(\overline{Z}_{1}\overline{Z}_{2}\) act on the first \(\mathcal{H}_{\mathrm{in}}\) of (39), and \(\overline{Z}_{1}\overline{Y}_{2}\) and \(\overline{X}_{1}I_{\mathrm{in}}\) on the second one. Notice logical operators acting on different logical qubits commute with one another, so e.g. \(\overline{X}_{1}\overline{X}_{2}=\overline{X}_{2}\overline{X}_{1}\). Also, a logical identity operator is simply the identity operator on the system, so in (40) instead of \(\overline{X}_{1}\overline{I}_{2}\) we had \(\overline{X}_{1}I_{\mathrm{in}}=\overline{X}_{1}\); we spelled \(I_{\mathrm{in}}\) out for clarity. The methods to find the parameters and codewords of concatenated codes are well established [1; 44; 67; 68], on which we will provide a short discussion in Appendix B. ## V Examples Here we will show the Paulian stabilizers of some codes, or how to find them.
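Before turning to the examples, here is a purely symbolic sketch of the substitution from Sec. IV that produces \(Z_{i}^{+}\), applied to the toy case of (39) and (40); the string encoding and the placeholder names `LX_j`, `LZ_j`, etc. for the inner code's logical operators are assumptions of this sketch.

```python
# Hypothetical expansion of Z_1^out in Pauli strings, taken from the toy
# example in the text: Z_1^out = (X X Z Y + Z Z X I)/2, with n_out = 4, k_in = 2.
z1_out = [(0.5, "XXZY"), (0.5, "ZZXI")]

def to_logical(term, k_in=2):
    """Replace each block of k_in single-qubit Paulis by the product of the
    corresponding logical operators on one copy of the inner code space."""
    coeff, paulis = term
    blocks = [paulis[i:i + k_in] for i in range(0, len(paulis), k_in)]
    factors = []
    for block in blocks:
        ops = [f"L{p}_{j + 1}" for j, p in enumerate(block) if p != "I"]
        factors.append("*".join(ops) if ops else "I_in")
    return coeff, " (x) ".join(factors)

for term in z1_out:
    print(to_logical(term))
# (0.5, 'LX_1*LX_2 (x) LZ_1*LY_2')   <-- first term of (40)
# (0.5, 'LZ_1*LZ_2 (x) LX_1')        <-- second term of (40)
```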
### Transformed from Pauli stabilizer codes Given a Pauli stabilizer code that can correct a set of operators \(\mathbb{E}\) with stabilizers \(Z_{i}^{\mathrm{S}}\), we can perform a unitary transformation \(U\) on the system, which can be seen as a change of orthonormal basis. The transformed stabilizer generators \[Z_{i}^{\mathrm{S}}{}^{\prime}=UZ_{i}^{\mathrm{S}}U^{-1} \tag{41}\] will be Paulian, and they can correct the set of operators \(U\mathbb{E}U^{-1}\). If the unitary transformation is local on each (physical) qubit, the distance shall stay the same. For illustration, consider an \(n\)-qubit repetition code with stabilizers [69; 48] \[Z_{1}Z_{2},\ldots,Z_{n-1}Z_{n}, \tag{42}\] which can fix a single \(X\) error on any qubit. We can construct a generalized repetition code for normal operators with **Lemma 2**.: _An operator on \(\mathbb{C}^{2}\) is normal if and only if it is a linear combination of the identity and a Paulian operator that is not proportional to the identity. Note that as the Paulian operator is not proportional to the identity, it has two eigenvalues._ The proof can be found in Appendix E. From the discussion in Secs. II.1 and II.2, the Paulian operator in Lemma 2 can be chosen to be self-adjoint so that it has eigenvalues \(1\) and \(-1\). By this lemma, consider any normal operator \(E=aI+bV\) on \(\mathbb{C}^{2}\), where \(a,b\in\mathbb{C}\) and \(V\) is self-adjoint and Paulian. After obtaining \(V\), as \(X\) and \(V\) have the same spectrum, \(X\) and \(V\) are unitarily similar via some unitary \(U\); in other words, on each (physical) qubit we have \(V_{i}=U_{i}X_{i}U_{i}^{-1}\), where the subscript \(i\) indicates which qubit the operator acts on nontrivially. This way, we acquire Paulian stabilizers that can correct \(E\) on a single qubit: \[\left(U_{i}Z_{i}U_{i}^{-1}\right)\left(U_{i+1}Z_{i+1}U_{i+1}^{-1}\right),\;i=1,\ldots,n-1. \tag{43}\] Hence, we can correct any error that is a normal operator on a single qubit; specifically, the normal operator can be any unitary operator. ### Bosonic codes Let's first consider this bosonic binomial code [70; 71; 72]: \[\left|\overline{1}\right\rangle:=\left|2\right\rangle,\;\left|\overline{-1}\right\rangle:=\left(\left|4\right\rangle+\left|0\right\rangle\right)/\sqrt{2}, \tag{44}\] which was experimentally demonstrated in Ref. [71] and can correct the orthogonal errors \(I\) and the annihilation operator \(a\). The space can be partitioned according to parity [70; 71; 72; 73]--The code space \(\mathcal{H}_{\mathrm{C}}\subset\mathcal{H}_{(1)}\) has even parity while \(a\mathcal{H}_{\mathrm{C}}\subset\mathcal{H}_{(-1)}\) has odd parity. With \(N:=a^{\dagger}a\), the parity operator \[Z^{\mathrm{S}}=e^{i\pi N} \tag{45}\] is actually Paulian (see Sec. II.2): 1. It is clear that (45) is unitary; 2. Because its eigenvalues are \(\pm 1\), it is an involution; 3. Finally, because \(\left\{\left|2n-2\right\rangle\right\}_{n\in\mathbb{N}}\) and \(\left\{\left|2n-1\right\rangle\right\}_{n\in\mathbb{N}}\) are orthonormal bases of its \(+1\)- and \(-1\)-eigenspaces with the same cardinality--namely, we can establish a bijection between \(\left\{\left|2n-2\right\rangle\right\}_{n\in\mathbb{N}}\) and \(\left\{\left|2n-1\right\rangle\right\}_{n\in\mathbb{N}}\)--the two eigenspaces are isomorphic. The parity operator is Paulian on the whole space, i.e., \(\mathcal{H}^{\prime}=\mathcal{H}\), which is infinite-dimensional.
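For concreteness, here is a minimal numpy sketch of the parity stabilizer (45) acting on the binomial code (44), on a truncated Fock space; the truncation dimension is an assumption of this sketch.

```python
import numpy as np

dim = 12                                   # Fock-space truncation (assumed)
n = np.arange(dim)
parity = np.diag((-1.0) ** n)              # Z^S = exp(i*pi*N) on the truncation
a = np.diag(np.sqrt(n[1:]), k=1)           # annihilation operator: a|k> = sqrt(k)|k-1>

def fock(k):
    v = np.zeros(dim)
    v[k] = 1.0
    return v

one_L = fock(2)                                   # |1bar>  = |2>
minus_one_L = (fock(4) + fock(0)) / np.sqrt(2)    # |-1bar> = (|4>+|0>)/sqrt(2)

# Codewords lie in the even-parity (+1) eigenspace...
for v in (one_L, minus_one_L):
    assert np.allclose(parity @ v, v)

# ...while one photon loss maps them into the odd-parity (-1) eigenspace,
# so measuring Z^S detects the error a:
for v in (one_L, minus_one_L):
    w = a @ v
    w /= np.linalg.norm(w)
    assert np.allclose(parity @ w, -w)

# Z^S is unitary and an involution (here, also on the truncated space):
assert np.allclose(parity @ parity, np.eye(dim))
print("parity-stabilizer checks passed")
```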
To measure the syndrome, the controlled phase gate \(I\otimes\left|-1\right\rangle\left\langle-1\right|+e^{i\pi N}\otimes\left|1\right\rangle\left\langle 1\right|\) [73; 74] realizes the controlled operations (27) and (30), up to appropriate rotations on the ancilla. The next one is the bosonic code from Ref. [75]: \[\ket{\overline{1}}:=\ket{22},\;\ket{\overline{-1}}:=\left(\ket{40}+\ket{04}\right)/\sqrt{2}, \tag{46}\] which protects against the loss of up to one photon, and we have the following orthogonal correctable errors [72; 75]: \[\mathbb{E}=\left\{I,\,A_{1,1},A_{1,2}\right\}, \tag{47}\] where \(A_{i,j}\) is the damping operator for the \(j\)-th mode losing \(i\) photons (hence \(I\) corresponds to \(A_{0,1}\) and \(A_{0,2}\)). We can again choose parity operators as the Paulian stabilizers: \[Z_{1}^{\mathrm{S}}=e^{i\pi N_{1}},\;Z_{2}^{\mathrm{S}}=e^{i\pi N_{2}}. \tag{48}\] The correctable errors \(I\), \(A_{1,1}\) and \(A_{1,2}\) will have syndromes \((I)=(1,1)\), \((-1,1)\) and \((1,-1)\) respectively. Note that in this case we not only extend the domains to the entire space but also enlarge the Paulian group, as \(2=\lceil\log_{2}3\rceil>\lfloor\log_{2}3\rfloor=1\); see Secs. III.2 and III.3. Photon loss for the bosonic four-legged cat code can also be detected by parity [72; 76; 77; 78], so we also have a Paulian stabilizer for such a code. Finally, in Appendix G we will have a brief discussion about Gottesman-Kitaev-Preskill codes [72; 79; 80], where we will show a way to construct commutative Paulian stabilizers for these codes and discuss issues with them. ### Codeword stabilized code For an \(n\)-qubit system, a codeword stabilized code [49; 50; 81] is obtained in the following way: 1. We start with a maximally linearly independent and abelian subgroup of a Pauli group (please refer to Appendix D for the exact meaning), called the **word stabilizer**. 2. We also need a set of Pauli operators \(\{W_{i}\}\), called the **word operators**. 3. As the word stabilizer is maximally linearly independent and abelian, each of its simultaneous eigenspaces is one-dimensional, i.e., it stabilizes a unique quantum state; let it be \(\ket{\psi}\). 4. The codewords are then the \(W_{i}\ket{\psi}\)'s; that is, the code space is \(\mathrm{span}\left\{W_{i}\ket{\psi}\right\}\). The following result can be utilized to construct Paulian stabilizers of a codeword stabilized code: **Corollary 2**.: _For a codeword stabilized code:_ 1. _If_ \(P_{1}\) _and_ \(P_{2}\) _are correctable Pauli errors, they are either orthonormal or act identically on the code space up to a multiplication factor._ 2. _Assuming the code has distance_ \(d\)_, it is nondegenerate_ _[_1; 2_]_ _if and only if every operator in the word stabilizer except_ \(I\) _has weight no smaller than_ \(d\)_._ In Appendix F, specifically Sec. F.2, we provide a procedure to construct Paulian stabilizers that is applicable to every codeword stabilized code; here is the essence: We first determine whether there exist Paulian stabilizers that can correct all the relevant errors by (20) or (24); then according to Corollary 2 we can choose linearly independent Pauli errors as orthonormal correctable errors, and to be definite we can check whether the code is nondegenerate, again by Corollary 2. We then use the simultaneous eigenspaces of the word stabilizer to build syndrome spaces, which lead to Paulian stabilizers. An example would be the \(((9,12,3))\)-code from Refs.
[50; 82]: Each element of the word stabilizer except \(I\) has at least weight \(3\), so the code is nondegenerate according to Corollary 2, and we can choose all linearly independent weight-\(1\) Pauli errors, along with \(I\), as the orthonormal errors \(\mathbb{F}\) of (2). By (23), \[|\mathbb{F}|=3\times 9+1=28,\] so (20) or (24) is satisfied, and we can construct a Paulian stabilizer group to correct all the relevant errors, generated by \(\lceil\log_{2}|\mathbb{F}|\rceil=5\) Paulian operators. Furthermore, we can extend the stabilizers so that each is Paulian on the whole space. In fact, "Paulian stabilizers" for this code have already been found in Ref. [82], among which some are Pauli.3 Note, however, that the "Paulian stabilizers" from Ref. [82] possess a different structure from those presented in this work: The Paulian stabilizers of Proposition 1 are elements of a faithful representation of the Pauli group, so they are reminiscent of Pauli stabilizers of a Pauli stabilizer code. On the other hand, those from Ref. [82] are not, so different sequences of measurements are needed for different errors, and more than five observables are needed to detect all the errors, whereas with the Paulian stabilizers of Proposition 1 we require only five commutative observables for measurement. Even though Paulian stabilizers like those in Ref. [82] are interesting and useful _per se_, we will not delve into them. More details about this code can be found in Appendix F.4. Footnote 3: That this code can be stabilized by nontrivial Pauli operators can also be verified with Corollary 3 in the Appendix. Now let's consider the \(((5,6,2))\)-code from Refs. [50; 83]. Due to its distance, this code is an error-detecting code. It can be found that, with \(\mathbb{P}_{1}\) denoting the set of all weight-\(1\) Pauli errors, we have \[\mathcal{H}_{\mathrm{C}}^{\perp}=\sum_{P\in\mathbb{P}_{1}}P\mathcal{H}_{\mathrm{C}}, \tag{49}\] which implies that for the stabilizers to detect all errors in \(\mathbb{P}_{1}\), we must have \[\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{(I)}, \tag{50}\] \[\mathcal{H}_{\mathrm{C}}^{\perp}=\bigoplus_{(t)\in\mathbb{T}\setminus\{(I)\}}\mathcal{H}_{(t)}, \tag{51}\] where \(\mathbb{T}\) is the set of all syndromes; see Sec. III.1. For the stabilizers to be Paulian and commutative, all syndrome spaces must have the same dimension, so (50) and (51) together imply \[\dim\mathcal{H}=2^{m}\dim\mathcal{H}_{\mathrm{C}} \tag{52}\] for some positive integer \(m\), which is impossible for this system, as \(\dim\mathcal{H}_{\mathrm{C}}=6\) and \(\dim\mathcal{H}=2^{5}=32\). Hence, we cannot find commutative Paulian stabilizers for the \(((5,6,2))\)-code to detect all weight-1 errors--This is one of the cases where Paulian stabilizer groups may not be suitable for error correction or detection, cf. the discussion in Sec. III.2. Regardless, because this code has low dimensions, it is easier to demonstrate how to find its Paulian stabilizers, as every step can be made explicit without being too clumsy; in addition, we can show how to adapt our approach to error-detecting codes. Details can be found in Appendix F.3. ## VI Discussion and Conclusion We showed that every quantum error-correcting code, including the subsystem code, can be stabilized by operators which are Paulian and commutative to the restriction of a subspace \(\mathcal{H}^{\prime}\), which may or may not be the entire system \(\mathcal{H}\) (Proposition 1 and Corollary 1), with examples given in Sec. V.
In addition, we showed that the error syndrome can be obtained by measuring the Paulian stabilizers \(Z_{i}^{\mathrm{S}}\)'s, which can be achieved by performing the controlled operations \(CZ_{i}^{\mathrm{S}}\)'s, so quantum circuits for conducting \(Z_{i}^{\mathrm{S}}\)'s can be turned into circuits for measuring them (Sec. III.4). In terms of tensor product structure [84; 85; 86], \(\mathcal{H}^{\prime}\) is composed of \(m\) stabilizer qubits [38] generated by the Paulian operators, and a subsystem isomorphic to the syndrome spaces, whose dimension \(k^{\prime}\) is no less than \(\dim\mathcal{H}_{\mathrm{C}}\), so we can embed \(\mathcal{H}_{\mathrm{C}}\) into it. This generalizes the observation made in Refs. [38; 86] that, for a system composed of qubits, commutative Paulian operators can partition the system into virtual qubits; if the Paulian operators are Pauli, it becomes a Pauli stabilizer code. Paulian stabilizers may be employed to realize codes that are not Pauli stabilizer codes, as showcased in Sec. V.2. As discussed in Sec. III.2, (20) is the condition for Paulian stabilizers to cover all correctable errors. Hence, binary codes may in particular benefit from the existence of Paulian stabilizers, because (20) is always satisfied; the same is true in the case where the code space is finite-dimensional while the entire system is infinite-dimensional, such as the bosonic codes in Sec. V.2. Furthermore, as we have demonstrated how to obtain the Paulian stabilizer group of a binary concatenated code in Sec. IV, it may help us obtain a code with higher distance along with the means to realize it. There are questions still left unanswered that may be worthy of further investigation: There is no unique Paulian stabilizer group for a code, and the ideal Paulian stabilizers are those that are easy to measure or conduct--With a universal set of quantum gates, we can in principle approximate them [4; 48], but many gates may be needed for the implementation. Hence, for Paulian stabilizers to be useful, how to find the ideal ones is a key issue, which depends on the physical system in question. Also, we showed the existence of Paulian stabilizers for error-correcting codes, but knowing this, can it help us find nontrivial new codes by using Paulian operators that are not Pauli as the stabilizers or correctable errors? ###### Acknowledgements. J.-Y. K. would like to thank Prof. Chung-Hsien Chou for introducing him to this area of research, and we thank Dr. Tanmay Singal for very fruitful discussion about various aspects of quantum error correction. H.-S.G. acknowledges support from the National Science and Technology Council, Taiwan under Grants No. NSFC 112-2119-M-002-014, No. NSTC 111-2119-M-002-006-MY3, No. NSTC 111-2119-M-002-007, No. NSFC 110-2627-M-002-002, No. NSTC 111-2627-M-002-001, and No. NSFC 111-2627-M-002-006, from the US Air Force Office of Scientific Research under Award Number FA9550-23-S-0001, and from the National Taiwan University under Grant No. NTU-CC-112L893404. H.-S.G. is also grateful for the support from the "Center for Advanced Computing and Imaging in Biomedicine (NTU-112L900702)" through The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE), Taiwan, and the support from the Physics Division, National Center for Theoretical Sciences, Taiwan. ## Appendix A Generalized CNOT Consider any self-adjoint Paulian operator \(P\) on \(\mathcal{H}\), and an ancilla qubit \(\mathcal{H}_{\mathrm{A}}\cong\mathbb{C}^{2}\).
We define the corresponding generalized CNOT as the following unitary operator on \(\mathcal{H}\otimes\mathcal{H}_{\mathrm{A}}\): \[\mathrm{GCNOT}:=\Pi_{-}\otimes X_{\mathrm{A}}+\Pi_{+}\otimes I_{\mathrm{A}}, \tag{53}\] where \(\Pi_{\pm}\) are the orthogonal projections onto the \(\pm 1\)-eigenspaces of \(P\) and A refers to the ancilla qubit. Here let's have a quick discussion about why GCNOT is equal to \[I_{\mathcal{H}}\otimes\Pi_{+}^{\mathrm{A}}+P\otimes\Pi_{-}^{\mathrm{A}}, \tag{54}\] where \[\Pi_{\pm}^{\mathrm{A}}:=\ket{\pm}_{\mathrm{A}}\bra{\pm}_{\mathrm{A}} \tag{55}\] are the orthogonal projections onto the \(\pm 1\)-eigenspaces of \(X_{\mathrm{A}}\). If \(P\) has only one eigenvalue, then \(P\) is either \(I_{\mathcal{H}}\) or \(-I_{\mathcal{H}}\). For the former, it is fairly easy to see both (53) and (54) are \(I_{\mathcal{H}}\otimes I_{\mathrm{A}}\), so they are identical. For the latter, (53) becomes \(I_{\mathcal{H}}\otimes X_{\mathrm{A}}\), whereas (54) becomes \[I_{\mathcal{H}}\otimes\Pi_{+}^{\mathrm{A}}-I_{\mathcal{H}}\otimes\Pi_{-}^{\mathrm{A}}=I_{\mathcal{H}}\otimes\left(\Pi_{+}^{\mathrm{A}}-\Pi_{-}^{\mathrm{A}}\right)=I_{\mathcal{H}}\otimes X_{\mathrm{A}}, \tag{56}\] so they are again the same. If \(P\) has two eigenvalues, namely \(\pm 1\), then we have \[\begin{split}\Pi_{-}\otimes X_{\mathrm{A}}+\Pi_{+}\otimes I_{\mathrm{A}}&=\Pi_{-}\otimes\left(\Pi_{+}^{\mathrm{A}}-\Pi_{-}^{\mathrm{A}}\right)+\Pi_{+}\otimes\left(\Pi_{+}^{\mathrm{A}}+\Pi_{-}^{\mathrm{A}}\right)\\ &=\left(\Pi_{+}+\Pi_{-}\right)\otimes\Pi_{+}^{\mathrm{A}}+\left(\Pi_{+}-\Pi_{-}\right)\otimes\Pi_{-}^{\mathrm{A}}\\ &=I_{\mathcal{H}}\otimes\Pi_{+}^{\mathrm{A}}+P\otimes\Pi_{-}^{\mathrm{A}},\end{split} \tag{57}\] which is (54). In fact, as the relations above do not depend on the dimension of the eigenspaces of \(X_{\mathrm{A}}\), we can replace the ancilla qubit with a system of even dimension and \(X_{\mathrm{A}}\) with another Paulian operator, and identical results will hold. Second, let's try to answer this question: Given a Paulian operator \(P\), is the generalized CNOT of (53) the only [besides a global phase factor or a phase difference between the two terms on the right-hand side of (53)] unitary operator that can achieve what we want of it? Specifically, let \(\overline{\mathrm{GCNOT}}\) denote the "most general" GCNOT for \(P\); the property we desire is \[\begin{split}\left(I_{\mathcal{H}}\otimes\ket{1}_{\mathrm{A}}\bra{1}_{\mathrm{A}}\right)\overline{\mathrm{GCNOT}}\left(\left|\psi\right\rangle\otimes\left|1\right\rangle\right)&=e^{i\theta_{+}}\left(\Pi_{+}\left|\psi\right\rangle\right)\otimes\left|1\right\rangle,\\ \left(I_{\mathcal{H}}\otimes\ket{-1}_{\mathrm{A}}\bra{-1}_{\mathrm{A}}\right)\overline{\mathrm{GCNOT}}\left(\left|\psi\right\rangle\otimes\left|1\right\rangle\right)&=e^{i\theta_{-}}\left(\Pi_{-}\left|\psi\right\rangle\right)\otimes\left|-1\right\rangle,\end{split} \tag{58}\] for every \(\left|\psi\right\rangle\in\mathcal{H}\), where \(\theta_{\pm}\in[0,2\pi)\). Thus, since \(\ket{1}_{\mathrm{A}}\bra{1}_{\mathrm{A}}+\ket{-1}_{\mathrm{A}}\bra{-1}_{\mathrm{A}}=I_{\mathrm{A}}\), \[\overline{\mathrm{GCNOT}}\left(\left|\psi\right\rangle\otimes\left|1\right\rangle\right)=e^{i\theta_{+}}\left(\Pi_{+}\left|\psi\right\rangle\right)\otimes\left|1\right\rangle+e^{i\theta_{-}}\left(\Pi_{-}\left|\psi\right\rangle\right)\otimes\left|-1\right\rangle. \tag{59}\]
This defines the action of \(\overline{\mathrm{GCNOT}}\) on \(\mathcal{H}\otimes\left|1\right\rangle\),4 so we can complete it by defining it on the orthogonal complement, namely \(\mathcal{H}\otimes\left|-1\right\rangle\). Footnote 4: \(\mathcal{H}\otimes\left|1\right\rangle\) and \(\mathcal{H}\otimes\mathrm{span}(\left|1\right\rangle)\) are identical, so the former is a space as well. Note \[\overline{\mathrm{GCNOT}}\left(\mathcal{H}\otimes\left|1\right\rangle\right)=\mathcal{H}_{+}\otimes\left|1\right\rangle\oplus\mathcal{H}_{-}\otimes\left|-1\right\rangle, \tag{60}\] where \(\mathcal{H}_{\pm}\) are the \(\pm 1\)-eigenspaces of the Paulian operator \(P\). As \(\overline{\mathrm{GCNOT}}\) is unitary, \[\overline{\mathrm{GCNOT}}\left(\mathcal{H}\otimes\left|1\right\rangle\right)\perp\overline{\mathrm{GCNOT}}\left(\mathcal{H}\otimes\left|-1\right\rangle\right), \tag{61}\] which suggests \[\overline{\mathrm{GCNOT}}\left(\mathcal{H}\otimes\left|-1\right\rangle\right)=\mathcal{H}_{-}\otimes\left|1\right\rangle\oplus\mathcal{H}_{+}\otimes\left|-1\right\rangle. \tag{62}\] Hence, we may define \[\overline{\mathrm{GCNOT}}\left(\left|\psi\right\rangle\otimes\left|-1\right\rangle\right):=\left(U_{-}\Pi_{-}\left|\psi\right\rangle\right)\otimes\left|1\right\rangle+\left(U_{+}\Pi_{+}\left|\psi\right\rangle\right)\otimes\left|-1\right\rangle, \tag{63}\] where \(U_{\pm}\) are any unitary maps from \(\mathcal{H}_{\pm}\) to themselves (i.e., operators); we may also choose \[\overline{\mathrm{GCNOT}}\left(\left|\psi\right\rangle\otimes\left|-1\right\rangle\right):=\left(U_{+}^{\prime}\Pi_{+}\left|\psi\right\rangle\right)\otimes\left|1\right\rangle+\left(U_{-}^{\prime}\Pi_{-}\left|\psi\right\rangle\right)\otimes\left|-1\right\rangle, \tag{64}\] where \(U_{\pm}^{\prime}\) are any unitary maps from \(\mathcal{H}_{\pm}\) to \(\mathcal{H}_{\mp}\). (53) is a special case of (63) with \(U_{\pm}\) being identities and \(\theta_{\pm}\) of (58) both being \(0\). ## Appendix B Code parameters and codewords of a concatenated binary code Because \(\mathcal{H}_{+}\) is \(\mathcal{H}_{\mathrm{in}}^{\otimes q}\) and there are \(q(n_{\mathrm{in}}-k_{\mathrm{in}})\) of the \(Z_{i,j}^{+}\)'s and \(n_{\mathrm{out}}-k_{\mathrm{out}}\) of the \(Z_{i}^{+}\)'s, \[n_{+}=n_{\mathrm{in}}q, \tag{65}\] \[k_{+}=n_{\mathrm{in}}q-q(n_{\mathrm{in}}-k_{\mathrm{in}})-(n_{\mathrm{out}}-k_{\mathrm{out}})=k_{\mathrm{out}}. \tag{66}\] When \(k_{\mathrm{out}}=k_{\mathrm{in}}=1\), \(n_{+}=n_{\mathrm{in}}n_{\mathrm{out}}\) and \(k_{+}=1\), as expected [2; 48]. That \(k_{+}=k_{\mathrm{out}}\) is also obvious from the way the codewords are obtained, as will be shown below. To find the distance, in the simple case of \(k_{\mathrm{in}}=1\), to change the logical state of the concatenated code an operator has to act nontrivially on at least \(d_{\mathrm{out}}\) inner logical qubits; for the logical state of the inner code to change, the operator needs to act nontrivially on at least \(d_{\mathrm{in}}\) physical qubits of \(\mathcal{H}_{\mathrm{in}}\), so the distance is at least \(d_{\mathrm{out}}d_{\mathrm{in}}\) [1; 2; 67]. More generally: 1. To change the logical state of the concatenated code at least \(d_{\mathrm{out}}\) inner logical qubits should be acted upon nontrivially. 2. Each \(\mathcal{H}_{\mathrm{in}}\) subsystem contains \(k_{\mathrm{in}}\) inner logical qubits. 3. To change the logical state of an \(\mathcal{H}_{\mathrm{in}}\) system, i.e. the state of its logical qubits, at least \(d_{\mathrm{in}}\) physical qubits need to be acted on nontrivially. Therefore, the distance of the concatenated code satisfies [1; 2; 67] \[d_{+}\geq\lceil d_{\mathrm{out}}/k_{\mathrm{in}}\rceil d_{\mathrm{in}}. \tag{67}\]
We can obtain the codewords given those of the outer and inner codes. For example, say that \(\left(\left|1,-1,-1,1\right\rangle+\left|-1,-1,-1,-1\right\rangle\right)/\sqrt{2}\) is a codeword of the outer code, and the inner code has two logical qubits. This codeword will become \[\left(\overline{\left|1,-1\right\rangle}\otimes\overline{\left|-1,1\right\rangle}+\overline{\left|-1,-1\right\rangle}\otimes\overline{\left|-1,-1\right\rangle}\right)/\sqrt{2},\] where \(\overline{\left|i,j\right\rangle}\in\mathcal{H}_{\mathrm{in,C}}\) are logical states of the inner code, \(i\) for the first logical qubit and \(j\) for the second one. So far we have taken the outer code as composed of \(n_{\mathrm{out}}\) qubits, but we can treat it as composed of \(q\) subsystems, each with dimension \(\dim\mathcal{H}_{\mathrm{in,C}}\), instead. We can then follow the standard procedure for finding the codewords and parameters of a concatenated code as in, e.g., Refs. [1; 44; 67; 68]: Replacing each \(\dim\mathcal{H}_{\mathrm{in,C}}\)-dimensional subsystem of the outer code by \(\mathcal{H}_{\mathrm{in}}\), the concatenated code hence has \(\dim\mathcal{H}_{+}=(2^{n_{\mathrm{in}}})^{q}\) and \(\dim\mathcal{H}_{+,\mathrm{C}}=2^{k_{\mathrm{out}}}\). The distance is no smaller than the product of that of the outer code and that of the inner one; note that, compared to when it is regarded as composed of qubits, the distance of the outer code should now be reduced by a factor of \(k_{\mathrm{in}}\), because each of its \(q\) subsystems is composed of \(k_{\mathrm{in}}\) qubits. ## Appendix C Phaseless group For a group of operators \(\mathsf{G}\) containing \(\{I,-I,iI,-iI\}\) as a subgroup, we define \[\hat{\mathsf{G}}:=\mathsf{G}/\{I,-I,iI,-iI\}. \tag{68}\] As \(\{I,-I,iI,-iI\}\) is clearly normal, \(\hat{\mathsf{G}}\) is a quotient group [87]. Specifically, for the Pauli group, \(\hat{\mathsf{P}}^{n}\) can be regarded as a "phaseless" version of the Pauli group: Abusing the language, we will regard the coset representatives as elements of \(\hat{\mathsf{P}}^{n}\); consequently, we will also call \(\hat{\mathsf{P}}^{n}\) a Pauli group and its elements _Pauli operators_. By removing the phases, \(\hat{\mathsf{P}}^{n}\) becomes linearly independent; in particular, \(\hat{\mathsf{P}}^{n}\) is a basis of \(\mathcal{L}\left(\mathbb{C}^{2^{n}}\right)\) [1; 31; 32]. The phaseless Pauli group \(\hat{\mathsf{P}}^{1}\) is isomorphic to the Klein four-group and \(\hat{\mathsf{P}}^{n}\) is isomorphic to the direct product of \(n\) copies of the Klein four-group [88]. For the same reason as why we introduced the phaseless Pauli group, given a subgroup of a Pauli/Paulian group, it is convenient to consider the phaseless version of it. As \(\{I,-I,iI,-iI\}\) may not always be in such a subgroup, we define: 1. If \(\mathsf{G}\) has \(\{I,-I,iI,-iI\}\) as a subgroup, then its phaseless counterpart is defined like before, i.e., (68). 2. If \(iI\notin\mathsf{G}\) but \(-I\in\mathsf{G}\), then \[\hat{\mathsf{G}}:=\mathsf{G}/\{I,-I\}. \tag{69}\] 3. Finally, if \(-I\notin\mathsf{G}\), \[\hat{\mathsf{G}}:=\mathsf{G}. \tag{70}\] Like before, we take the coset representatives as elements of such a quotient group, which is the reason why we "defined" \(\hat{\mathsf{G}}\) as \(\mathsf{G}\) in (70). Correspondingly there is arbitrariness in the choice of its elements: Indeed, saying \(P\in\hat{\mathsf{G}}\) is no different from saying \(P\in\mathsf{G}\).
It is only when we compare sets that it makes a difference: For example, if we say a set \(\mathbb{S}\) is equal to \(\hat{\mathsf{G}}\), then there should exist no two elements in \(\mathbb{S}\) that differ by a nontrivial multiplication factor, cf. (70) and later (71). ## Appendix D Commutativity of Pauli subgroups Here is a property concerning subgroups of Pauli groups: **Lemma 3**.: _For any subgroup \(\mathsf{G}\) of a Pauli group, \(-I\notin\mathsf{G}\) if and only if \(\mathsf{G}\) is linearly independent, and only if \(\mathsf{G}\) is composed wholly of involutions and is abelian._ Proof.: Suppose \(-I\notin\mathsf{G}\). Being a subgroup of the Pauli group, every element of \(\mathsf{G}\) is either an involution or a counterinvolution. If \(A\in\mathsf{G}\) were a counterinvolution, then \(A^{2}=-I\) would also be in \(\mathsf{G}\), a contradiction, so every element is involutory. Next, a pair of Pauli operators either commute or anticommute. If \(A,B\in\mathsf{G}\) anticommuted, \(ABA^{-1}=-B=(-I)B\in\mathsf{G}\), so \(ABA^{-1}B^{-1}=-I\in\mathsf{G}\), a contradiction. Hence every element in \(\mathsf{G}\) commutes with one another, meaning \(\mathsf{G}\) is abelian. Note that because \(-I\) is an involution and commutes with all operators, \(\mathsf{G}\) being abelian and comprising purely involutions does not imply \(-I\notin\mathsf{G}\). Because \(\{I,-I\}\) is linearly dependent, clearly a linearly independent subgroup should not contain \(-I\). The other way around, assume \(-I\notin\mathsf{G}\). Since the phaseless Pauli group (or the collection of its coset representatives) is a basis [1; 31; 32], any subset of it is also linearly independent--In other words, any subset of a Pauli group, if no element differs from another by a multiplication factor, is linearly independent. As \((iI)^{2}=-I\), \(-I\notin\mathsf{G}\) implies that \(iI\) and \(-iI\) are not in \(\mathsf{G}\) either; if \(A\in\mathsf{G}\) and \(aA\in\mathsf{G}\) for some nontrivial multiplication factor \(a\) (namely \(a\) is \(-1\) or \(i\) or \(-i\)), then \(aI\in\mathsf{G}\) because \(A^{-1}(aA)=aI\), a contradiction. Hence \(\mathsf{G}\) is linearly independent. As \(-I\) and \(\pm iI\) commute with all operators, an abelian subgroup of \(\mathsf{P}^{n}\) can contain the subgroup \(\{I,-I\}\) or \(\{I,-I,iI,-iI\}\), in which case the abelian subgroup is linearly dependent. To get rid of these extra factors, we can take the phaseless group of it, as we did in (68). When we refer to a subgroup \(\mathsf{S}\) of a Pauli, or Paulian, group as **maximally linearly independent and abelian**, it means that we cannot add any more Pauli or Paulian operators to it while keeping the subgroup both linearly independent and abelian; to put it another way, \(\mathsf{S}\) is abelian and \[\hat{\mathsf{S}}=\mathsf{S}. \tag{71}\] Some properties of such a group are revealed by Lemma 3; in particular, this lemma shows that for a Pauli or Paulian5 subgroup to be linearly independent, it must be abelian, so calling it abelian is actually redundant, but it helps show off this important attribute. Footnote 5: In this work a Paulian group is unitarily similar to a "Pauli" group (see Sec. III.1), so Lemma 3 also holds for Paulian subgroups. ## Appendix E Proof for Lemma 2 If: Let \(A=aI+bU\) be an operator for which \(a\) and \(b\) are scalars and \(U\) is a unitary operator. Since \(U\) is unitary and commutes with \(I\), it is straightforward to verify \(A^{\dagger}A=AA^{\dagger}\). Note this holds true whether or not \(A\) is on \(\mathbb{C}^{2}\) and whether or not \(U\) is Paulian.
Only if: Let \(A\) be a normal operator on \(\mathbb{C}^{2}\). Because it is normal, it is unitarily diagonalizable and the eigenspaces are orthogonal. If there is only one eigenvalue, then \(A\) is proportional to \(I\); if it has two different eigenvalues \(c_{1}\) and \(c_{2}\), we can solve the equations \(c_{1}=a+b\) and \(c_{2}=a-b\). Suppose \(A\) becomes diagonal under a unitary \(V\), which in terms of matrices means \[VAV^{-1}=\begin{pmatrix}a+b&0\\ 0&a-b\end{pmatrix}. \tag{72}\] Consider the matrix of the Pauli \(Z\): \[Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}; \tag{73}\] we have \(VAV^{-1}=aI+bZ\), so \(A=aI+bV^{-1}ZV\). Because \(V^{-1}ZV\) is unitarily similar to \(Z\), it has the same spectrum \(\{1,-1\}\) and is Paulian. ## Appendix F Codeword stabilized codes First, we will establish some properties of codeword stabilized codes that are essential in constructing Paulian stabilizers in Sec. F.1, and then we will provide the steps to do so, along with details of the two codes from Sec. V.3. In the following discussion we will assume the system is composed of \(n\) qubits; please refer to Sec. V.3 for the relevant terminologies and how codeword stabilized codes are constructed. In this section we will express Pauli operators in the following way, e.g., \[XIZ=X\otimes I\otimes Z, \tag{74}\] and, supposing the system is composed of three qubits, \[X_{2}=I\otimes X\otimes I=IXI. \tag{75}\] In addition, \(I\) in this section will refer to the identity operator on a single qubit. To distinguish the identity operator on the whole system from those on individual qubits, we will label the former as \(I_{n}\), assuming the whole system is composed of \(n\) qubits, so for example \[I_{3}=III. \tag{76}\] ### Preliminaries Let \(\mathsf{S}_{w}\) denote the word stabilizer. In general we will consider a particular generating set \(g\) of \(\mathsf{S}_{w}\), which will be taken as a tuple of generators, so we can associate each simultaneous eigenspace of \(\mathsf{S}_{w}\) with a unique \(n\)-tuple of \(\pm 1\) that gives the simultaneous eigenvalues with respect to \(g\), just like the error syndromes in relation to Paulian stabilizers. Such a tuple of simultaneous eigenvalues will be denoted by \(\hat{t}\), and the set of all these tuples by \(\mathbb{W}\). Like the tuples of syndromes \((t)\), we will use \(\hat{t}\) to label spaces and the like; e.g., \(\mathcal{H}_{\hat{t}}\) is the \(\hat{t}\)-simultaneous eigenspace of \(g\), which, put another way, gives a bijective map from \(\mathbb{W}\) to the collection of all simultaneous eigenspaces of \(\mathsf{S}_{w}\) or \(g\): \[\mathbb{W}\ni\hat{t}\mapsto\mathcal{H}_{\hat{t}}, \tag{77}\] cf. the syndrome map in Sec. III.1. Let the state stabilized by the word stabilizer be \(\ket{s}\), and \(W\) be any Pauli operator in \(\mathsf{P}^{n}\). Due to commutativity and anticommutativity, any simultaneous eigenvector of a set or group of commutative Pauli operators, after being acted upon by a Pauli operator, is still a simultaneous eigenvector of the same set or group of Pauli operators, so \(W\ket{s}\) is also a simultaneous eigenvector of \(\mathsf{S}_{w}\), which implies: **Lemma 4**.: _Given a codeword stabilized code, for two Pauli operators \(P_{1},P_{2}\in\mathsf{P}^{n}\), either \(P_{1}\ket{s}\propto P_{2}\ket{s}\) or \(P_{1}\ket{s}\perp P_{2}\ket{s}\).
Hence, either \(P_{1}\mathcal{H}_{\mathrm{C}}\perp P_{2}\mathcal{H}_{\mathrm{C}}\) or \(P_{1}\mathcal{H}_{\mathrm{C}}\cap P_{2}\mathcal{H}_{\mathrm{C}}\neq\{0\}\); i.e., \(P_{1}\mathcal{H}_{\mathrm{C}}\perp P_{2}\mathcal{H}_{\mathrm{C}}\) if and only if \(P_{1}\mathcal{H}_{\mathrm{C}}\cap P_{2}\mathcal{H}_{\mathrm{C}}=\{0\}\)._ Proof.: Since \(P_{1}\ket{s}\) and \(P_{2}\ket{s}\) are both simultaneous eigenvectors of \(\mathsf{S}_{w}\), and because each simultaneous eigenspace is one-dimensional, they are either in the same eigenspace, i.e., proportional, or orthogonal. Likewise, as \(\mathcal{H}_{\mathrm{C}}\) is an orthogonal direct sum of simultaneous eigenspaces of \(\mathsf{S}_{w}\), so are \(P_{1}\mathcal{H}_{\mathrm{C}}\) and \(P_{2}\mathcal{H}_{\mathrm{C}}\). Again, because the simultaneous eigenspaces of \(\mathsf{S}_{w}\) are one-dimensional, either \(P_{1}\mathcal{H}_{\mathrm{C}}\perp P_{2}\mathcal{H}_{\mathrm{C}}\) or \(P_{1}\mathcal{H}_{\mathrm{C}}\cap P_{2}\mathcal{H}_{\mathrm{C}}\neq\{0\}\). This lemma leads to **Corollary 3**.: _For a codeword stabilized code, a Pauli operator \(P\) obeys \(P|_{\mathrm{C}}\propto I_{\mathrm{C}}\) if and only if \(P\in\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\) and \(P\) commutes, or anticommutes, with all the word operators at the same time. In other words, a Pauli operator \(P\) stabilizes the code if and only if_ 1. _either_ \(P\in\mathsf{S}_{w}\) _and_ \(P\) _commutes with all the word operators,_ 2. _or_ \(P\in-\mathsf{S}_{w}\) _and_ \(P\) _anticommutes with all the word operators._ _Clearly, if \(I_{n}\) is a word operator, the only possibility is \(P\) commuting with all the word operators._ As we discussed in Appendix D, because \(\mathsf{S}_{w}\) is a maximally linearly independent and abelian subgroup of \(\mathsf{P}^{n}\), \(\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\) is a maximal abelian subgroup. One Pauli stabilizer that is guaranteed to exist for a codeword stabilized code is \(I\). As shown in Ref. [50], Pauli stabilizer codes are a special case of codeword stabilized codes, which of course have nontrivial Pauli stabilizers. Proof.: Given a Pauli operator \(W\in\mathsf{P}^{n}\), for \(W\left|s\right\rangle\) to be an eigenvector of another Pauli operator \(P\), \(P\) must commute with all elements in \(\mathsf{S}_{w}\), as \(W\left|s\right\rangle\) is a simultaneous eigenvector of \(\mathsf{S}_{w}\); because \(\mathsf{S}_{w}\) is a maximally linearly independent and abelian subgroup of \(\mathsf{P}^{n}\), this implies \(P\in\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\). Furthermore, if \(PW=\pm WP\), then \(PW\left|s\right\rangle=\pm WP\left|s\right\rangle\), which leads to the requirement on the commutation relations between \(P\) and the word operators. The other direction is pretty obvious and hence omitted. An error \(E\) is detectable if and only if \(E\) satisfies [48; 50] \[\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}\propto\Pi_{\mathrm{C}}. \tag{78}\] The following lemma considers a property of detectable errors: **Lemma 5**.: _Let \(E\) be a unitary operator which obeys \(\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}=a\Pi_{\mathrm{C}}\) for some scalar \(a\); note \(\left|a\right|\leq 1\) by unitarity of \(E\). For such an operator we can find:_ (i) \(\left|a\right|=0\) _if and only if_ \(E\mathcal{H}_{\mathrm{C}}\perp\mathcal{H}_{\mathrm{C}}\)_, in which case_ \(E\mathcal{H}_{\mathrm{C}}\cap\mathcal{H}_{\mathrm{C}}=\{0\}\)_._ (ii) \(\left|a\right|=1\) _if and only if_ \(E|_{\mathrm{C}}=e^{i\theta}I_{\mathrm{C}}\) _for some real_ \(\theta\)_, in which case_ \(E\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{\mathrm{C}}\)_._ (iii)
\(\left|a\right|\in(0,1)\) _if and only if_ \(E\mathcal{H}_{\mathrm{C}}\) _and_ \(\mathcal{H}_{\mathrm{C}}\) _are not orthogonal and_ \(E\mathcal{H}_{\mathrm{C}}\cap\mathcal{H}_{\mathrm{C}}=\{0\}\)_._ _Correspondingly, the following conditions are equivalent:_ 1. \(\left|a\right|=1\)_._ 2. \(E|_{\mathrm{C}}=e^{i\theta}I_{\mathrm{C}}\) _for some real_ \(\theta\)_._ 3. \(E\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{\mathrm{C}}\)_._ 4. \(E\mathcal{H}_{\mathrm{C}}\cap\mathcal{H}_{\mathrm{C}}\neq\{0\}\)_._ Proof.: (i) is obvious. (ii): It is apparent that \(E|_{\mathrm{C}}=e^{i\theta}I_{\mathrm{C}}\) implies \(a=e^{i\theta}\) and thus \(\left|a\right|=1\). To show the converse, we first note \[\left\|\Pi_{\mathrm{C}}\ket{w}\right\|\leq\left\|\ket{w}\right\|, \tag{79}\] which becomes an equality if and only if \(\ket{w}\in\mathcal{H}_{\mathrm{C}}\). Now, consider any \(\ket{v}\in\mathcal{H}_{\mathrm{C}}\); letting \(a=e^{i\theta}\), we have \[\left\|\ket{v}\right\|=\left\|\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}\ket{v}\right\|\leq\left\|E\ket{v}\right\|=\left\|\ket{v}\right\|, \tag{80}\] so the inequality is saturated, implying \(E\ket{v}\in\mathcal{H}_{\mathrm{C}}\) by (79). As this is true for all \(\ket{v}\in\mathcal{H}_{\mathrm{C}}\) and \(E\) is unitary, we have \(E\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{\mathrm{C}}\). Next, since \(e^{i\theta}\ket{v}=\Pi_{\mathrm{C}}E\ket{v}=E\ket{v}\) for all \(\ket{v}\in\mathcal{H}_{\mathrm{C}}\), we obtain \(E|_{\mathrm{C}}=e^{i\theta}I_{\mathrm{C}}\). (iii): If \(E\mathcal{H}_{\mathrm{C}}\cap\mathcal{H}_{\mathrm{C}}=\{0\}\), for any nonzero \(\ket{v}\in\mathcal{H}_{\mathrm{C}}\), because \(E\Pi_{\mathrm{C}}\ket{v}=E\ket{v}\notin\mathcal{H}_{\mathrm{C}}\), \[\left|a\right|\left\|\ket{v}\right\|=\left\|\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}\ket{v}\right\|<\left\|E\ket{v}\right\|=\left\|\ket{v}\right\|, \tag{81}\] implying \(\left|a\right|<1\); and because \(E\mathcal{H}_{\mathrm{C}}\) and \(\mathcal{H}_{\mathrm{C}}\) are not orthogonal, \(\left|a\right|>0\). On the contrary, when \(\left|a\right|\in(0,1)\), since \(a\neq 0\), \(E\mathcal{H}_{\mathrm{C}}\) must not be orthogonal to \(\mathcal{H}_{\mathrm{C}}\). Should \(E\mathcal{H}_{\mathrm{C}}\) and \(\mathcal{H}_{\mathrm{C}}\) have a nontrivial intersection, there would exist nonzero \(\ket{v}\in\mathcal{H}_{\mathrm{C}}\) such that \(E\ket{v}\in\mathcal{H}_{\mathrm{C}}\), and for such \(\ket{v}\) we would have \(\left\|\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}\ket{v}\right\|=\left\|E\ket{v}\right\|=\left\|\ket{v}\right\|\), so \[\left|a\right|\left\|\ket{v}\right\|=\left\|\Pi_{\mathrm{C}}E\Pi_{\mathrm{C}}\ket{v}\right\|=\left\|\ket{v}\right\|, \tag{82}\] a contradiction; therefore \(E\mathcal{H}_{\mathrm{C}}\cap\mathcal{H}_{\mathrm{C}}=\{0\}\). Let's go on to show why conditions 1 to 4 are equivalent: * By (ii), conditions 1 and 2 are equivalent, and 2 implies 3. * Clearly 3 implies 4. * Because only when \(\left|a\right|=1\) do \(E\mathcal{H}_{\mathrm{C}}\) and \(\mathcal{H}_{\mathrm{C}}\) intersect nontrivially (by (i) and (iii)), 4 implies 1.
This completes the proof. **Corollary 4**.: _For a codeword stabilized code:_ (a) _Every detectable Pauli error_ \(P\) _obeys either_ \(P\mathcal{H}_{\mathrm{C}}\perp\mathcal{H}_{\mathrm{C}}\) _or_ \(P\in\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\)_; in the latter case_ \(P|_{\mathrm{C}}\propto I_{\mathrm{C}}\) _and it is hence also correctable._ (b) _For every pair of correctable Pauli errors_ \(P_{1}\) _and_ \(P_{2}\)_, either_ \(P_{1}\) _and_ \(P_{2}\) _are orthonormal or there exists_ \(S\in\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\) _such that_ \(P_{2}=P_{1}S\)_; in the latter case_ \(S|_{\mathrm{C}}\propto I_{\mathrm{C}}\)_, so_ \(P_{2}|_{\mathrm{C}}\propto P_{1}|_{\mathrm{C}}\)_._ Note that a correctable unitary operator is "normalized" according to our definition, so a correctable Pauli error is normalized. Proof.: (a): As the Pauli error \(P\) is detectable, we will make use of Lemma 5: If \(P\mathcal{H}_{\mathrm{C}}=\mathcal{H}_{\mathrm{C}}\), it obeys condition 2 of Lemma 5, and according to Corollary 3 it must be in \(\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\); if not, it satisfies either case (i) or case (iii) of Lemma 5. As Lemma 4 shows, case (iii) cannot occur, so only cases (i) and (ii) are possible, completing the proof of (a). (b): Correctable Pauli operators \(P_{1}\) and \(P_{2}\) satisfy \(\Pi_{\mathrm{C}}P_{1}^{\dagger}P_{2}\Pi_{\mathrm{C}}\propto\Pi_{\mathrm{C}}\). By Lemma 4, either \(P_{1}\mathcal{H}_{\mathrm{C}}\perp P_{2}\mathcal{H}_{\mathrm{C}}\) or \(P_{1}\mathcal{H}_{\mathrm{C}}\cap P_{2}\mathcal{H}_{\mathrm{C}}\neq\{0\}\). If it is the former, \(\Pi_{\mathrm{C}}P_{1}^{\dagger}P_{2}\Pi_{\mathrm{C}}=0\), i.e., they are orthonormal. If the latter, \(\Pi_{\mathrm{C}}P_{1}^{\dagger}P_{2}\Pi_{\mathrm{C}}\neq 0\), in which case we must have \(P_{1}\mathcal{H}_{\mathrm{C}}=P_{2}\mathcal{H}_{\mathrm{C}}\) and they must act identically, except for a multiplication factor, on \(\mathcal{H}_{\mathrm{C}}\), or else one could not invert the action of the other on \(\mathcal{H}_{\mathrm{C}}\). As \(S:=P_{1}^{-1}P_{2}\) is also a Pauli operator and \(S|_{\mathrm{C}}\propto I_{\mathrm{C}}\), \(S\in\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\) by Corollary 3. Corollary 2 is (iv) of the following corollary combined with (a) of Corollary 4; below "wt" refers to the weight of an operator: **Corollary 5**.: _Consider an \(n\)-qubit codeword stabilized code with distance \(d\), for which \(\mathrm{wt}\,S\geq d\) for all \(S\in\mathsf{S}_{w}\setminus\left\{I_{n}\right\}\)._ (i) _If_ \(P\in\mathsf{P}^{n}\) _has_ \(\mathrm{wt}\,P<d\) _and is not proportional to_ \(I_{n}\)_, then_ \(I_{n}\) _and_ \(P\) _are orthogonal._ (ii) _For_ \(P_{1},P_{2},P_{1}P_{2}\in\mathsf{P}^{n}\)_, suppose that their weights are all less than_ \(d\) _and that they are linearly independent; then the elements of_ \(\left\{I_{n},P_{1},P_{2},P_{1}P_{2}\right\}\) _are mutually orthogonal._ (iii) _For a pair of operators in_ \(\mathsf{P}^{n}\)_, if their weights are both no higher than_ \(\lfloor(d-1)/2\rfloor\) _and if they are not proportional to each other, they are orthonormal._ (iv) _The code is nondegenerate_ _[_1; 2_]__: Indeed, for a codeword stabilized code with distance_ \(d\)_, it is nondegenerate if and only if_ \(\mathrm{wt}\,S\geq d\) _for all_ \(S\in\mathsf{S}_{w}\setminus\left\{I_{n}\right\}\)_._ Note this corollary does not imply that a codeword stabilized code whose word stabilizers obey the condition on weights above will have distance \(d\)--The code having distance \(d\) is part of the assumption.
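As an aside, the commute-or-anticommute dichotomy and the weight bookkeeping invoked above are easy to check programmatically; the following is a small self-contained sketch (the string encoding of Pauli operators is our own convention), not tied to any particular code.

```python
import numpy as np

PAULI = {"I": np.eye(2), "X": np.array([[0., 1.], [1., 0.]]),
         "Y": np.array([[0., -1j], [1j, 0.]]), "Z": np.diag([1., -1.])}

def matrix(s):
    out = np.eye(1)
    for c in s:
        out = np.kron(out, PAULI[c])
    return out

def weight(s):
    return sum(c != "I" for c in s)

def commute(s1, s2):
    # Two Pauli strings commute iff they differ on an even number of
    # positions where both act nontrivially
    anti = sum(1 for c1, c2 in zip(s1, s2)
               if c1 != "I" and c2 != "I" and c1 != c2)
    return anti % 2 == 0

for s1, s2 in [("XIZ", "ZIX"), ("XXZ", "ZYI"), ("IZZ", "XXI")]:
    A, B = matrix(s1), matrix(s2)
    sign = 1 if commute(s1, s2) else -1
    assert np.allclose(A @ B, sign * (B @ A))

# wt(P1 P2) <= wt(P1) + wt(P2): the product acts trivially wherever both
# factors do, which underlies item (iii) of Corollary 5
MUL = {"II": "I", "IX": "X", "IY": "Y", "IZ": "Z",
       "XI": "X", "XX": "I", "XY": "Z", "XZ": "Y",
       "YI": "Y", "YX": "Z", "YY": "I", "YZ": "X",
       "ZI": "Z", "ZX": "Y", "ZY": "X", "ZZ": "I"}

def product(s1, s2):  # Pauli string of the product, phases dropped
    return "".join(MUL[c1 + c2] for c1, c2 in zip(s1, s2))

assert weight(product("XIZ", "ZIX")) <= weight("XIZ") + weight("ZIX")
print("Pauli-string checks passed")
```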
Also, it may seem weird at first sight that the word operators did not show up, even though they are essential in formulating a codeword stabilized code. Their roles here are implicit: As we have assumed the code has distance \(d\), Pauli errors of certain weights must obey specific conditions with the word stabilizer and word operators, as listed in Ref. [50]. Proof.: (i): Because \(P\) is not in \(\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\) (due to its weight) and is detectable, Corollary 4 implies \(P\) and \(I_{n}\) are orthogonal. (ii): First, due to their weights and linear independence,6 \(P_{1},P_{2},P_{1}P_{2}\) are not in \(\mathsf{S}_{w}\left\{I_{n},-I_{n},iI_{n},-iI_{n}\right\}\). Let \(\mathsf{G}\) be the group generated by \(P_{1}\), \(P_{2}\), and their adjoints. By linear independence and the fact that the adjoint of a Pauli operator differs from it at most by a multiplication factor, we have Footnote 6: By linear independence none of them can be proportional to \(I_{n}\). \[\hat{\mathsf{G}}=\left\{I_{n},P_{1},P_{2},P_{1}P_{2}\right\}. \tag{83}\] By Corollary 4 or (i), for all \(O\in\hat{\mathsf{G}}\) except \(I_{n}\), \(\Pi_{\mathrm{C}}O\Pi_{\mathrm{C}}=0\), so for all \(O_{1},O_{2}\in\hat{\mathsf{G}}\) with \(O_{1}\neq O_{2}\), \[\Pi_{\mathrm{C}}O_{1}^{\dagger}O_{2}\Pi_{\mathrm{C}}=0, \tag{84}\] because \(O_{1}^{\dagger}O_{2}\) is also in \(\mathsf{G}\) and is not proportional to \(I_{n}\); \(O_{1}\) and \(O_{2}\) are therefore orthogonal. (iii): Suppose we have \(P_{1},P_{2}\in\mathsf{P}^{n}\) that are not proportional to each other and whose weights are no higher than \(\lfloor(d-1)/2\rfloor\). If one of them is proportional to \(I_{n}\), then they are orthonormal by (i); else, \(P_{1}P_{2}\) will not be proportional to \(I_{n}\) and \(\mathrm{wt}\,P_{1}P_{2}<d\), so by (ii) \(P_{1}\) and \(P_{2}\) are orthonormal. (iv): By (iii), the condition on weights is a sufficient condition for nondegeneracy. To show it is a necessary condition, suppose there exists \(S\in\mathsf{S}_{w}\setminus\left\{I_{n}\right\}\) whose weight is less than \(d\). From its weight, \(S\) is detectable, and because it is in \(\mathsf{S}_{w}\), it must act like an identity on the code space (according to Corollary 4). Then there would exist nontrivial Pauli operators \(P_{1}\neq P_{2}\) with \(\mathrm{wt}\,P_{i}\leq\lfloor(d-1)/2\rfloor\), \(i=1,2\), such that \(P_{1}=P_{2}S\), so \(P_{1}\) and \(P_{2}\) act the same on the code space, implying the code is degenerate. ### Constructing Paulian stabilizers In this part we will discuss how to construct Paulian stabilizers for codeword stabilized codes, where we will make heavy use of the \(\mathcal{H}_{\hat{t}}\)'s, i.e., the simultaneous eigenspaces of \(\mathsf{S}_{w}\). Unlike the proof for Proposition 1, we will not attempt to build the minimal group first and expand upon it. A quick reminder: \(g\) denotes a tuple of generators of \(\mathsf{S}_{w}\) and \(\mathbb{W}\) is the collection of all the tuples of simultaneous eigenvalues with respect to \(g\); please refer back to the start of Sec. F.1. Let's lay out the procedure: A.1. First, check if (20) is satisfied to see whether it is possible to correct all errors with Paulian stabilizers--Here we will assume this is true; we then choose a set \(\mathbb{F}\) of orthonormal correctable Pauli errors.
For a code with a given distance, we can use (24), and select linearly independent Pauli errors with weight no higher than \(\lfloor(d-1)/2\rfloor\) as orthonormal correctable errors; specifically, if the code is nondegenerate, which can be easily checked via Corollary 5, we can choose all of them. A.2. As discussed in the proof for Lemma 4, \(P\mathcal{H}_{\mathrm{C}}\) is a direct sum of simultaneous eigenspaces of \(\mathsf{S}_{w}\) for any Pauli operator \(P\); hence for \(F\in\mathbb{F}\), let \(\mathbb{W}_{F}\) denote the subset of \(\mathbb{W}\) such that \[F\mathcal{H}_{\mathrm{C}}=\bigoplus_{\hat{t}\in\mathbb{W}_{F}}\mathcal{H}_{\hat{t}}. \tag{85}\] \(\mathbb{W}_{F}\) can be found by making use of the commutation relations between the word operators and \(g\), and those between \(F\) and \(g\). A.3. Since \[\bigoplus_{F\in\mathbb{F}}F\mathcal{H}_{\mathrm{C}}=\bigoplus_{F\in\mathbb{F}}\bigoplus_{\hat{t}\in\mathbb{W}_{F}}\mathcal{H}_{\hat{t}}, \tag{86}\] and since for all \(F_{1},F_{2}\in\mathbb{F}\) \[\mathbb{W}_{F_{1}}\cap\mathbb{W}_{F_{2}}=\varnothing\text{ if }F_{1}\neq F_{2}, \tag{87}\] where \(\varnothing\) refers to the empty set, with \[\mathbb{W}_{\perp}:=\mathbb{W}\setminus\bigcup_{F\in\mathbb{F}}\mathbb{W}_{F}, \tag{88}\] we have \[\left(\bigoplus_{F\in\mathbb{F}}F\mathcal{H}_{\mathrm{C}}\right)^{\perp}=\bigoplus_{\hat{t}\in\mathbb{W}_{\perp}}\mathcal{H}_{\hat{t}}. \tag{89}\] The set \(\mathbb{W}_{\perp}\) and the associated simultaneous eigenspaces will be used as "spares." A.4. Let \(m=\lceil\log_{2}|\mathbb{F}|\rceil\), and let \(\mathbb{T}\), as before, be the collection of all \(m\)-tuples of \(\pm 1\), i.e., syndromes. Choose a unique syndrome for each error in \(\mathbb{F}\), namely, a one-to-one map \(f_{\mathrm{sym}}:\mathbb{F}\to\mathbb{T}\), the "syndrome map,"7 and we require \(f_{\mathrm{sym}}(I_{n})=(I)\), where \((I)\) is the tuple whose components are all \(1\). If \(m>\log_{2}|\mathbb{F}|\), there will be "excess" syndromes that do not correspond to correctable errors; i.e., they are members of Footnote 7: The syndrome map in Sec. III.1 was defined on \(\mathbb{F}^{\prime}\) instead of \(\mathbb{F}\), so it was bijective rather than merely injective. \[\mathbb{T}\setminus f_{\mathrm{sym}}(\mathbb{F}). \tag{90}\] The total number of excess syndromes is \[|\mathbb{T}\setminus f_{\mathrm{sym}}(\mathbb{F})|=|\mathbb{T}|-|f_{\mathrm{sym}}(\mathbb{F})|=2^{m}-|\mathbb{F}|. \tag{91}\] If \(m=\log_{2}|\mathbb{F}|\), then this already gives us the "minimal" Paulian stabilizers; see Sec. III.1 and A.6 on how to define Paulian stabilizers given the syndrome spaces. A.5. Now let's designate all the syndrome spaces. The properties we desire of them are (cf. Sec. III.3): 1. The syndrome space associated with each error \(F\in\mathbb{F}\) should contain \(F\mathcal{H}_{\mathrm{C}}\): \[F\mathcal{H}_{\mathrm{C}}\subseteq\mathcal{H}_{f_{\mathrm{sym}}(F)}\;\forall F\in\mathbb{F}. \tag{92}\] 2. The syndrome spaces are orthogonal: For any two distinct syndromes \((s)\) and \((t)\), \[\mathcal{H}_{(s)}\perp\mathcal{H}_{(t)}. \tag{93}\] 3. All syndrome spaces are isomorphic: \[\mathcal{H}_{(s)}\cong\mathcal{H}_{(t)}\;\forall(s),(t)\in\mathbb{T}. \tag{94}\] To achieve them, for every \((t)\in\mathbb{T}\) we choose a subset \(\mathbb{W}_{(t)}\) of \(\mathbb{W}\) and demand the syndrome spaces be \[\mathcal{H}_{(t)}:=\bigoplus_{\hat{s}\in\mathbb{W}_{(t)}}\mathcal{H}_{\hat{s}}; \tag{95}\] the \(\mathbb{W}_{(t)}\)'s shall satisfy the following conditions: 1.
To comply with (92), for all \((t)\in f_{\mathrm{sym}}(\mathbb{F})\) we require \[\mathbb{W}_{F_{(t)}}\subseteq\mathbb{W}_{(t)}; \tag{96}\] see the definition of \(\mathbb{W}_{F}\) for all \(F\in\mathbb{F}\) in (85). 2. To satisfy (93), \[\mathbb{W}_{(s)}\cap\mathbb{W}_{(t)}=\varnothing\;\forall(s)\neq(t). \tag{97}\] 3. To obey (94), \[|\mathbb{W}_{(s)}|=|\mathbb{W}_{(t)}|\;\forall(s),(t)\in\mathbb{T}. \tag{98}\] Since \(\dim\mathcal{H}_{\hat{t}}=1\), \(|\mathbb{W}_{(t)}|\) is the dimension of each syndrome space, and \(|\mathbb{W}_{(t)}|-\dim\mathcal{H}_{\mathrm{C}}\) shows how much we extend the domain of the Paulian stabilizers. Note \[\mathbb{W}_{(t)}\setminus\mathbb{W}_{F_{(t)}}\subseteq\mathbb{W}_{\perp}\;\forall(t)\in f_{\mathrm{sym}}(\mathbb{F}), \tag{99}\] \[\mathbb{W}_{(t)}\subseteq\mathbb{W}_{\perp}\;\forall(t)\in\mathbb{T}\setminus f_{\mathrm{sym}}(\mathbb{F}), \tag{100}\] so the spares--\(\mathbb{W}_{\perp}\) of (88) and the associated eigenspaces--are used to fill in each syndrome space. Finally, if the stabilizers are to be Paulian on the entire space \(\mathcal{H}\), the dimension of each syndrome space is \[\dim\mathcal{H}_{(t)}=2^{n}/2^{m}=2^{n-m}, \tag{101}\] i.e., each is composed of \(n-m\) qubits. A.6. With the syndrome spaces specified, we have the corresponding Paulian stabilizers: For all \(i=1,\ldots,m\), \[Z_{i}^{\mathrm{S}}:=\Pi_{\bigoplus_{(t)_{i}=1,(t)\in\mathbb{T}}\mathcal{H}_{(t)}}-\Pi_{\bigoplus_{(t)_{i}=-1,(t)\in\mathbb{T}}\mathcal{H}_{(t)}}, \tag{102}\] where \((t)_{i}\) is the \(i\)-th component of \((t)\). The domain of the Paulian stabilizers, \(\mathcal{H}^{\prime}\) of Proposition 1, is thus \[\mathcal{H}^{\prime}=\bigoplus_{(t)\in\mathbb{T}}\mathcal{H}_{(t)}. \tag{103}\] Defined this way, each \(Z_{i}^{\mathrm{S}}\) is clearly Paulian to the restriction of \(\mathcal{H}^{\prime}\). They commute, with \((t)\)-simultaneous eigenspaces \(\mathcal{H}_{(t)}\); i.e., the \((t)\)'s are the error syndromes and the \(\mathcal{H}_{(t)}\)'s are the corresponding syndrome spaces. Because we have demanded \(I_{n}\) have syndrome \((I)\), the \(Z_{i}^{\mathrm{S}}\)'s are stabilizers. In the examples to come, we will demonstrate how to put them into practice. ### \(((5,6,2))\)-code Let's start off with the \(((5,6,2))\)-code from Refs. [50; 83]. As discussed in Sec. V.3, it is impossible to find a Paulian stabilizer group that can detect all the weight-1 errors for this code, but due to its low dimensions, it is easier to demonstrate the procedure shown in Sec. F.2 with this code, and we can also show how to adapt the methods for error-detecting codes. The word stabilizer of this code is generated by \(ZXZII\) and all its cyclic shifts, i.e., \[g=(ZXZII,XZIIZ,ZIIZX,IIZXZ,IZXZI), \tag{104}\] and the word operators are \[IIIII,\;ZZIZI,\;IZZIZ,\;ZIZZI,\;IZIZZ,\;ZIZIZ. \tag{105}\] Now let's follow the steps listed in Sec. F.2: * A.1: As this code is an error-detecting code, how do we choose orthonormal Pauli errors? In fact, because the code has distance \(2\), we infer from (ii) of Corollary 5 that \(\mathbb{F}=\{X_{i},Y_{i},Z_{i},I_{5}\}\) is orthonormal for a fixed \(i=1,\ldots,5\), and we will use them as the orthonormal "correctable" errors. * A.2: We should find \(\mathbb{W}_{F}\) for each \(F\in\mathbb{F}\). As a demonstration, we will show how to find \(\mathbb{W}_{X_{1}}\). First, consider the word operator \(ZZIZI\).
Its commutation relation with \(g\) of (104), expressed as a tuple of \(\pm 1\) with \(+1\) for commuting and \(-1\) for anticommuting, is \[(-1,-1,1,-1,1), \tag{106}\] which is exactly the tuple of simultaneous eigenvalues of the vector \(ZZIZI\ket{s}\) with respect to \(g\). Repeating this for all the word operators, we obtain \(\mathcal{H}_{\mathrm{C}}\) as a direct sum of simultaneous eigenspaces of \(g\) or \(\mathsf{S}_{w}\). To obtain \(\mathbb{W}_{X_{1}}\), we check how \(X_{1}\) commutes with \(g\): The commutation relation is \[(-1,1,-1,1,1), \tag{107}\] which means \(X_{1}(ZZIZI\ket{s})\) has simultaneous eigenvalues \[(-1\times(-1),-1\times 1,1\times(-1),-1\times 1,1\times 1)=(1,-1,-1,-1,1), \tag{108}\] namely the component-by-component multiplication of (106) and (107); \((1,-1,-1,-1,1)\) from (108) is therefore an element of \(\mathbb{W}_{X_{1}}\). Doing this all over again for all the word operators gives us \(\mathbb{W}_{X_{1}}\). * A.3: After obtaining each \(\mathbb{W}_{F}\), \(\mathbb{W}_{\perp}\) should have \(2^{5}-\dim\mathcal{H}_{\mathrm{C}}\times 4=8\) elements. When \(i=1\), they are \[\hat{a}:=(-1,-1,-1,1,-1),\quad\hat{b}:=(-1,-1,-1,1,1),\quad\hat{c}:=(-1,1,1,1,-1),\quad\hat{d}:=(1,-1,-1,-1,-1),\] \[\hat{e}:=(-1,1,1,1,1),\quad\hat{f}:=(1,-1,-1,-1,1),\quad\hat{g}:=(1,1,1,-1,-1),\quad\hat{h}:=(1,1,1,-1,1). \tag{109}\] * A.4: Since \(\log_{2}|\mathbb{F}|=2\) is an integer, in this case we do not have any excess syndromes. Let's choose the syndrome for each element of \(\mathbb{F}\), e.g., \[F_{(1,1)}=I_{5},\;F_{(1,-1)}=X_{i},\;F_{(-1,1)}=Y_{i},\;F_{(-1,-1)}=Z_{i}; \tag{110}\] a quick reminder: \((I)=(1,1)\) in this case. They give us the minimal Paulian stabilizers: \[Z_{1}^{\mathrm{S}}|_{\overline{\mathcal{H}}}=\Pi_{\mathcal{H}_{\mathrm{C}}\oplus X_{i}\mathcal{H}_{\mathrm{C}}}-\Pi_{Y_{i}\mathcal{H}_{\mathrm{C}}\oplus Z_{i}\mathcal{H}_{\mathrm{C}}},\quad Z_{2}^{\mathrm{S}}|_{\overline{\mathcal{H}}}=\Pi_{\mathcal{H}_{\mathrm{C}}\oplus Y_{i}\mathcal{H}_{\mathrm{C}}}-\Pi_{X_{i}\mathcal{H}_{\mathrm{C}}\oplus Z_{i}\mathcal{H}_{\mathrm{C}}}. \tag{111}\] * A.5: As addressed in the previous point, we already have the minimal Paulian stabilizers, and we would like to extend their domain to the whole space while keeping them Paulian.
We can choose \[\mathbb{W}_{(1,1)}\setminus\mathbb{W}_{F_{(1,1)}} =\big{\{}\hat{a},\hat{b}\big{\}},\] \[\mathbb{W}_{(1,-1)}\setminus\mathbb{W}_{F_{(1,-1)}} =\big{\{}\hat{c},\hat{d}\big{\}},\] \[\mathbb{W}_{(-1,1)}\setminus\mathbb{W}_{F_{(-1,1)}} =\big{\{}\hat{e},\hat{f}\big{\}},\] \[\mathbb{W}_{(-1,-1)}\setminus\mathbb{W}_{F_{(-1,-1)}} =\big{\{}\hat{g},\hat{h}\big{\}},\] (119) so \[\mathcal{H}_{(1,1)} =\mathcal{H}_{\text{C}}\oplus\mathcal{H}_{\hat{a}}\oplus\mathcal{H}_{\hat{b}},\] \[\mathcal{H}_{(1,-1)} =X_{i}\mathcal{H}_{\text{C}}\oplus\mathcal{H}_{\hat{c}}\oplus\mathcal{H}_{\hat{d}},\] \[\mathcal{H}_{(-1,1)} =Y_{i}\mathcal{H}_{\text{C}}\oplus\mathcal{H}_{\hat{e}}\oplus\mathcal{H}_{\hat{f}},\] \[\mathcal{H}_{(-1,-1)} =Z_{i}\mathcal{H}_{\text{C}}\oplus\mathcal{H}_{\hat{g}}\oplus\mathcal{H}_{\hat{h}}.\] (120) * A.6: Now we have commutative stabilizers that are Paulian on the whole space: \[Z_{1}^{\text{S}}=\Pi_{\mathcal{H}_{(1,1)}\oplus\mathcal{H}_{(1,-1)}}-\Pi_{\mathcal{H}_{(-1,1)}\oplus\mathcal{H}_{(-1,-1)}},\] \[Z_{2}^{\text{S}}=\Pi_{\mathcal{H}_{(1,1)}\oplus\mathcal{H}_{(-1,1)}}-\Pi_{\mathcal{H}_{(1,-1)}\oplus\mathcal{H}_{(-1,-1)}}.\] (121) With \(X_{i}\), \(Y_{i}\), \(Z_{i}\), and \(I_{5}\) chosen as the orthonormal correctable errors, they have distinct syndromes with respect to the Paulian stabilizers, and we can correct their linear combinations, i.e., all errors occurring on the \(i\)-th qubit. As discussed earlier, the Paulian stabilizers for this code cannot detect all weight-1 errors; however, with our choice of the syndrome spaces, all single \(X\) errors can be detected: each single \(X\) error maps \(\mathcal{H}_{\mathrm{C}}\) to a subspace of \(\mathcal{H}_{(I)}^{\perp}\), so the syndrome is different from \((I)\) and the error is detectable.

### \(((9,12,3))\)-code

Now let's consider the \(((9,12,3))\)-code from Refs. [50; 82], which, unlike the previous example, is a legitimate error-correcting code. Since we have by and large demonstrated the methods in the previous example, we will only focus on the key points, and since the dimension is large we will not give explicit forms of the Paulian stabilizers. 1. A.1: The word stabilizer is generated by \(ZXZIIIIII\) and all its cyclic shifts, so it is apparent that \[\mathrm{wt}\,S\geq d=3\;\forall S\in\mathsf{S}_{W}\setminus\{I\}\,.\] (101) By Corollary 5 the code is nondegenerate, so we choose all linearly independent Pauli errors with weight no larger than \(1\), which means by (23) we have \[|\mathbb{F}|=1+9\times 3=28.\] Because \[2^{\lceil\log_{2}|\mathbb{F}|\rceil}\dim\mathcal{H}_{\mathrm{C}}=2^{5}\times 12<2^{5}\times 2^{4}=\dim\mathcal{H}=2^{9},\] it is possible for this code to have Paulian stabilizers that correct all the relevant errors. 2. A.2 and A.3 are routine. 3. A.4: As \(m=\lceil\log_{2}|\mathbb{F}|\rceil=5>\log_{2}|\mathbb{F}|\), we will have excess syndromes in this case, and they are \(2^{m}-|\mathbb{F}|=4\) in total. 4. A.5: If we want the stabilizers to be Paulian on the whole space, then each syndrome space is composed of \(n-m=4\) qubits. For each syndrome \((t)\) that points to an error \(F_{(t)}\) in \(\mathbb{F}\), \(\dim\mathcal{H}_{(t)}-\dim F_{(t)}\mathcal{H}_{\mathrm{C}}=\dim\mathcal{H}_{(t)}-\dim\mathcal{H}_{\mathrm{C}}=4\), so we need four elements of \(\mathbb{W}_{\perp}\) to construct the associated syndrome space \(\mathcal{H}_{(t)}\), while for each excess syndrome we need \(2^{4}=16\) elements of \(\mathbb{W}_{\perp}\). 5. We can define the Paulian stabilizers following A.6. As there are four excess syndromes, there are four syndrome spaces that no correctable error will map the code space into; they exist only to make the stabilizers Paulian. If we want to use the three Pauli stabilizers from Ref. [82], since they are also part of the word stabilizer (Corollary 3), it is better to let them be in the tuple of generators \(g\), and steps A.4 and A.5 should be done accordingly; e.g., in A.4 the syndrome for each orthonormal Pauli error should be chosen according to how the error commutes with the Pauli stabilizers, so that these Pauli stabilizers will be among the Paulian stabilizers built in step A.6.
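Before moving on, note that the syndrome computations of step A.2 are easy to automate. The following sketch (our illustration, not part of the original construction) reproduces the commutation tuples (113)-(115) of the \(((5,6,2))\)-code, using the fact that two Pauli strings commute iff they differ on an even number of positions where both act nontrivially.

```python
def comm_tuple(p, gens):
    """Return the +/-1 commutation tuple of Pauli string p with each generator."""
    out = []
    for s in gens:
        anti = sum(1 for a, b in zip(p, s)
                   if a != 'I' and b != 'I' and a != b)
        out.append(1 if anti % 2 == 0 else -1)
    return tuple(out)

g = ("ZXZII", "XZIIZ", "ZIIZX", "IIZXZ", "IZXZI")  # generators of Eq. (111)

w = comm_tuple("ZZIZI", g)                  # eigenvalue tuple of ZZIZI|s>, Eq. (113)
e = comm_tuple("XIIII", g)                  # commutation pattern of X_1, Eq. (114)
print(w)                                    # (-1, -1, 1, -1, 1)
print(e)                                    # (-1, 1, -1, 1, 1)
print(tuple(a * b for a, b in zip(w, e)))   # (1, -1, -1, -1, 1), an element of W_{X_1}
```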
## Appendix G Gottesman-Kitaev-Preskill codes

Let \(q=\left(a^{\dagger}+a\right)/\sqrt{2}\) and \(p=i\left(a^{\dagger}-a\right)/\sqrt{2}\) be conjugate quadrature operators. A Gottesman-Kitaev-Preskill (GKP) code for a single oscillator has two stabilizers, \(e^{2i\pi q/\alpha}\) and \(e^{-in\alpha p}\) for some real \(\alpha\), where \(n\) is the dimension of the code space [79; 89]; clearly these stabilizers are not Paulian. Such codes can correct small shifts in both \(q\) and \(p\); specifically, they can correct displacements with \(|\Delta q|<\alpha/2\) and \(|\Delta p|<\pi/(n\alpha)\) [79]. The eigenstates of these stabilizers are not physical in that they are infinitely squeezed, so in practice finitely squeezed states are used; the error probability can be acceptably low if the state is squeezed sufficiently [79; 80]. If the anticipated errors in \(q\) and \(p\) are comparable in magnitude, "square" GKP codes can be used, by choosing \(\alpha=\sqrt{2\pi/n}\). When \(n=\dim\mathcal{H}_{\mathrm{C}}=2\), the stabilizers are \(e^{2i\sqrt{\pi}q}\) and \(e^{-2i\sqrt{\pi}p}\) [72; 79; 80]. One way to measure the syndrome is to prepare the ancilla in a GKP state and utilize the Steane circuit to ascertain the amount of shift by measuring the ancilla [72; 79; 90; 91]. The outcomes are analog (or continuous) rather than binary [72], and the corresponding measurement on the system is therefore not Paulian. Another avenue is phase estimation [72; 89]: given a unitary operator \(U\) on a system, if the system is in an \(e^{i\theta}\)-eigenstate, the procedure to estimate the phase \(\theta\) is called phase estimation. Because the stabilizers of GKP codes are unitary, we can obtain the syndrome this way; furthermore, as the simultaneous eigenspaces of \(e^{2i\pi q/\alpha}\) and \(e^{-in\alpha p}\) are translations of the code space in \(p\) and \(q\), they are orthogonal and isomorphic [79]. Phase estimation can be achieved by coupling the system and ancilla qubits via controlled-\(U^{k}\) gates, and after performing suitable operations and measurements on the ancilla qubits we are able to approximate the phase \(\theta\) [92; 93; 94; 95]. It may seem that each measurement of an ancilla qubit is equivalent to measuring a Paulian operator on the system, as we have two measurement outcomes and they are equally likely (cf. Sec. II.2); however, it can be easily checked that such measurements in general are not orthogonal measurements, which is also evident from the coupling between the system and the ancilla being controlled-\(U^{k}\), cf. Sec. III.4. Hence, we cannot describe each measurement with a single self-adjoint operator, let alone a Paulian operator.
Theoretically, we can construct commutative "Paulian" operators \(Z_{j}^{\mathrm{S}}\)'s for phase estimation: For convenience, let's rescale \(\theta\) so that the eigenvalues of \(U\) are \(e^{2i\pi\theta}\) with \(\theta\in[0,1)\) [4]. Each \(Z_{j}^{\mathrm{S}}\) is to measure the \(2^{-j}\) digit of \(\theta\) in binary representation, and \(\theta=0\) would correspond to the \((1,1,\cdots)\)-simultaneous eigenvalues of the \(Z_{j}^{\mathrm{S}}\)'s. Hence, with \(\mathcal{H}_{\theta}\) denoting the \(e^{2i\pi\theta}\)-eigenspace of \(U\), we let the \(1\)- and \(-1\)-eigenspaces of \(Z_{j}^{\mathrm{S}}\) be the direct sums of \(\mathcal{H}_{\theta}\) over all \(\theta\) whose \(2^{-j}\) digit in binary representation is \(0\) or \(1\), respectively. Under this construction, the \(Z_{j}^{\mathrm{S}}\)'s are commutative and stabilize the \(1\)-eigenvectors of \(U\) (i.e., \(\theta=0\)), and we can measure the \(Z_{j}^{\mathrm{S}}\)'s to estimate the phase: for example, an eigenvector of \(U\) with \(\theta=0.1010\) in binary representation is a \((-1,1,-1)\)-simultaneous eigenvector of the \(Z_{j}^{\mathrm{S}}\)'s for \(j=1,2,3\). However, whether they are truly Paulian or not (as defined in Sec. II.2) depends on the spectral structure of \(U\): the \(\pm 1\)-eigenspaces may fail to be isomorphic. For GKP codes, we can construct phase estimation operators for \(e^{2i\pi q/\alpha}\) and \(e^{-in\alpha p}\) respectively according to the previous paragraph, and these phase estimation operators are truly Paulian. The issue is that, even though they exist, to carry them out we need to couple very specific intervals of \(\theta\) with the ancilla; see the \(\pm 1\)-eigenspaces of each \(Z_{j}^{\mathrm{S}}\) above and Sec. III.4. Hence, existing schemes for error correction, such as those in Refs. [72; 80; 89; 91; 96], are more practical. A closing remark: As discussed in Sec. VI, commutative Paulian stabilizers are not unique, nor are the ones shown above. However, to construct practical Paulian stabilizers, appropriate syndrome spaces should be chosen, and this poses a great challenge, especially given the "continuous" nature of the errors for GKP codes. That being said, in practice states that approximate the true GKP codewords are used, and if confined to these physical states, we might be able to find suitable syndrome spaces to build practical Paulian stabilizers. This is, however, beyond the scope of this work.
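As a final aside, the digit-based construction of the \(Z_{j}^{\mathrm{S}}\)'s described above is straightforward to prototype numerically. A minimal numpy sketch (ours; whether the resulting operators are truly Paulian still depends on the spectral structure of \(U\), as just discussed):

```python
import numpy as np

def digit(theta, j):
    """The 2^{-j} binary digit of theta in [0, 1)."""
    return int(theta * 2**j) % 2

def phase_ops(U, n_digits):
    """Z_j = (projector onto digit-0 eigenspaces) - (projector onto digit-1)."""
    evals, evecs = np.linalg.eig(U)
    thetas = np.angle(evals) / (2 * np.pi) % 1.0       # rescale phases to [0, 1)
    ops = []
    for j in range(1, n_digits + 1):
        signs = np.array([1 if digit(t, j) == 0 else -1 for t in thetas])
        ops.append((evecs * signs) @ np.linalg.inv(evecs))
    return ops

# Toy example: eigenphases 0 and 0.101 (binary) = 0.625; the second eigenvector
# is a (-1, 1, -1)-simultaneous eigenvector of Z_1, Z_2, Z_3, as in the text.
U = np.diag(np.exp(2j * np.pi * np.array([0.0, 0.625])))
for Z in phase_ops(U, 3):
    print(np.round(np.real(np.diag(Z))))    # [1, -1], then [1, 1], then [1, -1]
```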
2303.17152
Mixed Autoencoder for Self-supervised Visual Representation Learning
Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks via randomly masking image patches and reconstruction. However, effective data augmentation strategies for MAE still remain open questions, different from those in contrastive learning that serve as the most important part. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing will, in contrast, degrade model performance due to the increase of mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task, not only to alleviate the MI increase by explicitly requiring each patch to recognize homologous patches, but also to perform object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves the state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, our MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2x. To our best knowledge, this is the very first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung
2023-03-30T05:19:43Z
http://arxiv.org/abs/2303.17152v3
# Mixed Autoencoder for Self-supervised Visual Representation Learning

###### Abstract

Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks via randomly masking image patches and reconstruction. However, effective data augmentation strategies for MAE still remain open questions, different from those in contrastive learning that serve as the most important part. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing will, in contrast, degrade model performance due to the increase of mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task, not only to alleviate the MI increase by explicitly requiring each patch to recognize homologous patches, but also to perform object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves the state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, our MixedAE outperforms MAE by **+0.3% accuracy**, **+1.7 mIoU** and **+0.9 AP** on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by \(2\times\). To our best knowledge, this is the very first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.

## 1 Introduction

Self-supervised learning (SSL) has become one of the most popular pre-training paradigms due to its independence from human annotation. Previous literature mainly focuses on handcrafted pretext task design [16, 23, 47] and instance discrimination [7, 11], while with the development of Vision Transformers [18], masked image modeling (MIM), deeply motivated by masked language modeling [15], has started to demonstrate superior effectiveness: it first **masks** some patches of the input images and then **reconstructs** the masked patches from the visible ones by predicting certain targets generated from the masked patches. In order to complete reconstruction, the encoder is expected to generate a highly semantic representation that can be better transferred to downstream tasks [26, 36, 37, 62] for superior performance. Existing MIM works mainly concentrate on the design of the reconstruction targets (_e.g_., visual tokenizers [3, 17], pixels [58, 27], graphical features [54] and instance discrimination [2, 19, 63]) and masking strategies (_e.g_., random [3, 27], attention-guided [31] and sample-dependent [50]). See more detailed discussions in Sec. 2. Despite the superior performance, we observe that the input augmentations for MIM have been seldom explored. Specifically, adding _color jittering_, an essential augmentation technique of contrastive learning [9], to MAE [27] even _degrades_ transfer results, suggesting that MIM might possess a different preference for data augmentations, and the effective data augmentation strategies for MIM are still an open question. In this paper, we explore the usage of image mixing, a commonly used technique in both supervised [60, 61] and contrastive learning [49, 59], with MAE [27].

Figure 1: **Fine-tuning accuracy on ImageNet-1K. Our _MixedAE_ achieves the best trade-off between pre-training overhead and transfer performance.
Specifically, _MixedAE_ surpasses MAE [27] consistently with only 3% extra overhead, while outperforming the strong iBOT [63] with only 53.4% of its computation overhead. See more detailed comparisons in Tab. 1. ID stands for instance discrimination, while MIM represents masked image modeling.**

We start by constructing a simple baseline to adopt mixing with MAE directly, which, different from in supervised and contrastive learning, would instead ease the reconstruction pretext by increasing the _mutual information_ between the model input and the reconstruction target, due to the usage of image mixing with global self-attention, as proved in Sec. 3.1. To address this issue, we propose _homologous recognition_, an auxiliary pretext task to enforce each patch to recognize homologous patches explicitly according to attention distributions before reconstruction, and build our Mixed Autoencoder network (_MixedAE_) in Sec. 3.3. Moreover, we demonstrate that our simple yet effective method can not only achieve significant performance improvement, but also conduct object-aware SSL pre-training without any specifically designed modules for better downstream dense perception results in Sec. 3.4. Concurrently, MixMIM [38] also considers mixing with MAE, but it differs from ours: 1) **Purpose**: MixMIM uses mixing to recover the 2D structure after random masking for an efficient implementation of MAE-style pre-training on hierarchical Vision Transformers [41], while ours utilizes mixing to conduct object-aware SSL pre-training for better representation learning. 2) **Method**: MixMIM uses masked self-attention to only perform attention within patches from the same images given the mixing masks as **input**, sharing exactly the same pretext task as MAE, while ours requires explicit homologous recognition given mixing masks as **target**, actively integrating mixing into the pretext design. 3) **Formulation**: The mixing ratio \(r\) is limited to 0.5 in MixMIM, while it can be flexibly selected from \((0,0.5]\) in our formulation. See more details in Sec. 3. The main contributions of this work are threefold: 1. We propose the Mixed Autoencoder (_MixedAE_), a simple yet effective approach to conduct object-aware pre-training without introducing any specifically designed modules. With extensive experiments, we demonstrate that _MixedAE_ can achieve state-of-the-art transfer performance on various downstream tasks including image classification, semantic segmentation and object detection, while maintaining significant efficiency. 2. We theoretically demonstrate the underlying design differences between MIM and previous supervision with mixing (_e.g_., supervised and contrastive learning). 3. To our best knowledge, this is the first work to consider mixing as an effective data augmentation strategy for MIM from the perspective of pretext design with a pure autoencoder-based architecture.

## 2 Related Work

Self-supervised learning aims at learning a transferable representation without human annotation. Previous works mainly focus on handcrafted pretext design [16, 23, 47] and instance discrimination [9, 10, 25]. Masked image modeling (MIM), inspired by masked language modeling [15], has achieved significant performance with superior pre-training efficiency by first masking a portion of an image and then reconstructing the masked part based on the visible one.
Reconstruction target. BEiT [3] pioneeringly proposes to predict visual tokens generated by a pre-trained visual tokenizer [48], which is simplified by SimMIM [58] to use pixel values as the reconstruction target directly. MAE [27] proposes an asymmetric encoder-decoder architecture for better efficiency. Besides pixels as the target, MaskFeat [54] utilizes HOG features, while PeCo [17] enhances the BEiT tokenizer with an additional perceptual loss. Recent works combine the idea of instance discrimination [9] with MIM. iBOT [63] considers MIM as a self-distillation process with a Siamese architecture, and data2vec [2] proposes a unified framework to conduct masked reconstruction pre-training for speech, images and languages. SplitMask [19] divides an image into two equal partitions and performs contrastive learning and MIM in the multi-task manner. In this paper, we build _MixedAE_ based on MAE due to its efficiency and effectiveness, while the improvement brought by _MixedAE_ is complementary to more advanced reconstruction targets. Masking strategy. Instead of random masking [3, 27], AttMask [31] proposes a novel attention-guided masking strategy by masking according to the attention map of the final Transformer layer, while ADIOS [50] introduces an adversarial objective between masking and reconstruction to generate learnable masks for MIM pre-training. In this paper, we utilize random masking for _MixedAE_ following MAE due to its simplicity and effectiveness. Input augmentation has instead been less explored for MIM. Instead of masking, CIM [22] adopts a small BEiT as the generator to corrupt an image, which is further taken as input to an enhancer to reconstruct the corrupted patches or distinguish the corrupted patches from the uncorrupted ones. Concurrently, MixMIM [38] also considers mixing with MAE, but different from ours, MixMIM uses _masked self-attention_ to only perform attention within patches from the same images given the mixing masks as _input_, which is exactly the same as MAE from the perspective of pretext design, while ours utilizes mixing as part of the pretext task actively to conduct object-aware SSL pre-training.

## 3 Method

In this section, we start by adopting mixing in MAE [27] with a simple baseline in Sec. 3.1, which, as we can prove, will instead ease the reconstruction pretext task. Then, we propose a novel auxiliary pretext task to formulate our final _MixedAE_, which can not only alleviate the ease of reconstruction, but also achieve object-aware SSL pre-training without specifically designed modules, in Secs. 3.2 to 3.4.

### Mixing: A Simple Baseline

Given an unlabeled dataset, we randomly sample a _clean_ data batch of size \(B\), which is later divided into non-overlapping patch sequences \(\{\mathbf{x}_{i}\}_{i=1}^{B}\) (\(\mathbf{x}_{i}\in\mathbb{R}^{L\times(P^{2}\cdot C)}\)) following ViT [18], where \(L\) is the sequence length, \(P\) is the patch size, and \(C\) is the image channel dimension. Mixing. The data batch is further separated into _groups_ \(\{\{\mathbf{x}_{i}^{j}\}_{i=1}^{1/r}\}_{j=1}^{Br}\), and each group will generate a single _mixed_ image, where \(r\in(0,0.5]\) is the _mixing ratio_ representing the ratio of patches that each clean image contributes within a single mixed sample. Different from MixMIM [38], \(r\) is not restricted to 0.5 in our formulation.
Therefore, the mixing process for the \(j\)-th group can be represented as \[\mathbf{\hat{x}}^{j}=\sigma_{mix}(\{\mathbf{x}_{i}^{j}\},\mathbf{M}^{j})=\sum_{i=1}^{1/r}\mathbb{1}\left(\mathbf{M}^{j}=i\right)\mathbf{x}_{i}^{j}, \tag{1}\] where \(\mathbb{1}(\cdot)\) is the indicator function and \(\mathbf{M}^{j}\in\{1,2,...,1/r\}^{L}\) represents a random mixing mask independently generated for the \(j\)-th group, which satisfies \[\sum_{l=1}^{L}\mathbb{1}\left(M_{l}^{j}=i\right)=rL,\;\forall i\in\{1,2,...,1/r\}. \tag{2}\] So, \(\mathbf{M}^{j}\) determines the source patch in each position of \(\mathbf{\hat{x}}^{j}\), while keeping the mixing ratio of each clean image equal to \(r\) (_i.e_., symmetric mixing). The mixed images \(\mathbf{\hat{x}}^{j}\) are further fed into the encoder for feature extraction, which can be represented as \(\mathbf{\hat{z}}^{j}=\texttt{enc}(\mathbf{\hat{x}}^{j})\). Unmixing. Following MAE [27], \(\mathbf{\hat{z}}^{j}\) is then "unmixed" to recover the input batch before mixing by inserting a special [MASK] token with \(\mathbf{M}^{j}\). For all \(i\in\{1,2,...,1/r\}\), we have \[\mathbf{z}_{i}^{j}=\mathbb{1}(\mathbf{M}^{j}=i)\mathbf{\hat{z}}^{j}+\left[1-\mathbb{1}(\mathbf{M}^{j}=i)\right]\texttt{[MASK]}. \tag{3}\] The "unmixed" group \(\{\mathbf{z}_{i}^{j}\}_{i=1}^{1/r}\) is then taken as input to the decoder for pixel reconstruction, as \(\mathbf{y}_{i}^{j}=\texttt{dec}(\mathbf{z}_{i}^{j})\). Finally, the reconstruction loss can be formulated as \[\mathcal{L}_{recon}=\sum_{i=1}^{1/r}\sum_{l=1}^{L}[1-\mathbb{1}(M_{l}^{j}=i)](\mathbf{y}_{i,l}^{j}-\mathbf{x}_{i,l}^{j})^{2}. \tag{4}\] So far, we have built a simple baseline to adopt mixing for MAE, which, however, performs even worse than MAE, as demonstrated in Tab. 3f.
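Concretely, the whole baseline fits in a few lines. The following numpy sketch (our illustration, not the paper's code; enc, dec and the learnable [MASK] token are stubbed out) implements the mixing of Eqs. (1)-(2), the unmixing of Eq. (3) and the loss of Eq. (4) for a single group, with 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, r = 196, 768, 0.25                 # sequence length, patch dim, mixing ratio
n = int(1 / r)                           # clean images per group
x = rng.normal(size=(n, L, D))           # one group of clean patch sequences

# Random mixing mask M in {0, ..., n-1}^L with exactly rL slots per image,
# which satisfies the symmetry constraint of Eq. (2).
M = rng.permutation(np.repeat(np.arange(n), int(r * L)))

x_mix = x[M, np.arange(L)]               # Eq. (1): pick the source patch per slot
z_mix = x_mix                            # stand-in for encoder features enc(x_mix)

mask_token = np.zeros(D)                 # stub for the learnable [MASK] token
z = np.stack([np.where((M == i)[:, None], z_mix, mask_token)
              for i in range(n)])        # Eq. (3): "unmix" into n sequences

y = z                                    # stand-in for decoder outputs dec(z_i)
recon = sum(((y[i] - x[i]) ** 2)[M != i].sum()
            for i in range(n))           # Eq. (4): loss on the masked slots only
```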
In the following, we provide a theoretical explanation to prove that this naive incorporation will actually **ease** the reconstruction pretext task. Mutual information analysis. Without loss of generality, we take \(r=0.5\) as an example. Denote \(\mathbf{X}_{1},\mathbf{X}_{2}\) as two random variables representing two input images, where \(\mathbf{X}_{1}\) is further considered as the reconstruction target (symmetric for \(\mathbf{X}_{2}\)). Then, we can prove that the mutual information (MI) between the mixed input \(\sigma_{mix}(\{\mathbf{X}_{1},\mathbf{X}_{2}\},\mathbf{M})\) and the target \(\mathbf{X}_{1}\) is no smaller than that between the MAE input \(\sigma_{MAE}(\mathbf{X}_{1},\mathbf{M})\) and \(\mathbf{X}_{1}\) (see proofs in Appendix A): \[I(\sigma_{mix}(\{\mathbf{X}_{1},\mathbf{X}_{2}\},\mathbf{M});\mathbf{X}_{1})\geq I(\sigma_{MAE}(\mathbf{X}_{1},\mathbf{M});\mathbf{X}_{1}). \tag{5}\] Therefore, different from masking, which is introduced to **decrease** the mutual information between the model input and the reconstruction target due to the redundancy of image signals [27], naive mixing will instead **increase** the MI, and thus ease the reconstruction pretext task. A verification experiment is conducted in Appendix C.

Figure 2: **Model architecture of Mixed Autoencoder (_MixedAE_). (a) The input images are first separated into groups to generate mixed samples independently, which are further taken as input to the encoder for feature extraction. (b) The self-attention operations are replaced with our homologous attention, enforcing each patch to only attend to patches with the highest attention mass. (c) The encoder features will be "unmixed" and fed into the decoder for pixel reconstruction. (d) Meanwhile, the homologous contrastive loss is adopted to verify the sampling accuracy by encouraging features of homologous patches to be similar, while heterologous ones to be dissimilar.**

Also note that the MI increase brought by mixing is target-invariant, suggesting that Eq. (5) also holds when the target is _semantic labels_ for supervised learning or _positive samples_ for contrastive learning, for which an MI increase is appealing. This might explain why naive mixing without auxiliary supervision is beneficial for supervised [60, 61] and contrastive learning [49, 59], but not for MAE.

### Recognition: Homologous Recognition

Another indispensable factor behind the MI increase is the usage of _global self-attention_ in ViT, with which each query patch will inevitably attend to heterologous patches from other images. Due to the uncertainty of generative modeling, heterologous patches might provide a shortcut to complete reconstruction (_e.g_., the green color of the cucumbers is a "valuable" cue to reconstruct the forest behind the fox in Fig. 3a). To address this, we propose a novel auxiliary pretext task called _homologous recognition_ to enforce each query to explicitly recognize and only attend to homologous patches. Homologous attention recognizes homologous patches on the fly by enforcing each query patch to only attend to key patches with the highest attention mass using a \(\mathrm{TopK}(\cdot)\) sampling operation. Specifically, the homologous attention can be formulated as \[A_{HomoAtt}=\mathrm{softmax}(\mathrm{TopK}(\mathbf{qk}^{T}/\sqrt{D_{h}})), \tag{6}\] where \(\mathbf{q}\) is the query patch, \(\mathbf{k}\) are the key patches and \(D_{h}\) is the feature dimension. By default, all the self-attention operations in ViT are replaced with homologous attention except the very first layer. See comparisons in Tab. 3e. Homologous contrastive aims at verifying the \(\mathrm{TopK}(\cdot)\) sampling accuracy by encouraging the encoder features of homologous patches to be similar, while heterologous ones to be dissimilar, in the supervised contrastive manner [32]. The homologous contrastive loss can be formulated as \[\mathcal{L}_{HomoCon}=-\sum_{l=1}^{L}\sum_{l^{+}}\log\frac{exp(cos(\hat{\mathbf{z}}_{l}^{j},\hat{\mathbf{z}}_{l^{+}}^{j})/\tau)}{\sum_{l^{\prime}\neq l}^{L}exp(cos(\hat{\mathbf{z}}_{l}^{j},\hat{\mathbf{z}}_{l^{\prime}}^{j})/\tau)}, \tag{7}\] where \(\tau\) is the temperature and \(cos(\cdot,\cdot)\) is the cosine similarity. As demonstrated in Fig. 6, the \(\mathrm{TopK}(\cdot)\) sampling accuracy is significantly improved and stabilized with the usage of the homologous contrastive loss \(\mathcal{L}_{HomoCon}\). Segment embedding. Besides the positional embeddings, we add a segment embedding to the mixed sequence \(\hat{\mathbf{x}}^{j}\) following BERT [15] to provide necessary information for completing homologous recognition, given the uncertainty of generative modeling. The segment embedding is shared by patches from the same image, while different for patches from different images, as demonstrated in Fig. 3b.
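For reference, a single-head numpy sketch of Eqs. (6) and (7) (ours, simplified; in the real model Eq. (6) replaces self-attention in every layer but the first, and the loss is computed on projected encoder features). Here src plays the role of the mixing mask \(\mathbf{M}^{j}\), and \(K\) would be set to \(rL\):

```python
import numpy as np

def homologous_attention(q, k, v, K):
    """Eq. (6): each query attends only to its K highest-scoring keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (L, L) attention logits
    kth = np.sort(scores, axis=-1)[:, -K][:, None]     # per-query K-th largest
    scores = np.where(scores >= kth, scores, -np.inf)  # the TopK(.) sampling
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def homologous_contrastive(z, src, tau=0.25):
    """Eq. (7): pull homologous patches together, push heterologous ones apart."""
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)  # cosine similarity below
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)                         # exclude l' = l
    loss = 0.0
    for l in range(len(z)):
        pos = (src == src[l]) & (np.arange(len(z)) != l)
        loss -= np.log(sim[l, pos] / sim[l].sum()).sum()
    return loss
```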
Mixing mode. For a fair comparison under different training overheads, two mixing modes are adopted for _MixedAE_: 1) **Compose**: each group generates a single mixed sample following Eq. (1), and the effective encoder batch size is \(Br\); 2) **Full**: each group generates \(1/r\) mixed samples by sampling \(\mathbf{M}^{j}\) \(1/r\) times independently, and the effective encoder batch size is \(B\). An example is provided in Fig. 4 for \(r=0.5\). As shown in Tab. 1, _MixedAE_ achieves the SoTA performance under different training overheads. If not otherwise specified, compose mixing is adopted by default.

Figure 4: **Visualization of two mixing modes when \(r=0.5\). (a) Each group generates a single mixed sample for the compose mixing mode, (b) while \(1/r\) mixed samples are generated for the full mixing mode to maintain the effective batch size unchanged.**

Figure 3: **Visualization of segment embeddings**. (a) Due to the uncertainty of generative modeling, green colors of the cucumber and the forest are both reasonable for patches in the red ellipse. (b) We adopt different segment embeddings for different images to provide necessary information for homologous recognition.

### Reconstruction: Loss Function

Loss function. We formulate _MixedAE_ in the multi-task learning manner, and the final loss function is a weighted sum of the reconstruction loss \(\mathcal{L}_{recon}\) and the homologous contrastive loss \(\mathcal{L}_{HomoCon}\): \[\mathcal{L}_{MixedAE}=\mathcal{L}_{recon}+\lambda\mathcal{L}_{HomoCon}, \tag{8}\] where the balancing weight \(\lambda\) is set to 0.1 by default.

### Discussion: Object-aware Pre-training

With the usage of mixing, we observe that _MixedAE_ can achieve object-aware self-supervised pre-training without any specifically designed components, such as K-means [8], selective search [55] or an object discovery network [29], because homologous recognition requires each query patch to recognize all homologous patches within a mixed image. Due to the _single-centric-object_ guarantee [8] of ImageNet, i.e., most images are pre-processed to guarantee only one object in their center part, the mixed image can be considered as a "pseudo" multi-instance image, and given a query patch, the process of recognizing all patches from the same image within a mixed sample is exactly recognizing all patches from the same object within a given "pseudo" multi-instance image. Therefore, the awareness of object existence and completeness is enhanced in the learnt representation of our _MixedAE_. In Fig. 5, we visualize the attention maps of MAE and _MixedAE_ by averaging all attention heads of the last layer, taking the [CLS] token as the query and the patch tokens as the keys. Compared with MAE, which mainly focuses on the most discriminative patches (_e.g_., boundaries and corners), our _MixedAE_ can successfully discover the foreground object patches more precisely and completely, which might also explain why _MixedAE_ improves more significantly when transferred to dense perception tasks, such as semantic segmentation [62] and object detection [37], as shown in Tab. 1.
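The attention-map visualization just described needs no special machinery. A small helper (ours; it assumes the last-layer attention tensor has already been extracted with shape (n_heads, 1 + L, 1 + L), i.e., a [CLS] token followed by \(L=196\) patch tokens):

```python
import numpy as np

def cls_attention_map(attn, grid=14):
    """Average all heads and read off the [CLS]-to-patch attention row."""
    mean = attn.mean(axis=0)        # (1 + L, 1 + L), averaged over heads
    cls_to_patch = mean[0, 1:]      # [CLS] as the query, patch tokens as keys
    return cls_to_patch.reshape(grid, grid)
```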
## 4 Experiments

### Implementation Details

Architecture. We mainly use the standard ViT-Base [18] as the backbone architecture, and further provide ViT-Large experiments in Appendix C. The input images are resized to \(224\times 224\), resulting in a total sequence length \(L=196\) with a patch size of \(16\times 16\). Following MAE [27], the decoder consists of 8 Transformer layers with a hidden dimension of 512 by default. For a fair comparison with BEiT [3], we additionally build a _MixedAE_ in full mixing mode with a lightweight decoder made up of 2 Transformer layers and a hidden dimension of 256, as shown in Tab. 1. The mixing ratio \(r\) is set to 0.25 (_i.e_., corresponding to the 0.75 masking ratio in MAE [27]) by default, and the threshold \(K\) in the \(\mathrm{TopK}(\cdot)\) operation is therefore set to \(0.25L\). Following common practices [7, 11], we adopt a linear projector with an output dimension of 128 right before the homologous contrastive loss, and the temperature coefficient \(\tau\) is set to 0.25. Optimization. We pre-train _MixedAE_ on the ImageNet-1K [14] training set with the AdamW [43] optimizer and a cosine learning rate schedule with a linear warm-up of 40 epochs. The batch size is set to 4096 for compose mixing and 1024 for the full mode. The base learning rate is set to \(7.5e^{-5}\) and scales linearly with the batch size (\(lr=lr_{base}\times bs/256\)). Only standard random cropping and flipping are utilized for data augmentation. The remaining hyperparameters are all kept the same as MAE for a fair comparison (see Appendix B for more details).

Figure 5: **Visualizations of attention maps** on images from ImageNet-1K [14] (1st-3rd columns), Microsoft COCO [37] (4th-6th columns) and ADE20K [62] datasets (7th-9th columns). Both MAE and _MixedAE_ are pre-trained on ImageNet-1K for 300 epochs. Compared with MAE which mainly focuses on the most discriminative patches (_e.g_., boundaries (1st, 2nd & 5th) and edges (6th & 8th)), _MixedAE_ discovers foreground object patches more precisely (3rd & 9th) and completely (4th & 7th). See more attention maps in Appendix E.

### Transfer Results on ImageNet-1K

Setup. We consider the full fine-tuning performance on ImageNet-1K for 100 epochs and report the Top-1 accuracy. Following MAE [27], we average all the patch tokens after the final Transformer layer, and the result is taken as input to a linear head for classification. See more details in Appendix B. Comparison with MAE. As shown in Tab. 1, _MixedAE_ obtains consistent improvements over MAE under different pre-training epochs with only 3% additional overhead. The 300-epoch pre-trained _MixedAE_ with full mixing achieves even better accuracy than the 1600-epoch pre-trained MAE, demonstrating the efficiency of its data utilization. Comparison with other MIM augmentations. Our _MixedAE_ with the lightweight decoder and the full mixing mode obtains 83.7% Top-1 accuracy, 0.5% and 0.4% higher than MixMIM [38] and CIM [22] respectively, while saving at least 23.4% of the computational overhead, revealing the simplicity of our _MixedAE_. Comparison with other SSL approaches. Our _MixedAE_ obtains consistent improvements under various pre-training epochs and overheads, and achieves a better trade-off between pre-training overhead and transfer performance, as shown in Fig. 1. Specifically, _MixedAE_ pre-trained for 1600 epochs achieves 83.9% accuracy, setting a new state-of-the-art result with a pure **autoencoder**-based framework. Requiring only 53.4% of its training overhead, _MixedAE_ surpasses the strong iBOT [63], demonstrating remarkable efficiency. Moreover, the improvement brought by mixing is orthogonal to the usage of more advanced reconstruction targets [2, 17, 54] and masking strategies [31, 50].
### Transfer Results on Downstream Tasks

We further consider three downstream settings to evaluate the learnt representation, and more details about the different transfer procedures are included in Appendix B. Semantic segmentation. We utilize the UperNet [57] to perform semantic segmentation on ADE20K [62] following BEiT [3]. As reported in Tab. 1, our 800-epoch _MixedAE_ achieves 48.7 mIoU, even surpassing the MAE pre-trained for 1600 epochs by 0.6 mIoU, and our 1600-epoch _MixedAE_ further improves to 49.8 mIoU, outperforming all baseline methods by a non-trivial margin, which is more significant than the improvement on ImageNet-1K (1.7 vs. 0.3), thanks to the object-aware pre-training, as discussed in Sec. 3.4. Object detection and instance segmentation. We utilize Cascade Mask R-CNN [28, 5] to produce bounding boxes and instance masks simultaneously on COCO [37]. As demonstrated in Tab. 1, _MixedAE_ consistently outperforms MAE under different epochs (0.6/0.9/0.9 & 0.5/0.6/0.7). Similarly to ADE20K, more significant improvements are observed due to the high-quality attention maps learnt by the object-aware pre-training, as demonstrated in Fig. 5.

\begin{table} \begin{tabular}{l|c c|c|c|c c c|c c c} \hline \multirow{2}{*}{Method} & Pre-train & Pre-train\({}^{\dagger}\) & ImageNet & ADE20K & \multicolumn{6}{c}{COCO} \\ & Epochs & GPU-days & Top-1 Acc. & mIoU & AP\({}^{bb}\) & AP\({}^{bb}_{50}\) & AP\({}^{bb}_{75}\) & AP\({}^{mk}\) & AP\({}^{mk}_{50}\) & AP\({}^{mk}_{75}\) \\ \hline DeiT [52] & 300 & 19.6 & 81.8 & 46.9 & 48.8 & 68.7 & 52.7 & 42.5 & 65.9 & 45.5 \\ MoCov3 [11] & 600\({}^{\dagger\dagger}\) & 54.8 & 82.8 & 46.8 & 47.2 & 66.9 & 50.8 & 41.1 & 63.6 & 44.1 \\ DINO [7] & 1600\({}^{\dagger\dagger}\) & 120.5 & 82.8 & 46.9 & 49.5 & 69.1 & 53.6 & 42.9 & 66.0 & 46.3 \\ \hline BEiT [3] & 300 & 32.1 & 82.9 & 44.7 & 39.3 & 57.7 & 42.4 & 34.8 & 55.2 & 36.8 \\ MAE [27] & 300 & 16.4 & 82.7 & 46.1 & 47.2 & 65.8 & 51.3 & 41.1 & 62.9 & 44.4 \\ MixMIM [38] & 300 & 40.2 & 83.2 & - & - & - & - & - & - & - \\ CIM-RevDet [22] & 300 & 42.7 & 83.1 & - & - & - & - & - & - & - \\ CIM-ResPix [22] & 300 & 42.7 & 83.3 & - & - & - & - & - & - & - \\ **MixedAE** & 300 & 16.9 & 83.1\({}^{+0.4}\) & 47.0\({}^{+0.9}\) & 47.8\({}^{+0.6}\) & 66.6\({}^{+0.8}\) & 52.0\({}^{+0.7}\) & 41.6\({}^{+0.5}\) & 63.6\({}^{+0.7}\) & 45.0\({}^{+0.6}\) \\ **MixedAE-Full\({}^{*}\)** & 300 & 30.8 & 83.7\({}^{+1.0}\) & 47.4\({}^{+1.3}\) & 48.9\({}^{+1.7}\) & 67.6\({}^{+1.8}\) & 53.3\({}^{+2.0}\) & 42.5\({}^{+1.4}\) & 64.8\({}^{+1.9}\) & 45.9\({}^{+1.5}\) \\ **MixedAE-Full** & 300 & 62.3 & **83.8\({}^{+1.1}\)** & **48.9\({}^{+2.8}\)** & **51.0\({}^{+3.8}\)** & **69.7\({}^{+3.9}\)** & **55.2\({}^{+3.9}\)** & **44.1\({}^{+3.0}\)** & **67.0\({}^{+4.1}\)** & **47.9\({}^{+3.5}\)** \\ \hline BEiT [3] & 800 & 85.5 & 83.2 & 45.6 & 40.8 & 59.4 & 44.1 & 36.0 & 56.8 & 38.2 \\ MAE [27] & 800 & 43.7 & 83.3 & 47.2 & 49.4 & 68.1 & 53.9 & 42.9 & 65.5 & 46.6 \\ **MixedAE** & 800 & 45.0 & **83.5\({}^{+0.2}\)** & **48.7\({}^{+1.5}\)** & **50.3\({}^{+0.9}\)** & **69.1\({}^{+1.0}\)** & **54.8\({}^{+0.9}\)** & **43.5\({}^{+0.6}\)** & **66.2\({}^{+0.7}\)** & **47.4\({}^{+0.8}\)** \\ \hline MAE [27] & 1600 & 87.4 & 83.6 & 48.1 & 50.6 & 69.4 & 55.0 & 43.8 & 66.6 & 47.5 \\ iBOT [63] & 1600\({}^{\dagger\dagger}\) & 172.1 & 83.8 & 49.6 & 51.2 & 70.1 & 55.2 & 44.3 & 67.4 & 48.0 \\ **MixedAE** & 1600 & 90.1 & **83.9\({}^{+0.3}\)** & **49.8\({}^{+1.7}\)** & **51.5\({}^{+0.9}\)** & **70.2\({}^{+0.8}\)** & **55.9\({}^{+0.9}\)** & **44.5\({}^{+0.7}\)** & **67.5\({}^{+0.9}\)** & **48.2\({}^{+0.7}\)** \\ \hline \end{tabular} \end{table}
Table 1: **Transfer performance comparison between methods pre-trained on ImageNet-1K.** 1) Effectiveness: _MixedAE_ achieves the state-of-the-art performance under different pre-training epochs and overheads. 2) Efficiency: _MixedAE_ consistently surpasses the strong iBOT [63] baseline, while only requiring 53.4% of the pre-training overhead. 3) Object-aware pre-training: more significant improvements are achieved when transferred to downstream dense perception tasks (0.3 vs. 1.7 vs. 0.9). \({}^{*}\): a lightweight decoder is deployed to maintain a similar pre-training overhead to BEiT [3]. \({}^{\dagger}\): GPU-days estimated on Tesla V100 GPUs. \({}^{\dagger\dagger}\): effective epochs following iBOT [63].

**Downstream classification.** Following [20, 39, 40], we study transfer performance on 11 downstream classification datasets, including both fine-grained (_e.g_., Cars [33]) and coarse-grained ones (_e.g_., CIFAR100 [34]), in Tab. 2. Our _MixedAE_ achieves consistent improvement over MAE on all 11 downstream tasks with an average accuracy of 86.9%, outperforming all counterparts, as demonstrated in Tab. 2.

### Ablation Study

**Setup.** We conduct 300-epoch pre-training with a base learning rate of \(1.5e^{-4}\) for all ablation studies on _MixedAE_ with compose mixing. By default, we report the fine-tuning accuracy on ImageNet-1K [14] and the mIoU on ADE20K [62]. See more detailed settings and results in Appendix C. **Mixing ratio \(r\).** Different from MixMIM [38], the mixing ratio \(r\) in our formulation can be flexibly selected from the range \((0,0.5]\). As shown in Tab. 3a, \(r=0.25\) works better, while requiring less pre-training overhead (since the effective encoder batch size scales linearly with \(r\), as shown in Eq. (1)). Note that \(r=0.25\) also corresponds to the default 0.75 masking ratio in MAE. **Position of homologous contrastive.** We study whether the encoder features before or after the final Layer Normalization ([LN]) [1] of ViT [18] achieve better performance as input to the homologous contrastive loss in Tab. 3b. The latter achieves consistent improvements on both ImageNet-1K and ADE20K, suggesting that the features after [LN] are more suitable for homologous recognition. **Positives of homologous contrastive.** In Eq. (7), given a query patch, all homologous patches are considered as positive samples but are taken separately to calculate \(\mathcal{L}_{HomoCon}\) in the supervised contrastive manner [32]. We further ablate utilizing the average of the query and all its homologous patches as its positive, which, however, performs slightly worse than the separate manner (-0.2 mIoU on ADE20K). **Threshold \(K\) of homologous attention.** We study the threshold number \(K\) in \(A_{HomoAtt}\) in Tab. 3d, where **all** the global self-attention operations in the ViT are replaced with our homologous attention. Compared with the no-sampling baseline, the usage of homologous attention obtains consistent improvement on ImageNet-1K, with the best performance achieved at \(K=0.25L\), which matches the mixing ratio \(r\) of Tab. 3a. **Position of homologous attention.** As shown in Fig. 6a, homologous attention cannot achieve promising accuracy in early Transformer layers without sufficient information engagement. Thus, we further explore keeping the global self-attention in early layers unchanged in Tab. 3e, and we observe empirically that utilizing global self-attention in the very first layer only achieves the best performance.
**Homologous recognition.** In Tab. 3f, we further compare the effectiveness of the different components of homologous recognition. Without the homologous contrastive loss for verification, utilizing homologous attention only incurs a significant drop of 0.5 mIoU on ADE20K. Although achieving an improvement, utilizing the homologous contrastive only still suffers from the ease of reconstruction brought by the MI increase, as previously discussed for Eq. (5). Finally, the best performance is achieved when using homologous attention and the contrastive loss simultaneously.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Method & Aircraft & Caltech & Cars & C10 & C100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline _SSL ResNets_ & & & & & & & & & & & & \\ MoCov2 [10] & 79.9 & 84.4 & 75.2 & 96.5 & 71.3 & 69.5 & 94.4 & 76.8 & 79.8 & 55.8 & 71.7 & 77.7 \\ SimCLR [9] & 78.7 & 82.9 & 79.8 & 96.2 & 79.1 & 70.2 & 94.3 & 82.2 & 83.2 & 61.1 & 78.2 & 80.5 \\ BYOL [25] & 79.5 & 89.4 & 84.6 & 97.0 & 84.0 & 73.6 & 94.5 & 85.5 & 89.6 & 64.0 & 82.7 & 84.0 \\ SwAV [6] & **83.1** & 89.9 & 86.8 & 96.8 & 84.4 & 75.2 & 95.5 & 87.2 & 89.1 & 66.2 & 84.7 & 85.3 \\ SDR [40] & 82.6 & 89.0 & 87.5 & 97.4 & 84.4 & 75.6 & 97.0 & 86.1 & 89.3 & 66.1 & 85.3 & 85.5 \\ \hline _SSL Transformers_ & & & & & & & & & & & & \\ MoCov3 [11] & 76.6 & 91.2 & 86.6 & 98.3 & 88.3 & 72.6 & 95.5 & 86.4 & 92.0 & 65.6 & 84.5 & 85.2 \\ DINO [7] & 69.4 & 91.2 & 81.3 & **98.4** & **88.9** & 77.6 & 96.9 & 87.3 & 93.5 & 64.7 & 86.3 & 85.1 \\ BEiT [3] & 66.3 & 80.2 & 78.6 & 96.1 & 80.0 & 69.9 & 92.9 & 83.2 & 85.3 & 57.1 & 76.7 & 78.7 \\ MAE [27] & 78.2 & 91.2 & 88.4 & 97.0 & 82.5 & 75.3 & 96.6 & 84.7 & 92.6 & 65.4 & 86.0 & 85.3 \\ \hline **MixedAE** & 82.1 & **91.5** & **88.8** & 97.9 & 85.9 & **78.7** & **97.1** & **87.4** & **93.6** & **66.2** & **86.4** & **86.9\({}^{+1.6}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: **Transfer performance comparison on 11 downstream classification tasks**. Our 1600-epoch pre-trained _MixedAE_ achieves consistent improvements over MAE on all 11 classification datasets with an average accuracy of 86.9%, surpassing all counterparts.

### Analysis

**Effectiveness of \(\mathrm{TopK}\) sampling.** To further observe the effectiveness of \(\mathrm{TopK}\) sampling in homologous attention, we visualize the sampling accuracy with respect to different layers and pre-training epochs in Fig. 6. As demonstrated in Fig. 6a, the naive usage of homologous attention alone cannot achieve promising sampling accuracy, and therefore suffers from a significant performance drop in Tab. 3f. Specifically, neither the sampling accuracy of the first two nor of the final two layers exceeds 60%, and the accuracy of the very final layer even stays under 40% throughout the whole pre-training process, as shown in Fig. 6b (the green curve). However, with the usage of the homologous contrastive loss for verification, the sampling accuracy is significantly improved and stabilized around 70% to 80% for all layers except the first two, as in Fig. 6a. The sampling accuracy of the final layer rapidly increases to around 70% after only 20 epochs of pre-training, and stays stable throughout the remaining pre-training, as in Fig. 6b (the orange curve).
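For completeness, the sampling-accuracy metric analyzed above can be computed as follows (our sketch; scores are the pre-softmax logits of Eq. (6), and src is the patch-to-image assignment given by the mixing mask):

```python
import numpy as np

def sampling_accuracy(scores, src, K):
    """Fraction of TopK-selected keys that are homologous to their query."""
    topk = np.argsort(scores, axis=-1)[:, -K:]   # indices of the K largest keys
    return (src[topk] == src[:, None]).mean()    # compare with the query's image
```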
**Comparison with existing MIM methods combined with contrastive learning.** Although also utilizing a "contrastive loss" with reconstruction, _MixedAE_ differs from existing MIM works [19, 63] combined with contrastive learning from two perspectives. 1) **Purpose**: existing works utilize the contrastive loss to perform instance discrimination simultaneously with MIM, while our homologous contrastive is only utilized to guarantee the sampling accuracy. Therefore, the homologous contrastive loss acts more like a regularization term rather than an individual self-supervision as in [19, 63]. To verify this, we pre-train a _MixedAE_ with \(\mathcal{L}_{HomoCon}\) only, without \(\mathcal{L}_{recon}\), which cannot achieve reasonable performance, as reported in Appendix C. 2) **Efficiency**: given a single input, existing works require forward propagation at least twice over several augmented views to conduct instance discrimination, while our homologous contrastive requires it only once, resulting in significant efficiency. Specifically, our _MixedAE_ surpasses iBOT [63] with only 53.4% of its computational overhead.

## 5 Conclusion

This paper explores the usage of image mixing for MAE. Different from supervised and contrastive learning, we first theoretically demonstrate that naive mixing might instead ease the reconstruction pretext task. To address that, our _MixedAE_ with the proposed homologous recognition as auxiliary supervision can not only achieve state-of-the-art performance with a better trade-off between transfer results and pre-training overhead, but also conduct object-aware pre-training without any specifically designed modules. We hope our simple yet effective method can bring researchers' attention to more effective data augmentations for MIM. Acknowledgments. We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research.

Figure 6: **Analysis of \(\mathrm{TopK}\) sampling accuracy. Without the homologous contrastive loss for verification (green curve), utilizing the homologous attention only cannot achieve promising accuracy, dramatically varying between different layers. However, with the usage of homologous contrastive loss (orange curve), the sampling accuracy is significantly improved and stabilized mostly around 70% to 80% throughout the whole pre-training process, which is essential to achieve remarkable transfer performance, as demonstrated in Tab. 3f.**
2301.01149
I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic Segmentation
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
Haoyu Ma, Xiangru Lin, Yizhou Yu
2023-01-03T15:19:48Z
http://arxiv.org/abs/2301.01149v1
# I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic Segmentation

###### Abstract

Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss, and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5\(\rightarrow\)Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU. Semantic Segmentation, Unsupervised Domain Adaptation, Photometric Alignment, Texture Alignment, Manifold Modelling, Category Triplet Loss, Consistency Regularization.

## 1 Introduction

Semantic segmentation, a classical and fundamental research task in computer vision, aims to assign category labels to individual pixels in an image. It has been extensively investigated and has inspired many downstream applications including autonomous driving [1, 2] and medical image analysis [3, 4, 5]. Although the performance of existing semantic segmentation models has enjoyed a significant improvement in the wave of deep neural networks [6, 7, 8], training a semantic segmentation model usually requires a large number of images with pixel-level annotations, the collection process of which is laborious and time-consuming. Unsupervised Domain Adaptation (UDA) for semantic segmentation is an alternative to avoid the data annotation problem: it aims at learning a well-performing model from an unlabeled target dataset by jointly exploiting labeled images from a different source dataset (the label spaces of the two datasets must be compatible). However, domain shifts/discrepancies exist between different datasets. The most obvious differences are low-level image statistics related to colors, textures, or even illumination conditions. These differences can be partly alleviated by image-level adaptation. However, there are also object-level differences, such as object poses and spatial distributions, between different datasets, which give rise to different feature distributions. All these domain shifts have a detrimental impact on the final performance of the semantic segmentation model. Therefore, it is crucial to learn a feature representation capable of overcoming both image-level and feature-level domain shifts for unsupervised domain adaptive semantic segmentation. The causes of domain shifts/discrepancies have been extensively studied in previous works.
In general, the primary causes can be categorized into image-level domain shifts and feature-level domain shifts. Image-level domain shifts refer to the differences in imaging conditions, such as lighting and settings in the camera imaging pipeline. They affect the overall appearance of an image and have a subtle influence on feature-level distributions. Existing work addressing image-level domain shifts is in general based on image-level style transfer, which makes use of deep models such as generative models or image-to-image translation models [9, 10], or Fourier Transformation [11]. We term these methods image-level adaptation methods. These methods have proven that transferring image styles or aligning feature distributions can bring the two domains closer. However, generative methods usually require a computationally expensive training process, whose instability is notorious. Generative models also suffer from mode collapse, which makes the range of the generated features unusually small (more explanation in Related Work). On the other hand, the Fourier Transformation-based method [11] produces inferior style-transferred images, as shown in Figure 8. We have observed that previous work in domain adaptive semantic segmentation focusing on image-level domain alignment [10, 12] usually has inferior final segmentation performance in comparison to recent work that adopts a more complete pipeline [13, 14]. Such recent work further demonstrates that replacing the original source domain images with image-level domain aligned images can further improve the final performance of feature-level adaptation techniques. This indicates that the domain gap can only be partially alleviated with the aforementioned image-level adaptation methods, and feature-level alignment can still benefit from an extra image translation module. Therefore, feature-level adaptation is still necessary after image-level adaptation. For feature-level adaptation, a common practice in previous studies employs an adversarial method [14, 15], which considers features from the source and target domains aligned if they cannot be distinguished by a trained discriminator. But adversarial methods tend to generate a narrow range of feature distributions to fool the discriminator. When different images share similar feature distributions, trained models would have poor generalization performance. On the other hand, to perform category-level feature adaptation, some existing methods use category anchors computed in the source domain to align the two domains [16, 17], which can be regarded as imposing hard constraints on category-level feature distributions. This method ignores feature distances across different categories, and categories with similar feature distributions in the source domain may still have similar ones in the target domain, resulting in erroneous pseudo-labels when no supervision signals are available in the target domain. Our experiments demonstrate that imposing soft regularization on category-level feature distributions by adjusting the relative magnitude of inter-category and intra-category feature distances can improve model capacity. According to the above analysis, performing either image-level adaptation or feature-level adaptation alone cannot address domain shifts adequately. Moreover, existing work on UDA for semantic segmentation lacks a unified approach to minimize domain shifts.
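To make the notion of training-free image-level alignment concrete, consider the following minimal sketch (our illustration only; the global photometric alignment module proposed in this paper is more elaborate): it matches the per-channel statistics of a source image to a target reference without any training.

```python
import numpy as np

def photometric_align(src, ref, eps=1e-6):
    """src, ref: float images of shape (H, W, 3) with values in [0, 1]."""
    mu_s = src.mean(axis=(0, 1), keepdims=True)   # per-channel source statistics
    sd_s = src.std(axis=(0, 1), keepdims=True)
    mu_r = ref.mean(axis=(0, 1), keepdims=True)   # per-channel target statistics
    sd_r = ref.std(axis=(0, 1), keepdims=True)
    out = (src - mu_s) / (sd_s + eps) * sd_r + mu_r
    return np.clip(out, 0.0, 1.0)
```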
Therefore, we approach the problem from both perspectives and propose a novel and efficient pipeline that unifies image-level and feature-level adaptation. For image-level domain shifts, we propose two novel and training-free image-level operations, called global photometric alignment and global texture alignment, to adapt images from the source domain to the target domain. However, image-level adaptation alone does not guarantee domain alignment in the feature space. Therefore, we devise a global manifold alignment module for feature-level adaptation. This module represents the source domain feature manifold with a set of atoms, and any pixel feature from the source domain or the target domain can be projected onto this manifold. By minimizing the projection errors between the input features and the manifold, all source and target domain features are aligned to the same manifold. To perform category-level feature adaptation, we also introduce two category-level feature distribution regularization methods: a category-oriented triplet loss is proposed in the source domain to softly regularize category centers by enlarging the margin between inter-category and intra-category feature distances. It is only adopted in the source domain because the measurement of inter-category and intra-category distances requires reliable annotations that only exist in the source domain. The category-level feature adaptation method applied to the target domain is self-supervised consistency regularization. This regularization makes the prediction on an augmented target image consistent with the pseudo-label of the corresponding non-augmented image, thus forcing the class labels of similar semantic contents to be consistent in the target domain. By addressing domain shifts from all perspectives simultaneously, our proposed method is capable of achieving significant performance improvements, as experimental results demonstrate. Domain adaptive semantic segmentation methods can be applied to either synthetic source images or real source images as long as there exist significant domain gaps. For the application to synthetic source images, we follow the common practice [11, 13, 14, 15, 16, 18] and use the GTA5\(\rightarrow\)Cityscapes and SYNTHIA\(\rightarrow\)Cityscapes benchmarks to evaluate our proposed domain adaptation algorithm. In addition to synthetic source data, we also construct a new task on two open-source real-world endoscopic image datasets, Hyper-Kvasir [19] and Piccolo [20]. This task can serve as a new medical image benchmark for future studies in domain adaptive semantic segmentation. Experimental results on all three benchmarks demonstrate that our proposed method is capable of achieving significant performance improvements over existing state-of-the-art algorithms. This paper is an extension of [18], whose contributions can be summarized as follows: * A novel image-to-feature domain adaptive semantic segmentation pipeline is proposed to seamlessly combine coarse image-level adaptation with category-level feature distribution regularization. * Two novel and effective category-level regularization methods are proposed to deal with the source and target domain shifts, respectively. The first one is a category-oriented triplet loss which regularizes category centers in the source domain (a toy sketch follows the contribution lists below), and the second one performs target domain consistency regularization over augmented target domain images.
* The proposed method in [18] outperforms all previous methods, achieving state-of-the-art performance on both the GTA5\(\rightarrow\)Cityscapes and SYNTHIA\(\rightarrow\)Cityscapes benchmarks.

Compared to the conference version [18], this paper gives a more complete introduction and analysis of the proposed non-adversarial image-to-feature domain adaptive semantic segmentation pipeline. We provide more insights and discussions about the modules proposed in [18]. More importantly, we extend our work in [18] by introducing global manifold alignment in the high-level feature space. This manifold alignment algorithm serves as a feature-level adaptation strategy complementary to the global photometric alignment proposed in [18]. An auxiliary data augmentation scheme for global texture alignment is also proposed to reduce the domain gap caused by texture variations. Experimental results demonstrate that our proposed global manifold alignment and global texture alignment modules make our method more robust and achieve new state-of-the-art performance. To sum up, this paper has the following new contributions:

* A manifold alignment algorithm is proposed to represent the high-level feature space via dimension reduction and clustering algorithms. To the best of our knowledge, this is the first piece of work that tackles unsupervised domain adaptive semantic segmentation with explicit manifold modeling. All related ablation studies have been conducted for this new module.

* Global texture alignment is proposed as a data augmentation scheme for domain adaptive semantic segmentation. It reduces the sensitivity of the trained model with respect to domain-specific textures.

* For synthetic source data, our updated method outperforms all previous UDA methods by a large margin, achieving new state-of-the-art performance on both the GTA5\(\rightarrow\)Cityscapes and SYNTHIA\(\rightarrow\)Cityscapes benchmarks.

* We further construct a new medical image domain adaptive semantic segmentation task on the basis of two open-source real-world endoscopic image datasets. Our proposed method also achieves state-of-the-art performance on this new task.

## 2 Related Work

**Photometric Alignment.** Previous works [21, 22, 23] in unsupervised domain adaptation for image classification do not pay attention to image-to-image translation. However, it has been proven that a model trained with source images transferred into the target domain style can significantly improve the final performance in semantic segmentation tasks [14, 16]. This is perhaps because deep features for semantic segmentation are relatively more sensitive to local information compared to image classification. In order to achieve image-level photometric alignment, adversarial methods have been widely used in previous work on domain adaptation [12, 14, 17, 24, 25, 26, 27, 28], such as GAN [9, 29] and CycleGAN [10]. These GAN-based methods can transfer the styles of the images in the target domain to the source domain and thus significantly reduce image-level photometric differences [12, 14, 17]. Then, these style-transferred source domain images are used to train a segmentation model. Because they are photometrically aligned with the target domain images, models trained with these style-transferred source domain images usually yield better performance than models trained with the source domain images only [17]. However, it is also noted that adversarial models are unstable during training.
Previous work has shown that image-level adversarial methods generally convert the source domain image-level distribution to the one in the target domain to improve the performance of the domain adapted model [14, 17]. But it is still an open question whether the distribution of style-transferred images roughly covers the whole target domain image-level distribution or just a small part of it. Non-adversarial photometric alignment methods for unsupervised semantic segmentation are rare. One recent line of research is the Fourier Domain Adaptation proposed in [11]. The motivation is that the low-frequency component of an image contains the major photometric information, and replacing the low-frequency component of a source domain image with that of its reference image in the target domain could align the photometric information between the domains. However, the decomposition of frequency components is very sensitive to the image's content, and simply replacing the low-frequency information of an image with that of another image often introduces extra noise and leaves unsatisfactory visual artifacts. According to their experiments, the performance of a model trained on the frequency-aligned samples also relies heavily on a multi-band ensemble of multiple models [11]. Unlike Fourier Domain Adaptation, our proposed method is directly applied to color channels without frequency decomposition, achieving performance and image quality comparable to the generative methods and superior to the Fourier Transformation-based method. Moreover, our proposed method only consists of several image-level operations which do not require standalone training and can be used with arbitrary source-target image pairs.

**Adversarial Methods for Domain Adaptation.** There are traditional manifold learning methods from before the deep learning era that model high-dimensional feature spaces [30, 31, 32, 33], but they are usually computationally costly when transplanted to deep learning applications. Previous work on handling feature spaces in UDA typically adopts adversarial methods [14, 15], which do not directly model the feature manifold, but consider features from the source and target domains aligned if they are indistinguishable by a trained discriminator. However, generators trained by adversarial methods are inclined to produce outputs with similar feature distributions [34]. They can surely reduce cross-domain feature distribution discrepancies and make image features agnostic to the input domain. However, they also reduce the diversity of image-level feature distributions within the same domain. It is difficult to visualize high-dimensional feature distributions resulting from adversarial methods, but we can take style-transferred RGB images generated by adversarial methods as an example. As shown in Figure 8c, all images generated by GAN are dark and smooth regardless of the diverse image-level color distributions in the target domain. This phenomenon is called the mode collapse problem and is detrimental to the generalization capability of the domain adapted model in the target domain. Most recent algorithms [11, 15, 16] choose to remove adversarial methods from their last stages due to this mode collapse problem. Our approach differs from adversarial methods in that we model the feature manifold directly by learning, from the source domain, a feature manifold denoted by a set of representative feature vectors.
Then, we propose a pixel feature projection loss that learns to project pixel features from both domains onto the source domain feature manifold using these representative feature vectors. Minimizing the projection errors from both domains therefore benefits domain alignment from a feature-level perspective.

**Category-Based Methods.** The distribution of category proportions can be very different between the source domain and the target domain. Existing work [15, 24, 27, 35] typically utilizes category labels/predictions to enforce global semantic constraints on the category distribution of predicted labels in the target domain. Similar to their counterparts in image classification [21, 22, 23], some previous works in semantic segmentation (e.g. [16] and [17]) take one step further to utilize category information: the penultimate image features, which are used for generating pseudo-labels in the output layer in the target domain, are mapped to their corresponding counterparts in the source domain. Another concurrent work [36] proposes to learn category prototypes online and correct pseudo-labels according to distance measurements between pixel features and those learned category prototypes, which is an improved version of [16]. However, the category feature centroids used in [16, 36], or the instance features used in [17], only serve as anchors for category-based feature adaptation. The margins between different categories are not explicitly enlarged. This is a problematic alignment strategy because category centroids close to each other in the source domain are still difficult to separate in the target domain. Therefore, we propose a method that differs from theirs in two major aspects: first, a category-oriented triplet loss is proposed for the source domain to impose a soft constraint that regularizes the category centers of different categories. This approach actively makes the inter-category distances in the high-level feature space larger than the intra-category distances of a given category by a specified margin. Second, we enforce the predictions on augmented target domain images to be consistent with the pseudo-labels generated by the segmentation model for the corresponding non-augmented images. This is essentially a self-supervision based consistency regularization method, and its design philosophy is based on the fact that the supervision signal in the target domain is weak due to the lack of confident pseudo-labels.

## 3 Method

### _Algorithm Pipeline_

The underlying philosophy of our proposed pipeline is straightforward: first, we exploit the photometric differences between the two domains to coarsely align the source domain images with the target domain images and minimize the image-level domain shifts, and the high-frequency distribution of the target domain is also randomly transferred into the source domain images; then, we perform feature-level adaptation by aligning pixel features from both domains with the feature manifold generated by the coarsely adapted model, regardless of their categories; finally, we impose soft constraints on inter-class center distances and intra-class feature variations to regularize category-level feature distributions. An overview of the pipeline is presented in Figure 1 and described below.
**Settings.** Let the labeled source domain dataset be \(\mathbb{D}^{s}=\{(\mathbf{I}_{m}^{s},\mathbf{Y}_{m}^{s})\}_{m=1}^{N_{f}^{s}}\), where \(\mathbf{I}_{m}^{s}\) is a source image, \(\mathbf{Y}_{m}^{s}\) is the pixel-level annotation of \(\mathbf{I}_{m}^{s}\), and \(N_{f}^{s}\) is the number of images in the source domain dataset. The target domain dataset \(\mathbb{D}^{u}\) contains a large number of unlabeled images, \(\mathbb{D}^{u}=\{\mathbf{I}_{n}^{u}\}_{n=1}^{N_{f}^{u}}\). We assume the shape of all images is \(h\times w\times 3\), and the number of target classes to be segmented is \(M_{c}\). Hence, we have \(\mathbf{Y}_{m}^{s}\in\{1,2,\cdots,M_{c}\}^{h\times w}\). The purpose is to learn a semantic segmentation model for the target domain.

**Step 0: Image-level Adaptation.** Given a source domain image \(\mathbf{I}_{m}^{s}\) in the training batch and a randomly selected target domain reference image \(\mathbf{I}_{n}^{u}\), our proposed GPA module converts \(\mathbf{I}_{m}^{s}\) and \(\mathbf{I}_{n}^{u}\) into the Lab color space as \((\mathbf{L}_{m}^{s},\mathbf{a}_{m}^{s},\mathbf{b}_{m}^{s})\) and \((\mathbf{L}_{n}^{u},\mathbf{a}_{n}^{u},\mathbf{b}_{n}^{u})\), respectively. The histogram mapping function \(f_{match}(\cdot)\) is then used to process both the \(\mathbf{a}_{m}^{s}\) and \(\mathbf{b}_{m}^{s}\) channels, and the gamma correction function \(f_{gamma}(\cdot)\) is applied to \(\mathbf{L}_{m}^{s}\) to form \(\big(f_{gamma}(\mathbf{L}_{m}^{s}),f_{match}(\mathbf{a}_{m}^{s}),f_{match}(\mathbf{b}_{m}^{s})\big)\). After the mappings, the image is converted back to RGB space to generate the aligned image \(\widetilde{\mathbf{I}}_{m}^{s}\). All these randomly generated adapted images are used to construct the adapted source domain training set \(\widetilde{\mathbb{D}}^{s}=\{(\widetilde{\mathbf{I}}_{m}^{s},\mathbf{Y}_{m}^{s})\}_{m=1}^{N_{f}^{s}}\) for each training epoch. Then, a stochastic function \(\tau_{1}(\cdot)\) is applied to the adapted source domain training set \(\widetilde{\mathbb{D}}^{s}\) to produce an augmented version. A segmentation model \(\mathcal{F}_{0}\) is then trained on the augmented style-transferred source domain images \(\tau_{1}(\widetilde{\mathbb{D}}^{s})\) with the cross-entropy loss \(\mathcal{L}_{seg}\).

**Step 1: Feature-level Adaptation.** The aforementioned image-level adaptation only diminishes the image-level domain shifts between the source and target domains. But image-level adaptation operations do not guarantee the adaptation of high-level features, because image components such as textures are not altered by image-level photometric operations and still impact the high-level features. Therefore, we further modify a random subset of the photometrically aligned images, and make their texture-related high-frequency components follow the corresponding distributions in the target domain. Let \(\widetilde{\mathbf{I}}_{m}^{s}\) be a photometrically aligned image whose texture components are further updated. The resulting image \(\widehat{\mathbf{I}}_{m}^{s}\) is the actual input to the segmentation model in this step. We also introduce a global manifold alignment module to tackle the feature-level domain shifts. Before training a new segmentation model, we learn a representation of the feature manifold in the source domain offline. We first apply the initial model \(\mathcal{F}_{0}\) to all source domain images to obtain their feature maps and prediction probability maps.
Correctly classified feature vectors from these feature maps are randomly sampled to form a matrix \(\mathcal{X}\). Then PCA and K-Means clustering are applied to \(\mathcal{X}\) to learn a feature manifold represented by a set of cluster centers \(\mathbf{z}\) in a dimension-reduced feature subspace. When a new segmentation model is trained, features from both the source and target domains are projected onto this manifold, and the projection error, \(\mathcal{L}_{mfd}\), is minimized. In addition to the cross-entropy loss \(\mathcal{L}_{seg}\) and the manifold projection error \(\mathcal{L}_{mfd}\), two loss functions for category-level feature distribution regularization are also adopted in the training process. The category center \(\mathbf{f}_{c}\) for every category \(c\) is calculated as the \(L_{2}\)-normalized mean of all pixel features from category \(c\) in the source domain. The first of the two loss functions is a category-oriented triplet loss \(\mathcal{L}_{triplet}\) defined over the style-transferred source domain dataset \(\widetilde{\mathbb{D}}^{s}\) to enlarge the inter-category distances and minimize intra-category variances. In the target domain, the pseudo-label at a certain pixel location and its associated confidence are defined according to the prediction probability maps produced by the initial segmentation model \(\mathcal{F}_{0}\). Pseudo-labels with confidence higher than an adaptive threshold are considered valid samples, and are used to define a target domain consistency loss \(\mathcal{L}_{cst}\) to regularize category-level feature distributions in the target domain. The remaining pixels are left out during back-propagation. We fine-tune the segmentation model \(\mathcal{F}_{0}\) for \(U\) iterations by minimizing \(\mathcal{L}_{seg}+\mathcal{L}_{mfd}+\mathcal{L}_{triplet}+\mathcal{L}_{cst}\) to produce a new segmentation model \(\mathcal{F}_{1}\) for the current step.

**Step 2 to K: Iterative Self-Supervised Training.** The model \(\mathcal{F}_{1}\) trained in Step 1 can be further improved with iterative steps similar to Step 1. Such an iterative approach is called self-supervised training and is widely adopted in the area of unsupervised domain adaptive semantic segmentation [11, 14, 16, 17]. The same Step 1 is performed, but the pre-trained model \(\mathcal{F}_{0}\) is replaced with \(\mathcal{F}_{i-1}\). Model \(\mathcal{F}_{i-1}\) is also used to update the manifold atoms \(\mathbf{z}\), the pseudo-labels, and the category centers \(\mathbf{f}_{c}\). This process is repeated \(K-1\) times. Refined pseudo-labels generated by the models from each stage further improve the segmentation performance and reduce the domain gap (Figure 2). But erroneous pseudo-labels also accumulate false supervision signals and limit the magnitude of the performance improvement. Our proposed image-to-feature pipeline is shown in Figure 1.

### _Global Photometric Alignment_

Since global domain shifts are mostly related to low-level image attributes, global photometric alignment is proposed in our work to transfer low-level image attributes of the target domain to source domain images. It is observed that the spatial lightness distribution of an image can be very complicated in different scenarios. It is also important to note that directly operating on RGB channels would cause severe artifacts and fake colors. In contrast, the spatial color distributions of the \(a\) and \(b\) color channels always have similar bell-shaped histograms.
Therefore, we treat lightness and color differently: we perform classic histogram matching [37] between the source domain image and the target domain reference image only on the color channels \(a\) and \(b\), to avoid introducing the artifacts commonly seen in histogram matching results.

**Lightness Gamma Correction.** On the other hand, the \(L\) channel is much more sophisticated under different circumstances. This is because light interacts with the 3D structure of a scene in a complicated manner. A simple histogram matching function results in large areas of overexposure and fake structures. Thus, instead of using histogram matching for every histogram bin to prescribe a strict mapping, we choose to constrain the mean value of the lightness channel in the source domain image and make it equal to the mean value of the target domain reference image. Because a mean-variance policy might make pixel values smaller than 0 or larger than 1, we choose the power-law function, which is also widely used in gamma correction. But the difference between our proposed method and classic gamma correction is that our coefficients for the power-law function are not pre-defined by users. They are automatically calculated for given source-target image pairs. Specifically, the power-law function can be written as \(f_{gamma}(L)=L^{\gamma}\), where \(L\) is the normalized lightness value from 0 to 1 at each pixel location, and \(\gamma=1\) yields the identity transformation. The mean value constraint can then be written as

\[\sum_{L}f_{gamma}(L)h_{m}^{s}(L)=\sum_{L}L^{\gamma}h_{m}^{s}(L)=\sum_{L}Lh_{n}^{u}(L), \tag{1}\]

where \(h_{m}^{s}\) is the lightness histogram of a source image \(\mathbf{I}_{m}^{s}\), and \(h_{n}^{u}\) is the lightness histogram of a target reference image \(\mathbf{I}_{n}^{u}\). In practice, we introduce a regularization term \(\beta\) to prevent \(\gamma\) from deviating too much from 1. Thus, \(\gamma\) can be solved numerically via the following nonlinear optimization,

\[\gamma^{*}=\arg\min_{\gamma}\Big(\sum_{L}L^{\gamma}h_{m}^{s}(L)-\sum_{L}Lh_{n}^{u}(L)\Big)^{2}+\beta(\gamma-1)^{2}. \tag{2}\]

This optimization problem is a simple convex optimization with only one variable \(\gamma\), and can be easily solved with a few steps of gradient descent. Source-target image pairs are generated randomly on the fly during training epochs, because our proposed GPA is highly efficient and does not require training, in contrast to GAN-based methods [14, 17]. The process of the proposed GPA module is illustrated in Figure 3.

Fig. 1: (a) The pipeline consists of one image-level adaptation stage and \(K\) feature-level adaptation stages. (b) Image-level adaptation is first implemented using the global photometric alignment operation. (c) The obtained model \(\mathcal{F}_{i}\) is then used to compute pseudo-labels, manifold atoms \(\mathbf{z}\), category centers \(\mathbf{f}_{c}\), and category thresholds \(t_{c}\), as well as to initialize the segmentation model for the subsequent feature-level adaptation stages in an iterative self-supervised manner.
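To make the GPA operation concrete, the following is a minimal NumPy/scikit-image sketch of histogram matching on the \(a/b\) channels and mean-constrained gamma correction on the \(L\) channel. The learning rate and iteration count of the gradient-descent solver for Eq. (2) are illustrative assumptions, pixel means stand in for the explicit histogram sums of the paper, and the input is assumed to be float RGB in \([0,1]\).

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from skimage.exposure import match_histograms

def solve_gamma(L_src, L_ref, beta=0.01, lr=0.1, steps=200):
    """Solve Eq. (2) for the exponent gamma by plain gradient descent.
    L_src and L_ref are lightness channels normalized to [0, 1]."""
    target_mean = L_ref.mean()
    L = np.clip(L_src, 1e-6, 1.0)          # guard log(0) below
    gamma = 1.0
    for _ in range(steps):
        diff = (L ** gamma).mean() - target_mean
        # d/dgamma of the squared term plus the beta regularizer
        grad = 2.0 * diff * (L ** gamma * np.log(L)).mean() \
             + 2.0 * beta * (gamma - 1.0)
        gamma -= lr * grad
    return gamma

def global_photometric_alignment(src_rgb, ref_rgb, beta=0.01):
    """Histogram matching on the a/b channels plus mean-constrained gamma
    correction on the L channel, then conversion back to RGB."""
    src, ref = rgb2lab(src_rgb), rgb2lab(ref_rgb)
    gamma = solve_gamma(src[..., 0] / 100.0, ref[..., 0] / 100.0, beta)
    out = src.copy()
    out[..., 0] = np.clip(src[..., 0] / 100.0, 0.0, 1.0) ** gamma * 100.0
    for c in (1, 2):                        # a and b channels
        out[..., c] = match_histograms(src[..., c], ref[..., c])
    return np.clip(lab2rgb(out), 0.0, 1.0)
```

Because the whole operation amounts to a handful of per-image array manipulations, it can be applied on the fly to every randomly drawn source-target pair during training, which is what makes the training-free nature of GPA practical.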
### _Global Texture Alignment_

As discussed in previous work [38], CNN-based models are sensitive to high-frequency information. We observe that synthetic images have different and often stronger high-frequency information in comparison to real-world images, which jeopardizes the generalization performance of our model in the target domain. Although the proposed GPA module maintains the diversity of the source domain dataset, it modifies the photometric properties of an image rather than its high-frequency texture. To alleviate this problem, a global texture alignment module is proposed as an auxiliary data augmentation scheme. The idea is straightforward: we modify the high-frequency components of a random subset of the source domain images to make their distribution in each image more consistent with that of the corresponding reference image sampled from the target domain. The process is illustrated in Figure 1. This data augmentation scheme teaches the segmentation model to ignore texture information and focus on structural information. To be specific, a bilateral filter \(f_{bilateral}(\cdot)|_{d,\sigma_{c},\sigma_{s}}\) is applied to a photometrically aligned source domain image \(\widetilde{\mathbf{I}}^{s}\) to generate the filtered image \(\widehat{\mathbf{I}}^{s}=f_{bilateral}(\widetilde{\mathbf{I}}^{s})|_{d,\sigma_{c},\sigma_{s}}\). We use the bilateral filter to preserve image structures and modify the texture component only. In order to determine the parameters (\(d\), \(\sigma_{c}\) and \(\sigma_{s}\)) of the bilateral filter, we quantify the distribution of high-frequency image components, and ensure that \(\widehat{\mathbf{I}}^{s}\) and its target domain reference image \(\mathbf{I}^{u}\) have similar distributions of high-frequency components. We convert both \(\widehat{\mathbf{I}}^{s}\) and \(\mathbf{I}^{u}\) to grayscale images and apply the Laplacian operator \(f_{Lap}\) to obtain their high-frequency components \(H^{s}=f_{Lap}(\widehat{\mathbf{I}}^{s})\) and \(H^{u}=f_{Lap}(\mathbf{I}^{u})\), respectively. Let \(h(H^{s})\) and \(h(H^{u})\) be the respective histograms of \(H^{s}\) and \(H^{u}\), representing their distributions of high-frequency components. To align \(h(H^{s})\) and \(h(H^{u})\), the parameters of the bilateral filter are determined by solving the following optimization problem,

\[d^{*},\sigma_{c}^{*},\sigma_{s}^{*}=\arg\min_{d,\sigma_{c},\sigma_{s}}KL\Big(\sum_{s}h(H^{s}),\sum_{u}h(H^{u})\Big). \tag{3}\]

By applying the bilateral filter with optimized parameters, the KL divergence between the distributions of the high-frequency components of \(\widehat{\mathbf{I}}^{s}\) and \(\mathbf{I}^{u}\) can be significantly reduced. Note that \(d\), \(\sigma_{c}\) and \(\sigma_{s}\) are fixed once optimized. To introduce stochasticity, each source domain image has a 50% chance of being bilaterally filtered before being fed to the segmentation model. We find that adding this data augmentation scheme in the image-level adaptation step would damage the final performance, and we therefore only use it as an additional source domain data augmentation scheme during the feature-level adaptation steps.
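The parameter search in Eq. (3) can be sketched as a simple grid search; the candidate grids, the histogram binning, and the use of OpenCV's `bilateralFilter` below are our own illustrative choices under the assumption of uint8 RGB inputs, not the exact procedure of the paper.

```python
import cv2
import numpy as np

def highfreq_hist(img_rgb, bins=64):
    """Histogram of Laplacian responses, i.e. h(f_Lap(I)) in the text."""
    gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    hist, _ = np.histogram(lap, bins=bins, range=(-255, 255))
    return hist.astype(np.float64) + 1e-8   # avoid zeros in the KL term

def kl_divergence(p, q):
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def fit_bilateral_params(src_imgs, ref_imgs,
                         d_grid=(5, 9), sc_grid=(10, 25, 50, 75),
                         ss_grid=(25, 50)):
    """Pick (d, sigma_c, sigma_s) minimizing the KL divergence between the
    aggregated high-frequency histograms of filtered source images and
    target references, as in Eq. (3)."""
    h_ref = sum(highfreq_hist(img) for img in ref_imgs)
    best, best_kl = None, np.inf
    for d in d_grid:
        for sc in sc_grid:
            for ss in ss_grid:
                h_src = sum(highfreq_hist(cv2.bilateralFilter(img, d, sc, ss))
                            for img in src_imgs)
                kl = kl_divergence(h_src, h_ref)
                if kl < best_kl:
                    best, best_kl = (d, sc, ss), kl
    return best, best_kl
```

Once fitted, the parameters stay frozen, and each source image is filtered with 50% probability during the feature-level adaptation steps, as described above.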
### _Training Loss_

The only training loss during the image-level adaptation step is the segmentation cross-entropy loss. The overall loss function we use during the feature-level adaptation steps consists of four parts: the cross-entropy segmentation loss, the global manifold alignment loss, the category-oriented triplet loss, and the target domain consistency regularization loss.

Fig. 4: By minimizing the projection error of source/target domain features onto the manifold, our proposed manifold loss mitigates the discrepancies between the source domain feature distribution and the target domain feature distribution.

Fig. 3: (a) An input source domain image and (b) a randomly chosen target domain image are aligned in (c) the Lab channels to generate (d) the aligned image.

Fig. 2: Iterative self-supervised training further improves the segmentation performance.

**Global Manifold Alignment.** Methods such as Locally Linear Embedding (LLE) and Isomap are commonly used to depict manifolds, but they are too computationally costly for gradient back-propagation based training. Here we use the K-Means algorithm to simplify the computation. Whereas LLE uses a piecewise linear model to approximate a high-dimensional feature manifold, K-Means can be considered a piecewise constant approximation of the manifold. Every centroid obtained by K-Means is a constant approximation of a local region. By approximating the manifold with a set of representative feature vectors, we can further align features from the source and target domains. In order to acquire feature representations, we first apply the segmentation model obtained in the previous step, \(\mathcal{F}_{i-1}\), to each source image \(\mathbf{I}_{m}^{s}\) to compute the feature map of the second-to-last layer, \(\mathbf{X}_{m}^{s}\), and the final prediction probability map \(\mathbf{P}_{m}^{s}\). The feature vector and prediction probability at a given pixel location \(j\) are denoted \(\mathbf{x}_{j}^{s}\) and \(\mathbf{p}_{j}^{s}\), respectively. The true category label at location \(j\) is denoted \(\mathbf{y}_{j}^{s}\). Next, the predicted probabilities \(\mathbf{p}_{j}^{s}\) are compared with the true category labels \(\mathbf{y}_{j}^{s}\), and the correctly classified feature vectors are randomly sampled to form the source domain sample matrix \(\mathcal{X}\in\mathbb{R}^{N_{p}\times D_{c}}\), where \(N_{p}\) is the total number of sampled feature vectors and \(D_{c}\) is the dimensionality of each feature vector. Then, principal component analysis (PCA) is applied to \(\mathcal{X}\) to keep around 90% of the total explained energy and obtain the dimension-reduced version \(R(\mathcal{X})\in\mathbb{R}^{N_{p}\times D_{c^{\prime}}}\), where \(D_{c^{\prime}}\ll D_{c}\). Afterwards, the classic K-Means clustering algorithm is applied to \(R(\mathcal{X})\) to find representative locations on the feature manifold. These locations are denoted \(\mathbf{z}\in\mathbb{R}^{N_{z}\times D_{c^{\prime}}}\), and are essentially the atom vectors of the source domain feature manifold. Any pixel feature from the source domain (\(\mathbf{x}_{j}^{s}\)) or the target domain (\(\mathbf{x}_{j}^{u}\)) can be projected onto the subspace spanned by the atoms in \(\mathbf{z}\), and the projection is represented as \(\mathbf{\hat{x}}_{j}^{\prime}=\mathbf{w}^{T}\mathbf{z}\) (we omit the superscript for simplicity). Let \(R^{-1}\) be the reconstruction operator of PCA. The projection error \(||R^{-1}(\mathbf{\hat{x}}_{j}^{\prime})-\mathbf{x}_{j}||^{2}\) is considered the deviation from the source domain manifold and is part of the projection error loss \(\mathcal{L}_{mfd}\).
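The offline atom-learning step can be realized directly with scikit-learn. The snippet below is a minimal sketch in which the sampling of correctly classified pixel features is assumed to have been done beforehand, and the random stand-in features in the usage line are for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def learn_manifold_atoms(features, n_atoms=64, energy=0.90, seed=0):
    """features: the N_p x D_c matrix of correctly classified source pixel
    features. Returns the fitted PCA (providing R and R^{-1}) and the
    N_z x D_c' atom matrix z."""
    pca = PCA(n_components=energy, random_state=seed)  # keep ~90% energy
    reduced = pca.fit_transform(features)              # R(X)
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=seed)
    km.fit(reduced)
    return pca, km.cluster_centers_                    # (R, z)

# Usage with random stand-in features of dimension D_c = 256:
pca, atoms = learn_manifold_atoms(np.random.randn(10000, 256).astype(np.float32))
```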
The motivation of our proposed global manifold alignment is straightforward: minimizing the source domain projection error makes the feature manifold smoother, and minimizing the target domain projection error decreases the distance (i.e., improves the alignment) between the feature distributions of the source and target domains. Specifically, we adopt an attention mechanism to calculate the linear coefficients of the atom vectors. The manifold projection error \(\mathcal{L}_{mfd}\) and the reconstructed feature vector \(\mathbf{\hat{x}}_{j}^{\prime}\) can be computed using the following equations,

\[\begin{split}\mathcal{L}_{mfd}&=\sum_{j}||R^{-1}(\mathbf{\hat{x}}_{j}^{\prime})-\mathbf{x}_{j}||^{2}\\ \mathbf{\hat{x}}_{j}^{\prime}&=\mathbf{w}^{T}\mathbf{z}\\ \mathbf{w}^{T}&=\text{softmax}\left(\frac{(R(\mathbf{x}_{j})\mathbf{W}_{1}^{T})(\mathbf{W}_{2}\mathbf{z}^{T})}{\sqrt{N_{z}}}\right),\end{split} \tag{4}\]

where \(R(\mathbf{x}_{j})\) is the \(j\)-th row of \(R(\mathcal{X})\), and \(\mathbf{W}_{1}\in\mathbb{R}^{N_{h}\times D_{c^{\prime}}}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{N_{h}\times D_{c^{\prime}}}\) are trainable linear matrices. They are introduced to further lower the memory overhead of the attention mechanism. They also project the manifold and all features into a lower-dimensional space, and two distinct projection matrices enable better alignment between the projected manifold and the features. \(N_{h}\) is a hyperparameter representing the number of hidden neurons, and \(\mathbf{w}\in\mathbb{R}^{N_{z}}\) is the vector of atom coefficients. The calculation of the manifold projection loss is illustrated in Figure 4. Although the global manifold loss is defined for the global alignment of features, it cannot be adopted in the image adaptation stage because it relies on a pre-trained model to provide the manifold atoms \(\mathbf{z}\).
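In PyTorch, the attention-based projection of Eq. (4) can be written as a small module. The buffer/linear-layer organization below is our own sketch, with \(R\) and \(R^{-1}\) taken from the fitted PCA (its `components_` and `mean_` attributes), and the mean reduction over pixels replacing the sum of Eq. (4).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldProjectionLoss(nn.Module):
    """Sketch of Eq. (4): project pixel features onto the atom subspace via
    attention and penalize the reconstruction error in the original space."""

    def __init__(self, atoms, pca_components, pca_mean, n_hidden=32):
        super().__init__()
        self.register_buffer("z", torch.as_tensor(atoms, dtype=torch.float32))
        self.register_buffer("P", torch.as_tensor(pca_components, dtype=torch.float32))
        self.register_buffer("mu", torch.as_tensor(pca_mean, dtype=torch.float32))
        d_red = self.z.shape[1]                      # D_c'
        self.W1 = nn.Linear(d_red, n_hidden, bias=False)
        self.W2 = nn.Linear(d_red, n_hidden, bias=False)

    def forward(self, x):                            # x: (N_pix, D_c)
        r = (x - self.mu) @ self.P.t()               # R(x), shape (N_pix, D_c')
        logits = self.W1(r) @ self.W2(self.z).t()    # (N_pix, N_z)
        w = F.softmax(logits / self.z.shape[0] ** 0.5, dim=-1)
        x_hat = w @ self.z                           # w^T z, on the manifold
        recon = x_hat @ self.P + self.mu             # R^{-1}(x_hat)
        return ((recon - x) ** 2).sum(dim=-1).mean() # L_mfd over the batch
```

A typical construction would be `ManifoldProjectionLoss(atoms, pca.components_, pca.mean_)`, reusing the outputs of the atom-learning sketch above; pixel features from both domains are then fed through the same module.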
**Category-oriented Triplet Loss.** Although the aforementioned GPA and GMA modules can learn domain-invariant features to some extent, the losses used in the previous training do not explicitly control the category-wise feature distribution, and some category-sensitive domain shifts are overlooked. Pixel features from different categories are naturally distributed unevenly, and some category centers are close to each other. To tackle this issue, we propose a category-oriented triplet loss that pushes each pixel feature closer to the center of the category it belongs to and further away from the other category centers. Note that category centers are intentionally introduced to make the calculation of the category-oriented triplet loss practical. If we used a traditional triplet loss without category centers, we would need to store pairwise distances among all pixels, with a tremendous GPU memory overhead. The category center \(\mathbf{f}_{c}\) of category \(c\) is calculated as follows,

\[\mathbf{f}_{c}=G\Big(\frac{1}{N_{c}}\sum_{s}\sum_{j}\mathds{1}\left(\mathbf{y}_{j}^{s}=c\right)\mathbf{x}_{j}^{s}\Big), \tag{5}\]

where \(\mathbf{x}_{j}^{s}\) refers to the pixel-wise features in the penultimate feature map, \(\mathbf{y}_{j}^{s}\) is the ground truth pixel-wise label of a source domain image at pixel location \(j\), \(N_{c}\) refers to the total number of pixels in category \(c\), \(s\) refers to the source domain image index, and \(G(\cdot)\) is an \(L_{2}\) normalization function. Note that it is crucial to use the \(L_{2}\) normalization \(G(\cdot)\) to keep the category centers on the unit sphere and avoid scaling issues among stages. The category centers are updated after each training stage, allowing them to move further and further away from each other on the sphere surface. Our category-oriented triplet loss is formulated as follows,

\[\begin{split}\mathcal{L}_{triplet}=&\frac{1}{N_{s}}\sum_{s}\sum_{j}\max_{c,c\neq\mathbf{y}_{j}^{s}}\max\big(\left\|G(\mathbf{x}_{j}^{s})-\mathbf{f}_{\mathbf{y}_{j}^{s}}\right\|\\ &-\left\|G(\mathbf{x}_{j}^{s})-\mathbf{f}_{c}\right\|+\alpha,0\big),\end{split} \tag{6}\]

where \(N_{s}\) is the total number of pixels in all images, and \(\alpha\) is a prescribed margin. The loss is zero if every feature \(\mathbf{x}_{j}^{s}\) is at least \(\alpha\) closer to its corresponding category center \(\mathbf{f}_{\mathbf{y}_{j}^{s}}\) than to any other category center. Because the triplet loss focuses on hard samples, and reliable category labels for hard samples only exist in the source domain, the proposed category-oriented triplet loss is only applied to the source domain images.

Fig. 5: Our proposed category-oriented triplet loss exploits hard samples and further enlarges category margins. \(d_{pos}\) and \(d_{neg}\) represent the distances of positive and negative pairs, respectively.

The working principles of our proposed category-oriented triplet loss are illustrated in Figure 5. In cooperation with the proposed global photometric alignment and data augmentation in the source domain, our proposed triplet loss can exploit hard samples in the source domain that have been coarsely aligned to the target domain, and further improve the generalization capability of the trained model. The proposed category triplet loss can be considered complementary to the cross-entropy loss and the manifold projection loss.
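A compact PyTorch rendering of Eqs. (5) and (6) over a batch of pixel features is given below. Treating the batch as the full sum over images, and reducing the loss with a mean, are simplifications of our own; note that the maximization over \(c\neq\mathbf{y}_{j}^{s}\) in Eq. (6) reduces to taking the nearest other center, since the hinge grows as the negative distance shrinks.

```python
import torch
import torch.nn.functional as F

def category_centers(feats, labels, num_classes):
    """Eq. (5): L2-normalized per-category mean of source pixel features.
    feats: (N, D); labels: (N,) ground-truth category indices."""
    centers = torch.zeros(num_classes, feats.shape[1], device=feats.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = feats[mask].mean(dim=0)
    return F.normalize(centers, dim=1)       # G(.): keep centers on the sphere

def category_triplet_loss(feats, labels, centers, margin=0.2):
    """Eq. (6): each normalized feature must be at least `margin` closer to
    its own category center than to the nearest other center."""
    f = F.normalize(feats, dim=1)                          # G(x_j)
    dists = torch.cdist(f, centers)                        # (N, M_c)
    d_pos = dists.gather(1, labels.view(-1, 1)).squeeze(1)
    d_neg = dists.scatter(1, labels.view(-1, 1),
                          float("inf")).min(dim=1).values  # hardest negative
    return F.relu(d_pos - d_neg + margin).mean()
```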
**Target Domain Consistency Regularization.** The category-wise features are regularized by our proposed category-oriented triplet loss in the source domain, where annotated ground truth labels are available. However, the supervision signal is weak in the target domain, where no labeled data is provided. Consistency regularization is an important component of many recent state-of-the-art self-supervised learning algorithms; it utilizes unlabeled data by relying on the assumption that the model should output similar predictions when fed perturbed versions of the same image [39, 40]. Motivated by this, we propose a target domain consistency regularization method, shown in Figure 1, to perform category-level feature distribution regularization in the target domain. In the target domain, the pseudo-label at a certain pixel location is defined as the category corresponding to the largest component of the probability vector produced by the segmentation model \(\mathcal{F}_{i-1}\) trained in the previous step, and the largest component of the probability vector itself defines the confidence of the pseudo-label. We further pre-define a probability threshold \(P_{h}\) and a percentage threshold \(p\) shared by all categories. \(p\) is a constant but leads to a category-specific probability threshold \(P_{s,c}\), meaning that \(p\%\) of the pixels in category \(c\) have confidence above the threshold \(P_{s,c}\). Thus the final confidence threshold for category \(c\) is \(t_{c}=\min(P_{h},P_{s,c})\), and any pseudo-label with a confidence higher than \(t_{c}\) in category \(c\) is considered a valid sample.

Our proposed target domain consistency regularization is straightforward: given a target domain image \(\mathbf{I}_{n}^{u}\) and the trained segmentation model \(\mathcal{F}_{i-1}\), we extract the pseudo-label \(\hat{\mathbf{y}}_{n}^{u}\) by forwarding \(\mathbf{I}_{n}^{u}\) through \(\mathcal{F}_{i-1}\) and applying the \(\arg\max(\cdot)\) function to its output, and the corresponding pixel prediction is converted to a hard label vector \(\mathds{1}_{[c=\hat{\mathbf{y}}_{n,j}^{u}]}\); then, a stochastic function \(\tau_{2}(\cdot)\) is applied to \(\mathbf{I}_{n}^{u}\) to obtain a perturbed version \(\widetilde{\mathbf{I}}_{n}^{u}\); after that, we forward \(\widetilde{\mathbf{I}}_{n}^{u}\) through \(\mathcal{F}_{i}\) to obtain the prediction \(\widetilde{\mathbf{P}}_{n}^{u}\) on the perturbed image; finally, the prediction \(\widetilde{\mathbf{P}}_{n}^{u}\) is forced to be consistent with \(\hat{\mathbf{y}}_{n}^{u}\) by using a cross-entropy loss at pixel locations whose largest class probability is above the previously defined category-level confidence threshold \(t_{c}\). Note that the perturbed target domain image generated via the stochastic function \(\tau_{2}(\cdot)\) makes prediction harder. Thus more samples in the target domain are converted into hard samples, while the generation of pseudo-labels is unaffected. In this way, category-level feature distributions in the target domain are regularized under the supervision of valid pseudo-labels. The overall formula is defined as follows,

\[\begin{split}\mathcal{L}_{cst}&=\sum_{j}\mathds{1}\big(\max(\mathcal{F}_{i-1}(\mathbf{I}_{n}^{u})|_{j})\geq t_{c}\big)\,\text{CELoss}\big(\mathds{1}_{[c=\hat{\mathbf{y}}_{n,j}^{u}]},\widetilde{\mathbf{P}}_{n,j}^{u}\big),\\ \hat{\mathbf{y}}_{n,j}^{u}&=\arg\max\big(\mathcal{F}_{i-1}(\mathbf{I}_{n}^{u})|_{j}\big),\qquad\widetilde{\mathbf{P}}_{n,j}^{u}=\mathcal{F}_{i}(\widetilde{\mathbf{I}}_{n}^{u})|_{j}.\end{split} \tag{7}\]

It is essential to use the trained model \(\mathcal{F}_{i-1}\) rather than the model \(\mathcal{F}_{i}\) to generate pseudo-labels, because \(\mathcal{F}_{i}\) is still being trained and is unstable; fluctuating pseudo-labels generated by \(\mathcal{F}_{i}\) would be catastrophic to the training process. Experimental results illustrate that this consistency regularization method is simple yet effective. It strengthens the supervision signal in the target domain and improves the final performance.
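The thresholding and consistency loss of Eq. (7) might be implemented as follows; the use of `torch.quantile` for the \(p\%\) rule and the mean-style reduction over valid pixels are illustrative assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def category_thresholds(conf, pseudo, num_classes, P_h=0.9, p=10.0):
    """t_c = min(P_h, P_{s,c}), where P_{s,c} keeps the top p% most confident
    pixels of category c. conf/pseudo: flattened confidences and pseudo-labels
    collected over the target set."""
    t = torch.full((num_classes,), P_h)
    for c in range(num_classes):
        conf_c = conf[pseudo == c]
        if conf_c.numel() > 0:
            P_sc = torch.quantile(conf_c.float(), 1.0 - p / 100.0)
            t[c] = torch.minimum(t[c], P_sc)
    return t

def consistency_loss(logits_prev, logits_aug, thresholds):
    """Eq. (7): cross-entropy between predictions on the augmented image
    (from F_i) and pseudo-labels of the clean image (from the frozen F_{i-1}),
    restricted to pixels whose confidence exceeds their category's t_c."""
    probs = F.softmax(logits_prev.detach(), dim=1)       # (B, M_c, H, W)
    conf, pseudo = probs.max(dim=1)                      # (B, H, W)
    valid = (conf >= thresholds.to(conf.device)[pseudo]).float()
    ce = F.cross_entropy(logits_aug, pseudo, reduction="none")
    return (ce * valid).sum() / valid.sum().clamp(min=1.0)
```

The `detach` on the \(\mathcal{F}_{i-1}\) logits reflects the design choice discussed above: the pseudo-label producer is frozen, so gradients only flow through the prediction on the perturbed image.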
## 4 Experiments

### _Datasets and Implementation Details_

For the commonly used synthetic datasets, we follow the same evaluation settings as in [16]. Our proposed method is evaluated on the **GTA5** [42], **SYNTHIA** [43], and **Cityscapes** [44] datasets. The **Cityscapes** dataset is the target domain dataset, with 2,975 training images of size \(2048\times 1024\) and 500 validation images of the same resolution. **Cityscapes** has 19 categories of objects in total. **GTA5** and **SYNTHIA** are two source domain datasets of computer-generated synthetic images, which contain 24,966 training images of size \(1914\times 1052\) and 9,400 training images of size \(1280\times 760\), respectively. The **GTA5** dataset shares 19 common categories with the **Cityscapes** dataset, and all irrelevant categories are ignored during training. The **SYNTHIA** dataset shares 16 common categories with the **Cityscapes** dataset. Some previous work [11, 14] only trains and tests on a 13-category subset of the **SYNTHIA** dataset, or trains two models on both the subset and the whole set for better performance [16]. Here we follow the practice in [13, 15] and train a model only on the whole set, testing it under both settings.

In order to evaluate the performance of our proposed method on real-world source images, we construct a new domain adaptive semantic segmentation task, Kvasir\(\rightarrow\)Piccolo, on the basis of the two open-source datasets **Hyper-Kvasir** [19] and **Piccolo** [20]. The **Hyper-Kvasir** dataset is the source dataset and consists of 1,000 white-light (WL) gastrointestinal images. The image resolution of the **Hyper-Kvasir** dataset is not fixed and is roughly \(625\times 530\). The **Piccolo** dataset is the target dataset. Among its 1,302 narrow-band imaging (NBI) colonoscopy images of size \(854\times 480\), 1,161 are used as training images and 141 as validation images. We follow the rule in [41] to construct this new task specifically designed for medical images: the images were collected with different modes (NBI vs. WL), different locations (GI tract vs. colonoscopy) and different devices, creating a significant domain gap between the source and target domains.

According to Figure 1, the photometrically adapted source domain images are first used to train the initial segmentation model \(\mathcal{F}_{0}\) in the image-level adaptation step. Then, the model is trained in an iterative self-supervised manner with \(K=6\) and \(U=20k\). We compare to previous work in domain adaptive semantic segmentation [16, 17] based on self-supervision and set the total number of training iterations to \(140k\). As reported in [14], the best performance is achieved with \(P_{h}=0.9\) and \(p=10\) for pseudo-labels, and we follow this setting in our experiments. The regularization term \(\beta\) used by the GPA module in (2) is set to \(0.01\). For the proposed global texture alignment (GTEXA) module, \(d=5\), \(\sigma_{c}=75\) and \(\sigma_{s}=25\) are the optimized parameters of the bilateral filter for the GTA5\(\rightarrow\)Cityscapes and SYNTHIA\(\rightarrow\)Cityscapes tasks; the KL divergence is reduced roughly from 0.16 to 0.10 and from 0.43 to 0.07, respectively. Because images from both **Hyper-Kvasir** and **Piccolo** are real images, they have quite similar distributions of high-frequency components, and we use a gentle bilateral filter with \(d=5\), \(\sigma_{c}=10\) and \(\sigma_{s}=25\); the KL divergence stays at 0.10 before and after the bilateral filter is applied. For the proposed GMA module, \(D_{c^{\prime}}\) is set to keep roughly \(90\%\) of the explained energy of \(\mathbf{x}_{j}\), which is \(D_{c^{\prime}}=32\) for DeeplabV3+ and \(D_{c^{\prime}}=256\) for DeeplabV2, compared to \(D_{c}=256\) for the DeeplabV3+ model and \(D_{c}=2048\) for DeeplabV2, respectively. K-Means is used because it is the simplest clustering algorithm with which to validate our motivation. The number of cluster centers is set to \(N_{z}=64\) with hidden neuron dimension \(N_{h}=32\). Note that a larger \(N_{z}\) or a more advanced clustering method like KSVD might improve the performance, but is not pragmatic because of the memory consumption and computational complexity.

We adopt standard color-jittering as the stochastic function \(\tau_{1}(\cdot)\) in both the source and target domains, as in [15], in the image-level adaptation stages. We utilize standard color-jittering, elastic deformation [45], and standard random blurring in the feature-level adaptation stages. Elastic deformation is used to mimic the differences between shapes in different domains, and random blurring is used to simulate resolution differences. We use different settings for \(\tau_{1}(\cdot)\) and \(\tau_{2}(\cdot)\) because we observed that both elastic deformation and random blurring are strong data augmentations, and using them in the image adaptation stage would distort the distribution of the training data and undermine the final segmentation performance.
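As a concrete illustration, the two stochastic functions could be assembled from standard torchvision transforms (elastic deformation requires torchvision \(\geq\) 0.15); the specific jitter, blur, and deformation parameters below are illustrative assumptions rather than the paper's exact values.

```python
import torchvision.transforms as T

# tau_1: color-jittering only, used in the image-level adaptation stage.
tau_1 = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)

# tau_2: stronger perturbations for the feature-level adaptation stages.
tau_2 = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.ElasticTransform(alpha=50.0),                         # shape differences
    T.RandomApply([T.GaussianBlur(kernel_size=9)], p=0.5),  # resolution gap
])
```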
We follow the same experiment settings as in [18]. In addition to the DeeplabV3+ (ResNet101) [6] model discussed in [18], we also compare our proposed model with another commonly adopted segmentation model, DeeplabV2 (ResNet101), used by other state-of-the-art studies [14, 15, 17]. We implement our proposed method with PyTorch [46], and deploy our experiments on 4 NVIDIA GeForce 2080Ti GPUs, with 1 source domain image and 1 target domain image randomly selected and stored on each GPU for each back-propagation step. Stochastic gradient descent is used during the image-level adaptation, with a momentum of \(0.9\) and a weight decay of \(1e-4\). The learning rate is initially set to \(5e-4\) and is decreased using the polynomial learning rate policy with a power of \(0.9\) during training. For the feature-level adaptation steps, we halve the learning rate from the image-level adaptation to \(2.5e-4\) to fine-tune the previously trained models.

### _Comparisons with State-of-the-Art Methods_

In this section, we compare our method against the existing state-of-the-art methods [11, 13, 14, 15, 16, 17] on the GTA5\(\rightarrow\)Cityscapes, SYNTHIA\(\rightarrow\)Cityscapes and Kvasir\(\rightarrow\)Piccolo tasks. As shown in Table I, our proposed method outperforms all previous methods with all the different segmentation models. On the GTA5\(\rightarrow\)Cityscapes task, our model achieves a new state-of-the-art mIoU of \(58.2\%\) on DeeplabV3+ and \(53.3\%\) on DeeplabV2, which are \(8.0\%\) and \(2.9\%\) higher than the previous best results using the same backbones (Table I), respectively. On the SYNTHIA\(\rightarrow\)Cityscapes task, the performances of our proposed method are \(60.1\%\) and \(59.1\%\), which are \(7.6\%\) and \(5.0\%\) higher than those of the previous methods, respectively. Note that we do not include a comparison between our work and the concurrent work [36], because three major differences make the comparison unfair: (1) much stronger baseline models: [36] adopts the domain adapted model from [24] as the baseline model, which yields much stronger performance than ours; for example, our baseline model (\(37.6\%\)) is \(5.8\%\) lower than theirs (\(43.4\%\)) on the GTA5\(\rightarrow\)Cityscapes task; (2) much stronger segmentation networks: [36] uses a more advanced segmentation network proposed in [47, 48] instead of the standard DeeplabV2 that we use; (3) a much stronger augmentation strategy: RandAug [49] and Cutout [50] are utilized as data augmentations in [36], which are much stronger than our augmentations.
On the Kvasir\(\rightarrow\)Piccolo task, the segmentation performance of our proposed method is \(81.5\%\) and \(84.2\%\) on DeeplabV2 and DeeplabV3+ respectively, which are \(3.7\%\) and \(6.0\%\) higher than the previous best results, as shown in Table III. We re-implemented three general-purpose state-of-the-art methods [11, 15, 16] for this comparison. Our proposed method also significantly outperforms the domain adaptive segmentation algorithm in [41], which was specifically designed for endoscopic images.

Our method achieves the best performance in many important categories, including 'road', 'sidewalk', 'building', 'fence', 'vegetation', 'terrain', 'person', 'car', 'rider', 'truck', 'train', 'bus', 'motor', and 'bike'. In particular, our model performs best when classifying 'road', 'sidewalk', 'motor', and 'bike', even though some of these categories have very similar local appearances. This is because our proposed category-oriented triplet loss maximizes the inter-category distances and minimizes the intra-category distances by exploiting the most difficult samples in the source domain, which improves the generalization capability across different domains. Moreover, our proposed global manifold/texture alignment brings an extra \(2.1\%\) improvement in the final performance compared to the experiments in [18]. This is because global photometric alignment is conducted on the image-level inputs, and the features/textures between different domains are still not explicitly aligned. Finally, our proposed target consistency regularization also strengthens the relatively weak supervision signal in the target domain. It improves the segmentation accuracy of categories with large intra-category variances, such as 'building' and 'sky', by regularizing their feature distributions. According to our qualitative analysis, previously over-exposed white buildings are easily misclassified as sky but can be corrected by our proposed target consistency regularization.

Fig. 6: Qualitative examples of the comparisons between our method and CAG [16] on the GTA5\(\rightarrow\)Cityscapes task. Specifically, (a) Input images, (b) CAG [16], (c) Ours, (d) Labels.

One interesting fact is that our proposed method achieves larger overall performance improvements on the GTA5\(\rightarrow\)Cityscapes task than on the SYNTHIA\(\rightarrow\)Cityscapes task. This is because DeeplabV3+ adopts high-resolution feature maps and lower feature dimensions, which improves the segmentation performance, while the SYNTHIA dataset is mostly constituted of large objects, limiting the improvements that a DeeplabV3+ segmentation model can bring. Another interesting fact is that although our proposed GMA achieves clear performance improvements with both DeeplabV3+ and DeeplabV2, the performance improvement with DeeplabV2 is higher, as shown in Table I. This is because the feature dimension of DeeplabV2 is much larger than that of DeeplabV3+ (2048 vs. 256). This results in a more complicated feature manifold that is difficult to align with simple image-level photometric alignment. By modeling the manifold and minimizing the projection error, our proposed GMA can effectively align high-dimensional features. Although DeeplabV3+ is powerful, with the cooperation of our proposed GMA module, the performance of our proposed model on the SYNTHIA\(\rightarrow\)Cityscapes task with DeeplabV2 is even higher than the one with DeeplabV3+.
We further show some segmentation examples in Figure 7 and Figure 6 to qualitatively demonstrate the superiority of our method. Our proposed method generates finer edges and makes fewer mistakes. We also compare the style-transferred images generated by our proposed GPA module with other state-of-the-art style-transfer techniques used by domain adaptation methods in Figure 8; our proposed method achieves better quality and a higher level of diversity.

\begin{table}
\begin{tabular}{l l c c c}
\hline\hline
 & & polyps & background & mIoU \\
\hline
\multirow{4}{*}{DeeplabV2} & FGGAN [15] & 62.2 & 88.5 & 75.4 \\
 & FDA [11] & 66.2 & 89.5 & 77.8 \\
 & image adapt. [18] & 54.7 & 84.9 & 69.8 \\
 & I2F all (ours) & **72.0** & **90.9** & **81.5** \\
\hline
\multirow{4}{*}{DeeplabV3+} & WCBT [41] & 56.9 & 86.5 & 76.3 \\
 & CAG [16] & 67.0 & 89.4 & 78.2 \\
 & image adapt. [18] & 55.8 & 82.0 & 68.9 \\
 & I2F all (ours) & **76.2** & **92.2** & **84.2** \\
\hline\hline
\end{tabular}
\end{table}
TABLE III: Performance comparison with state-of-the-art methods on the Kvasir\(\rightarrow\)Piccolo task. Best results are marked in bold.

Fig. 7: Qualitative examples of the comparisons between our method and CAG [16] on the Kvasir\(\rightarrow\)Piccolo task. Specifically, (a) Input images, (b) CAG [16], (c) Ours, (d) Labels.

### _Ablation Studies_

**Component Analysis.** In most previous work, a source-only model trained only on the source domain training set is often required to serve as the initial pseudo-label producer. Although we do not use the source-only model during training, we train one to provide a baseline so that the primary performance gains from our proposed pipeline can be conveniently verified. Then, following the experiment settings in [16], we perform extensive ablative experiments using DeeplabV3+ on the GTA5\(\rightarrow\)Cityscapes task to verify the effectiveness of each of our proposed components. As shown in Table IV, the source-only baseline using DeeplabV3+ has a performance of \(37.6\%\) on the GTA5\(\rightarrow\)Cityscapes task, and our proposed overall pipeline improves the baseline performance by \(20.6\%\). Following the same settings as previous state-of-the-art methods [13, 15, 16], we further evaluate the impact of each proposed component on the final performance of our model on the GTA5\(\rightarrow\)Cityscapes task by removing one component at a time. The results are shown in Table IV. According to our results, the final performance of the segmentation model deteriorates the most when the global photometric alignment module is removed. This is because the GPA module is critical to the image-level adaptation. Removing it literally removes the first image-level adaptation stage, and the resulting erroneous pseudo-labels are harmful to later stages. Although our proposed global manifold alignment module can align the feature distributions from different domains, the error from unaligned models still accumulates across steps, which is detrimental to the final performance of the model. This also validates the necessity of an image-level adaptation step and the importance of an accurate initial model. Even though our proposed GPA module serves as an image-level adaptation between domains, the feature-level domain shifts are still not aligned completely.
Our proposed GMA module can further improve the feature-level adaptation between the source domain and the target domain: removing the GMA module decreases the model performance by \(1.4\%\). In addition, the GTEXA module is designed to modify the high-frequency components of an image with a certain probability. This makes the trained models robust to texture variations and further improves the performance by roughly \(0.7\%\). Furthermore, despite its simplicity, our proposed target domain consistency regularization has proven to be very effective. The main reason is that there are fewer valid training samples in the target domain than in the source domain, and our proposed TCR essentially serves as a data augmentation technique that increases the number of valid training samples in the target domain. It also introduces more hard samples without damaging the pseudo-labels. It therefore gives rise to a significant performance gain: removing the TCR module decreases the model performance by \(4.8\%\). Our proposed category-oriented triplet loss applied to the source domain also boosts the performance by \(3.1\%\), as it exploits hard samples in the source domain and improves the generalization capability across different domains.

**Photometric Alignment.** There are other existing methods that can achieve the goal of image-level adaptation, such as the GAN-based method in [14] and the frequency-based method in [11]. We substitute our proposed global photometric alignment and global texture alignment with these two methods and retrain our whole pipeline. The results are shown in Table V. We also visualize some representative aligned images produced by the different methods in Figure 8. Our proposed GPA can generate an aligned image according to a randomly chosen target domain reference image. In contrast, the GAN-based model [14] behaves deterministically and generates aligned images with a similar style, covering only part of the actual target domain image span. This explains why our proposed model works even better than the pre-trained deep adversarial model. Although the frequency-based method proposed in [11] can generate style-transferred images randomly, the concatenation of frequencies usually introduces significant noise during training, which largely limits its final performance. Based on our observations, gamma correction on the Lab channels alone does not have sufficient adaptation capability, while histogram matching on all three channels results in image artifacts. We use the simple mean-variance alignment of the RGB channels as the benchmark, and run a comparison for the image-level adaptation stage. The results in Table V show that our hybrid scheme performs best.

Fig. 8: Qualitative analysis of the global photometric alignment (GPA) module. (a) Input images, (b) Reference images, (c) BDL-GAN [14], (d) Fourier Adaptation [11], (e) Global Photometric Alignment.

**Category Triplet Loss.** We only apply the category-oriented triplet loss to the source domain category labels in our proposed method, but not to the pseudo-labels in the target domain. Although target domain images with pseudo-labels could be used as supplementary samples when the pseudo-labels are of high confidence, our proposed triplet loss aims to deal with hard samples, and the pseudo-labels of hard samples in the target domain are not reliable. To verify this, we include pseudo-labels in our category-oriented triplet loss, and the result is shown in Table V.
We follow the default settings as in [51] and set \(\alpha=0.2\). We also tested our proposed category triplet loss with other settings, as shown in Figure 9, which shows that the best result is achieved with \(\alpha=0.2\).

**Manifold Alignment.** In our proposed model, we directly use PCA + K-Means to model the feature manifold. This shares similar functionality with the adversarial methods used in previous work, but our proposed method suffers little from mode collapse. Mode collapse is easily observed in style-translated images, as in Figure 8, but it also exists in the high-level features and undermines the diversity of the training set. To make a fair comparison, we substitute our proposed manifold alignment module with a traditional global discriminator as in [17]. According to Table V, it performs even worse than the version without a global discriminator, manifesting the superiority of our GMA module. We also conduct extensive experiments to verify the values of the hyperparameters \(N_{h}\) and \(N_{z}\). The experiment results are presented in Figure 10. In general, the performance of the model is insensitive to the choice of \(N_{h}\), and a larger \(N_{z}\) leads to better performance; the best performance is achieved with the settings \(N_{h}=32\) and \(N_{z}=64\). Note that we cannot increase the atom number \(N_{z}\) beyond \(64\), because the calculation of the atom weights and the reconstruction of the pixel features \(\mathbf{x}_{j}\) require a large amount of GPU memory.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline
 & GPA & GTEXA & GMA & CTL & TCR & mIoU \\
\hline
Source only & & & & & & 37.6 \\
Image adapt. & \(\surd\) & & & & & 47.3 \\
w/o Alignments & & & \(\surd\) & \(\surd\) & \(\surd\) & 47.5 \\
w/o GPA & & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & 51.0 \\
w/o GMA & \(\surd\) & \(\surd\) & & \(\surd\) & \(\surd\) & 56.6 \\
w/o CTL & \(\surd\) & \(\surd\) & \(\surd\) & & \(\surd\) & 55.1 \\
w/o TCR & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & & 53.4 \\
w/o GTEXA & \(\surd\) & & \(\surd\) & \(\surd\) & \(\surd\) & 57.5 \\
all & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & 58.2 \\
\hline
\end{tabular}
\end{table}
TABLE IV: Ablation study of the proposed components on the GTA5\(\rightarrow\)Cityscapes task. GPA: global photometric alignment, GTEXA: global texture alignment, GMA: global manifold alignment, CTL: category-oriented triplet loss, TCR: target domain consistency regularization.

\begin{table}
\begin{tabular}{l l c}
\hline
Modules & Methods & mIoU \\
\hline
\multirow{3}{*}{Image Adapt.} & Frequency Align [11] & 54.0 \\
 & BDL-GAN [14] & 55.4 \\
 & Photometric+Texture & 58.2 \\
\hline
\multirow{4}{*}{GPA Scheme} & RGB Mean-Variance & 42.3 \\
 & Lab Gamma Correction & 44.5 \\
 & Lab Histogram Match & 43.3 \\
 & Hybrid & 47.3 \\
\hline
\multirow{2}{*}{Pseudo-labels} & Triplet loss with pseudo-labels & 56.1 \\
 & Triplet loss w/o pseudo-labels & 58.2 \\
\hline
\multirow{2}{*}{Manifold Align.} & Adversarial Method & 55.8 \\
 & Manifold Alignment & 58.2 \\
\hline
\end{tabular}
\end{table}
TABLE V: Ablation studies of the image adaptation strategy, photometric alignment scheme, and using pseudo-labels for the category-oriented triplet loss on the GTA5\(\rightarrow\)Cityscapes task.

Fig. 9: Quantitative analysis of the selection of the category margin \(\alpha\). The best performance is achieved with \(\alpha=0.2\).

Fig. 10: Quantitative analysis of the hyper-parameters of the global manifold alignment (GMA) module. (a) A higher \(N_{z}\) leads to better performance; (b) mIoU is generally not very sensitive to the choice of \(N_{h}\).

## 5 Conclusions

In this paper, we have explored non-adversarial methods for both image-level and feature-level domain adaptation, and proposed a novel unified image-to-feature adaptation pipeline for unsupervised domain adaptive semantic segmentation. During this study, we have found that, for this specific problem, adversarial methods can damage the diversity of feature distributions, and a simple photometric alignment module can achieve better performance. We have also found that a simple self-supervised consistency loss is capable of regularizing category-level feature distributions in the target domain.
The proposed pipeline effectively integrates global image-level and feature-level adaptation with category-level feature distribution regularization. The global texture alignment module also serves as an auxiliary data augmentation scheme for the proposed pipeline. In particular, we have introduced a novel and efficient global photometric alignment module to adapt source domain images to the target domain. A global texture alignment module has been designed to modify the high-frequency components of images from the source domain and make the trained model robust to domain gaps caused by domain-specific textures. We have also proposed a global manifold alignment module to directly model the distribution of pixel features from the source domain and align the feature distributions of both domains. To the best of our knowledge, this is the first piece of work that models the feature manifold directly in unsupervised domain adaptation for semantic segmentation. A category-oriented triplet loss has been devised for the source domain to regularize source domain category centers, and a target domain consistency regularization method has been introduced to regularize category-level feature distributions in the target domain. Extensive experiments have shown that each of our proposed techniques significantly improves the generalization capability of our model. The proposed modules form a complete adaptation strategy to tackle domain shifts, and integrating them gives rise to a significant improvement over existing state-of-the-art unsupervised domain adaptive semantic segmentation methods, demonstrating that minimizing global and category-level domain shifts simultaneously deserves more attention.

**Limitations.** Our work still has a few limitations. First of all, the photometric alignment module is isotropic, which means texture information is not altered by it. Although we have proposed a global texture alignment scheme, it is activated only when the source domain images have stronger or similar high-frequency components in comparison to the target domain images. Developing a method that can better close the domain gap without hurting the feature diversity of source domain samples deserves more attention. In addition, our scheme for global feature manifold alignment is a first attempt to model the feature manifold directly. When we designed the scheme, the priority was making manifold alignment compatible with gradient back-propagation based training, not achieving optimal alignment performance. Nonetheless, it demonstrates the potential of direct feature manifold modeling in domain adaptation tasks.
Finally, some of our proposed components are designed only for the closed-set setting, because they are based on the assumption that deep features from the same category should be similar; this assumption is not well suited to open-set tasks, where different unseen categories are all labeled as "unknown". How to extend our algorithm to the open-set scenario thus remains an open problem.
2310.18359
DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding
Social intelligence is essential for understanding and reasoning about human expressions, intents and interactions. One representative benchmark for its study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice questions on videos of complex social interactions. We define a comprehensive methodology to study the soundness of Social-IQ, as the soundness of such benchmark datasets is crucial to the investigation of the underlying research problem. Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model to learn spurious correlations to achieve perfect performance without being given the context or even the question. We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ. Our empirical analysis shows DeSIQ significantly reduces the biases in the original Social-IQ dataset. Furthermore, we examine and shed light on the effect of model size, model style, learning settings, commonsense knowledge, and multi-modality on the new benchmark performance. Our new dataset, observations and findings open up important research questions for the study of social intelligence.
Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
2023-10-24T06:21:34Z
http://arxiv.org/abs/2310.18359v1
# DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding ###### Abstract Social intelligence is essential for understanding and reasoning about human expressions, intents and interactions. One representative benchmark for its study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice questions on videos of complex social interactions. We define a comprehensive methodology to study the soundness of Social-IQ, as the soundness of such benchmark datasets is crucial to the investigation of the underlying research problem. Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model to learn spurious correlations to achieve perfect performance without being given the context or even the question. We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ. Our empirical analysis shows DeSIQ significantly reduces the biases in the original Social-IQ dataset. Furthermore, we examine and shed light on the effect of model size, model style, learning settings, commonsense knowledge, and multi-modality on the new benchmark performance. Our new dataset, observations and findings open up important research questions for the study of social intelligence.

## 1 Introduction

Social intelligence is a long-standing research area in social science and psychology [14, 15]. It is the capacity to understand and navigate complex social situations. Social intelligence is more than the perception of objects and human actions, as it requires a deeper understanding of the human intents and interactions behind these actions or words. The study of social intelligence is an emerging area in both the NLP and computer vision communities. One representative work, Social-IQ [16], is a benchmark dataset measuring the social intelligence of current AI systems. It is a multiple-choice question answering dataset with multi-modal inputs, including questions, answer options, and videos; see an example in Figure 1. Although Social-IQ contains rigorously human-annotated data, surprisingly, we find that even small models like T5-small [17] can easily achieve 100% answer option accuracy (Table 3). The perfect performance of such an underpowered model prompted us to conduct further investigation to identify its source. By employing different models and perturbation methods on the answer options, we identify significant biases in the Social-IQ dataset: the representations of correct and incorrect options are easily separable, regardless of the questions (Figure 3). Thus, models are able to exploit such a _shortcut_ [18, 19] to answer questions with high accuracy, without necessarily understanding social intelligence. To debias the Social-IQ dataset, we propose a simple yet effective debiasing approach and present a new unbiased benchmark, DeSIQ, obtained by substituting all the incorrect answer options with correct answer options from randomly selected other questions. We establish a performance baseline on DeSIQ with T5-small and Delphi [19], a language model pretrained with commonsense and social-norm knowledge. Given only answer options or question-answer pairs, both T5-small and Delphi obtain close-to-random accuracy. By making use of multi-modal inputs, both T5-small and Delphi achieve an accuracy of up to 77%. These results demonstrate that DeSIQ is unbiased and challenging.
Interestingly, both models also outperform GPT-3 and ChatGPT, further indicating the challenging nature of the social intelligence understanding problem. Our contributions are:

* We propose six formally defined methods to identify the bias in Social-IQ. From the answer perturbation experiments, we find that the bias mainly exists in the answer options.
* We propose DeSIQ, an unbiased and more challenging multi-modal question answering benchmark, designed to better measure the social intelligence of machine learning models.
* We propose two effective models that outperform the baseline and GPT-3/ChatGPT on our new benchmark. We also provide a detailed analysis and comparison of the performance of these models.

## 2 Identifying Biases in Social-IQ

### The Social Intelligence Datasets

Social-IQ [23] is an unconstrained multi-modal, multiple-choice question answering (MCQA) dataset designed to evaluate the social intelligence of machine learning models. It contains videos about social interactions, questions, and multiple-choice answer options, in which the questions and answer options were crowdsourced. For each video, the context for all questions and answer options includes not only the original video, but also the corresponding extracted audio and automatically generated transcripts¹. Detailed dataset statistics are shown in Table 1.

Footnote 1: We do not have access to the raw transcript, video and audio data, so we use the extracted features downloaded from [https://github.com/matsuolab/CMU-MultimodalSDK](https://github.com/matsuolab/CMU-MultimodalSDK).

Social-IQ provides two configurations for training and evaluation: A2 (2-way, i.e., one correct answer option and one incorrect option for each question) and A4 (4-way, i.e., one correct option and 3 incorrect options for each question), in which model performance is measured using binary and 4-way accuracy respectively. Most recently, Social-IQ-2.0 was released online² with the A4 configuration. Though nearly half of the videos overlap with Social-IQ, almost all questions and answers were newly annotated. Moreover, raw video and audio files are provided, whereas the original Social-IQ dataset contains only extracted features. The detailed statistics are shown in Table 2.

Table 1: Statistics of the Social-IQ dataset. On average, each video has 6 questions; for each question, there are 4 correct answer options and 3 incorrect answer options.

| | Training | Development | Total |
| --- | --- | --- | --- |
| Videos | 888 | 127 | 1,015 |
| Questions | 5,328 | 762 | 6,090 |
| Correct options | 21,312 | 3,048 | 24,360 |
| Incorrect options | 15,984 | 2,286 | 18,270 |

Table 2: Statistics of the Social-IQ-2.0 dataset. For each question, there is only 1 correct answer option.

| | Training | Development | Total |
| --- | --- | --- | --- |
| Videos | 987 | 145 | 1,132 |
| Questions | 6,159 | 943 | 7,102 |
| Correct options | 6,159 | 943 | 7,102 |
| Incorrect options | 18,477 | 2,829 | 21,306 |

Figure 1: An example from the Social-IQ and DeSIQ benchmarks. For Social-IQ, q, a, i stand for question, correct answer and incorrect answer respectively, while a' (with a yellow background) is the unbiased incorrect answer we substitute in DeSIQ. Different colors represent different persons, including their facial expressions and spoken words. The transcripts are in three black boxes related to the corresponding video clips above.
For simplicity, **v1** and **v2** denote Social-IQ and Social-IQ-2.0 respectively, and are used interchangeably. Footnote 2: [https://cmu-multicomp-lab.github.io/social-iq-2.0/](https://cmu-multicomp-lab.github.io/social-iq-2.0/)

### Methodology

In this section, we propose several experimental settings to identify biases in an MCQA dataset. Let \(q\) and \(q^{\prime}\) denote two different questions, \(a\) and \(i\) denote the correct and an incorrect answer option of \(q\) respectively, and \(a^{\prime}\) and \(i^{\prime}\) denote the correct and an incorrect answer option of \(q^{\prime}\) respectively. We define six methods to identify biases:

**No context and question (NCAQ):** the contexts and questions for all answer options are removed, i.e., the model is given only the answer options. An MCQA dataset should be sufficiently challenging that no model can predict the correct answer when neither the input context nor the question is provided.

**More Powerful Model (MPM):** the model is substituted by a larger, more capable model. It is plausible to see a performance increase when a stronger model (e.g., one with more trainable parameters and/or one fine-tuned on relevant data) is employed. However, a sufficiently hard dataset should not induce perfect model performance (i.e., a near 100% accuracy score). This can be tested with models of different sizes and thus capabilities.

**RIWI:** replace \(i\) with \(i^{\prime}\): \((q,a,i)\rightarrow(q,a,i^{\prime})\)

**RIWA:** replace \(i\) with \(a^{\prime}\): \((q,a,i)\rightarrow(q,a,a^{\prime})\)

**RAWI:** replace \(a\) with \(i^{\prime}\): \((q,a,i)\rightarrow(q,i^{\prime},i)\)

**RAWA:** replace \(a\) with \(a^{\prime}\): \((q,a,i)\rightarrow(q,a^{\prime},i)\)

With the above perturbations, we expect the dataset to induce the following robustness behaviours: with **RIWI** or **RIWA** applied to the dev/test set, a model's performance should not deviate significantly from that on the original dataset; with **RAWI** or **RAWA**, the model should perform significantly worse.

### Biases in Social-IQ

We evaluate the A2 (binary choice) and A4 (multiple choice) configurations of Social-IQ, and the A4 configuration of Social-IQ-2.0, in the experimental settings discussed above; surprisingly, we observe that both datasets are biased. Below, we describe our detailed analysis and show that Social-IQ contains substantial biases that can be exploited by moderately strong language models. Table 3 summarises the experimental results. In the fully supervised setting, we evaluate the performance of the LSTM-based model from the original Social-IQ paper (Zadeh et al., 2019) (Figure 2) as well as the more capable T5-small (Raffel et al., 2020), which we use as the encoder in place of the LSTM in Figure 2.

**Evidence of Dataset Bias.** We start from the **NCAQ** setting, i.e., only the answer options (\(a\) and \(i\)) are given as model input, without the question and video, for both training and evaluation. Under this setting, we also compare the models' performance under different perturbations on the answer options. Table 3 shows that the basic LSTM model outperforms random guessing by 9.45% on **v1** (i.e., Social-IQ). Given such impoverished inputs (neither context nor question), these accuracy scores show that the Social-IQ dataset is biased. We postulate that while a stronger model (i.e.,
**MPM**) should obtain better performance than the LSTM, without being given sufficient inputs even the stronger model should not perform unreasonably well.

Figure 2: Overall architecture of the LSTM baseline (Zadeh et al., 2019). \(q,a,i,t,v\) denote question, correct answer, incorrect answer, transcript and video features. \(r_{q},r_{a},r_{i},r_{t},r_{v}\) are the corresponding features extracted using different LSTMs. Dashed squares represent optional input features. The two multi-layer perceptrons (MLPs) are parameter-shared. The outputs are two scores \(s_{1},s_{2}\) for the correct and incorrect answer options respectively.

Thus, we experiment with T5-small, a modestly-sized yet more capable model. As can be seen in Table 3, T5-small outperforms the LSTM by a large margin on **v1**. Surprisingly, it also achieves a perfect 100% accuracy score on **v1** and 63.35% on **v2** without being given the context or the question. These results provide strong evidence of the biases in these datasets. Finally, we study the other four perturbation settings by applying them to the dev sets. Below we analyse the performance on **v1** in detail, followed by a discussion of **v2**.

* **RIWI.** Similar to the performance on the original dataset, T5-small achieves an unreasonably high performance of 97.37% on A2 and 99.97% on A4. This indicates that the model can easily distinguish the correct answer from the incorrect options.
* **RIWA.** This leads to a large performance degradation: A2 from 100% to 50.21%, A4 from 100% to 25.03%, similar to random guessing (i.e., 50% and 25%). This shows that T5-small is unable to distinguish between correct answer options, regardless of the question each is used for.
* **RAWI.** This produces a dataset containing only incorrect answer options; we consider the incorrect answer option that replaces the correct answer option to be the "correct" answer. Intuitively, this should lead a model to guess randomly, as none of the options is correct. In Table 3, we observe that **RAWI** indeed leads to a large performance drop: A2 from 100% to 49.93%, A4 from 100% to 23.76%, indicating that T5-small cannot distinguish incorrect answers from each other, confirming our intuition.
* **RAWA.** This should also lead to roughly 50% on A2 and 25% on A4, since the correct answer option is replaced with an irrelevant correct answer of another question. Contrary to this intuition, **RAWA** leads to a near-perfect 97.25% on A2 and a perfect 100% on A4. These unexpectedly high scores indicate that the model can easily distinguish the correct answer options from the incorrect ones of the original dataset, regardless of the question they are used for, consistent with the results of **RIWI**.

Figure 3 shows the t-SNE (t-distributed Stochastic Neighbor Embedding) visualization of the embeddings of all answer options in the Social-IQ dev set. We can observe a clear boundary between correct and incorrect answer options. The above results provide compelling evidence of unwanted bias in Social-IQ, manifested in T5-small's strong capability to distinguish correct from incorrect answer options. Similar evidence can be found for Social-IQ-2.0 in the **v2** rows of Table 3.

## 3 DeSIQ: Debiased Social-IQ

In this section, we first describe our approach to debias Social-IQ. We then study the effectiveness of our debiasing approach and the resultant DeSIQ datasets by comparing the performance of both LSTM and T5-small on DeSIQ in different settings.
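To make the four answer-option perturbations concrete before describing DeSIQ, here is a minimal sketch over (question, correct, incorrect) triplets; it is our own illustration (names and the sampling scheme are ours). The RIWA variant, applied to both the training and development sets, is exactly the operation used to build DeSIQ in the next section.

```python
import random

def perturb(triplets, mode, seed=0):
    """Apply one of the answer-option perturbations (Sec. 2.2) to a list of
    (q, a, i) triplets, sampling (q', a', i') from a different question."""
    rng = random.Random(seed)
    out = []
    for idx, (q, a, i) in enumerate(triplets):
        j = rng.randrange(len(triplets) - 1)   # pick another question ...
        j += j >= idx                          # ... never the current one
        _, a2, i2 = triplets[j]
        if mode == "RIWI":
            out.append((q, a, i2))             # replace i with i'
        elif mode == "RIWA":
            out.append((q, a, a2))             # replace i with a'
        elif mode == "RAWI":
            out.append((q, i2, i))             # replace a with i'
        elif mode == "RAWA":
            out.append((q, a2, i))             # replace a with a'
    return out
```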
### Constructing DeSIQ

We propose the following perturbation-based approach to debias Social-IQ and construct a more meaningful and challenging dataset on social intelligence. Specifically, we apply the **RIWA** perturbation to both the training and development sets of Social-IQ, i.e., substituting the incorrect answer options with correct answer options from other questions. We construct two debiased datasets (we describe the construction for the A2 configuration; similar perturbations are applied to the A4 configuration of Social-IQ):

* **DeSIQ\({}_{d}\)**. Given an original triplet \((q,a,i)\), we randomly sample another triplet \((q^{\prime},a^{\prime},i^{\prime})\) from _another video_. Thus, for each original triplet \((q,a,i)\), we form a new triplet \((q,a,a^{\prime})\).
* **DeSIQ\({}_{s}\)**. We sample \((q^{\prime},a^{\prime},i^{\prime})\) from the _same video_ for each \((q,a,i)\) and, similarly, replace the incorrect answer option \(i\) with \(a^{\prime}\). Since \(q\) and \(q^{\prime}\) are from the same video, their answers have a higher chance of referring to the same entity appearing in the video. Thus, **DeSIQ\({}_{s}\)** is the more challenging dataset of \((q,a,a^{\prime})\) triplets.

An example video with some associated questions and answer options for both Social-IQ and DeSIQ\({}_{s}\) is shown in Figure 1. For Social-IQ-2.0, we apply the same approach to obtain **DeSIQ\({}_{d}\)-2.0**.

Figure 3: t-SNE visualization of correct and incorrect answer options. Red dots are incorrect answer options and blue dots are correct answer options.

Table 3: Model performance on A2 (binary choice) and A4 (multiple choice) under different experimental settings, in which only answer options are given as model inputs (neither questions nor context). '-' indicates the result is inapplicable.

| Data | Model | Settings | A2 | A4 |
| --- | --- | --- | --- | --- |
| **v1** | Random | none | 50 | 25 |
| **v1** | LSTM | NCAQ | 59.45 | 34.84 |
| **v1** | T5-small (MPM) | NCAQ | **100** | **100** |
| **v1** | T5-small (MPM) | NCAQ+RIWI | 97.37 | 99.97 |
| **v1** | T5-small (MPM) | NCAQ+RIWA | 50.21 | 25.03 |
| **v1** | T5-small (MPM) | NCAQ+RAWI | 49.93 | 23.76 |
| **v1** | T5-small (MPM) | NCAQ+RAWA | 97.25 | **100** |
| **v2** | T5-small (MPM) | NCAQ | - | **63.35** |
| **v2** | T5-small (MPM) | NCAQ+RIWI | - | 59.66 |
| **v2** | T5-small (MPM) | NCAQ+RIWA | - | 24.72 |
| **v2** | T5-small (MPM) | NCAQ+RAWI | - | 23.72 |
| **v2** | T5-small (MPM) | NCAQ+RAWA | - | **62.36** |

### Effectiveness of the Debiasing Approach

We set up a number of models in both fully supervised and zero/few-shot learning settings to show the effectiveness of our debiasing approach.

**Supervised Learning.** We train the LSTM and T5-small on Social-IQ, DeSIQ\({}_{d}\) and DeSIQ\({}_{s}\) with the same architecture (Figure 2), and train T5-small on Social-IQ-2.0. Tables 4, 5 and 6 show the results, where the relevant results are shaded in gray. The second column "Input" indicates the input used in both training and evaluation, where "a", "q", "t", and "v" denote answer options, the question, the transcript and the video, respectively. The third column "Concat" indicates different model architectures: the symbol "✗" denotes that all inputs are separately encoded as in Figure 2, which is the focus of this subsection.
The symbol "\(\boldsymbol{\mathcal{\check{\check{\check{\check{\check{\check{\check{\check{ \check{\check{\check{\check{\ few-shot learning to show the strength of our debiased datasets. Social-IQ experiments are performed in the A2 configuration using GPT-3, while Social-IQ-2.0 experiments in the A4 using Chat-GPT. For zero-shot evaluation, we concatenate the question with correct and incorrect answer options (i.e. "q+a") to form the prompt4, where the order of the two answer options is randomly shuffled. The **zero-shot prompt** is constructed as follows: Footnote 4: We observe that when only given answer options but not the question, GPT-3 tends to select the first option. "Choose the correct answer option corresponding to the question: " + \(q\) + " A: " + \(a\) + " B: " + \(i\)" A or B")*3 + **zero-shot prompt** Table 7 shows the results. For Social-IQ, under the zero-shot setting, GPT-3 can obtain 58.26% with "q+a" and 64.63% with "q+a+t" on Social-IQ. In comparison, under either zero-shot or few-shot setting, both the DeSIQ\({}_{d}\) and DeSIQ\({}_{s}\) dataset lead \begin{table} \begin{tabular}{c l l} \hline Dataset & Input & Concat & A4 \\ \hline \hline \multirow{4}{*}{Social-IQ-2.0} & a & ✗ & 63.35 \\ & q+a & ✗ & 64.63 \\ & q+a+t & ✗ & 64.06 \\ & q+a+v & ✗ & 62.28 \\ \hline \hline \multirow{4}{*}{DeSIQ\({}_{d}\)-2.0} & a & ✗ & 28.07 \\ & q+a & ✗ & 28.45 \\ & q+a+t & ✗ & 22.17 \\ & q+a+v & ✗ & 24.13 \\ & q+a+s & ✗ & 25.87 \\ \hline \multirow{4}{*}{DeSIQ\({}_{d}\)-2.0} & a & ✓ & 28.07 \\ & q+a & ✓ & 57.23 \\ & q+a+t & ✓ & 52.02 \\ & q+a+v & ✓ & 68.93 \\ & q+a+s & ✓ & **74.13** \\ & q+a+t+v+s & ✓ & 37.72 \\ \end{tabular} \end{table} Table 6: Accuracy on the Social-IQ-2.0 and DeSIQ\({}_{d}\)-2.0 development sets. Results shaded in gray are relevant to Sec. 3. \begin{table} \begin{tabular}{c l l l|c c|c c|c c} \hline \hline Dataset & Input & Concat & \multicolumn{3}{c|}{A2} & \multicolumn{3}{c}{A4} \\ & & LSTM & T5-small & T5-small\({}_{Delphi}\) & LSTM & T5-small & T5-small\({}_{Delphi}\) \\ \hline \hline \multirow{4}{*}{Social-IQ} & a & ✗ & 59.45 & 100 & 100 & 34.84 & 100 & 100 \\ & q+a & ✗ & 59.78 & 100 & 100 & 38.55 & 100 & 100 \\ & q+a+t & ✗ & 60.00 & 100 & 100 & 43.84 & 100 & 100 \\ & q+a+v & ✗ & 64.38 & 100 & 100 & 46.08 & 100 & 100 \\ \hline \hline \multirow{4}{*}{DeSIQ\({}_{d}\)} & a & ✗ & 48.52 & 50.16 & 50.33 & 27.23 & 34.15 & 28.97 \\ & q+a & ✗ & 58.58 & 60.55 & 50.19 & 26.05 & 27.57 & 25.78 \\ & q+a+t & ✗ & 60.46 & 50.16 & 50.40 & 27.59 & 28.84 & 27.15 \\ & q+a+v & ✗ & 61.05 & 50.59 & 50.70 & 25.91 & 27.60 & 25.55 \\ \hline \multirow{4}{*}{DeSIQ\({}_{d}\)} & a & ✓ & 49.20 & 49.52 & 50.33 & 34.85 & 36.30 & 38.18 \\ & q+a & ✓ & 61.41 & 73.47 & 75.69 & 34.91 & 62.43 & 72.91 \\ & q+a+t & ✓ & 13.17 & 74.69 & **76.77** & 29.22 & 70.80 & **74.51** \\ & q+a+v & ✓ & 56.67 & 76.72 & 74.99 & 41.77 & 72.69 & 73.24 \\ \end{tabular} \end{table} Table 4: Accuracy on the Social-IQ and DeSIQ\({}_{d}\) development sets. Results shaded in gray are relevant to Sec. 3. 
Table 5: Accuracy on the Social-IQ and DeSIQ\({}_{s}\) development sets. Results shaded in gray are relevant to Sec. 3.

| Dataset | Input | Concat | A2 LSTM | A2 T5-small | A2 T5-small\({}_{Delphi}\) | A4 LSTM | A4 T5-small | A4 T5-small\({}_{Delphi}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Social-IQ | a | ✗ | 59.45 | 100 | 100 | 34.84 | 100 | 100 |
| Social-IQ | q+a | ✗ | 59.78 | 100 | 100 | 38.55 | 100 | 100 |
| Social-IQ | q+a+t | ✗ | 60.00 | 100 | 100 | 43.84 | 100 | 100 |
| Social-IQ | q+a+v | ✗ | 64.38 | 100 | 100 | 46.08 | 100 | 100 |
| DeSIQ\({}_{s}\) | a | ✗ | 48.24 | 48.73 | 48.96 | 27.06 | 33.53 | 28.12 |
| DeSIQ\({}_{s}\) | q+a | ✗ | 59.97 | 49.17 | 59.59 | 26.16 | 24.22 | 25.20 |
| DeSIQ\({}_{s}\) | q+a+t | ✗ | 60.02 | 58.89 | 56.83 | 27.31 | 26.79 | 23.83 |
| DeSIQ\({}_{s}\) | q+a+v | ✗ | 61.00 | 59.19 | 56.42 | 26.99 | 25.00 | 24.22 |
| DeSIQ\({}_{s}\) | a | ✓ | 48.24 | 48.73 | 48.71 | 29.67 | 29.17 | 32.32 |
| DeSIQ\({}_{s}\) | q+a | ✓ | 59.35 | 63.08 | 63.42 | 34.73 | 62.47 | 60.58 |
| DeSIQ\({}_{s}\) | q+a+t | ✓ | 11.52 | 65.41 | **67.70** | 22.33 | **65.23** | 51.69 |
| DeSIQ\({}_{s}\) | q+a+v | ✓ | 51.04 | 65.96 | 65.02 | 32.56 | 56.61 | 55.05 |

Under the few-shot setting for "q+a", GPT-3 does not seem to learn _shortcuts_, as its performance is unchanged compared to the zero-shot setting. (We could not perform few-shot experiments with "q+a+t" due to GPT-3's prompt length limit.) These results show that DeSIQ\({}_{d}\) and DeSIQ\({}_{s}\) are less biased and more challenging than Social-IQ. For Social-IQ-2.0, the performance does not change much when using ChatGPT under either the zero-shot or the few-shot learning setting, which also shows that it is less biased than Social-IQ.

## 4 Setting Baseline Performance on DeSIQ

For our more challenging DeSIQ benchmark, we introduce a new baseline model to better handle multi-modal inputs; its architecture is shown in Figure 4. Compared with the model in Figure 2, we add three more projection layers (the three yellow MLPs in the figure) to map the original feature representations into the same dimension. We then concatenate all the resulting representations as the input to a backbone MPM. For DeSIQ\({}_{d}\)-2.0, which contains raw data, we employ the Vision Transformer (ViT) (Dosovitskiy et al., 2021) and Wav2Vec 2.0 (Baevski et al., 2020) to obtain the video and audio representations respectively. We note again that raw video and audio files are not available for **v1**; we thus develop the above architecture to uniformly handle both datasets, and leave how to best use the multi-modal inputs in DeSIQ-2.0 for future work.

As social intelligence usually requires commonsense knowledge, we posit that injecting commonsense knowledge into the backbone language model of our architecture would improve the model's performance. Therefore, inspired by Jiang et al. (2021), we distill commonsense social knowledge from the following datasets into T5-small: Social Chemistry 101 (Forbes et al., 2020), ETHICS (Hendrycks et al., 2021) and Moral Stories (Emelin et al., 2021). Specifically, we pretrain T5-small on these corpora and then finetune it on the downstream Social-IQ and DeSIQ datasets. We call this variant T5-small\({}_{Delphi}\).
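A minimal sketch of the Figure-4 idea follows: per-modality projection layers map transcript/video/audio feature sequences into the backbone's hidden size before concatenation with the question-answer token embeddings. All dimensions, the mean pooling, and the scoring head are our own placeholders (the paper only specifies three projection MLPs feeding a backbone such as T5-small), and we assume a HuggingFace-style encoder that accepts `inputs_embeds`.

```python
import torch
import torch.nn as nn

class MultiModalScorer(nn.Module):
    """Score one answer option from question-answer token embeddings plus
    projected transcript (t), video (v) and audio (s) feature sequences."""
    def __init__(self, encoder, d_model=512, d_t=768, d_v=768, d_s=768):
        super().__init__()
        self.encoder = encoder                    # e.g. a T5-small encoder
        self.proj_t = nn.Linear(d_t, d_model)     # transcript features
        self.proj_v = nn.Linear(d_v, d_model)     # video features (e.g. ViT)
        self.proj_s = nn.Linear(d_s, d_model)     # audio features (Wav2Vec 2.0)
        self.head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, qa_emb, t, v, s):
        # qa_emb: (B, L, d_model); t, v, s: (B, L_m, d_m) per-modality features
        x = torch.cat([qa_emb, self.proj_t(t),
                       self.proj_v(v), self.proj_s(s)], dim=1)
        h = self.encoder(inputs_embeds=x).last_hidden_state.mean(dim=1)
        return self.head(h).squeeze(-1)           # one score per answer option
```

In this reading, one such score would be produced per answer option and the scores compared across options during training, mirroring the two-score setup of Figure 2.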
### Results on DeSIQ

We analyze the effectiveness of our proposed architecture and the effect of the distillation of commonsense knowledge. The results of our new model architecture are shown in the bottom portions of Tables 4 and 5, where the inputs are concatenated ("✓" in the "Concat" column). We can make the following observations:

* Both T5-small and T5-small\({}_{Delphi}\) outperform the LSTM baseline on both DeSIQ\({}_{d}\) and DeSIQ\({}_{s}\) while not achieving near-perfect accuracy, showing the effectiveness of our proposed architecture as well as the unbiased nature of DeSIQ.
* When the question is given as part of the model input, T5-small and T5-small\({}_{Delphi}\) with concatenated inputs ("✓") significantly outperform the vanilla versions ("✗"), showing the effectiveness of our model architecture.
* Injecting commonsense knowledge can indeed improve model performance on social intelligence. T5-small\({}_{Delphi}\) with "q+a+t" inputs achieves the best A2 score of 76.77% and A4 score of 74.51% on DeSIQ\({}_{d}\), and the best A2 score of 67.70% on DeSIQ\({}_{s}\). On DeSIQ\({}_{d}\), it outperforms T5-small in all but one setting ("q+a+v" for A2). On DeSIQ\({}_{s}\), however, T5-small shows competitive performance and significantly outperforms T5-small\({}_{Delphi}\) on A4 for "q+a+t". We leave the investigation of this result to future work.
* In many cases, adding the transcript helps improve model performance, and is usually more effective than adding the video modality. Since T5-small\({}_{Delphi}\) is pretrained on a textual corpus, it is reasonable that adding the video modality may decrease its performance.
* Compared to DeSIQ\({}_{d}\), DeSIQ\({}_{s}\) is the more challenging dataset: except for the "a" setting, the performance of T5-small and T5-small\({}_{Delphi}\) drops on all others.
* Comparing the performance of "q+a+t"/"q+a+v" with "q+a", we observe that both T5-small and T5-small\({}_{Delphi}\) can still learn some shortcuts, as they achieve comparable performance when given only the question and answers as input.

Some examples are shown in Figure 5 of Appendix A, illustrating the influence of the different modalities. The first two examples show how the transcript and video features may provide clues for answering the question. For instance, the first example cannot be correctly answered based on "q+a" alone, since the transcript contains the required information. T5-small\({}_{Delphi}\) is the only model that predicts the correct option for the last example in Figure 5, which we attribute to Delphi's commonsense knowledge.

Table 7: GPT-3 performance on the A2 configuration of Social-IQ and DeSIQ, and ChatGPT performance on the A4 configuration of DeSIQ-2.0.

| Dataset | Input | Zero-shot | Few-shot |
| --- | --- | --- | --- |
| Social-IQ | q+a | 58.26 | 56.22 |
| Social-IQ | q+a+t | 64.63 | - |
| DeSIQ\({}_{d}\) | q+a | 54.78 | 54.13 |
| DeSIQ\({}_{d}\) | q+a+t | 59.79 | - |
| DeSIQ\({}_{s}\) | q+a | 54.39 | 53.29 |
| DeSIQ\({}_{s}\) | q+a+t | 60.13 | - |
| DeSIQ\({}_{d}\)-2.0 | q+a | 59.61 | 58.02 |
| DeSIQ\({}_{d}\)-2.0 | q+a+t | 59.24 | - |

### Results on DeSIQ-2.0

For DeSIQ-2.0, we can apply the multi-modal model to the raw videos and audios. The experimental results are shown in Table 6. Apart from observations similar to those on DeSIQ-1.0 above, some new conclusions can be drawn:

* Adding audio or video can help improve model performance. Moreover, audio is more effective, as the model achieves the overall best A4 score of 74.13% under the "q+a+s" setting.
* Employing raw transcripts can reduce model performance (\(57.23\%\to 52.02\%\)), as they are usually five times longer than the other input features, which can heavily influence the representation learning of the other inputs.
* Compared with ChatGPT in Table 7, our best result outperforms it by 24.52% on A4, which shows DeSIQ-2.0 to be a challenging dataset.

We also conducted experiments with the "a+t" and "a+v" settings, but do not include them in the tables. After debiasing, both settings bring the proposed model on DeSIQ-2.0 close to random-guess performance: 22.66% for "a+t" and 26.46% for "a+v". Thus, questions are necessary, as confirmed by comparison with the "q+a+t" and "q+a+v" results in Table 6.

### Further Research Questions

The above results demonstrate the lack of biases and the challenging nature of our DeSIQ datasets, as well as the promising performance of modestly-sized language models. They lead to the following important research directions for further investigation:

* Are there still noticeable biases in DeSIQ, and if so, how can it be further debiased?
* What is the performance of stronger language models on DeSIQ?
* How can socio-cultural and commonsense knowledge be effectively incorporated into large language models for this task?
* How can multi-modal language models be utilized to better exploit video and audio input?

## 5 Related Work

**Debiasing.** Shah et al. (2020) proposed a number of _expectations_ to examine a **model**'s performance on several multiple-choice QA datasets and observed that the model (RoBERTa) falls short of the expectations. Different from this work, we establish a systematic methodology, consisting of six novel methods, to examine a **dataset**, and we design experimental settings on both Social-IQ and Social-IQ-2.0.

Language dependence/prior is a **MODEL**-side bias that makes a model depend largely on one major modality (usually text); reducing it can be regarded as an optimization problem. Gat et al. (2020) try to balance the influence of text and image from the model side. Although that work includes the Social-IQ dataset and obtains positive results, it does not recognize the existence of the bias in the original Social-IQ dataset.

A shortcut is a **DATA**-side bias that lets a model easily learn patterns or repeated words in a dataset. For example, some keywords may occur both in the question and in the correct answer, but not in the incorrect answers, so that the model gets clues directly from this overlap. Ye and Kovashka (2021) identify such shortcuts and show their negative effects. However, they only modify the validation data and propose a masking approach for more robust training on the model side. In this paper, we start from the data side and also perform debiasing on the data side. Moreover, the bias we identify in the Social-IQ dataset is not of the same kind: it mainly resides in the answers and is much harder to debias on the data side. Thus, although the two problems share some similarities, we consider ours a new task.

**Multi-modal Question Answering.** With multiple input modalities, such as image and video, multi-modal question answering is more challenging and has been attracting increasing attention in the past few years. Datasets like MovieQA Tapaswi et al. (2016), TGIF-QA Jang et al. (2017), TVQA Lei et al. (2018) and TVQA+ Lei et al. (2020) provide images, GIFs or video clips in addition to text-based single-turn questions. There are also datasets, like AVSD Alamri et al.
(2019), that require dialogue history to predict answers to multi-turn questions. All these datasets evaluate models' capacity to perceive the contextual information contained in both text and non-text modalities.

**Social Intelligence Learning.** Understanding and reasoning about social commonsense knowledge and human interactions is essential for cognitive social intelligence. Bosselut et al. (2019) present a comprehensive study on automatic commonsense knowledge base construction, which mines the intents and reasons behind human behaviors. Jiang et al. (2021) propose a commonsense moral model to better understand social norms and make reliable ethical judgments on real-world human actions. In this paper, we focus on the Social-IQ dataset Zadeh et al. (2019), a benchmark that provides a diverse annotated set of videos and question-answer pairs. We run all our experiments on this dataset because it is much more closely related to social intelligence learning than other datasets.

## 6 Conclusion

Social intelligence is an essential ingredient for effective human-computer communication. In this paper, we analyze Social-IQ, a multiple-choice question answering benchmark dataset for social intelligence. Our empirical analysis reveals the severe biases present in Social-IQ, which can be easily exploited by modestly-sized language models such as T5-small to achieve perfect accuracy on its development set. We construct the DeSIQ benchmark by applying simple perturbation-based techniques to Social-IQ and show that DeSIQ vastly reduces the biases in Social-IQ. Moreover, we propose a new model architecture and set strong performance baselines for this challenging new dataset. Finally, our comprehensive analyses open up a number of important research questions for further investigation.

## Limitations

For the proposed model architecture designed to address the new DeSIQ benchmark, we mainly employ text-based language models and pretrain them on text-based corpora. The exploration of powerful multi-modal language models, instead of the projection functions used in this paper, is thus an important direction for future research. Due to resource constraints, all the experiments in this work were conducted only once, with the same random seed (42). Multiple runs with different random seeds would enable statistical significance tests of the results and thus make the findings more reliable.

## Ethics Statement

Although the benchmark is designed for studying human behaviors and for research purposes only, the resources and findings could be used in unexpected ways. For example, it is possible that harmful content exists in the Social-IQ dataset, and thus also in our DeSIQ datasets, based on which trained models could turn from a positive to a negative perspective. It is therefore prudent for researchers working on social intelligence to pledge to make only ethical use of our benchmark datasets.

## Acknowledgement

This material is based on research partially sponsored by the DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) program under award number FA8750-23-2-1016, the DARPA Knowledge Management at Scale and Speed (KMASS) program under award number HR00112220047, and the DARPA Computational Cultural Understanding (CCU) program under agreement number HR001122C0029. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
The authors are grateful to the anonymous reviewers for their helpful comments.
2304.13393
STIR: Siamese Transformer for Image Retrieval Postprocessing
Current metric learning approaches for image retrieval are usually based on learning a space of informative latent representations where simple approaches such as the cosine distance will work well. Recent state of the art methods such as HypViT move to more complex embedding spaces that may yield better results but are harder to scale to production environments. In this work, we first construct a simpler model based on triplet loss with hard negatives mining that performs at the state of the art level but does not have these drawbacks. Second, we introduce a novel approach for image retrieval postprocessing called Siamese Transformer for Image Retrieval (STIR) that reranks several top outputs in a single forward pass. Unlike previously proposed Reranking Transformers, STIR does not rely on global/local feature extraction and directly compares a query image and a retrieved candidate on pixel level with the usage of attention mechanism. The resulting approach defines a new state of the art on standard image retrieval datasets: Stanford Online Products and DeepFashion In-shop. We also release the source code at https://github.com/OML-Team/open-metric-learning/tree/main/pipelines/postprocessing/ and an interactive demo of our approach at https://dapladoc-oml-postprocessing-demo-srcappmain-pfh2g0.streamlit.app/
Aleksei Shabanov, Aleksei Tarasov, Sergey Nikolenko
2023-04-26T09:10:15Z
http://arxiv.org/abs/2304.13393v2
# STIR: Siamese Transformer for Image Retrieval Postprocessing ###### Abstract Current metric learning approaches for image retrieval are usually based on learning a space of informative latent representations where simple approaches such as the cosine distance will work well. Recent state of the art methods such as HypViT move to more complex embedding spaces that may yield better results but are harder to scale to production environments. In this work, we first construct a simpler model based on triplet loss with hard negatives mining that performs at the state of the art level but does not have these drawbacks. Second, we introduce a novel approach for image retrieval postprocessing called Siamese Transformer for Image Retrieval (STIR) that reranks several top outputs in a single forward pass. Unlike previously proposed Reranking Transformers, STIR does not rely on global/local feature extraction and directly compares a query image and a retrieved candidate on the pixel level with the use of an attention mechanism. The resulting approach defines a new state of the art on standard image retrieval datasets: Stanford Online Products and DeepFashion In-shop. We also release the source code¹ and an interactive demo² of our approach.

Footnote 1: [https://github.com/OML-Team/open-metric-learning/tree/main/pipelines/postprocessing/](https://github.com/OML-Team/open-metric-learning/tree/main/pipelines/postprocessing/)

Footnote 2: [https://dapladoc-oml-postprocessing-demo-srcappmain-pfh2g0.streamlit.app/](https://dapladoc-oml-postprocessing-demo-srcappmain-pfh2g0.streamlit.app/)

## 1 Introduction

Modern approaches to metric learning and image retrieval usually employ a standard pretrained backbone that is fine-tuned for the task with a metric learning objective such as the triplet loss [1]. Much of the progress in the field has concentrated on ways to improve upon the basic triplet loss. The backbones have usually been standard successful deep architectures, first convolutional ones such as ResNet-50 and later Transformer-based ones such as the Vision Transformer (ViT) [1]. In 2020, Musgrave et al. [16] performed an experimental evaluation of a long line of metric learning results and found little improvement over standard approaches, concluding that the (undeniable) progress had been mostly due to steadily improving backbones. Since then, metric learning and information retrieval have become dominated by Transformer-based architectures, with ViT [1] being especially influential for image retrieval. The standard baseline today is a ViT-like model fine-tuned with the triplet loss to produce a latent space where the dot product of embeddings corresponds to the similarity needed for the retrieval problem. Still, the latest works claim significant improvements with a variety of new loss functions [14, 15] and even with remapping the embeddings into a hyperbolic space [1] (see Section 2).

Figure 1: STIR postprocessing on a real example from the In-Shop dataset: the reranking model receives as input concatenated query and gallery images and outputs the probability of them being a negative pair.

Our first contribution in this work is to go back and re-evaluate the standard triplet loss-based approach with a ViT backbone, which we call _ViT-Triplet_ (Fig. 2a). We find that with a brief tuning of hyperparameters and an efficient implementation, ViT-Triplet outperforms state of the art results in some settings and reaches similar results in others.
Apart from improved results, ViT-Triplet, in our opinion, is a better option in practice, since other solutions are either harder to bring to production environments or require more complicated tuning. Second, we consider postprocessing for ViT-Triplet in the form of reranking the top results. Reranking for image retrieval has a long history [1, 1, 2, 3], but it has usually been applied to relatively weak models, where one needed to rerank hundreds of results. We consider reranking of ViT-Triplet output and note that since the results of ViT-Triplet are already quite good, we can concentrate on reranking only the top few results, which allows us to use much more computationally intensive methods. We present the _Siamese Transformer for Image Retrieval_ (STIR) model that uses a ViT architecture to process a concatenation of each query-result pair, with a small MLP head on top (Fig. 2b). We show that STIR indeed improves over ViT-Triplet and prior art and thus sets a new state of the art for several well-known image retrieval datasets. Fig. 1 illustrates sample STIR reranking on the In-Shop dataset [1]: the distances on top are the result of ViT-Triplet, the distances at the bottom are the results of STIR, and the ground truth answer is highlighted in green. The paper is organized as follows: Section 2 reviews related work, Section 3 introduces ViT-Triplet and STIR reranking, Section 4 presents our evaluation results, and Section 5 concludes the paper.

## 2 Related work

We identify two relevant directions of related work. First, image retrieval itself, where the best recent work employs Transformer-based backbones. Vision Transformers [1] were fine-tuned for image retrieval by the IRT model [1] based on the DeiT distillation approach [2]. Hyperbolic Vision Transformers (Hyp-ViT) [1] reach state of the art results on several datasets by using pairwise cross-entropy with hyperbolic distances measured on the Poincaré ball. However, the resulting embeddings are not suitable for most existing vector search engines, which rely on algorithms optimized for Euclidean spaces, so Hyp-ViT is hard to bring to production environments. Moreover, Hyp-ViT defines an entire family of models (six variations, with two embedding sizes each) that need to be evaluated in each case, which we view as a kind of hyperparameter tuning. Another direction of study introduces new loss functions that approximate or provide bounds for non-differentiable retrieval metrics. ROADMAP [16] presents a decomposable differentiable upper bound for average precision. Patel et al. [2] proposed a differentiable surrogate loss for recall optimization, further augmented with mixup regularization; this, however, also leads to a set of new hyperparameters, such as sigmoid temperatures, that need to be tuned. We also note the HAPPIER model that proposes a new loss function for hierarchical image retrieval [14]. Interestingly, despite the prevalence of Transformers, some of the top results are still produced by CNNs: a combination of multiple CNN-based global descriptors was proposed in [15], while the standard ResNet-50 backbone has been leveraged with the ProxyNCA++ method (an update on proxy-neighborhood component analysis) in [13] and with the Metrix loss function that extends mixup to metric learning objectives (including the triplet loss) in [21]. Second, postprocessing (reranking) approaches specifically are rare in recent works. Classical approaches usually reranked image retrieval results based on local descriptors extracted from the images [1, 1, 2].
We note _SuperGlue_, which used graph neural networks to link local descriptors [12], and the _Reranking Transformer_ (RRT) approach that uses a Transformer to process global and local descriptors extracted from an image pair [20]. Unlike most approaches that rerank a large set of results (at least several hundred), we aim to correct an already high-performing Transformer-based model, so we concentrate on (relatively heavyweight) postprocessing of a few top results.

## 3 Method

### ViT-Triplet

For the ViT-Triplet model, we follow the approach from [1, 13] and fine-tune a ViT backbone for image retrieval as shown in Fig. 2a. We form batches by taking \(P\) labels (item ids) and \(K\) instances (images) for each label. To decrease the number of hyperparameters, we set \(K=4\), since the median size of a class is 5 in In-Shop and 4 in SOP, so we can avoid severe under- or oversampling. The parameter \(P\) is chosen so as to fill the GPU memory (\(P=150\) in our case for an NVIDIA V100 GPU). After a batch is sampled, we perform hard triplet mining to form \(PK\) triplets; namely, we calculate the distance matrix between the embeddings of the images in the batch and take, for each image, the hardest positive sample (same label, maximum distance) and the hardest negative sample (different label, minimum distance). Then we compute the triplet loss

\[L(q,p,n)=\max\left(0,d(q,p)-d(q,n)+m\right)\]

for a query image \(q\), positive sample \(p\), negative sample \(n\), distance \(d(\cdot,\cdot)\) in the embedding space, and constant margin \(m\); we set \(m=0.15\) in all experiments (in previous works, it was usually chosen as \(m\in[0.1,0.2]\)). We name the resulting model _ViT-Triplet_; a code sketch of the mining step is given below.

### Siamese Transformer

Qualitative error analysis has shown that many mistakes may be caused by the fact that the feature extractor has to "blindly" represent a given image as a vector in the latent space, without knowing what other images it will be compared against. In a perfect world, all the information needed for this comparison would already be included in the feature vector, but in reality, the model can benefit a lot from a direct side-by-side comparison of the images. Another motivation comes from the distribution of results. For instance, the CMC@1 metric for _ViT-Triplet_ on the In-Shop dataset is 92.1%, but CMC@5 reaches 97.6%, i.e., less than a third of the error rate. This means that we usually already have a correct answer near the top of the list, and a side-by-side comparison may help us push it into first place. In general, with a good feature extractor, CMC@k saturates for relatively small values of \(k\), such as \(k=5\), so we do not need to rerank much and can afford a relatively heavyweight model.

We suggest using a reranking model that performs pairwise comparisons of the query image and the top retrieved results. We want our pairwise postprocessor to have the following properties:

1. it has to reuse an already trained feature extractor, ideally without new large trainable networks;
2. it has to have an attention mechanism to compare the regions of a query image and the regions of a gallery image pairwise;
3. it has to be interpretable, or at least provide some mechanism that can be used to interpret the results;
4. it has to be simple and not require additional manual labeling or extra data.
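As referenced in Sec. 3.1 above, here is a minimal sketch of the batch-hard mining and triplet loss used to train ViT-Triplet. It is our own illustration under the stated setup (pairwise Euclidean distances, margin \(m=0.15\)), not the released implementation, and it assumes every label in the batch has at least two instances.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(emb: torch.Tensor, labels: torch.Tensor,
                            margin: float = 0.15) -> torch.Tensor:
    """For every anchor in the batch, pick the farthest positive and the
    closest negative, then apply L(q, p, n) = max(0, d(q,p) - d(q,n) + m)."""
    d = torch.cdist(emb, emb)                      # (B, B) pairwise distances
    same = labels[:, None] == labels[None, :]      # True for same-label pairs
    pos = d.masked_fill(~same, float("-inf"))      # keep positive pairs only
    neg = d.masked_fill(same, float("inf"))        # keep negative pairs only
    hardest_pos = pos.max(dim=1).values            # hardest positive per anchor
    hardest_neg = neg.min(dim=1).values            # hardest negative per anchor
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```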
Thus, we propose to use the _Siamese Transformer for Image Retrieval_ (STIR) model, which is a ViT feature extractor with an additional MLP on top that takes two concatenated images as input and returns the probability that these images form a negative pair (Fig. 2b). This output can also be interpreted as a "distance": a lower probability means more similar images.

Figure 2: Network architectures: (a) ViT-Triplet; (b) STIR postprocessing.

STIR satisfies the requirements above:

1. a pretrained _ViT-Triplet_ is used for initialization;
2. the built-in ViT attention mechanism considers the interactions of patches both inside an image and across images;
3. the resulting attention maps help to achieve interpretability;
4. the only overhead is a two-layer MLP, and the input can reuse the same image pairs.

To train the postprocessor, similarly to _ViT-Triplet_ we form batches by taking \(P\) labels and \(K\) images for each label, with the same \(K=4\) but \(P=30\) instead of \(150\), since STIR has a larger memory footprint due to its larger input size. After a batch is sampled, we mine hard pairs (pairs with the largest distances and the same label, or with the smallest distances and different labels), concatenate the images, and feed them to STIR, which predicts the probability of a pair being negative. We use binary cross-entropy as the objective function for STIR.

Another important property of STIR is that it is _asymmetric_, i.e., the results may depend on whether we put the query on the left and a gallery image on the right or vice versa. We have not found significant differences between these two options, but the results improve a little further if we symmetrize STIR by averaging the two scores. We call this version _STIR-Symmetric_ in the tables below and propose it as a slightly improved version of STIR reranking, at the additional cost of running the model twice.

## 4 Evaluation

### Datasets and experimental setup

We concentrate on two standard image retrieval datasets. The _In-shop Clothes Retrieval Benchmark_ (**In-Shop**) dataset [16] is a part of the _DeepFashion_ dataset with \(7\,982\) clothing items and \(52\,712\) high-quality in-shop images, with a median of \(5\) photos per item. There are \(25\,882\) images in the training set and \(26\,830\) images in the test set, which is divided into two non-overlapping parts: the query set (\(14\,128\) images) and the gallery (the search index, \(12\,612\) images). Each query image corresponds to one or more images of the same clothing item in the gallery. _Stanford Online Products_ (**SOP**) [17] has \(22\,634\) online products with \(120\,053\) related images, with a median of \(4\) photos per item. There are \(11\,318\) products (\(59\,551\) images) in the training set and \(11\,316\) products (\(60\,502\) images) in the test set. The test set has no fixed query-gallery split, so we consider each individual photo as a query and evaluate it against the rest of the images in the test set. To ensure a fair comparison, we copy (as much as possible) the parameters and backbone models for ViT-Triplet from Hyp-ViT [15]. The model architecture is ViT-S/16 (small version, patch size 16), the optimizer is AdamW with lr=1e-5, the image size is 224, and the augmentation transforms are Horizontal Flip and Random Resized Crop with the scale randomly chosen from \((0.2,1.0)\); we did not use any additional information from the data such as bounding boxes or category labels.
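Before detailing the STIR training setup, here is a sketch of how the trained STIR model is applied at inference time (Sec. 3.2), including the STIR-Symmetric averaging of the two concatenation orders. The `stir` argument is assumed to be a callable that internally concatenates an image pair and returns the probability of it being a negative pair; all names are ours.

```python
import torch

@torch.no_grad()
def stir_rerank(stir, query, gallery, top_idx, symmetric=False):
    """Rerank the gallery candidates in `top_idx` for a single query by the
    predicted probability of being a negative pair (lower = more similar)."""
    cands = gallery[top_idx]                       # (n, C, H, W) top candidates
    q = query.unsqueeze(0).expand_as(cands)        # repeat the query n times
    p = stir(q, cands)                             # P(negative | query, cand)
    if symmetric:                                  # STIR-Symmetric: average the
        p = 0.5 * (p + stir(cands, q))             # two concatenation orders
    return top_idx[torch.argsort(p)]               # ascending "distance"
```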
The training setup for STIR is mostly the same as for ViT-Triplet: image size 224 and the same augmentations, but with a less aggressive Random Resized Crop (sampling the scale parameter from \((0.8,1)\)) so that STIR can compare almost the entire two images side by side. We used the AdamW optimizer with a learning rate of 2e-3 for the first 3 epochs, when we fine-tune the MLP head only, and 1e-5 for the rest of the training. The MLP head consists of two fully connected layers of sizes (384, 192) and (192, 1) respectively, separated by a dropout layer with probability \(p=0.5\), with a sigmoid activation function. We run all training experiments on two NVIDIA V100 GPUs with half-precision turned on. For the final metrics evaluation, we use only one GPU and turn off half-precision. In the evaluation tables, all external results are taken from the corresponding papers, except for surrogate recall: the original work [14] reports only the ViT-B version, which is better than the ViT-S-based results in Table 1. Therefore, we have re-evaluated surrogate recall with ViT-S using the original code [14].

### Evaluation metrics

Most works on metric learning and information retrieval report Recall@k for various values of \(k\) as the primary evaluation metric. Interestingly, there is a significant discrepancy between the understanding of recall in classical information retrieval and the "recall" metric used in many works on metric learning. Usually, recall is defined as

\[\text{Recall@}k=\frac{n_{\text{k}}}{n_{\text{gt}}},\]

where \(n_{\text{k}}\) is the number of ground truth results among the top \(k\) retrieved results and \(n_{\text{gt}}\) is the total number of ground truth results. However, metric learning works often report, e.g., Recall@1 values close to 1 even when there exist several ground truth answers to a query, \(n_{\text{gt}}>1\), in which case Recall@1 should be bounded by \(1/n_{\text{gt}}\). This is because instead of recall they actually report the _cumulative matching characteristics_ (CMC) metric:

\[\text{CMC@}k=\begin{cases}1,&\text{if a correct answer is among the top }k\text{ retrieved results},\\ 0,&\text{otherwise}.\end{cases}\]

For datasets with exactly one ground truth answer for every query, CMC and recall coincide; however, this is not the case for the In-Shop and SOP datasets, so we keep the CMC terminology in the evaluation tables. We also note that since In-Shop and SOP have several correct answers, the _precision_ metric,

\[\text{Precision@}k=\frac{n_{\text{k}}}{k},\]

also makes sense for evaluation. Therefore, below we also report mean average precision (mAP) values, where

\[\text{AP@}k=\frac{1}{n_{\text{k}}}\sum_{i=1}^{k}\left[\text{result }i\text{ is correct}\right]\cdot\text{Precision@}i\]

is the average precision (the area under the precision-recall curve), and mAP@k is AP@k averaged over the test set queries. Unfortunately, we have nothing to compare against in terms of mAP, since its values have not been reported in previous works.
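As a concrete reading of the definitions above, the following sketch (ours, assuming a boolean relevance matrix with one row per query and columns sorted by retrieval rank) computes CMC@k and mAP@k.

```python
import numpy as np

def cmc_at_k(is_correct: np.ndarray, k: int) -> float:
    """CMC@k: fraction of queries with at least one hit in the top k."""
    return is_correct[:, :k].any(axis=1).mean()

def map_at_k(is_correct: np.ndarray, k: int) -> float:
    """mAP@k with AP@k = (1 / n_k) * sum of Precision@i over correct i <= k."""
    top = is_correct[:, :k].astype(float)
    prec = np.cumsum(top, axis=1) / np.arange(1, k + 1)  # Precision@i
    n_k = top.sum(axis=1)                                # hits in the top k
    ap = (prec * top).sum(axis=1) / np.maximum(n_k, 1)   # AP@k per query
    return ap.mean()
```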
This supports our conclusion that most of the latest progress in image retrieval has been due to steadily improving Transformer-based backbones, and a well-trained ViT backbone with a straightforward triplet loss is still a very competitive approach to image retrieval. Second, Table 1 shows how STIR postprocessing improves the results of ViT-Triplet, outperforming the best previous results (including ViT-Triplet itself) and Reranking Transformers in the CMC@1 metric (Recall@1). Note that since STIR in Table 1 is limited to reranking the top \(n=5\) results, it cannot change the CMC@k and Recall@k metrics for \(k\geq 5\), so the rest of the results coincide with ViT-Triplet. STIR results improve monotonically with \(n\), but we have chosen \(n=5\) to report in Table 1 because in this case STIR postprocessing has the same running time as Reranking Transformers [13].

In Table 2, we report mean average precision scores for our methods; we do not have the results of other approaches here, so we show these numbers to provide a baseline and hope that later works will measure mAP as well. Table 3 presents the results of our ablation study for STIR variations differing by the number \(n\) of the results they rerank. Since STIR requires running a ViT-based model for each query-gallery pair, it is a relatively heavyweight approach to postprocessing, so we limit the comparison to small values of \(n\). We see that the CMC@1 (Recall@1) metric saturates quickly as we increase \(n\). Table 3 also shows the advantages of mAP in this case: it is a holistic metric that improves as positive samples move closer to the top of the list, so mAP@5 and mAP@10 can be used to detect improvements, while CMC@10, naturally, remains unchanged when we rerank top results for \(n\leq 10\).

### Qualitative analysis

Figure 3 shows several reranking examples from the In-Shop dataset as shown in our interactive demo of STIR3. Fig. 3a and Fig. 3b show two results where the reranking improves the results according to the ground truth labeled in the test set; in particular, in both cases the best (top-1) result has been corrected from wrong to right.

Footnote 3: [https://dapladoc-oml-postprocessing-demo-srappmain-pfh2g0.streamlit.app/](https://dapladoc-oml-postprocessing-demo-srappmain-pfh2g0.streamlit.app/)

Fig. 3c shows a result where STIR reranking actually makes the output worse according to the ground truth labeling. Note, however, that the ground truth results in this case are problematic themselves: they deal only with the shirt of the model, while the query clearly shows both the shirt and jeans that do not match in the second "correct" answer. Unfortunately, such ambiguous results are encountered in existing datasets quite often, so we note this as a direction for further improvement that might help the entire field of image retrieval. Note also that the In-Shop dataset is supposed to care about the cut and fashion of a clothing item rather than color, so the model is supposed to retrieve the same item in different colors as well, which often increases the ambiguity.

## 5 Conclusion

In this work, we have presented a simple _ViT-Triplet_ model that uses the ViT backbone and the triplet loss and have shown that it consistently reaches or exceeds state of the art results in image retrieval. Thus, a straightforward solution with the best available backbone and a well-tuned training process still remains at the state of the art level in image retrieval.
Moreover, we have presented a postprocessing approach called STIR that reranks top results by an additional pass of ViT over concatenated query and gallery images; STIR is a heavyweight postprocessing method aimed at improving the top of the list. Our experimental study on the SOP and In-Shop datasets has shown that STIR can indeed significantly improve retrieval results. We also release a library that implements our methods and can reproduce all our results4.

Footnote 4: [https://github.com/OML-Team/open-metric-learning](https://github.com/OML-Team/open-metric-learning)

\begin{table} \begin{tabular}{l c|c c|c c} \hline \hline **Model** & **Emb.** & \multicolumn{2}{c|}{**SOP, mAP**} & \multicolumn{2}{c}{**In-Shop, mAP**} \\ & & **@5** & **@10** & **@5** & **@10** \\ \hline ViT-Triplet & 384 & 87.6 & 85.1 & 91.6 & 88.4 \\ STIR, \(n=5\) & 384 & 89.4 & 86.5 & 94.8 & 91.0 \\ STIR-Symmetric, \(n=5\) & 384 & **89.5** & **86.6** & **95.0** & **91.2** \\ \hline \hline \end{tabular} \end{table}

Table 2: Mean average precision on SOP and In-Shop datasets.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline **Model** & **CMC@1** & **CMC@10** & **mAP@5** & **mAP@10** \\ \hline STIR, \(n=3\) & 94.4 & **98.5** & 93.1 & 89.6 \\ STIR, \(n=5\) & **94.9** & **98.5** & **94.8** & 91.0 \\ STIR, \(n=7\) & **94.9** & **98.5** & 94.7 & 92.0 \\ STIR, \(n=9\) & **94.9** & **98.5** & 94.5 & **92.7** \\ \hline \hline \end{tabular} \end{table}

Table 3: Ablation study for STIR, In-Shop dataset.

Figure 3: STIR postprocessing examples from the interactive demo.

We note several directions for further work. First, we only consider direct query-to-gallery interactions, while gallery-to-gallery interactions are left indirect. Second, STIR processes the original concatenated images, which makes it relatively slow, and there may be at least two ways to address this problem. One option is to replace the backbone: our current ViT has quadratic complexity with respect to image size, and an architecture such as the Swin Transformer [11] that reduces this complexity to linear may significantly speed up postprocessing. Another option is to replace the original images with descriptors obtained from intermediate layers of the feature extractor during the first stage; much of the semantic information is already contained in these features, but this would require a different architecture, which we leave for future work.
2302.10514
Spectrum of QCD with one flavor: A window for supersymmetric dynamics
We compute the spectrum of the low-lying mesonic states with vector, scalar and pseudoscalar quantum numbers in QCD with one flavour. With three colours the fundamental and the two-index anti-symmetric representations of the gauge group coincide. The latter is an orientifold theory that maps into the bosonic sector of $\mathcal{N} = 1$ super Yang-Mills theory in the large number of colours limit. We employ Wilson fermions along with tree-level improvement in the gluonic and fermionic parts of the action. In this setup the Dirac operator can develop real negative eigenvalues. We therefore perform a detailed study in order to identify configurations where the fermion determinant is negative and eventually reweight them. We finally compare results with effective field theory predictions valid in the large $N_C$ limit and find reasonably consistent values despite $N_C$ being only three. Additionally, the spin-one sector provides a novel window for supersymmetric dynamics.
Michele Della Morte, Benjamin Jäger, Francesco Sannino, Justus Tobias Tsang, Felix P. G. Ziegler
2023-02-21T08:42:32Z
http://arxiv.org/abs/2302.10514v2
# The spectrum of QCD with one flavour: A Window for Supersymmetric Dynamics ###### Abstract We compute the spectrum of the low-lying mesonic states with vector, scalar and pseudoscalar quantum numbers in QCD with one flavour. With three colours the fundamental and the two-index anti-symmetric representations of the gauge group coincide. The latter is an orientifold theory that maps into the bosonic sector of \(\mathcal{N}=1\) super Yang-Mills theory in the large number of colours limit. We employ Wilson fermions along with tree-level improvement in the gluonic and fermionic parts of the action. In this setup the Dirac operator can develop real negative eigenvalues. We therefore perform a detailed study in order to identify configurations where the fermion determinant is negative and eventually reweight them. We finally compare results with effective field theory predictions valid in the large \(N_{C}\) limit and find reasonably consistent values despite \(N_{C}\) being only three. Additionally, the spin-one sector provides a novel window for supersymmetric dynamics. + Footnote †: preprint: CERN-TH-2023-028 ###### Contents * I Introduction * II Simulation setup * III Eigenvalue analysis * IV Correlator analysis * IV.1 Construction of correlation functions * IV.2 Reweighting and vacuum expectation value subtraction * IV.3 Correlation function fits * V Analysis of the spectrum * V.1 Defining the chiral limit * V.2 Assignment of states * V.3 Extrapolation to zero quark mass * VI Discussion and Outlook * VII Acknowledgements * A Distribution of the topological charge * B Results of the correlation function fits ## I Introduction Understanding the dynamics of strongly coupled gauge theories, such as QCD, has motivated the construction of several expansions complementary to the standard, perturbative, weak coupling expansion. One of the most prominent examples is the large \(N_{C}\) limit (where \(N_{C}\) is the number of colours), introduced by 't Hooft in Ref. [1]. In this case one keeps quarks in the fundamental representation of the gauge group \(SU(N_{C})\) and organises an expansion in \(1/N_{C}\) using a diagrammatic approach. Several properties of QCD can then be understood in a simple way, suggesting that \(N_{C}=3\) is "large". However, since quark loops are suppressed in this expansion, the properties of the \(\eta^{\prime}\)-meson are not well reproduced in the 't Hooft large \(N_{C}\) limit. Baryons also become increasingly heavy as \(N_{C}\) grows. Partly motivated by that, Corrigan and Ramond (CR) introduced a different large \(N_{C}\) expansion in Ref. [2], in which quarks transform according to the two-index antisymmetric representation of the gauge group. While 't Hooft and CR expansions coincide for \(N_{C}=3\), they are very different in the large \(N_{C}\) limit. Notably, in the CR expansion, quark loops are not suppressed as \(N_{C}\to\infty\). A simple scaling of the dimensionality of the representations of the quark fields suggests that the CR large \(N_{C}\) limit may share non-trivial dynamical properties with supersymmetric theories. This relation has been made precise by Armoni, Shifman and Veneziano in Refs. [3; 4], where a connection between the mesonic sectors of the two-index (anti-)symmetric theories and of \(\mathcal{N}=1\) super Yang-Mills theory (sYM) is established. The subtle issues of the confinement properties and (in)equivalences at large \(N_{C}\) were investigated in Ref. [5]. Further developing the correspondence, in Ref. 
[6] supersymmetry inspired effective Lagrangians have been constructed for gauge theories featuring one Dirac fermion transforming either in the symmetric or in the anti-symmetric two-index representation of the gauge group \(SU(N_{C})\) (orientifold theories). At leading order in the \(1/N_{C}\) expansion such effective theories coincide with that of super symmetric gluodynamics restricted to its mesonic sector. These correspondences imply that non-perturbative quantities computed in orientifold theories can be related, up to \(1/N_{C}\) effects, to the analogous ones in sYM. By considering \(1/N_{C}\) supersymmetry breaking effects, including the explicit ones due to a finite quark mass, a number of predictions are made in Ref. [6] concerning the spectrum of the low-lying mesonic states. In this work we confront such predictions with non-perturbative results produced by means of lattice simulations. For simplicity, in this first study we only consider \(N_{C}=3\), which corresponds to one-flavour QCD. This has the advantage that available simulation packages for lattice QCD can be used without having to develop new code for handling representations of the fermionic fields different from the fundamental one. Future studies will be devoted to the extension to \(N_{C}>3\). Intriguingly, by flipping the point of view (cf. Refs. [5; 7]), we can use QCD results to learn about the spectrum and dynamics of supersymmetric theories, in particular \(\mathcal{N}=1\) sYM. Analytic and numerical studies can now be employed to investigate several dynamical properties, including the theta-angle [6]. One-flavour QCD has been the object of several previous lattice studies. In Ref. [8] the quark condensate has been computed by comparing the density of low-lying eigenvalues of the overlap Dirac operator to predictions from Random Matrix Theory [9; 10]. The result is consistent with the prediction for the gluino condensate in sYM obtained in Ref. [11]. Using Wilson fermions, Ref. [12] presents a computation of the low-lying hadronic spectrum of one-flavour QCD. We improve here on that computation by considering a finer lattice spacing, larger volumes and a tree-level improved fermionic action. In Ref. [13] the one-flavour \(SU(2)\) vector gauge theory with the fermion in the fundamental representation is studied as a possible composite model for Dark Matter. The Dirac operator is discretised using Wilson's regularisation. The fundamental representation of \(SU(2)\) is pseudo-real making the global symmetries and dynamics different from three colours QCD. In particular, the dark-matter model of Ref. [13] features a mass-gap with vector mesons being the lightest triplet of the enhanced \(SU(2)\) global symmetry. A similar DM model based on \(SU(2)\) gauge theory with scalar quarks was proposed in Ref. [14]. Finally, in Ref. [15] the single flavour \(SU(2)\) theory is considered with the fermion in the adjoint representation. The goal in this case is to gain insights on the emergence of the conformal window. Again the Wilson Dirac operator is used in the numerical simulations. As is highlighted by this brief review, one-flavour QCD is implemented on the lattice by adopting either overlap (or more generally Ginsparg-Wilson) or Wilson fermions. That is because in those cases the single-flavour lattice Dirac operator can be rigorously defined. 
Wilson fermions are computationally cheaper but in such regularisation the spectrum of the Dirac operator may contain real negative eigenvalues for positive (but small) quark masses. That might cause a sign problem as the fermion determinant may become negative on some configurations. Following Refs. [16; 17; 18; 19] we discuss in detail how we monitor such cases. A preliminary account of the results we present in this paper appeared in Refs. [20; 21]. The latter also contains some algorithmic exploratory studies for \(N_{C}=4,5\) and \(6\). The remainder of this paper is organised as follows. In Section II we describe our computational setup and provide algorithmic details. In Section III we investigate the consequences of the sign problem in our simulations. In Section IV we report on the correlation function fits required to extract the spectrum at non-zero quark masses, before extrapolating the meson spectrum to vanishing quark masses in Sec. V. Finally, in Section VI we confront the effective field theory predictions with our results and provide an outlook. ## II Simulation setup For the gauge part of the action, we employ the Symanzik improved gauge action [22] with a fixed value for the gauge coupling of \(\beta=4.5\). As fermion action we use one flavour of tree-level improved Wilson fermions [23] and set the parameter of the clover term to \(1\). The Wilson-Dirac operator \(D\) in clover improved form is defined as follows \[D(m_{0})= \frac{1}{2}\sum_{\nu=0}^{3}\left(\gamma_{\nu}(\nabla_{\nu}^{*}+ \nabla_{\nu})-a\nabla_{\nu}^{*}\nabla_{\nu}\right)\] \[+\ a_{\rm SW}\sum_{\nu,\rho=0}^{3}\frac{i}{4}\sigma_{\nu\rho}\hat {F}_{\nu\rho}+m_{0}\,, \tag{1}\] where \(a\) is the lattice spacing, \(m_{0}\) is the bare quark mass and \(\nabla_{\nu}^{(*)}\) denotes the covariant forward (backward) derivative. The hopping parameter \(\kappa\) is related to the bare mass \(m_{0}\) by \(1/\kappa\ =\ 2(am_{0}+4)\). In order to map out the relevant parameter space we generated \(19\) gauge field ensembles covering different hopping parameters \(\kappa\) between \(0.1350\) and \(0.1410\) and volumes ranging from \(12^{3}\times 64\) to \(32^{3}\times 64\). An overview of the simulation parameters can be found in Table 1. We measure the topological charge \(Q\) by integrating the Wilson flow [24] using a third-order Runge-Kutta scheme with a step-size of \(\epsilon=0.01\) and \(1600\) integration steps. The topological charge at the largest flow time (\(t/a^{2}=16\)) is shown for all ensembles in Fig. 18 in Appendix A. The topological charge behaves as expected: its distribution is narrower for lighter quark masses and broader for larger volumes [9]. The Wilson flow further allows us to estimate the lattice spacing (via the reference flow scale \(t_{0}\)) by studying the Yang-Mills gauge action density as a function of flow-time [24]. Since our goal is to determine dimensionless quantities, we only quote the lattice spacing in order to enable qualitative comparison with other lattice calculations. As there is no reference scale for a single flavour (\(N_{f}=1\)), we use the average of \(t_{0}\) from \(N_{f}=0\)[24] and \(N_{f}=2\)[25] as an estimate for the lattice spacing with \(N_{f}=1\). In practice, we use a value of \(\sqrt{8t_{0}}=0.45\,\mathrm{fm}\). This allows us to obtain an indicative value for the lattice spacing of \(a\approx 0.06\,\mathrm{fm}\). All configurations are generated using the openQCD software package [26]. 
Since we only simulate a single fermion in the sea, it is necessary to use the rational hybrid Monte Carlo (RHMC) algorithm [27]. In the rational approximation we adopt a Zolotarev functional of degree 10. In the absence of prior knowledge about the optimal Zolotarev approximation - in particular for just one flavour - we choose a conservative range of 0.002 and 9.0 as a lower and upper bound for the position of the poles. In comparison with Ref. [19] this is a rather loose approximation, which is relevant for the tunnelling between regions of configuration space with positive and negative determinants of the Dirac operator. In addition, we include frequency splitting, i.e. we factorise the Zolotarev rational into two terms, where the first factor contains the poles 1 to 5 and the second term the contribution from poles 6 to 10. Throughout the entire generation, we adopt three levels of integration schemes. The outermost employs a second-order Omelyan integrator [28] with \(\lambda=1/6\), which is used for the contributions from poles 6 to 10. For the inner two levels we use fourth-order Omelyan integrators, where the remaining fermion force is calculated in the second, and the gauge forces in the innermost level. We tune the number of fermion integration steps (ML steps) in the different levels to achieve a high acceptance (between 84% and 99.9%, c.f. Table 1). The pseudofermion actions and forces are obtained using a simple multi-shift conjugate gradient solver. For ensembles with a lighter quark mass, i.e. with larger values of \(\kappa\), we take advantage of the deflated SAP [29; 30] preconditioned solver given in the openQCD framework. The trajectory lengths of our ensembles are typically between 2 and 3 molecular dynamic (MD) units. In our analysis, we use every 32nd (or 40th) trajectory, which implies that configurations are at least 64 MD units apart from each other. For each ensemble the resulting number of configurations \(N_{\mathrm{config}}\) on which we perform all measurements is listed in Table 1. To increase the amount of statistics and to utilise smaller computing resources more efficiently, we branch our simulation stream into multiple replicas after thermalisation is reached. Since the Zolotarev approximation in the RHMC is not exact, we correct our observables by using a reweighting scheme. To achieve this, on each configuration we compute four estimators for the reweighting factors \(w_{i}\) using code from the openQCD package. The correctly reweighted gauge average of an observable \(O\) is then given by \[\left\langle O\right\rangle_{\mathrm{rew}}=\frac{\left\langle wO\right\rangle }{\left\langle w\right\rangle}=\left\langle w^{\prime}O\right\rangle\,, \tag{2}\] where we define \(w^{\prime}=w/\left\langle w\right\rangle\). Figure 1 shows these normalised reweighting factors \(w^{\prime}\) as a function of the trajectory length (excluding any thermalisation times) for the \(L/a=32\), \(\kappa=0.1390\) ensemble. In Fig. 2 we show the variation of the reweighting factors for all ensembles and observe that the fluctuations increase with volume, but are insensitive to the quark mass. 
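As a small, self-contained illustration of the reweighted average in Eq. (2), the following numpy sketch computes \(\langle O\rangle_{\rm rew}=\langle wO\rangle/\langle w\rangle\) from per-configuration measurements. The bootstrap error estimate is our own addition for illustration only and is not a description of the error analysis performed in this paper:

```python
import numpy as np

def reweighted_average(obs, w, n_boot=1000, seed=0):
    """<O>_rew = <w O> / <w>, cf. Eq. (2), with w' = w / <w>.
    obs, w: per-configuration observable and reweighting factor."""
    obs, w = np.asarray(obs, float), np.asarray(w, float)
    mean = np.mean(w * obs) / np.mean(w)
    # naive bootstrap over configurations (ignores autocorrelations,
    # which the measurement spacing of >= 64 MDU is meant to suppress)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(obs), size=(n_boot, len(obs)))
    boot = np.mean(w[idx] * obs[idx], axis=1) / np.mean(w[idx], axis=1)
    return mean, np.std(boot, ddof=1)
```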
As the phase space of this theory in the regularisation we have chosen is a priori unknown, we computed the trace of the Polyakov loop. We find that the Polyakov loop vanishes within errors on each ensemble, which indicates that we are simulating in the confined phase.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \(L/a\) & \(\kappa\) & ML steps & \(\tau_{MD}\) & \(\Delta\)cfg & MDU & \(N_{\mathrm{config}}\) & Acceptance \\ \hline \hline [MISSING_PAGE_POST] \hline 32 & 0.1390 & 1,1,6 & 2.0 & 64 & 180 & 0.979 \\ 32 & 0.1400 & 1,1,6 & 2.0 & 64 & 376 & 0.967 \\ \hline \end{tabular} \end{table}

Table 1: Overview of the lattice ensembles generated in this study. All configurations are at a fixed gauge coupling of \(\beta=4.5\) and a fixed temporal extent of \(T/a=64\). The simulation parameters were tuned to achieve a high acceptance with a large trajectory length \(\tau_{MD}\). We refer the reader to the text for the definitions of the parameters.

Figure 1: Normalised reweighting factors on an example ensemble.

Figure 2: Typical spread of normalised reweighting factors as a function of volume and quark mass.

## III Eigenvalue analysis

The use of Wilson fermions for lattice QCD with an odd number of quark flavours or with non-mass-degenerate (light) quarks can introduce a sign problem. This occurs because the configuration space is divided into two sectors, one associated to a positive sign of the fermion determinant and one to a negative sign. These sectors are separated by a zero of the fermionic measure. Note that the latter translates into a pole of the fermionic force in the molecular dynamics algorithm. With exact integration and an exact expression for the square root function, the negative sector cannot be reached from the positive one. In practice the algorithmic choices for the rational approximation yield a finite (rather than infinite) barrier between the two sectors. In the thermodynamic and continuum limit the trajectory is expected to be constrained to the positive sector. However, at finite volume, the presence of the negative sector has to be accounted for by sign reweighting, which requires knowledge of the sign of the fermion determinant \(\det(D)\). A direct computation is numerically (prohibitively) expensive. Instead we follow a strategy in which the sign of \(\det(D)\) is inferred from computing a few of the lowest eigenvalues of the Dirac operator. This can be achieved at a cost linear in the lattice volume, using the approach we will now sketch: Due to \(\gamma_{5}\)-Hermiticity of the Wilson-Dirac operator, i.e. \[D^{\dagger}=\gamma_{5}D\gamma_{5}\,, \tag{1}\] the matrix \(Q=\gamma_{5}D\) is Hermitian and its spectrum is real. Furthermore, it holds that \(\det(D)=\det(\gamma_{5})\det(D)=\det(Q)\) and that a zero eigenvalue of \(D\) is also a zero eigenvalue of \(Q\). Recalling that the eigenvalues of \(D\) come in complex conjugate pairs, for \(\det(D)\) to be negative there must be an odd number of negative real eigenvalues of \(D\). Since the fermion determinant \(\det(D)\) is assumed to be positive for large quark masses, we can infer that the determinant at the unitary mass \(m_{0}^{*}\), used in the actual simulation, is negative if and only if there is an odd number of eigenvalues that cross zero as the mass is decreased from large quark masses to \(m_{0}^{*}\). The idea is to locate (on each gauge configuration) the largest value \(m_{t}\) of the quark mass such that \(Q(m_{t})\), and therefore \(D(m_{t})\), has a zero eigenvalue.
If \(m_{0}^{*}\) is larger than this value \(m_{t}\) then \(D(m_{0}^{*})=D(m_{t})+(m_{0}^{*}-m_{t})I\) has no negative eigenvalues. Conversely, if \(m_{0}^{*}<m_{t}\), we need to determine the number of zero crossings of the lowest eigenvalue(s) \(\lambda(m_{0})\) of \(Q(m_{0})\) by varying the bare mass \(m_{0}\) from above \(m_{t}\) down to \(m_{0}^{*}\). To that end we combine the PRIMME package with openQCD as mentioned in Ref. [19]. In practice we proceed in two steps: First we perform a _preselection_ to identify potential candidate configurations with a negative fermion determinant, and for this subset of configurations we perform a _tracking analysis_ to identify the configurations that indeed display a negative fermion determinant.

We start the _preselection_ by measuring the lowest O(10) eigenpairs \((\lambda_{i},\psi_{i})(m_{0}^{*})\) and their chiralities \(\chi_{i}(m_{0}^{*})\), defined by \[\chi_{i}(m_{0}^{*})=\langle\psi_{i}|\;\gamma_{5}\psi_{i}\rangle\,(m_{0}^{*})=\left.\frac{d\lambda_{i}(m_{0})}{dm_{0}}\right|_{m_{0}=m_{0}^{*}}\,, \tag{2}\] where the last equality follows from the Feynman-Hellmann theorem [18; 19]. The chirality hence corresponds to the slope of the eigenvalue function. This allows us to categorise the eigenvalues of \(Q\) into those which approach zero as \(m_{0}\) is increased and those which move away from it.

In Figure 3 we plot the results of the eigenvalue-chirality analysis for the four lowest lying eigenvalues of the two \(L/a=16\) ensembles with \(\kappa=0.1405\) (left) and \(\kappa=0.1410\) (right). If a data-point falls into the north-east or south-west quadrant, the eigenvalue moves further away from zero when the quark mass is increased, implying that there is no zero crossing for values larger than \(m_{0}^{*}\). This is the case for all configurations with \(\kappa=0.1405\). Conversely, if a data-point falls into the north-west or south-east quadrant, this implies that the eigenvalue approaches zero as the quark mass is increased and a zero crossing is possible. Configurations with eigenvalues which display this feature can potentially have a negative determinant and therefore require further monitoring. As can be seen in Fig. 3, on the \(\kappa=0.1410\) ensembles we find a small number of these cases for which the second step, the _tracking analysis_, is performed.

On the configurations that displayed datapoints in the north-west or south-east quadrants we now measure the lowest 20 eigenpairs for several partially quenched masses around \(m_{0}^{*}\). The eigenvalue functions \(\lambda_{i}(m_{0})\) and the eigenbasis \(\{\psi_{i}\}\) are assumed to vary slowly and continuously with \(m_{0}\). Assuming that the different partially quenched masses are sufficiently close to each other, it is possible to track how a particular eigenvalue behaves as a function of the quark mass as follows. For each set of neighbouring masses \(m_{0}\) and \(m_{0}+\Delta m_{0}\) we construct the matrix \(M_{ij}=\langle\psi_{i}(m_{0})|\;\psi_{j}(m_{0}+\Delta m_{0})\rangle\) of scalar products between the \(i\)th eigenvector \(\psi_{i}(m_{0})\) at \(m_{0}\) and the \(j\)th eigenvector \(\psi_{j}(m_{0}+\Delta m_{0})\) at \(m_{0}+\Delta m_{0}\). We determine the largest entry \(M_{ij}\) and interpret this to mean that the eigenvalue \(i\) at \(m_{0}\) evolves to be the eigenvalue \(j\) at \(m_{0}+\Delta m_{0}\).
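A minimal numpy sketch of this matching step is given below; it already includes the row-and-column elimination that completes the assignment, as described in the next paragraph. The function names and the array-based setup are our own illustration; in the actual computation the eigenpairs come from the PRIMME eigensolver combined with openQCD:

```python
import numpy as np

def match_eigenpairs(psi_a, psi_b):
    """Greedily assign eigenvectors at m0 (columns of psi_a) to eigenvectors
    at m0 + dm0 (columns of psi_b) using |M_ij| = |<psi_i(m0)|psi_j(m0+dm0)>|:
    take the largest remaining entry, record the match, then remove its row
    and column, and iterate until every eigenpair is assigned."""
    M = np.abs(psi_a.conj().T @ psi_b)   # overlap magnitudes, shape (n, n)
    n = M.shape[0]
    assignment = np.empty(n, dtype=int)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(M), M.shape)
        assignment[i] = j
        M[i, :] = -1.0                   # remove row i ...
        M[:, j] = -1.0                   # ... and column j
    return assignment                    # state i at m0 -> state assignment[i]

def track_eigenvalues(eigvals, eigvecs):
    """Chain the assignment across a list of partially quenched masses and
    return trajectories lambda_i(m0) that can be scanned for zero crossings."""
    perm = np.arange(eigvecs[0].shape[1])
    paths = [np.asarray(eigvals[0])]
    for k in range(len(eigvecs) - 1):
        perm = match_eigenpairs(eigvecs[k][:, perm], eigvecs[k + 1])
        paths.append(np.asarray(eigvals[k + 1])[perm])
    return np.stack(paths)               # paths[:, i] follows one eigenvalue
```

Taking the absolute value of the overlaps is our choice to make the matching insensitive to the arbitrary phase (or sign) of each eigenvector.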
We then remove row \(i\) and column \(j\) from the matrix and iterate the procedure until each eigenpair at \(m_{0}\) has been assigned a corresponding eigenpair at \(m_{0}+\Delta m_{0}\).

Figure 4 displays a configuration of the \(L/a=16\) and \(\kappa=0.1410\) ensemble where a negative determinant was detected. We observe that the line connecting the red downward facing triangles does cross zero as the mass \(m_{0}\) is increased from \(m_{0}^{*}\) (highlighted as the vertical dashed line). Since there is only a single eigenvalue crossing zero in the region \(m_{0}>m_{0}^{*}\), we conclude that the fermion determinant is negative on this particular configuration.

Figure 3: Scatter plot of the lowest four eigenvalues and chiralities for \(L/a=16\) and \(\kappa=0.1405\) (left) and \(\kappa=0.1410\) (right).

Figure 4: Tracking analysis of the lowest 20 eigenvalues on a \(L/a=16\), \(\kappa=0.1410\) configuration with a negative fermion determinant.

We performed the above analysis for the two smallest values of the quark mass corresponding to \(\kappa=0.1405\) and \(0.1410\), for which we each have a \(L/a=16\) and a \(L/a=24\) ensemble. As discussed above (cf. left panel in Fig. 3), we did not observe any cases of a negative determinant for \(\kappa=0.1405\) on either of the two available volumes. Since negative eigenvalues are expected to have a higher likelihood to occur at small quark masses, we did not perform this analysis for any of the remaining larger masses. At \(\kappa=0.1410\) we found 6 configurations with a negative determinant for each of the two volumes. Furthermore, we observed that the negative sector is visited at most for the Monte Carlo time corresponding to two consecutive measurements. This might be related to our choice of parameters for the rational approximation of \(\sqrt{D^{\dagger}D}\) yielding a relatively low barrier between the two sectors. We conclude that in our computational setup the sign problem for \(N_{f}=1\) QCD is mild and the relative frequency of a negative determinant of the Dirac matrix is at the sub-percent level.

## IV Correlator analysis

In order to obtain the spectrum of one-flavour QCD, we create mesonic correlation functions for states with a variety of quantum numbers. We are particularly interested in states with scalar (S), pseudoscalar (P) and vector (I) quantum numbers. We employ the Laplacian Heaviside (LapH) method [31, 32], which allows us to efficiently compute the quark-line disconnected contributions that appear in the computation of mesonic quantities with a single flavour.

### Construction of correlation functions

Following Ref. [31] and, where possible, using the same notation, we compute the \(N_{v}\) lowest eigenpairs \((\lambda_{i},v_{i})\) of the three-dimensional gauge-covariant Laplacian using a stout smeared gauge field. On each time slice \(t\) we arrange these eigenvectors into a matrix \(V_{s}\) as \[V_{s}(t)=(v_{1},v_{2},\cdots,v_{N_{v}})\,, \tag{1}\] from which we then define the Hermitian smearing matrix as a function of the number of eigenpairs that were computed as \[\mathcal{S}(N_{v},t)=V_{s}(t)V_{s}^{\dagger}(t). \tag{10}\] Using a low number of eigenpairs corresponds to a broad smearing profile, whereas using a large number of eigenpairs corresponds to "less" smearing, and taking the limit of all eigenpairs recovers the identity.
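Schematically, the smearing operator is just a rank-\(N_{v}\) projector onto the low-mode subspace of the covariant Laplacian. A numpy sketch, with colour and spatial indices flattened into a single dimension (our simplification), could read:

```python
import numpy as np

def smearing_matrix(eigvecs, n_v):
    """S(N_v, t) = V_s(t) V_s(t)^dagger from the n_v lowest Laplacian
    eigenvectors on one time slice. eigvecs holds the eigenvectors as
    columns, ordered by eigenvalue, so truncating the columns reuses the
    same inversions to realise different smearing profiles."""
    V = eigvecs[:, :n_v]
    return V @ V.conj().T   # Hermitian projector; all modes -> identity
```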
Quark lines \(\mathcal{Q}\) are computed as \[\begin{split}\mathcal{Q}(t_{0},t)&=\mathcal{S}(t)( \gamma_{4}D)^{-1}\mathcal{S}(t_{0})\\ &=V_{s}(t)\left[V_{s}^{\dagger}(t)(\gamma_{4}D)^{-1}V_{s}(t_{0}) \right]V_{s}^{\dagger}(t_{0}).\end{split} \tag{11}\] The inversion \((\gamma_{4}D)^{-1}V_{s}(t_{0})\) is done by solving the equation \[(\gamma_{4}D)_{\alpha\beta}(t_{0},t)y_{\beta}^{i}(t)=v_{i}(t_{0}) \tag{12}\] for \(y_{\beta}^{i}(t)\). This is done for each eigenvector \(v_{i}\) (\(i=1,\ldots,N_{v}\)), each spin component (\(\alpha=1,\ldots,4\)) and each time slice (\(t_{0}=0,\ldots,T-1\)), amounting to \(N_{t}\times N_{v}\times 4\) inversions per configuration. In our simulation, we keep the number of eigenvalues \(N_{v}=20\) fixed for all ensembles. However, from these inversions we can construct operators which use fewer than 20 eigenvalues by truncating the elements of the square matrix \(V_{s}^{\dagger}(t)(\gamma_{4}D)^{-1}V_{s}(t_{0})\). Using this we compute meson correlation functions for \(N_{v}\in\{1,2,3,4,5,6,7,8,9,10,12,15,17,20\}\), which describe the same spectrum but have different smearing functions. In all three channels (P, S, I), we use the appropriate interpolation operator (\(\mathcal{P}\), \(\mathcal{S}\), \(\mathcal{I}\)) in the finite volume irreducible representation. For the S-channel we additionally construct a purely gluonic operator \(\mathcal{G}\)[33] which induces the same quantum numbers as the \(\mathcal{S}\) operator1. We consider all mutual combinations of \(\mathcal{G}\) and \(\mathcal{S}\) in the'scalar-glue' system. Footnote 1: To avoid confusion we use the calligraphic notation for specific operators and Roman letters to indicate the induced quantum numbers. ### Reweighting and vacuum expectation value subtraction The vacuum subtracted correlation function \(C_{\mathcal{X}\mathcal{Y}}\) can be derived from the un-subtracted correlation function \(C_{\mathcal{X}\mathcal{Y}}^{\text{raw}}(t)\) and the vacuum expectation values (vevs) \(v_{\mathcal{X}}\) and \(v_{\mathcal{Y}}\) as \[C_{\mathcal{X}\mathcal{Y}}(t)=\left\langle C_{\mathcal{X}\mathcal{Y}}^{\text{ raw}}(t)\right\rangle-\left\langle v_{\mathcal{X}}\right\rangle\left\langle v_{ \mathcal{Y}}\right\rangle\,, \tag{13}\] where \(\left\langle\cdot\right\rangle\) denotes the gauge average. Whilst the vev is exactly zero for the \(\mathcal{P}\) operator and numerically zero for the \(\mathcal{I}\) operator, it is sizable for the \(\mathcal{S}\) and \(\mathcal{G}\) operators. We find that the statistical signal for correlation functions including \(\mathcal{G}\) or \(\mathcal{S}\) deteriorates when reweighting (cf. Sec. II) is combined with the naive vacuum expectation value subtraction defined in Eq. (13). This is due to delicate cancellations between the correlation function and the vevs which are reduced by the reweighting. Since the vacuum expectation value is time-independent, an alternative way to perform the vev subtraction is to take the temporal derivative of the un-subtracted correlation function. We find that this results in a significantly better signal when combined with reweighting and are therefore utilising this. Figure 5 displays the effect of reweighting for the example of the \(N_{v}=20\) correlation functions on the \(L/a=20\), \(\kappa=0.1390\) ensemble. The figure shows the relative uncertainties of the correlation function for the \(\mathcal{P}\mathcal{P}\) (red), \(\mathcal{II}\) (blue) and the time derivative of the \(\mathcal{SS}\) (cyan) operators. 
The dotted lines connect the un-reweighted data points, whilst the solid lines connect the reweighted ones. We observe that only for the earliest time slices is the uncertainty of the reweighted data limited by the accuracy of the reweighting factors.

Figure 5: Impact of the reweighting on the relative uncertainties of the correlation functions.

### Correlation function fits

For a given channel (P, I or S), the correlation function \(C\) of operators \(O_{\mathcal{X}}^{n}\) with \(\mathcal{X}\in\{\mathcal{S},\mathcal{P},\mathcal{I},\mathcal{G}\}\) using \(n\) eigenvalues can be approximated by the first \(N\) states \(X_{i}\) as \[C_{\mathcal{X}\mathcal{Y}}^{n}(t)=\sum_{i=0}^{N}\left|(Z_{\mathcal{X}}^{n})_{i}^{*}(Z_{\mathcal{Y}}^{n})_{i}\right|\frac{e^{-m_{i}^{X}t}+e^{-m_{i}^{X}(T-t)}}{2m_{i}^{X}}\,, \tag{14}\] where \((Z_{\mathcal{X}}^{n})_{i}=\left\langle X_{i}\right|(O_{\mathcal{X}}^{n})^{\dagger}\left|0\right\rangle\). We emphasise that the induced masses \(m_{i}^{X}\) depend on the channel \(X\), rather than the specific operator \(\mathcal{X}\); in particular, all combinations of \(\mathcal{S},\mathcal{G}\) induce the same spectrum \(m_{i}^{S}\).

We extract the three lowest-lying states of the spectrum by performing simultaneous correlated fits to the symmetrised correlation functions \(C^{n}_{\mathcal{X}\mathcal{Y}}(t)\) for several choices of \(n\) (between 2 and 4). We illustrate two such fits for the example of the vector channel in Fig. 6. We defer the discussion of the slow approach to the ground state in the bottom panel to Sec. V.2. In order to assess systematic uncertainties associated with the choice of smearing radii, we vary which \(n\) enter into a particular fit. In particular, for the vector and pseudoscalar channels we perform three different fits, simultaneously fitting \(N_{v}=(20,12,6)\), \((17,10,3)\) or \((20,15,10,5)\), labelled 'fit1', 'fit2' and 'fit3', respectively.2 For the scalar-glue basis we simultaneously fit \(N_{v}=(20,3)\) or \(N_{v}=(17,5)\) ('fit1' and 'fit2'), but jointly fitting \(C_{\mathcal{S}\mathcal{S}}\), \(C_{\mathcal{S}\mathcal{G}}\) and \(C_{\mathcal{G}\mathcal{G}}\). In all cases, we fit three states (\(N=2\) in Eq. (14)), but only the lowest two potentially enter any subsequent analysis. We list the numerical results for the lowest two states ('gr' and 'ex', respectively) in Table 2 in Appendix B. In all further steps of the analysis we consider all choices of 'fit1', 'fit2' and 'fit3' to propagate any systematic uncertainties.

Footnote 2: One of the fit choices of the pseudoscalar meson on the \(L/a=24\), \(\kappa=0.1410\) ensemble did not yield an invertible covariance matrix and was therefore excluded. However, as will be discussed later on, this ensemble does not enter the final analysis.

Finally, we also compute the connected correlation function for the pseudoscalar meson, which corresponds to a non-existent state in a \(N_{f}=1\) theory and in the following is therefore referred to as the "fake pion". As we will discuss in the following section, \(m_{\pi}^{\text{fake}}\to 0\) can be used as a proxy for the massless limit (see also Ref. [13]). These correlation functions are generated from standard point sources and follow the same functional form as Eq. (14) with the replacement \(Z_{\mathcal{X}}^{n}\to\left\langle\pi\right|\left(\bar{q}\gamma_{5}q\right)^{\dagger}\left|0\right\rangle\). For these states we perform fits with \(N=0\) and \(N=1\).
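As an illustration of the fit form in Eq. (14), a minimal scipy sketch of a multi-state, symmetrised-correlator fit could look as follows. The helper names and starting values are ours, and a production analysis would of course also handle the covariance estimation and resampling described in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

T = 64  # temporal extent T/a used for all ensembles in this study

def corr_model(t, *params):
    """Eq. (14): sum over states of
    A_i * (exp(-m_i t) + exp(-m_i (T - t))) / (2 m_i),
    with params = (A_0, m_0, A_1, m_1, ...)."""
    c = np.zeros_like(t, dtype=float)
    for A, m in zip(params[0::2], params[1::2]):
        c += A * (np.exp(-m * t) + np.exp(-m * (T - t))) / (2.0 * m)
    return c

def fit_three_states(t, corr, cov, p0):
    # passing the full covariance matrix as `sigma` makes this a correlated fit
    popt, pcov = curve_fit(corr_model, t, corr, p0=p0,
                           sigma=cov, absolute_sigma=True)
    return popt, pcov

# e.g. p0 = [1.0, 0.4, 0.5, 0.8, 0.2, 1.2] for three states (N = 2)
```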
We note that for both \(\kappa=0.1410\) ensembles we expect large finite size effects as \(m_{\pi}^{\text{fake}}L<3\) and therefore discard them from the subsequent analysis.

## V Analysis of the spectrum

The goal of this section is to extrapolate the results for the meson spectrum (\(m_{P}\), \(m_{I}\) and \(m_{S}\)) to the chiral and infinite volume limit and to provide results for ratios of these masses.

### Defining the chiral limit

We start by determining the best proxy for the quark mass. Figure 7 shows the lowest lying state for the pseudoscalar channel. The left panel displays this as a function of the bare quark mass, the right panel as a function of the fake pion mass. By comparing the two panels, it is evident that the fake pion mass is the more suitable choice to define the massless limit, as the bare quark mass suffers from large finite volume effects. In the following we therefore choose the fake pion mass to define the massless limit.

### Assignment of states

To understand the behaviour of the spectrum induced by our chosen interpolating fields, we investigate how the hadron masses vary as a function of quark mass and volume. We are predominantly interested in mesonic states dominated by \(q\bar{q}\) contributions3. These are expected to display a strong quark mass dependence but at most a mild dependence on the volume, whereas any glueball state should only depend weakly on quark mass and volume. Contrary to these, states that depend mildly on the quark mass but strongly on the volume do not correspond to physical states and might be interpreted to be torelon states [34; 35].

Figure 6: Example fit for the vector two point function for the \(L/a=32\), \(\kappa=0.1400\) (top) and the \(L/a=16\), \(\kappa=0.1390\) (bottom) ensembles. The datapoints show the effective masses of the underlying correlation functions, whilst the correspondingly coloured bands show the effective mass obtained from the results of the correlation function fits. Finally, the magenta horizontal band (dashed line) shows the results for the extracted ground (excited) state energies.

In Section V.1 we noted that the pseudoscalar mass is largely volume independent, but depends smoothly and strongly on the quark mass set by \(m_{\pi}^{\rm fake}\). We therefore identify this with the desired \(q\bar{q}\)-state. In the case of the scalar and vector channels, the situation is more complicated. When comparing results of simulations at the same \(\kappa\) but on different volumes, there are cases that display significant volume dependence on smaller volumes. For example, the top panel of Figure 8 shows the spectrum as a function of the inverse spatial volume but at fixed \(\kappa=0.1390\). We observe that the three largest volumes yield very consistent ground state masses. In contrast, for the two smallest volumes, we see that a lighter state is present in the spectrum, which displays a strong volume dependence. We note that the first excited state on these two volumes is numerically close to the ground state mass extracted on the larger volumes. This picture is further substantiated by investigating the behaviour of the amplitude for the matrix element, as we will illustrate with the example of \((Z_{2}^{20})_{i}\): In the bottom panel of Figure 8 we show these values for the three states we are fitting.
For the three largest volumes, which display a consistent ground state mass, we find that the ground state matrix element (left three magenta circles) is of similar size or larger than the other matrix elements. In contrast to this, for the smallest two volumes the situation is reversed and we find the matrix element of the lowest lying state (right two magenta circles) to be significantly smaller than that of the first and second excited states. We further note that for these two smallest volumes, the matrix element corresponding to the first excited state (rightmost two red diamonds) shows a qualitatively similar behaviour to that of the ground state for the larger volumes. In other words, for the smallest two volumes, the correlation function couples more strongly to the first excited state than the ground state. This is also the reason for the slow approach to the plateau, for example in the case of the \(L/a=16\) and \(\kappa=0.1390\) ensemble (cf. bottom panel of Fig. 6). The strong volume dependence and qualitatively different behaviour with respect to the matrix element indicate that the lowest lying state for the small volumes is not the \(q\bar{q}\)-state we are interested in. Instead, as indicated by the values of the mass and the amplitudes, we identify the first excited state with the \(q\bar{q}\) state. In summary, for the vector channel at fixed \(\kappa=0.1390\), the \(q\bar{q}\) state corresponds to the lowest lying state for \(L/a=32,24,20\) and to the first excited state for \(L/a=16,12\). Corresponding analyses for the other quark masses yield a similar picture.

Figure 9 addresses the scalar channel. The top panel shows the mass dependence at fixed volume \(L/a=16\). The lowest lying state is mass independent in the range of masses we simulate, but the first excited state displays a strong mass dependence. The bottom panel shows the volume dependence at fixed \(\kappa=0.1390\). Again, for small volumes, we find a state whose energy increases as the volume increases (lowest state at \(L/a=12,16\)), as well as a volume insensitive state (lowest state at \(L/a=32,24,20\) and first excited state at \(L/a=16,12\)). Furthermore, the latter coincides with the state that displayed the strong mass dependence in the top panel. In analogy with the discussion of the vector meson, we conclude that those correspond to a (mass dependent, volume independent) scalar meson state and a (mass independent, volume dependent) torelon state.

By means of similar investigations of the volume and quark mass dependence, we categorise the two lowest lying states on each ensemble and in each channel into the lowest quark mass dependent state (\(q\bar{q}\)) and the remaining state, which in principle can be a torelon, an excited \(q\bar{q}\) or a glueball state.

Figure 7: The spectrum of the pseudoscalar meson as a function of the bare quark mass (left) and as a function of the fake pion mass (right). Here and in the following, shown triplets (or pairs) of points correspond to the fit results of ‘fit1’, ‘fit2’, ‘fit3’, respectively.

Figure 10 shows the state that has been identified as the relevant \(q\bar{q}\) state for the vector (top) and scalar (bottom) channels. For the large volumes, good agreement is found for all quark masses, whereas for light quark masses and small volumes finite size effects are sizable. We therefore exclude the \(L/a=12\) and \(L/a=16\) ensembles from our subsequent analysis.
Summarising the discussion in this Section, the \(q\bar{q}\) states we are interested in are easily identified at large volumes and small quark masses as the lowest lying states in the respective channels. Such determinations have the largest impact in the chiral and infinite volume extrapolations we discuss next. However, especially for small volumes, the identification required a more detailed study of the volume and mass dependence of both the energy levels and the overlap factors describing the correlation functions. Those are important lessons we will take into account for future studies at large values of \(N_{C}\). ### Extrapolation to zero quark mass We are interested in the spectrum at vanishing quark mass. Since we have not performed a scale setting analysis we focus on ratios of masses in the chiral limit. As discussed above, we will use the fake pion mass to define the zero quark mass limit. Figure 8: Volume dependence of the vector meson at fixed \(\kappa=0.1390\). The top panel shows the dependence of the spectrum, the bottom panel the dependence of the corresponding matrix elements for the \(N=20\) correlation function. Figure 9: The spectrum of the scalar meson as a function of the quark mass at fixed volume \(L/a=16\) (top) and as a function of the volume at fixed \(\kappa=0.1390\) (bottom). The fit functions we explore for this extrapolation are \[M(m_{\pi}^{\rm fake},L)=\left[\sum_{i=0}^{n_{\rm pow}}c_{i}\left(m_{\pi}^{\rm fake }\right)^{i}\right]\left(1+f_{0}e^{-m_{\pi}^{\rm fake}L}\right)\,, \tag{15}\] where \(M\) is either a mass (\(m_{P}\), \(m_{S}\), \(m_{I}\)) or ratios thereof. We consider the choices \(n_{\rm pow}\in\{1,2\}\) and either leaving \(f_{0}\) as a free parameter or setting it to zero. In addition to varying the fit function, we consider cuts to the data, in particular removing the smallest volumes and/or the lightest and/or heaviest masses. An example fit for the case of the pseudoscalar mass (top) and the scalar mass (bottom) is shown in Figure 11. In both of these cases we take the results obtained by 'fit1', keep \(f_{0}\) as a free parameter and choose \(n_{\rm pow}=2\). Due to concerns about the finite volume effects, we exclude the smallest volumes (\(L/a=12,16\)) and the lightest quark mass (\(\kappa=0.1410\)). We repeat all extrapolations for the various choices of the correlation function fits, whether or not \(f_{0}\) is kept as a free parameter and for different choices of \(n_{\rm pow}\). For the lowest order polynomial we restrict the mass range that enters the fit. The datapoints in Fig. 12 show the results for these variations for the pseudoscalar (top) and the scalar (bottom). Only fits with an acceptable \(p\)-value of \(p>0.05\) are shown. The green band in these plots is derived by taking the 68th percentile of the distribution of the underlying bootstrap samples of all the fits which produced an acceptable \(p\)-value. We interpret this number to be a good approximation of systematic effects due to correlator fit choices, variations of the chiral fit ansatz and the data included in such a fit. Ultimately we are interested in the ratio of masses in the chiral limit. 
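Before turning to the ratios, the fit ansatz of Eq. (15) can be made concrete with a minimal scipy sketch for the \(n_{\rm pow}=2\) case with \(f_{0}\) kept free; the data arrays below are purely illustrative placeholders, not values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(x, c0, c1, c2, f0):
    """Eq. (15) with n_pow = 2:
    M(m, L) = (c0 + c1*m + c2*m**2) * (1 + f0*exp(-m*L))."""
    m, L = x
    return (c0 + c1 * m + c2 * m ** 2) * (1.0 + f0 * np.exp(-m * L))

# illustrative stand-ins for (a*m_pi_fake, L/a, a*M) from several ensembles
m_fake = np.array([0.18, 0.22, 0.26, 0.18, 0.22])
L_over_a = np.array([32.0, 32.0, 32.0, 24.0, 24.0])
mass = np.array([0.34, 0.40, 0.47, 0.35, 0.41])
err = np.full_like(mass, 0.01)

popt, pcov = curve_fit(ansatz, (m_fake, L_over_a), mass, sigma=err,
                       p0=[0.3, 0.6, 0.2, 0.0], absolute_sigma=True)
chiral_limit = popt[0]  # c0: L -> infinity removes the f0 term, then m -> 0
# setting f0 = 0 or dropping c2 (n_pow = 1) gives the fit variations in the text
```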
We can obtain this in two ways, as we will now illustrate on the example of the ratio of the pseudoscalar to the scalar mass: We can either build the ratio \(m_{P}/m_{S}\) at finite \(m_{\pi}^{\rm fake}\) and then extrapolate this to the massless limit (method 1), or we can separately extrapolate the pseudoscalar and the scalar masses and then build their ratio (method 2). One example fit of the former is shown in Fig. 13. We observe that part of the mass dependence cancels in the ratios, resulting in a less steep curve than that observed in the individual fits (cf. Fig. 11). The coloured stars in Fig. 14 show different variations of the fit ansatz, analogous to Fig. 12. In addition to the extrapolation of the ratio of masses (method 1), we also show ratios of the chirally extrapolated values (orange circles; method 2). Here we computed all mutual combinations of acceptable fits displayed in Fig. 12. The green (orange) band is the result of taking the 68th percentile of all the bootstrap samples for the fits of method 1 (method 2) that produced an acceptable \(p\)-value.

In general, we notice that the ratio of separate chiral extrapolations leads to larger variations than the extrapolation of the ratio of masses. This is unsurprising as, ensemble by ensemble, the underlying datapoints are statistically correlated, and therefore statistical fluctuations are reduced for the individual ratios of datapoints. Furthermore, the extrapolation of the individual datapoints is more difficult to control since the slope with the fake pion mass is steeper. Our preferred number is therefore the direct extrapolation (green band in Fig. 14), whilst the orange band provides a sanity check.

Figure 10: Mass dependence of the states identified as \(q\bar{q}\) states for the vector (top) and the scalar (bottom).

Figure 11: Extrapolation to the chiral limit for a given fit ansatz for the pseudoscalar mass (top) and the scalar mass (bottom).

Figure 12: Comparison of the fit results when varying the correlator fit choice and the fit ansatz for the pseudoscalar (top) and the scalar mass (bottom).

Figure 13: Example extrapolation to the chiral limit of the ratio of pseudoscalar to scalar mass via method 1.

Figure 14: Comparison of fit results for different choices of the extrapolation of the ratio of \(m_{P}/m_{S}\).

In addition to \(m_{P}\) and \(m_{S}\) we have data for the vector mass \(m_{I}\). An example fit for the extrapolation of the vector mass is shown in Fig. 15 (cf. Fig. 11), whilst different fit variations are shown in Fig. 16 (cf. Fig. 12). Finally, we can also construct the ratios \(m_{P}/m_{I}\) and \(m_{I}/m_{S}\) in the chiral limit via the two methods described above. The results of both methods are shown in Fig. 17.

## VI Discussion and Outlook

We have presented a detailed study of the spectrum of one-flavour QCD using Wilson fermions with tree-level O(\(a\)) improvement. Results are obtained at a single lattice spacing (approximately 0.06 fm) for different volumes (up to \(32^{3}\times 64\)) and several quark masses. After extrapolating to the massless limit we obtain \[\frac{m_{P}}{m_{S}}=0.357(54)\, \tag{10}\] for the pseudoscalar to scalar meson mass ratio and \[\frac{m_{P}}{m_{I}}=0.486(50)\, \tag{11}\] for the pseudoscalar to vector ratio. In reference [6] a prediction using an effective field theory approach and a \(1/N_{C}\) expansion was derived.
In the massless limit this reads \[\frac{m_{P}}{m_{S}}=1-\frac{22}{9N_{C}}-\frac{4}{9}\beta+O\left(\frac{1}{N_{C}^{2}}\right)\,, \tag{12}\] where \(\beta\) is a positive constant of order \(1/N_{C}\). The equation above therefore provides an estimate for an upper bound, which for \(N_{C}=3\) reads \[\frac{m_{P}}{m_{S}}\lesssim 0.185\,, \tag{13}\] up to higher order effects starting at \(1/N_{C}^{2}\). Our results are somewhat larger than this bound, but considering their uncertainty and terms of size \(O(1/N_{C}^{2})\), they are reasonably close. This might indicate that the \(1/N_{C}^{2}\) corrections and the parameter \(\beta\) are small. Obviously this finding needs to be corroborated by extending our studies to larger values of \(N_{C}\). We have provided an improved estimate compared to previous results that appeared as Proceedings in Ref. [36] (based on Ref. [12]), where a value of \(0.410(41)\) was found for the pseudoscalar to scalar mass ratio. Besides having tested the predictions made in Ref. [6] for the spin-zero one-flavour QCD mesonic states, we have further provided information on the vector spectrum that can be interpreted as the leading order prediction for the \(\mathcal{N}=1\) super Yang-Mills vector states. In order to assess the size of higher order effects we are extending the computation to \(N_{C}=4,5\) and \(6\). A preliminary account appeared in Ref. [21].

## VII Acknowledgements

We thank John Bulava for discussions and his early work. We thank the members of the SDU lattice group for useful discussions. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 894103 and by the Independent Research Fund Denmark, Research Project 1, grant number 8021-00122B. F.P.G.Z. acknowledges support from UKRI Future Leader Fellowship MR/T019956/1. This work was partially supported by DeiC National HPC (g.a. DeiCSDU-N5-202200006). Part of the computation done for this project was performed on the UCloud interactive HPC system and the ABACUS2.0 supercomputer, which is managed by the eScience Center at the University of Southern Denmark.

Figure 16: Variations of the extrapolation of the vector mass to the chiral limit, analogous to Fig. 12.

Figure 17: Variations of the extrapolation of the ratio of the pseudoscalar and the vector mass (top) and of the vector and the scalar mass (bottom).

## Appendix A Distribution of the topological charge

In Figure 18 we show the normalised distributions of the topological charge on all ensembles. The number \(N\) corresponds to the number of distinct configurations on which all measurements have been performed and which are spaced by a minimum of 32 trajectories (cf. Sec. II). We clearly observe that the topological charge becomes more peaked as the volume is decreased and as the quark mass is lowered (larger values of \(\kappa\)) [9].

## Appendix B Results of the correlation function fits

Table 2 shows the relevant results obtained by fitting the reweighted and vacuum-subtracted correlation functions as described in Sec. IV.
2310.10705
Observational and Experimental Insights into Machine Learning-Based Defect Classification in Wafers
This survey paper offers a comprehensive review of methodologies utilizing machine learning (ML) classification techniques for identifying wafer defects in semiconductor manufacturing. Despite the growing body of research demonstrating the effectiveness of ML in wafer defect identification, there is a noticeable absence of comprehensive reviews on this subject. This survey attempts to fill this void by amalgamating available literature and providing an in-depth analysis of the advantages, limitations, and potential applications of various ML classification algorithms in the realm of wafer defect detection. An innovative taxonomy of methodologies that we present provides a detailed classification of algorithms into more refined categories and techniques. This taxonomy follows a three-tier structure, starting from broad methodology categories and ending with specific techniques. It aids researchers in comprehending the complex relationships between different algorithms and their techniques. We employ a rigorous observational and experimental evaluation to rank these varying techniques. For the observational evaluation, we assess techniques based on a set of four criteria. The experimental evaluation ranks the algorithms employing the same techniques, sub-categories, and categories. The paper also illuminates the future prospects of ML classification techniques for wafer defect identification, underscoring potential advancements and opportunities for further research in this field.
Kamal Taha
2023-10-16T14:46:45Z
http://arxiv.org/abs/2310.10705v4
Machine Learning Classification Techniques for Identifying the Defective Patterns in Semiconductor Wafer Maps: A Survey, Empirical, and Experimental Evaluations ###### Abstract This survey paper offers a comprehensive review of methodologies utilizing machine learning (ML) classification techniques for identifying wafer defects in semiconductor manufacturing. Despite the growing body of research demonstrating the effectiveness of ML in wafer defect identification, there is a noticeable absence of comprehensive reviews on this subject. This survey attempts to fill this void by amalgamating available literature and providing an in-depth analysis of the advantages, limitations, and potential applications of various ML classification algorithms in the realm of wafer defect detection. An innovative taxonomy of methodologies that we present provides a detailed classification of algorithms into more refined categories and techniques. This taxonomy follows a four-tier structure, starting from broad methodology categories and ending with specific sub-techniques. It aids researchers in comprehending the complex relationships between different algorithms and their techniques. We employ a rigorous empirical and experimental evaluation to rank these varying techniques. For the empirical evaluation, we assess techniques based on a set of four criteria. The experimental evaluation ranks the algorithms employing the same sub-techniques, techniques, sub-categories, and categories. This integration of a multi-layered taxonomy, empirical evaluations, and comparative experiments provides a detailed and holistic understanding of ML techniques and algorithms for identifying wafer defects. Additionally, the paper illuminates the future prospects of ML classification techniques for wafer defect identification, underscoring potential advancements and opportunities for further research in this field. _Note to Practitioners_--ML methodologies are being continuously enhanced and tailored to identify wafer defects, and research on the subject is proliferating across numerous conferences and journals. However, the scattered nature of this information makes it difficult for researchers and practitioners to gain a comprehensive understanding of the best techniques and how they perform under various conditions. This survey paper attempts to rectify this issue by providing an in-depth review of ML approaches used for identifying and classifying defects on wafers. Our objective is to amalgamate available literature to underscore the advantages, drawbacks, and potential uses of various ML algorithms such as deep learning, convolutional neural networks, support vector machines, random forests. Machine Learning, Semiconductor Wafer Maps, Defective Patterns Identification, Survey, Pattern Recognition. ## I Introduction Integrated circuits (ICs), essential for technologies like AI [1], IoT [2], the automotive industry, and 5G networks [3], are densely packed electronic circuits on silicon chips, which are produced from semiconductor wafers. To satisfy the growing demand for semiconductors, fabrication companies need to implement efficient manufacturing automation, focusing on reducing defects during the wafer fabrication process as they can lead to chip failure [4]. Wafer Bin Maps (WBM) are generated to visually represent and categorize defective dies on a wafer, using different colors to indicate their status [5]. 
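As a concrete illustration of the classification-based deep learning approaches surveyed here, a minimal convolutional wafer-map classifier might look as follows. This sketch is our own illustration rather than any specific method from the surveyed literature; the input encoding and the nine-class output (eight defect patterns plus a "none" class, following the conventions of common wafer map benchmarks) are assumptions:

```python
import torch
import torch.nn as nn

class WaferMapCNN(nn.Module):
    """Minimal CNN for wafer bin map (WBM) defect-pattern classification.
    The input is a one-channel map whose pixels encode die status
    (e.g. 0 = off-wafer, 1 = good die, 2 = defective die)."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),     # makes the head independent of map size
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# usage: class logits for a batch of eight 64x64 wafer maps
logits = WaferMapCNN()(torch.randn(8, 1, 64, 64))
```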
Wafer maps are vital for diagnosing defects, spotting patterns, uncovering causes, and tracking semiconductor production. Defective chips tend to cluster and show spatial correlations that can indicate the causes of flaws [6]. These clusters, with defects sharing similar characteristics, allow for classification of patterns [7, 8]. By studying these patterns, improvements can be made in process engineering to enhance product quality and increase the yield of defect-free chips [9]. Effective defect monitoring is key to production yield in chip fabrication, with traditional manual inspections proving costly and less accurate [10]. Image processing and machine learning techniques offer more cost-effective and accurate solutions [11, 12]. However, they face challenges with data preprocessing, which can be resource-intensive and can potentially distort information. Deep learning methods are being adopted to address these limitations, marking a significant shift in research and application for defect monitoring in semiconductor manufacturing [13]. The adoption of deep learning algorithms is widespread due to technological advancements, significantly benefiting the semiconductor industry by improving flaw detection and analysis [14, 15, 16, 17]. Deep learning excels in automatic feature learning, with its deep architectures and nonlinear processing units adept at extracting features from large, noisy, or incomplete datasets [18]. This makes it effective for tasks such as wafer map defect detection, identification, and classification. Machine learning (ML) algorithms have been successfully applied in various fields, including wafer defect detection, by learning from large datasets and leveraging computational power to identify complex defect patterns. Despite their effectiveness, there is a gap in comprehensive reviews in this area. Our work aims to bridge this gap by providing a thorough survey of ML classification algorithms, detailing their sub-techniques, techniques, sub-categories, and categories. This taxonomy facilitates a clearer assessment and comparison of algorithms, highlighting their pros and cons, and sets a foundation for future research to refine and evaluate new ML approaches. This survey not only presents a detailed framework for categorizing ML classification algorithms but also includes _empirical_ and _experimental_ evaluations to measure the effectiveness of different approaches. Our _empirical evaluation_ focuses on techniques for identifying wafer defects based on four criteria. Through _experimental evaluation_, we compare and rank various algorithmic categories and techniques, including those utilizing the same sub-technique, different sub-techniques within the same technique, different techniques within the same sub-category, different sub-categories within the same category, and different categories. Our methodology for paper selection in this survey involved: (1) targeting sources like IEEE, ACM, and Elsevier, (2) performing searches with keywords related to ML and wafer defect detection, (3) reviewing titles and abstracts against our criteria, and (4) in-depth evaluation of each chosen paper's content, methodology, and relevance to our research. ### Key Contributions The key contributions of this paper are as follows: \(\bullet\) _Development of a Novel Methodological Classification System:_ Grouping algorithms into broad categories that lack detail can cause misunderstandings when unrelated algorithms are classified together and evaluated using the same metrics.
In order to address this issue, we suggest a novel classification system that hierarchically organizes algorithms into more granular categories and specific techniques. This taxonomy comprises a four-tier structure, starting with the broader methodology category and culminating with the specific methodology sub-techniques. This systematic approach to categorizing algorithms assists researchers in comprehending the interconnections between different algorithms and their respective techniques. By adhering to this taxonomy, researchers can compare and evaluate algorithms more accurately, resulting in a more precise understanding of their strengths and limitations. The proposed classification system also lays down a blueprint for future research, guiding the creation and evaluation of new algorithms. \(\bullet\)_Providing Technique and Algorithm Evaluation through Empirical Measures:_ We perform empirical evaluations to gauge the efficacy of different methodologies. Our assessment scrutinizes techniques for identifying defects based on four evaluation criteria. Specifically, we compare and rank the following: (1) various algorithms using a common analysis approach, (2) different categories within the same analysis approach, (3) disparate sub-categories of analysis within a specific category, (4) multiple analysis techniques within a certain sub-category, and (5) diverse sub-techniques of analysis within the same technique. \(\bullet\)_Providing Experimental Assessment of Techniques and Algorithms:_ Through experimental evaluations, we compare and rank diverse algorithmic categories and techniques. These include those that utilize the same sub-technique, different sub-techniques within the same technique, different techniques within the same sub-category, different sub-categories within the same category, and different categories overall. This thorough evaluation methodology helps researchers discern minor differences between similar algorithms and techniques. The combination of our methodological taxonomy, empirical evaluations, and experimental comparisons provides researchers with a complete and nuanced understanding of the machine learning algorithms available for wafer defect identification, ultimately enabling them to make informed decisions when choosing techniques. ### Our Proposed Methodology-Based Taxonomy We have classified ML algorithms for wafer defect identification into two main categories based on the techniques they employ: classification-based and clustering-based. Each of the ML classification methods is divided into four tiers, with each tier becoming more specific than the previous one. Our taxonomy is structured hierarchically, starting from the methodology category, followed by the methodology sub-category, methodology techniques, and finally, methodology sub-techniques. This hierarchical structure enables us to identify specific techniques or sub-techniques at the final level, as depicted in Figure 1. Figure 1: The figure illustrates our hierarchical methodology-based taxonomy for classifying ML classification algorithms utilized in wafer defect identification. The taxonomy categorizes the algorithms into fine-grained classes, progressing from methodology category to methodology sub-category, methodology technique, and finally, methodology sub-technique.
Additionally, the figure provides the corresponding section numbers in the manuscript that discuss each category, sub-category, technique, and sub-technique, ensuring easy reference and navigation. ## II Deep Learning-Based Classification ### Neural Network-Based Classification **1. Artificial Neural Network-Based Classification** #### II-A1 Self-Organizing Maps-Based Classification Self-Organizing Maps (SOMs) are a type of unsupervised machine learning method utilized for pattern identification in data. They have proven to be effective in analyzing semiconductor wafer maps and detecting irregularities. Anomalies or patterns within the wafer can indicate defects. By leveraging similarities, SOMs aid in recognizing these faulty patterns. A SOM consists of a grid of nodes, each representing a prototype vector. Once trained, the SOM can be visually inspected to examine the acquired patterns. Typically, the SOM is displayed as a two-dimensional grid, where each node signifies a cluster or prototype vector, and color coding visualizes the patterns captured by the nodes. Analyzing the SOM helps identify clusters indicating defective patterns, which are areas with higher densities of anomalies or defects. Li and Huang [19] suggested a method where self-organizing maps were trained to identify representative defect patterns, and subsequently, these clusters were employed as rigid labels for training support vector machines (SVMs). Li et al. [20] employed various techniques, including a self-organizing map (SOM) neural network, a statistical homogeneity test, and interactive explorative data analysis, to analyze WBM data. The aim was to develop a robust and efficient in-line measurement sampling method. Yang and Sun [21] introduced a deep learning framework known as Self-Proliferating Neural Network (SPNet) in their research. This architecture, based on SOMs, offers a solution to the problem of defect map and defect pattern classification. The utilization of the Self-Proliferating Module allows for an efficient augmentation of feature maps while minimizing computational expenses. #### II-A2 Autoencoder-Based Classification Autoencoders, a type of artificial neural network, can effectively detect defective patterns in semiconductor wafer maps. These networks consist of an encoder that compresses the input data into a lower-dimensional representation known as the latent space. Subsequently, the decoder attempts to reconstruct the original input from this latent representation. After training the autoencoder, it can be employed to identify defective patterns in new and unseen semiconductor wafer maps. During the inference phase, the autoencoder encodes the input wafer map into a compressed representation and then decodes it to reconstruct the map. By comparing the input and the reconstructed output, the reconstruction error is calculated. Regions of the wafer map with high reconstruction errors indicate defects. A threshold is set to identify defective patterns, flagging regions as defective if the error surpasses it.
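To make the reconstruction-error criterion concrete, the following is a minimal PyTorch sketch, not taken from any of the surveyed papers: the architecture, the 64x64 single-channel wafer-map size, and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WaferAutoencoder(nn.Module):
    """Small convolutional autoencoder for 1x64x64 wafer maps (illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flag_defective(model, wafer_maps, threshold=0.05):
    """Flag maps whose mean per-pixel reconstruction error exceeds a threshold."""
    model.eval()
    with torch.no_grad():
        recon = model(wafer_maps)
        # Per-map mean squared reconstruction error.
        errors = ((wafer_maps - recon) ** 2).mean(dim=(1, 2, 3))
    return errors > threshold  # True marks a likely defective wafer map

model = WaferAutoencoder()
maps = torch.rand(8, 1, 64, 64)  # stand-in batch of wafer maps
print(flag_defective(model, maps))
```

In practice the threshold would be chosen from the error distribution of known-good wafers rather than fixed a priori.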
Nakazawa and Kulkarni [22] introduced a technique that utilizes deep convolutional encoder-decoder autoencoder architectures to identify and separate unusual defect patterns on wafer maps. They employed a defect pattern generation model to generate synthetic wafer maps representing eight fundamental defect patterns. Nakazawa and Kulkarni [23] employed a deep convolutional autoencoder, which follows an encoder-decoder architecture, for the purpose of segmenting anomaly defects. They utilized the Poisson point process to generate patterns on the wafer map and subsequently employed a convolutional encoder-decoder autoencoder to accurately identify and segment clusters of defects. Yu [24] devoted significant attention to the development of a deep learning model that aims to acquire valuable distinguishing characteristics from wafer maps for the enhancement of wafer map pattern recognition (WMPR). This model utilizes an advanced stacked denoising autoencoder (ESDAE) alongside manifold regularization. Alawieh et al. [25] introduced a selective deep learning technique that utilized an autoencoder. Their approach involved reporting a class label only when the classifier exhibited high confidence. As a result, only a portion of the wafers could be classified, achieving an accuracy of over 90%. To augment the training data, the researchers employed an autoencoder. Zeng et al. [26] developed a CNN-based variational autoencoder to transform images into a latent space. They trained the network on the MNIST dataset and found that images of the same digit formed separate clusters in this space. They also used an SVM classifier for classification. To classify wafer maps, they trained another encoder to map wafer images (e.g., "ring") to specific digit clusters (e.g., "1"). ### Convolutional Neural Network-Based Classification #### II-B1 CNN for Single-Label Defect Classification CNNs (Convolutional Neural Networks) have emerged as a powerful tool for the identification of defective patterns in semiconductor wafer maps. The advantages of using CNNs for defect classification in wafers include their ability to automatically learn relevant features from the data, their robustness to variations in defect appearance, and their potential for high accuracy. CNNs have shown promising results in identifying various types of defects, such as scratches, particles, and pattern irregularities on wafers. The CNN architecture consists of multiple layers, including convolutional layers, pooling layers, and fully connected layers. The convolutional layers perform feature extraction by applying a set of learnable filters to the input images, capturing important local patterns and structures. Pooling layers downsample the feature maps to reduce computational complexity and improve the network's ability to generalize. Fully connected layers are responsible for the final classification decision based on the extracted features. Training a CNN for defect classification uses backpropagation to optimize the network's weights, minimizing the difference between predicted and actual labels. Sejune et al. [27] presented an approach for automating the identification of wafer defects. They employed a CNN model specifically designed for single-label defect classification to extract relevant features and classify different types of wafer defects. To train the model, they utilized a dataset consisting of 1486 sample images. Notably, the model achieved a test accuracy of 96.2%. Kong and Ni [28] utilized semi-supervised ladder networks to perform single-label defect classification. They employed a variational autoencoder to analyze single-type defect patterns.
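As a concrete illustration of the conv/pool/fully-connected pipeline and backpropagation training described above, here is a minimal PyTorch sketch of a single-label wafer-map classifier; the layer sizes and the nine defect classes are illustrative assumptions, not the architecture of any surveyed paper.

```python
import torch
import torch.nn as nn

class WaferCNN(nn.Module):
    """Minimal CNN for single-label wafer-map classification (illustrative)."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits over defect classes

model = WaferCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()  # categorical cross-entropy on logits

# One illustrative training step on random stand-in data.
maps = torch.rand(8, 1, 64, 64)      # batch of wafer maps
labels = torch.randint(0, 9, (8,))   # one defect label per map
loss = criterion(model(maps), labels)
optimizer.zero_grad()
loss.backward()   # backpropagation of the classification error
optimizer.step()  # weight update
```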
#### II-B2 CNN for Multi-Label Defect Classification CNNs have demonstrated promise in the classification of multi-label defects in semiconductor wafer maps. Their capacity to learn hierarchical features and comprehend intricate patterns makes them highly suitable for this purpose. As deep learning techniques continue to advance and larger annotated datasets become more available, CNNs are expected to maintain their crucial role in accurately identifying defective patterns in semiconductor wafer maps. Several techniques can be employed to enhance the performance of CNNs in multi-label defect classification. One approach is the utilization of data augmentation techniques, including rotation, scaling, and flipping, to increase the diversity of the training set and enhance the model's generalization capability. Transfer learning utilizes fine-tuning of a pre-trained CNN on a related task, allowing the network to leverage knowledge from vast image datasets. Wang et al. [29] developed an improved deformable convolutional (DC) network for recognizing mixed-type defect patterns in wafer maps. The network incorporates a deformable convolution unit for selective sampling and data representation. A specialized multi-label output layer is utilized to identify the presence or absence of each defect pattern in the wafer map. Wen et al. [30] proposed a technique to classify surface defects on semiconductor wafers. Their method involved three steps: (1) They developed a novel feature pyramid network with atrous convolution (FPNAC) to extract features and generate feature maps. (2) A region proposal network (RPN) utilized these feature maps to generate proposals for various regions of interest. (3) The region proposals were aligned to the input size and fed into a deep multi-branch and multi-label neural network for classification. Lee and Kim [31] proposed a semi-supervised deep convolutional generative model for classifying mixed-type defect patterns. They treated the task as a multi-label classification problem and used multiple latent class variables, each dedicated to a distinct pattern. Kyeong and Kim [32] employed CNNs to classify WBMs with mixed-type defect patterns, eliminating the need for pre-removing random defects or clustering systematic defects. They applied multi-label classification by using separate CNN models for each label. Their study demonstrated the CNNs' robustness against global random defects and achieved good accuracy compared to other methods. Hwang and Kim [33] introduced a novel approach called Dirichlet process variational autoencoder mixture models for clustering wafer defect patterns. #### II-B3 Pre-Defined CNN and Transfer Learning Pre-defined CNNs refer to CNN architectures that are purposefully designed and trained specifically to detect faulty patterns in semiconductor wafer maps. These models incorporate convolutional layers to extract features, pooling layers to reduce dimensions, and fully connected layers for classification purposes. Transfer learning is a technique that leverages a pre-trained CNN model, typically trained on an extensive dataset such as ImageNet, as a starting point to identify defective patterns in wafer maps. Instead of training a CNN from scratch, the pre-trained model's knowledge is adapted and fine-tuned using the wafer map dataset (see Fig. 2).
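The following sketch illustrates the transfer-learning recipe just described, using a torchvision ResNet-18 pre-trained on ImageNet; freezing the backbone, the nine-class head, and the channel replication are illustrative choices, not the setup of any cited work.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained backbone and adapt it to wafer maps.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the pre-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for (say) 9 wafer defect classes.
model.fc = nn.Linear(model.fc.in_features, 9)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Wafer maps are single-channel; replicate to 3 channels for the RGB backbone.
maps = torch.rand(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, 9, (8,))
loss = criterion(model(maps), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```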
Chen et al. [34] introduced a pre-defined CNN model along with transfer learning, which leverages pre-trained parameters to assist the network in capturing the fundamental patterns found in wafer map defects. Shen and Zheng [35] introduced a deep transfer learning model called the joint feature and label adversarial network (JFLAN). JFLAN offers a unique feature learning approach using transfer learning: it directly extracts transferable features from wafer maps and employs multilayer domain adaptation through adversarial training. A.R and James [36] developed an automated system for wafer defect classification using a CNN combined with a memristor crossbar structure. Pre-trained neural network weights are implemented within the crossbar structure, and classification is based on the softmax layer's output probabilities. Figure 2: The figure demonstrates the processing of pre-defined CNN and transfer learning classification. #### II-B4 Self-Calibrated Network-Based Classification Self-calibrated networks employ deep learning architectures like CNNs or RNNs to analyze wafer maps. These networks are trained on large datasets with labeled wafer images, enabling them to recognize patterns and distinguish defective areas from non-defective ones. A key feature of these networks is their self-calibration mechanism, which allows them to adapt and improve by leveraging additional data and feedback. When encountering new wafer maps, the model compares its predictions with ground truth labels and adjusts its parameters accordingly, reducing false positives and false negatives. These networks often incorporate other techniques to enhance their performance, such as data augmentation to increase the training dataset's size and improve generalization, and transfer learning to utilize pre-trained models for faster and more accurate learning. Liu et al. [37] introduced a novel self-calibrated convolution that enables heterogeneous utilization of convolutional filters within a convolutional layer. They introduced an adaptive response calibration operation to encourage filters to exhibit diverse patterns. Chen et al. [38] proposed a CNN-based knowledge distillation technique to improve defect detection. They introduced a multi-head attention layer into their CNN model, which belongs to the category of self-calibrated networks. This layer allows the model to focus on different input sequence segments, capturing diverse dependencies and enhancing local and global feature information. ### Residual Neural Network-Based Classification Residual Neural Networks (ResNets) have shown promise in detecting defects in wafer maps. ResNets are deep neural networks (DNNs) that address the vanishing gradient issue by incorporating residual connections. These connections enable the training of deep networks by focusing on learning the difference between the input and output of a layer, rather than the desired mapping itself. This approach allows the network to concentrate on optimizing the residuals, which are typically easier to handle. During training, the loss function is minimized using techniques like stochastic gradient descent. At inference time, the ResNet analyzes a wafer map and generates a per-pixel defect probability map, to which a threshold is applied to detect defects.
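A minimal PyTorch sketch of the residual block described above, including the 1x1 projection used when input and output dimensions differ; the channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + shortcut(x))."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # A 1x1 convolution adjusts the shortcut when shapes do not match.
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The block learns the residual F(x); the skip connection adds x back,
        # letting gradients flow directly through the addition.
        return torch.relu(out + self.shortcut(x))

block = ResidualBlock(16, 32, stride=2)
print(block(torch.rand(1, 16, 64, 64)).shape)  # torch.Size([1, 32, 32, 32])
```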
He et al. [39] addressed the degradation problem by introducing a deep residual learning framework; empirical evidence showed that increasing the depth of these residual networks leads to higher accuracy. Li and Wang [40] introduced an enhanced mask R-CNN model that combines the residual network and feature pyramid network to enhance the recognition capability of small targets. Amogne et al. [41] introduced the Opt-ResDCNN model, a deep convolutional neural network with residual blocks. This model was designed for identifying and classifying defect patterns in wafer maps. Inspired by ResNet, the method enhances the model by incorporating additional convolutional layers and residual blocks. ### Recurrent Neural Network-Based Classification #### Generative Adversarial Network-Based Classification Generative Adversarial Networks (GANs) are deep learning models that have two components: a generator and a discriminator. After training, GANs can be employed for defect detection by creating artificial maps from unlabeled wafer data. By comparing these synthetic maps with real ones, any notable differences can suggest potential defects. Analyzing the areas where the generated maps deviate from the real maps allows semiconductor manufacturers to accurately identify and locate defective patterns. Fig. 3 demonstrates the processing of GANs. Wang et al. [42] utilized generative adversarial networks (GANs) to classify wafer defect patterns in the presence of class-imbalanced data. Byun and Baek [43] developed a deep convolutional GAN that synthesizes wafer maps to generate composite defects by combining single-type patterns through pixel-wise summation. Li and Jiang [44] introduced an enhanced ensemble GAN for wafer surface defect detection. It includes three generators, a discriminator, and a convolutional encoder-decoder architecture with skip connections. #### Adversarial Training-Based Classification Adversarial training involves training a deep neural network (DNN) to differentiate between defective and non-defective patterns in wafer maps. The training process includes a generator and a discriminator. The generator creates synthetic wafer map patterns, both defective and non-defective, to expand the training dataset. It aims to generate realistic patterns that are hard for the discriminator to distinguish from real ones; its goal is to enhance the discriminator's ability by providing challenging examples. The discriminator is a DNN model initially trained on real wafer map patterns and later exposed to the generator's synthetic patterns. It learns to distinguish between real and synthetic patterns, prompting the generator to generate more realistic synthetic patterns over time. Tzeng et al. [45] introduced a technique that integrates adversarial training into transfer learning. It involves creating a feature extractor and classifier based on the source domain, which is used to generate a new feature extractor by mapping the data from the target to the source domain. Wang et al. [46] introduced an adaptive balancing GAN technique for imbalanced learning by combining adversarial training and domain adaptation. Ganin and Lempitsky [47] introduced DANN, which employed min-max adversarial training to reduce the discrepancy between source and target domains.
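Before moving on, the adversarial loop described in the two subsections above can be summarized in a minimal PyTorch sketch; the tiny fully-connected generator and discriminator operating on flattened 32x32 maps are illustrative stand-ins for the convolutional networks used in practice.

```python
import torch
import torch.nn as nn

latent_dim, map_dim = 64, 32 * 32  # noise size; flattened 32x32 wafer map

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, map_dim), nn.Sigmoid())  # generator
D = nn.Sequential(nn.Linear(map_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                      # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, map_dim)  # stand-in batch of real wafer maps
for step in range(100):
    # --- Discriminator: real maps labeled 1, generated maps labeled 0. ---
    z = torch.randn(16, latent_dim)
    fake = G(z).detach()  # detach so the generator is not updated here
    loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator: try to make D classify generated maps as real. ---
    z = torch.randn(16, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(16, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```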
### Hopfield Artificial Neural Network-Based Classification The Hopfield Artificial Neural Network (ANN) is a recurrent neural network used for pattern recognition, particularly in identifying defects on semiconductor wafers. It utilizes an energy function to gauge network stability, minimizing it to reach a stable state representing a stored pattern. To detect defects in a new wafer map, the network starts with the input map and iteratively updates neuron states based on neighboring neurons and connection weights. Chang et al. [48] proposed an automated die inspection approach using a contextual-Hopfield neural network. The inspection is performed in multiple steps, targeting different regions, and the results are recorded in a die map. By following a simple-to-complex sequence, this method reduces redundant inspections, improving efficiency. Chang et al. [49] proposed a novel method using a Hopfield neural network to accurately classify wafer images by incorporating spatial information. They extended the 2-D Hopfield network to a two-layer 3-D architecture [50], enabling the detection of defective regions and integrating spatial information during pixel classification. Figure 3: The figure demonstrates the processing of GANs. ## III Traditional-Based Classification ### Ensemble Learning-Based Classification #### III-A1 XGBoost-Based Classification XGBoost, also known as eXtreme Gradient Boosting, is a popular machine learning algorithm used to detect defects in semiconductor wafer maps. It employs gradient boosting, an ensemble technique that combines several weak models (usually decision trees) to create a robust predictive model. By iteratively training new models to correct errors made by previous models, XGBoost improves overall accuracy. It leverages statistical features such as mean and standard deviation, and spatial features like neighboring pixel intensities, to capture essential information for defect detection. Yuan-Fu [51] utilized XGBoost and a CNN to tackle wafer map retrieval tasks and the classification of defect patterns. Chen et al. [52] introduced a methodology that combines a defect situation classification model, constructed using the random forest and XGBoost methods, with a multi-objective parameter optimization model employing the PSO method. #### III-A2 Decision Tree-Based Classification Decision trees are widely used in machine learning to detect defects in semiconductor wafer maps. The algorithm constructs a decision tree model using a training set. It selects the most informative features and divides the dataset into subsets based on these features, creating a tree structure. The objective is to accurately classify defective and non-defective patterns by creating decision nodes. Given a new input, the model follows the tree's branches and applies the learned rules to classify the pattern as defective or non-defective. Fig. 4 demonstrates the processing of decision trees. Figure 4: The procedure of Decision Tree is illustrated in the figure. Piao et al. [53] used a decision tree ensemble and Radon transform-based features derived from raw wafer map data to recognize failure patterns and identify defect patterns in wafer maps. The final decision combines predictions from the ensemble. Chou et al. [54] developed a system using a decision tree and neural network to classify defects in chip-scale package images. The system preprocesses wafer surface images, extracting size, shape, location, and color features of defects for classification. Li et al. [55] presented a decision tree that incorporates DNNs for ADC. The decision tree utilizes defect images as the training dataset and attains an impressive classification accuracy of 100% for 12 defect classes.
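To ground the ensemble-learning techniques above, here is a minimal sketch using the xgboost library on hypothetical hand-crafted wafer-map features (e.g., per-zone defect densities); the feature matrix and labels are random stand-ins, not a real dataset.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 500 wafers, 10 hand-crafted features each
# (e.g., mean/std intensity, per-zone defect densities), binary labels.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = rng.integers(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees: each new tree corrects the errors of the ensemble so far.
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```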
#### III-A3 Adaptive Boosting (AdaBoost)-Based Classification AdaBoost (Adaptive Boosting) is an ensemble method that combines multiple weak classifiers (e.g., decision trees) to create a strong classifier. The basic idea behind AdaBoost is to iteratively train weak classifiers on different subsets of the training data and assign higher weights to misclassified examples to focus on the difficult instances. It assigns importance to each weak classifier based on its performance, with lower error rates leading to higher importance. The final classification is achieved using a weighted majority vote. Yuan-Fu [56] introduced a technique that employed singular value decomposition to extract features for non-CNN models, namely AdaBoost, SVM, and XGBoost. Through the utilization of data extraction and hyperparameter tuning, significant improvements were achieved in the performance of AdaBoost, SVM, and XGBoost. Zuo et al. [57] applied an AdaBoost tree to enhance wafer testing by reducing false failures and improving minority class accuracy. Their approach handles imbalanced wafer test datasets. Lee et al. [58] proposed a defect classification method using an AdaBoost classifier in semiconductor fabrication. By extracting features from segmented local regions of wafer images, they achieved specific defect type identification. #### III-A4 Random Decision Forests-Based Classification Random Decision Forests (RDFs) utilize multiple decision trees in an ensemble. Each tree is built using a random subset of the training data and available features, reducing overfitting and enhancing generalization. The trained RDF model classifies new wafer maps: each decision tree in the forest evaluates the wafer map, and the final prediction is determined through majority voting or averaging of individual tree predictions. This prediction indicates the existence or absence of defects in wafer regions. Saqlain et al. [59] presented a technique involving the training of a soft voting ensemble classifier, which combines Random Decision Forests with density- and geometry-based features. F. Adly et al. [60] introduced Random Decision Forests, a robust learning model with randomized bootstrap aggregation applied to the dataset. It effectively classified wafers with four defect patterns. Kwon and Kang [61] presented a defect detection approach capable of identifying surface irregularities on various surfaces using Random Decision Forests. ### Kernel-Based Classification #### III-B1 Support Vector Machine (SVM)-Based Classification The SVM algorithm is applied to the training data to learn a decision boundary that distinguishes between defective and non-defective patterns. It locates an optimal hyperplane that maximizes the margin to the support vectors, the data points closest to the decision boundary. SVMs can handle high-dimensional feature spaces and are capable of handling both linear and non-linear classification tasks using different kernel functions. The SVM algorithm finds the optimal hyperplane that maximally separates the different classes, leading to effective wafer defect identification.
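To illustrate the margin-maximizing classifier just described, here is a minimal scikit-learn sketch; the RBF kernel and the random stand-in features are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in data: feature vectors extracted from wafer maps, binary labels
# (0 = non-defective, 1 = defective).
rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = rng.integers(0, 2, 300)

# Feature scaling matters for SVMs; the RBF kernel handles non-linear boundaries.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X, y)
print(svm.predict(X[:5]))  # predicted defect labels for five wafers
```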
Wu et al. [62] proposed a methodology that combines SVMs with Radon-based feature extraction techniques for the purpose of predicting failure patterns. Kingma et al. [63] proposed deep generative models, including the latent-feature discriminative model and the semi-supervised deep generative model (SS-DGM). They utilized an SVM for classification and the SS-DGM as an end-to-end trainable generative model, incorporating latent class and continuous variables for data characterization. Li and Huang [64] used SOM and SVM algorithms for defect spatial pattern recognition. They employed the log odds ratio test to distinguish between systematic and random defects. Clustering of WBMs was performed with SOM, and SVM was used for classification. Baly and Hajj [65] utilized nonlinear SVMs for early wafer classification. Their objective was to categorize wafers as good or bad by using a predetermined yield threshold as a differentiating boundary between the two classes. #### III-B2 Logistic Regression (LR)-Based Classification Logistic regression works by estimating the probability that a given wafer image belongs to the defective or non-defective class. The algorithm learns a set of weights and biases that define the decision boundary between the two classes (see Fig. 5). To train a logistic regression model, a labeled dataset of wafer images with corresponding defect labels is used. The features extracted from the wafer images are used as input to the logistic regression algorithm. The algorithm optimizes the weights and biases by minimizing a loss function, such as the cross-entropy loss, through techniques like gradient descent. Once trained, the logistic regression model can classify new, unseen wafer images by computing the probability that the image belongs to the defective or non-defective class. A threshold can be set to determine the predicted class based on these probabilities. Logistic regression is known for its simplicity, interpretability, and efficiency. Combined with suitable feature transformations, it can handle both linear and non-linear classification tasks, making it suitable for identifying wafer defects that may exhibit complex patterns. Krueger et al. [67] devised a methodology using generalized linear models to predict yield in semiconductor manufacturing. Their study revealed the effectiveness of logistic regression (LR) in modeling yield based on defect data. The nested die-level LR models demonstrated superior predictive capabilities. Saqlain et al. [66] used an ensemble-based classification approach, combining logistic regression (LR), random forest (RF), and SVM algorithms. The success of these techniques relied on skilled feature engineering and domain expertise. They extracted three types of features (density, geometry, and Radon-based) from raw wafer images. ### Nearest Neighbor-Based Classification #### III-C1 K-Nearest Neighbor (KNN)-Based Classification The K-Nearest Neighbor (KNN) algorithm measures the distances between patterns using predetermined similarity metrics like the Euclidean distance (see Fig. 6). Then, it assigns a label to the pattern by considering the labels of its closest neighbors and employing majority voting as the classification method. The user-defined parameter, K, plays a crucial role in determining the number of neighbors to be considered. By taking a majority vote among the K nearest neighbors, the class label for the test data point is determined. Each neighbor's class label carries equal weight in the voting process, and the test data point is assigned the class label that receives the highest number of votes. Cheon et al. [68] integrated a combination of a CNN and k-NN to perform classification of defect patterns, thereby enabling the detection of unknown classes. Kim et al. [69] proposed a method for categorizing failure patterns on DRAM wafers. They used matrix factorization to extract features from binarized FBMs. These features were used in a KNN classifier to distinguish between single-bit and non-single-bit failure maps on FBMs.
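A minimal scikit-learn sketch of the majority-vote rule described above; k=5 and the Euclidean metric are illustrative choices, and the features are random stand-ins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: feature vectors from wafer maps with binary defect labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 10))
y_train = rng.integers(0, 2, 200)

# k=5 neighbors under the Euclidean metric; each neighbor's vote has equal weight.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)

X_new = rng.random((3, 10))
print(knn.predict(X_new))        # majority-vote labels
print(knn.predict_proba(X_new))  # vote fractions per class
```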
Cheon et al. [70] created a hybrid model by combining a four-layer CNN with a k-NN algorithm. Yuan et al. [71] used the KNN method to eliminate random defects and grouped defect patterns into clusters based on similarity. They then classified the defect clusters into specific patterns using different model selection criteria. #### III-C2 Learning Vector Quantization-Based Classification Learning Vector Quantization (LVQ) employs a competitive learning method to update the prototype vector that most closely matches the input pattern. This iterative process is performed for all patterns in the training set until convergence. After training, the updated prototype vectors form a codebook for classifying unfamiliar wafer map patterns. Classification is accomplished by assigning the new pattern to the nearest prototype vector in the codebook. Chang et al. [72] proposed a method using the LVQ neural network to inspect defects in LED wafers. The approach involves obtaining die images and their regions of interest (ROI) from the wafer image. Geometric and texture features are extracted from each ROI and used to train the LVQ neural network. Su et al. [73] developed a neural network method for inspecting semiconductor wafers after sawing. They used learning vector quantization and achieved inspection times of less than one second per die, proving its efficiency. Figure 5: The figure depicts the process of Logistic Regression. Figure 6: The figure depicts the process of the KNN classifier. ## IV Comparative Evaluations In this section, we scrutinize the various machine learning classification strategies presented in this survey, all designed for the detection of defective patterns in semiconductor wafer maps. We evaluate each technique using the following four principal criteria: the core idea behind the technique, the rationale behind its implementation, the necessary conditions for achieving its best performance, and its limitations. Table I evaluates the techniques based on deep learning for classification. Table II assesses the techniques rooted in traditional methods for classification. Our objective is to deliver an all-encompassing insight into the advantages and disadvantages of each technique, along with their appropriateness for certain tasks. CNNs use convolutional layers to extract features from input images. These layers capture local patterns through learnable filters that convolve across the image, detecting structures such as edges and other local features. Pooling layers then reduce the feature maps' size while preserving important information, using operations like max pooling or average pooling. The extracted features are flattened and passed to fully connected layers, which learn to map them to defect classes; these layers often use activation functions like ReLU to introduce non-linearity. The network is trained with a loss function such as categorical cross-entropy, comparing predicted probabilities to true labels. In multi-label defect classification, defects must be identified and localized within an image; CNNs can incorporate additional localization components and can be combined with object detection frameworks (e.g., Faster R-CNN or YOLO) to accurately locate and classify defects on the wafer.
For multi-label classification tasks, multiple loss functions can be used to predict the presence of each defect class independently, or a softmax entropy loss, sigmoid activation, and specialized loss functions like focal loss. These loss functions enable the network to handle multiple detection and fine-tunes the weights with the target dataset accordingly. \begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline & CNNs use convolutional layers to extract features from input images. These layers have learnable filters that convolve across the image, detecting patterns like edges, features, and features. Pooling layers then reduce the feature maps’ size while preserving important information, using operations like max pooling or average pooling. The extracted features are flattened and passed to fully connected layers, which learn to map them to defect classes. These layers often use activation representations like ReLU to introduce non-linearity. The network is trained with a loss function like categorical cross-entropy, comparing predicted probabilities to true labels & \begin{tabular}{p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline & CNNs are well-suited for wafer & Data augmentation techniques increase dataset size and prevent overfitting by introducing variability through rotation, flipping, and noise addition. Choosing the right CNN architecture (e.g., ResNet) is crucial for defect classification. Fine-tuning a pre-trained CNN model (e.g., ImageNet) on the wafer defect dataset leverages learned features. The CNN should model complex relationships patterns without overfitting. The choice of optimizer (e.g., Adam) helps them learn general image impact oversegene and performance. \\ \hline \end{tabular} \begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline & \begin{tabular}{p{142.3pt}|p{142.3pt}|} \hline & \begin{tabular}{p{142.3pt}|} \hline & \begin{tabular}{p{142. The self-calibration mechanism is integrated to improve accuracy and adaptability, minimizing errors caused by changing to analyze using training variations. A calibration dataset is collected, covering diverse imaging conditions and potential sources of variation. The network detects defects, corrects biases or errors introduced by the (2) imaging system, and adjusts predictions by comparing them with actual defects in the calibration dataset. A feedback loop continuously evaluates and refines the network's predictions based on the calibration data, enhancing defect detection and identification requires manual calibrated, the network can identify defects in real-time on new wafers. A calibration loss function is defined to automatically extract relevant features from wafer map data, the model's predictions and the calibration data. This loss function guides the self-calibration process by optimizing the model's parameters to minimize the calibration error. The network is trained in an iterative manner, where it initially learns from the labeled training data and then undergoes multiple rounds of self-calibration using the calibration data. ResNet introduces residual connections to bypass layers and learn residual functions by capturing input-output differences. These skip connections propagate gradients effectively, addressing the vanishing gradient problem. ResNet uses residual blocks with stacked convolutional layers. Each block applies convolutions to the input and combines it with the original input through element-wise addition, enabling direct layer. 
This enables the network output dimension mismatch, 1x1 convolutional layers and adjust the input before addition. ResNet's exceptional utilization of residual connections significantly streamlines the training process of deep networks, making it highly efficient in identifying and capturing intricate patterns and sophisticated features in the demanding domain of wafer defect tasks. By leveraging these residual connections, ResNet enhances the model's capacity to effectively learn and represent the complex relationships between between different layers. The technique can be improved by: (1) Designing the network architecture to effectively capture intricate patterns in wafer maps. CNNs are widely employed for image analysis tasks and are well-suited for this purpose, (2) Carefully selecting and fine-tuning hyperparameters, such as learning rate, batch size, and regularization parameters, to optimize the network's performance. Techniques like grid search or automated hyperparameter optimization algorithms can aid in this process, (3) Employing regularization techniques like dropout, batch normalization, or weight decay to prevent overfitting and enhance the network's ability to feature engineering, where balanced distribution of defective and non-defective patterns in the training data is crucial. Class imbalance can introduce bias towards the majority class and adversely impact the network's prediction performance for the minority class (defective patterns), and (5) Evaluating the network's performance by employing separate validation and testing sets. This evaluation procedure assesses the network's performance significantly. The limitations are: (1) Neural networks require extensive labeled training data, which can be difficult to generate for rare or complex defects. Limited training data reduces accuracy and generalization ability, (2) Complex and subtle defects in wafer maps pose challenges for neural networks. If these defects are not well-represented in the training data, accurate identification and classification may be limited, (3) Neural networks are often black-box models, lacking transparency in explaining predictions. This hinders model validation and improvement by impeding understanding of the network's reliance on specific features, (4) Networks with self-calibration demand significant computational resources, restricting real-time analysis of large-scale wafer maps. This limits scalability and efficiency, particularly with growing wafer map size and complexity, (5) Networks with self-calibration may struggle to identify new or unseen defect types, as they rely on existing labeled data. Performance may be compromised until the network is retrained with additional labeled data, (6) Networks with self-calibration can produce verifies its accuracy in identifying negatives, incorrectly identifying non-defective patterns as defective maps. \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & GANs generate synthetic wafer images resembling defect-free wafers using a generator network that takes random noise as input. The training involves an adversarial relationship between the generator and discriminator networks. The generator aims to deceiving its images as real, while the discriminator strives to differentiate between real and synthetic images. Through an iterative feedback, the generator refines its images based on the discriminator’s feedback, improving both networks over time. 
The generator network generates synthetic wafer images for comparison with real images. & Annotated wafer defect datasets are often scarce and expensive to create. GANs can generate synthetic wafer defect samples, augmenting the invariant developed data and the discriminator networks. The generator aims to determine the network’s ability to discriminate networks. The generator aims to determine two classifying its images as real, while the discriminator strives to differentiate between real and synthetic images. Through an iterative feedback, the generator refines its images based on the discriminator’s feedback, improving both the generator network over time. The generator network generates synthetic wafer images for comparison with real images. & Annotated wafer defect datasets are often scarce and expensive to create. GANs can generate synthetic wafer defect samples, augmenting the invariant developed data and the discriminator networks. The generator aims to determine two classifying its images as real, while the discriminator strives to differentiate between real and synthetic images. Through an iterative feedback, the generator refines its images based on the discriminator’s feedback, improving both the generator network over time. The generator network generates synthetic wafer images for comparison with real images. & Limitations: (1) Wafer defect occurrences are less frequent than non-defective wafers, posing challenges for GANs. Imbalanced data distributions may cause a bias towards learn complex patterns and differentiate subtle defects. CNNs can be trained in an unsupervised or semi-supervised information capturing ability. The learn from unannotated or partially annotated data. This is useful when obtaining fully labeled data is challenging or costly. GANs, with their ability to learn complex data distributions, can capture subtle variations and generate realistic defect samples. By training on normal and defect-free samples, GANs techniques like grid search or gauge system optimization, finds the best parameter combination. & Limitations: (1) Wafer defect occurrences are less frequent than non-defective wafers, posing challenges for GANs. Imbalanced data distributions may cause a bias towards learn complex patterns and differentiate subtle defects. CNNs are commonly used for discriminator due to their spatial information capturing ability. The generator should produce realistic defects representative of the training data. Training GANs can be challenging due to instability identify defects accurately in unseen test data. (3) Accurate classification of wafer defects is challenging due to intricate patterns, shapes, and variations. GANs may struggle to generate highly detailed defect samples or capture the full complexity of defects during training \\ \hline \end{tabular} ## IV Conclusion In this paper, we have proposed a novel approach to generate synthetic wafer images using a generator network that takes random noise as input. The training involves an adversarial relationship between the generator and discriminator networks. The generator aims to deceiving its images as real, while the discriminator strives to differentiate between real and synthetic images. Through an iterative feedback, the generator refines its images based on the discriminator’s feedback, improving both the generator network over time. The generator network generates synthetic wafer images for comparison with real images. 
& Limitations: (1) Wafer defect occurrences are less frequent than non-defective wafers, posing challenges for GANs. Imbalanced data distributions may cause a bias towards learn complex patterns and differentiate subtle defects. CNNs are commonly used for discriminator due to their spatial information capturing ability. The generator should produce realistic defects representative of the training data. Training GANs can be challenging due to instability identify defects accurately in unseen test data. (3) Accurate classification of wafer defects is challenging due to intricate patterns, shapes, and variations. GANs may struggle to generate highly detailed defect samples or capture the full complexity of defects during training \\ \hline \end{tabular} ## V Conclusion In this paper, we have proposed a novel approach to generate synthetic wafer images using a generator network that takes random noise as input. The training involves an adversarial relationship between the generator and discriminator networks. The generator aims to deceiving its images as real, while the discriminator strives to differentiate between real and synthetic images. Through an iterative feedback, the generator refines its images based on the discriminator’s feedback, improving both the generator network over time. The generator network generates synthetic wafer images for comparison with real images. & Limitations: (1) Wafer defect occurrences are less frequent than non-defective wafers, posing challenges for GANs. Imbalanced data distributions may cause a bias towards learn complex patterns and differentiate subtle defects. CNNs are commonly used for discriminator due to their spatial information capturing ability. The generator should produce realistic defects representative of the training data. Training GANs can be challenging due to instability identify defects accurately in unseen test data. (3) Accurate classification of wafer defects is challenging due to intricate patterns, shapes, and variations. GANs may struggle to generate highly detailed defect samples or capture the full complexity of defects during training \\ \hline \end{tabular} ## References * [1]M. A. Abbeel, A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A A. A. A. A A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A. A A. A. A. A A. A. A A. A. A. A A. A A. A A. A A. A ## References * [1]M. Al-H. * [5] AdaBoost starts by training a weak classifier on the dataset. The weak classifier minimizes classification error or maximizes accuracy. Weights are assigned to each data point, giving higher weights to misclassified points. Weak classifiers minimize weighted classification error, adjusting weights based on importance of misclassified examples. AdaBoost combines weak classifiers into a strong classifier, assigning weights based on performance. 
Weights are determined by classification accuracy during training. AdaBoost applies weak classifiers to input features, assigning weighted votes to predictions. Final classification is determined by summing weighted votes and considering majority decision. * [6] For each data subset, a decision tree is created using a selected set of features. Randomness is introduced by considering only a random subset of features at each node for splitting. This prevents strong correlations between trees. Predictions are made by combining the outputs of all decision trees. Binary classification employs a majority voting scheme, where each tree "votes" for a class label and the most voted label becomes the final prediction. The trained RDF model can be used to identify defects in test wafers. The features of a test wafer are fed into each decision tree, and the majority voting scheme determines the final prediction. * [7] SVMs, are binary classifiers used to categorize data into two classes. In this case, the classes are "defective" and "non-defective" wafers. SVMs aim to create a hypemlance, acting as a boundary, that separates these classes. In a two-dimensional space, hyperplane is a line, while in higher dimensions, it becomes a hyperplane. The algorithm maximizes the margin, the distance between the hyperplane and the closest data points from each class. Support vectors, the hyperplane's position, optimizing it for maximum margin. SVM training involves solving an optimization problem to find the optimal hyperplane, maximizing the margin. Constraints are applied to identify hyperplane parameters that classify the training data * [8] Select weak classifiers that capture different aspects of wafer defects, providing complementary information. They should perform better than random guessing, preferably with accuracy above combining weak classifiers, significantly affects AdaBoost's final performance. Optimal number of iterations or weak classifiers must be determined to prevent overfitting or underfitting, using techniques like cross-validation or validation set monitoring. Balance class distribution and prevent performance, reducing errors and increasing accuracy. The Emphasizing misclassified instances in each iteration helps AdaBoost generalize to unseen data, making it suitable for real-world applications like wafer detection. * [9] Random Decision Forests use an ensemble learning technique to enhance the accuracy and reliability of defect identification. They randomly select a subset of features at each node for splitting. This mitigates the influence of irrelevant or noisy features, preventing overfitting and promoting generalization. Random Decision Forests can capture intricate relationships and patterns, including non-linear dependencies between features and defects. They are robust to outliers and missing data. Random Decision Forests excel in handling large datasets through parallelization and reduce overfitting with randomization techniques like feature subsampling and bootstrap aggregating. * [10] We search for classifiers that capture different aspects of wafer defects, providing complementary information. They should perform better than random guessing, preferably with accuracy above combining weak classifiers, significantly affects AdaBoost's final performance. Optimal number of iterations or weak classifiers must be determined to prevent overfitting or underfitting, using techniques like cross-validation or validation set monitoring. 
Balance class distribution and prevent biased learning by oversampling the minority class (defective) or underfitting. The majority majority of the minority class (no-) dataBoost necessitates multiple iterations, involving training weak classifiers and updating their weights, making the iterative and improves computationally expensive * [11] Random Decision Forests use an ensemble learning technique to enhance the accuracy and reliability of defect identification. They randomly select a subset of features at each node for splitting. This mitigates the influence of irrelevant or noisy features, preventing overfitting and promoting generalization. Random Decision Forests can capture intricate relationships and patterns, including non-linear dependencies between features and defects. They are robust to outliers and missing data. Random Decision Forests excel in handling large datasets through parallelization and reduce overfitting with randomization techniques like feature subsampling and bootstrap aggregating. * [12] We search for classifiers that capture different aspects of wafer defects, providing complementary information. They should perform better than random guessing, preferably with accuracy above combining weak classifiers, significantly affects AdaBoost's final performance. Optimal number of iterations or weak classifiers must be determined to prevent overfitting or underfitting, using techniques like cross-validation or validation set monitoring. Balance class distribution and prevent biased learning by oversampling the minority class (defective) or underfitting. The majority of the minority class (defective) or underfitting, including the majority of the minority class (no-) dataBoost necessitates multiple iterations, involving training weak classifiers and updating their weights, making the iterative and improves computationally expensive * [13] Random Decision Forests use an ensemble learning technique to enhance the accuracy and reliability of defect identification. They randomly select a subset of features at each node for splitting. This mitigates the influence of irrelevant or noisy features, preventing overfitting and promoting generalization. 
To ensure logistic regression compatibility, raw features need processing and transformation, including scaling, normalization, handling missing values, and encoding categorical variables. Feature engineering extracts meaningful information, enhancing logistic regression. To train the model, split the dataset into training and test sets. Training optimizes the weights and bias via gradient descent, minimizing the difference between predicted and actual labels. The trained logistic regression model predicts defects in new, unseen wafers. Probability scores determine defect likelihood. Applying a threshold classifies wafers as defective or non-defective, balancing precision and recall based on specific requirements. Logistic regression produces continuous probabilities between 0 and 1. The KNN algorithm computes the dissimilarity between the feature vector of a test wafer and the feature vectors of the training wafers. It captures relationships without assuming a particular data distribution. It focuses on local information, which is particularly useful for identifying wafer defects based on significant dissimilarities and patterns within a small vicinity. The choice of k governs the classification: a small k detects subtle patterns or similarities but overfits to noisy points and thus produces irregular decision boundaries. Use cross-validation to find the k with the best validation performance. Handling class imbalance, if present, also matters: when defective and non-defective wafer counts differ substantially, k-NN may favor the majority class, and since real-world data is often imbalanced, this leads to misclassification of the minority class unless balancing is applied. A minimal sketch of logistic-regression thresholding and cross-validated k selection follows.
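The sketch below illustrates the two mechanisms just described, probability thresholding with logistic regression and cross-validated choice of k for k-NN; `X`, `y`, and the threshold value are illustrative placeholders, not values from the surveyed papers.

```python
# Minimal sketch (illustrative only): probability thresholding with logistic
# regression and cross-validated choice of k for k-NN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(rng.random((200, 64)))  # scaling, as required above
y = rng.integers(0, 2, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = lr.predict_proba(X_te)[:, 1]          # continuous scores in (0, 1)
threshold = 0.4                               # tuned to trade precision vs. recall
pred = (proba >= threshold).astype(int)       # 1 = defective
print("defective wafers flagged:", pred.sum())

# Pick k by cross-validation: small k overfits, large k oversmooths.
best_k = max(range(1, 16, 2),
             key=lambda k: cross_val_score(KNeighborsClassifier(k), X_tr, y_tr, cv=5).mean())
print("chosen k:", best_k)
```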
The performance of LVQ can be significantly influenced by the careful selection of relevant features and the application of effective feature extraction techniques. It is crucial to identify the features that are most effective in distinguishing between different types of defects. Techniques like PCA reduce dimensionality and improve feature discriminative power. LVQ uses one or more prototypes per class, and their number and placement affect the classification. Proper initialization aligns the prototypes with the data distribution, capturing defect structure and enhancing performance. Appropriate learning rates ensure stable convergence. Wafer defect identification involves capturing intricate patterns and subtle variations, which LVQ may struggle to model. LVQ can also encounter challenges with imbalanced datasets where the sample count varies between classes. ## Experimental Evaluations ### Compiling Datasets for the Evaluations The experimental dataset comprises a combination of two types of data: 1. Real-world wafer maps provided by Samsung Electronics in Korea for our previous research papers [74, 75, 76]. The dataset encompasses the fabrication information of 26 lots, comprising 843 wafer maps obtained during the fabrication process at different stages. For data transformation, graph visualization, and data analysis, we utilized the statistical software STATISTICA, SAS, and Scenario. 2. Data generated by Jeong et al. [77] using the methodology proposed by DeNicolao et al. [78]. This dataset encompasses the most prevalent wafer defect patterns, namely spot, circle, repetitive, and cluster. To represent the position of a defective die in each major process zone of semiconductor wafer fabrication, a distinct probabilistic model was employed for each defect pattern. In the dataset created by Jeong et al. [77], eight levels of random noise were incorporated, commencing from 0.05 and incrementing by 0.05. For each of the four defect patterns, ten wafer maps were generated per noise level. The probabilistic expressions used to depict the position of a defective die on a simulated process zone are presented in equations 1-6 [9]. * _Spot:_ The controlling parameters are defined as follows: \(\sigma\) represents the width, while \((x_{c},y_{c})\) denotes the coordinates of the defect's center. The distance between a die at \((x,y)\) and the defect's center is represented by the variable \(r\). \[p\left(x,y\right)=\exp\left(-r^{2}/2\sigma^{2}\right),\quad r^{2}=\left(x-x_{c}\right)^{2}+\left(y-y_{c}\right)^{2}\] (1) * _Circle:_ The controlling parameters are defined as follows: \(\sigma\) represents the radius, while \((x_{c},y_{c})\) denotes the coordinates of its center. \[p\left(x,y\right)=1-\exp\left(-r^{2}/2\sigma^{2}\right),\quad r^{2}=\left(x-x_{c}\right)^{2}+\left(y-y_{c}\right)^{2}\] (2)
* _Repetitive:_ The controlling parameters are the period \(T\) and the phase \(\phi\) of the stripe pattern along the rows (horizontal) or columns (vertical). \[\text{(horizontal): }p(x,y)=\left(1+\sin\left(2\pi y/T+\phi\right)\right)/2\] (3) \[\text{(vertical): }p(x,y)=\left(1+\sin\left(2\pi x/T+\phi\right)\right)/2\] (4) * _Cluster:_ The generation process involves the application of logical operators, specifically the "OR" or "AND" operators, to two constituent patterns \(p_{1}\) and \(p_{2}\). \[\text{(``AND''): }p(x,y)=p_{1}(x,y)\,p_{2}(x,y)\] (5) \[\text{(``OR''): }p(x,y)=p_{1}(x,y)+p_{2}(x,y)-p_{1}(x,y)\,p_{2}(x,y)\] (6)
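To make these probabilistic models concrete, here is a minimal sketch of a synthetic wafer-map generator implementing Eqs. (1)-(6). The grid size, parameter values, and the Bernoulli sampling of each die from \(p(x,y)\) are our own illustrative assumptions, not specifics of the dataset of Jeong et al. [77].

```python
# Minimal sketch (assumptions: 64x64 die grid, Bernoulli sampling of each die
# from p(x, y), parameter values chosen for illustration) of the defect models
# in Eqs. (1)-(6).
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)

def spot(xc, yc, sigma):                       # Eq. (1)
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return np.exp(-r2 / (2 * sigma ** 2))

def circle(xc, yc, sigma):                     # Eq. (2)
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return 1.0 - np.exp(-r2 / (2 * sigma ** 2))

def repetitive(T, phi, horizontal=True):       # Eqs. (3)-(4)
    coord = y if horizontal else x
    return (1.0 + np.sin(2 * np.pi * coord / T + phi)) / 2.0

def cluster(p1, p2, op="AND"):                 # Eqs. (5)-(6)
    return p1 * p2 if op == "AND" else p1 + p2 - p1 * p2

rng = np.random.default_rng(0)
p = cluster(spot(20, 20, 6), repetitive(T=8, phi=0.0), op="OR")
wafer_map = rng.random((n, n)) < p             # True marks a defective die
```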
### Evaluation Setup Cross-validation is widely regarded as the predominant and widely accepted statistical approach for appraising classifier performance. In order to evaluate the predictive capability of the models, we conducted a 10-fold cross-validation. The dataset is randomly divided into ten distinct subsets. The models undergo ten rounds of evaluation, during each of which a unique subset of data is held out for testing while the remaining nine subsets are employed for model training. We utilized the subsequent metrics for assessment: * _Classification accuracy (Acc):_ It refers to the measure of the proportion of correct predictions made by a classification model. It is expressed as follows: \[\text{Classification Accuracy = (Number of Correct Predictions) / (Total Number of Predictions)}\] (7) * _Coefficient of determination (R2):_ The coefficient of determination, denoted as R2, serves as a metric to assess the ability of a model to accurately elucidate and forecast future clustering outcomes. The measurement of R2 is derived utilizing Equation 8. \[R^{2}=100\times\left(1-\frac{\sum_{i=1}^{n}\left(x_{i}-m_{i}\right)^{2}}{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}\right),\quad\text{where }m_{i}\text{ is the predicted output}\] (8) * _F1-measure:_ It is a performance metric utilized for evaluating classification models. It harmonizes precision and recall by generating a single score. The calculation of the F1-measure is achieved through the following equation: \[\text{F1-measure}=2\times(\text{Precision}\times\text{Recall})/(\text{Precision}+\text{Recall})\] where precision represents the ratio of true positive predictions to the total number of positive predictions, while recall denotes the ratio of true positive predictions to the total number of actual positive instances present in the dataset. * _Adjusted Rand Index (ARI):_ It is a measure that quantifies the similarity between two data clusterings, adjusting for the chance grouping of elements. It provides a correction for chance over the Rand Index, which is a measure of the similarity between two data clusterings. The Rand Index can be interpreted as the probability that a pair of data points will be in the same or different clusters in both clusterings, while the ARI adjusts for expected chance agreement. Here is the formula for the Adjusted Rand Index: \[\text{ARI = (RI - Expected\_RI) / (Max\_RI - Expected\_RI)}\] (9) where RI is the Rand Index, RI = (a + b) / (a + b + c + d), with \(a\) and \(b\) the numbers of pairs of elements that are in the same subset and in different subsets, respectively, in both clusterings, \(c\) the number of pairs of elements that are in the same subset in one clustering but in different subsets in the other, and \(d\) the number of pairs of elements that are in different subsets in one clustering but in the same subset in the other. Expected\_RI is the expected value of the Rand Index, assuming that the clusterings are randomly assigned. Max\_RI is the maximum possible value of the Rand Index. * _Clustering accuracy (\(\gamma\)):_ The parameter \(\gamma\) indicates the effectiveness of a model in accurately grouping defect patterns. As depicted in Equation 10, this is computed by contrasting the predicted cluster outcome with the real outcome. \[\gamma=\frac{\text{length}\left(X=\hat{X}\right)}{\text{length}\left(X\right)}\] (10) where \(X\) is the correct value and \(\hat{X}\) is the estimated one. * _Normalized Mutual Information (NMI):_ It is used as a measure in clustering to determine the quality of the clusters. It is defined as: \[\text{NMI}(X,Y)=\frac{2\,\text{MI}(X,Y)}{\text{H}(X)+\text{H}(Y)}\] (11) where \(\text{H}(X)=-\sum_{i}P(i)\log P(i)\), \(\text{H}(Y)=-\sum_{j}P(j)\log P(j)\), and \(\text{MI}(X,Y)=\sum_{i,j}P(i,j)\log\left(P(i,j)/\left[P(i)\,P(j)\right]\right)\); \(P(i,j)\) is the joint probability mass function of \(X\) and \(Y\), and \(P(i)\) and \(P(j)\) are the marginal probability mass functions of \(X\) and \(Y\), respectively.
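Most of these metrics have standard implementations; the following minimal sketch (with placeholder labels, purely for illustration) computes them with scikit-learn.

```python
# Minimal sketch (illustrative): the evaluation metrics defined above, computed
# with scikit-learn on placeholder labels/clusterings.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, r2_score,
                             adjusted_rand_score, normalized_mutual_info_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

print("Acc :", accuracy_score(y_true, y_pred))            # Eq. (7)
print("R2  :", 100 * r2_score(y_true, y_pred))            # Eq. (8), in percent
print("F1  :", f1_score(y_true, y_pred))                  # precision/recall harmonic mean

labels_a = [0, 0, 1, 1, 2, 2]   # reference clustering
labels_b = [0, 0, 1, 2, 2, 2]   # predicted clustering
print("ARI :", adjusted_rand_score(labels_a, labels_b))   # Eq. (9)
print("NMI :", normalized_mutual_info_score(labels_a, labels_b))  # Eq. (11)
```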
## Our Methodology for Selecting a Representative Paper for each Technique, Ranking the different Sub-Techniques, Techniques, and Sub-Categories The following methodology was utilized in conducting the experimental evaluations: **Evaluating individual sub-techniques:** After a comprehensive review of papers presenting algorithms employing specific sub-techniques, we identified the paper with the greatest impact. The algorithm detailed in this influential paper was chosen as the representative for its respective sub-technique. To determine the most significant paper among those reporting algorithms using the same sub-technique, we considered various factors including its innovative contributions and date of publication. The selected papers are displayed in Table 3. **Ranking the sub-techniques within a same technique:** We calculated the mean scores of the selected algorithms which made use of the same sub-technique. Following this, we ranked these sub-techniques that are part of the same main technique, according to their scores. **Ranking the techniques within a same sub-category:** The mean scores of the selected algorithms applying the same technique were computed. Following this, we ranked these techniques that belong to the same sub-category, based on their scores. **Ranking the sub-categories within a same category:** We calculated the mean scores of the chosen algorithms that operated under a common sub-category. Subsequently, these sub-categories that are part of the same primary category were ranked according to their scores. ### The Experimental Results We conducted an extensive search for publicly available codes corresponding to the algorithms we selected to represent their respective techniques. Unfortunately, we were only able to obtain codes for three papers [7, 8, 9]. The codes for these papers are available at: [37] https://github.com/MCG-NKU/SCNet [29] https://github.com/Junliangwangdhu/WaferMap [62] http://mirlab.org/dataSet/public/ For the remaining representative papers, we developed our own implementations using TensorFlow, as described by Sinaga and Yang [79]. We trained these implementations using the Adam optimizer, as suggested by Sinaga and Yang [79]. TensorFlow's APIs provide users with the flexibility to create their own algorithms [80]. Our development language was Python 3.6, and we utilized TensorFlow 2.10.0 as the backend for the models. The results are presented in Tables 5-7 and Figs. 8 and 9 as follows: * Tables 5, 6, and 7 display the scores for the chosen deep learning-based classification algorithms, traditional-based classification algorithms, and clustering algorithms, respectively. These tables also include rankings of sub-techniques within their respective techniques, rankings of techniques within the same sub-category, and rankings of sub-categories within the same category. * Figs. 8 and 9 illustrate the individual scores of the classification and clustering algorithms, respectively. The algorithms in each figure are grouped based on the common underlying techniques they employ. ## Discussion of the Experimental Results ### Self-Organizing Maps The experimental findings highlight the notable advantages of the method's topological data representation in exploratory data analysis, though drawbacks such as the absence of a probabilistic interpretation and sensitivity to initial conditions reduce its accuracy. The model effectively identified a wide range of semiconductor manufacturing defects and maintains the input space's topological properties, aiding in the identification of defect-dense areas. Its adaptability to various datasets, absence of assumptions about the data's statistical distribution, and notable computational efficiency underscore its potential for real-time defect detection in semiconductor manufacturing lines.
However, the method's performance was significantly influenced by the initial random weights and data presentation order, which led to inconsistent results with identical data and caused potential confusion. Moreover, challenges in determining the network's optimal size and topology, including the number and arrangement of neurons, further impacted the method's effectiveness. ### Autoencoder-Based Classification Compared to SVMs, decision trees, and standard machine learning, the autoencoder method revealed a flexible but more complex strategy for detecting defects. It self-learned non-linear transformations, yielding more meaningful data representations. These were often superior in finding complex patterns overlooked by the other techniques. This could aid in detecting less obvious defects. Transformed into denoising autoencoders, it could eliminate noise and condense high-dimensional data, reducing computational requirements for subsequent steps, which is beneficial for noisy real-world data. However, experimental results highlighted limitations. It required substantial data to function well, not always available in semiconductor manufacturing. It sometimes failed to generalize to new data and risked overfitting to training data without proper regularization. ### Convolutional Neural Network The method demonstrated superior precision in identifying and categorizing wafer map defects, particularly when handling extensive datasets. It was notably proficient at interpreting the geographical distribution of flaws on the wafer surface, a key element in detecting defective patterns. Compared to this method, traditional machine learning approaches fell short when faced with intricate patterns or when there was a wealth of labeled training data available. Its inherent translational invariance enabled it to recognize patterns irrespective of their location within the image. This characteristic is advantageous in detecting defects on wafer maps, as flaws could appear anywhere on the wafer. The method's hierarchical learning ability, which allows it to grasp low-level attributes in initial layers and high-level attributes in later layers, strengthened its competency in discerning complex patterns within the wafer maps. It empowered a profound understanding of the intrinsic structure of wafer maps. ### Pre-Defined CNN and Transfer Learning Pre-defined CNNs with transfer learning proved superior, especially with smaller datasets, by using features from an existing model and achieving higher accuracy. Transfer learning used large datasets to adjust a pre-trained model for defect identification, lessening the need for vast amounts of labeled training data. Despite the slight time increase due to extra model layers, the significant accuracy gain outweighed this. Nonetheless, there were some limitations. The method required substantial labeled training data, and the labeling process can be laborious and time-consuming. When the data distribution between target and source tasks varied significantly, transfer learning underperformed, due to its inherent assumption of similarity in data distribution. Overfitting also occurred when fine-tuning was implemented on a small dataset. ### Networks with Self-Calibrated Convolutions The method demonstrated considerable precision provided ample training data was available. This heightened predictive accuracy was a result of its ongoing parameter adjustments in line with the accessible data, thereby improving the identification of defective patterns and reducing both false positives and negatives.
The problem of overfitting could be minimized by the method by adjusting the model's complexity to match that of the data. However, the experimentation identified certain shortcomings. The method was computationally intensive compared to other methods, which delayed the detection process, particularly with larger datasets. Its performance was also heavily reliant on its initial calibration. If the network commenced with inadequate calibration, it resulted in less than optimal performance, regardless of several self-calibration iterations. ### Hopfield Artificial Neural Network The Hopfield-based method exhibited robust performance when large defects altered wafer map patterns, enabling effective differentiation between normal and defective maps. However, it faced challenges when defects introduced minor changes or the data had significant noise. Despite these challenges, the method was noise-tolerant, accurately converging even with imperfect or incomplete input - a useful feature for handling real-world data. This resilience stemmed from the method's unique distributed and associative memory, boosting its resilience to partial failures, enabling it to recall and recognize learned patterns. However, the method sometimes became trapped in local minima, misidentifying defective patterns due to locally optimum matches with distorted or noisy patterns. ### Residual Neural Network-Based Classification ResNets, with their deep learning capabilities, excelled in identifying complex patterns in wafer maps, outperforming traditional machine learning and standard CNNs. However, they sacrificed interpretability and computational efficiency. Their ability to counter vanishing gradients led to consistent loss reduction, indicating efficient learning. But the training demands surpassed simpler methods due to ResNets' complexity: the training duration and resources required were directly affected by the depth of the ResNet and the volume of wafer map data. The impact of weight initialization and hyperparameter selection was significantly noticeable in the performance of ResNets. ### XGBoost-Based Classification XGBoost excelled in performance and flexibility, although careful tuning and preprocessing were required, given the complexity of semiconductor wafer map data. The model's predictions were somewhat challenging to interpret. XGBoost outperformed other gradient boosting methods with a quicker path to minimum error, faster convergence, and optimized computations for increased speed and lower computational costs. It efficiently handled missing data, significantly reducing preprocessing time. The algorithm detected and learned from non-linear patterns and prevented overfitting using various regularization penalties. However, precise hyperparameter tuning was crucial for an optimal model. ### Decision Tree-Based Classification The algorithms rooted in classical methods, specifically those utilizing gradient-boosted decision trees, demonstrated high effectiveness in correctly detecting flawed patterns in wafer maps and dealing with missing data. This resulted in commendable accuracy rates. The ability of the gradient boosting framework to efficiently optimize complex loss functions is seen as the reason for this prediction advantage. This framework also has a built-in mechanism to deal with absent values and a structured method for comprehending the significance of various features in predictions.
This helps to identify the most impactful factors that determine whether a wafer map reveals defects. Nonetheless, the examination of experimental results revealed that these algorithms have a propensity to overfit when applied to smaller datasets. The enhanced performance can be attributed to their ability to prioritize instances with large gradients. ### AdaBoost-Based Classification Generally, the experiment's outcomes suggest that the AdaBoost-based approach provides a desirable balance between efficiency and stability, despite its higher computational resource demands during the learning phase. The AdaBoost model demonstrated greater resilience to fluctuations in the dataset, which implies a less pronounced performance drop than other models when faced with new, marginally different data. This characteristic is advantageous in practical situations where data evolve over time. Owing to the iterative nature of AdaBoost, its training duration surpassed that of SVM and elementary Neural Network models. Yet, the time taken for predictions can be similar. The model was found to be sensitive to noisy data and outliers. ### Random Decision Forests-Based Classification Generally, the RDF technique accurately recognized most wafer maps in the test group as defective or non-defective. Yet, its accuracy faltered with imbalanced datasets where one class significantly outstripped the other. Experiment outcomes showed the RDF method's high efficiency, even with minor hyperparameter tweaks. It particularly stood out in dealing with skewed datasets, commonly seen in defect detection where non-defective wafers far exceed defective ones. The approach skillfully handled non-linear feature interactions and identified feature interplays. Its ensemble nature made it less prone to overfitting than individual decision trees. By averaging results across numerous trees, the RDF technique effectively managed data noise and outliers. Nevertheless, the computational demands of this approach were considerable, particularly when working with larger datasets. The increased computational load stemmed from the necessity to construct and integrate numerous decision trees within the model. This process involves calculating the best split points in the dataset and distributing data across multiple branches, which can be especially resource-intensive for large datasets. ### Support Vector Machine Compared to Decision Trees, Neural Networks, and Random Forests, the SVM technique showed higher accuracy and generalization in some cases for identifying faulty wafer map patterns due to its proficiency in handling high-dimensional data. But it struggled with high complexity, difficulty processing extremely large datasets, lack of interpretability, and the need for careful parameter calibration. Its strength was in its flexibility in handling diverse data patterns, which was achieved by using various kernel functions for creating non-linear decision boundaries and complex data transformations. The SVM's kernel trick allowed modeling of non-linear decision boundaries, crucial for spotting complex defects. Its regularization parameter prevented overfitting, giving it resilience when data dimensionality exceeded the sample number. ### K-Nearest Neighbor-Based Classification The technique achieved decent precision rates by integrating distance-based classification and normalizing the dataset used for training. However, the presence of unrelated features and inconsistent feature scaling significantly hindered the method's efficiency.
The model's detection capabilities varied across distinct types of defects, excelling at identifying specific kinds due to the distinct distribution and density of various defect types within the feature space. The method was also computationally demanding when processing large datasets, as it required the calculation of the distance between a given test point and all points in the dataset for prediction purposes. Although the technique could predict the class label, it didn't offer any measure of confidence for that prediction. ### Learning Vector Quantization (LVQ) Classification The LVQ method outperformed clustering in terms of accuracy due to its ability to harness label information. Nonetheless, it necessitated extensive data preparation, given that labels are a prerequisite for training data. Deep learning techniques such as CNNs surpassed the LVQ method in terms of accuracy by identifying more intricate patterns within the data. However, these methods were more challenging to decipher, required greater data volumes, and were computationally demanding. LVQ proved to be an effective tool in managing noisy data, which is crucial given the frequent occurrence of noise in semiconductor wafer map data. Furthermore, LVQ adeptly managed complex and non-linear classification problems, and the optimization of its learning rate and other parameters was possible. One significant drawback of LVQ was its heavy reliance on substantial volumes of labeled training data. ## Potential Future Perspectives for Identifying the Defective Patterns in Semiconductor Wafer Maps We present in this section some potential future improvements for identifying the defective patterns in WBM using classification machine learning techniques. ### Identifying Defective Patterns based on Machine Learning Classification #### VI-A Deep Learning-Based Classification 1) Artificial Neural Network-Based Classification * _Synthetic Data Generation:_ Utilizing generative models like GANs can produce artificial wafer map data, boosting model performance by diversifying training data. * _Interpretable AI (XAI):_ Given the 'black box' nature of neural networks, it's vital to make these models understandable, particularly in semiconductor production. Future efforts should aim to enhance model transparency, providing engineers with predictive insights. * _Automated Hyperparameter Tuning:_ Techniques like grid search, random search, Bayesian optimization, and evolutionary algorithms can automate hyperparameter tuning, significantly boosting neural network performance. * _Reinforcement Learning (RL):_ RL algorithms can persistently improve neural network performance through a reward/penalty system, allowing dynamic adaptation to changes in the wafer manufacturing process. 2) Convolutional Neural Network-Based Classification Here are some potential future improvements: * _Convolutional Layers Enhancement:_ Complex structures such as dilated or depthwise separable convolutions can optimize CNNs' learning capacity. * _Ensemble Learning:_ By pooling predictions from diverse CNN models, we can expand defect identification and improve accuracy. * _Advanced Training Techniques:_ Modern methods like cyclic learning rates, snapshot ensembles, and knowledge distillation can boost model training and performance. * _Automated Model Selection and Hyperparameter Tuning:_ Utilizing AutoML tools can streamline the selection of optimal model structures and hyperparameters, saving both time and expertise.
* _Transfer Learning:_ Pretraining CNN models on extensive datasets like ImageNet, and fine-tuning them on defect data, can notably enhance performance, especially when defect data is limited. * _Architecture Design Innovation:_ Tailoring CNN architectures for wafer defect classification using mechanisms such as attention, skip connections, or varied layer configurations can yield better results. 3) Residual Neural Network-Based Classification * _Network Architecture Enhancement:_ By adding depthwise separable convolutions or squeeze-and-excitation blocks to ResNet, its learning capacity from data may be improved. * _Transfer Learning Application:_ Training ResNets on related tasks before using them for semiconductor defect detection could increase their performance. This approach, known as transfer learning, often leads to superior models. * _Few-shot Learning Implementation:_ Few-shot learning can be useful, particularly with limited examples of certain defect types, enabling learning from few examples per class. * _Multi-scale Feature Extraction Adoption:_ By integrating multi-scale feature extraction in the architecture, ResNet's ability to detect varied defect patterns, especially sizable differences, can be enhanced. * _Unsupervised Learning Integration:_ Autoencoder-like unsupervised learning can learn regular patterns, helping in anomaly detection. The representations learned can then feed into the ResNet for final classification. * _Attention Mechanisms Employment:_ Attention mechanisms, similar to the Transformer model's self-attention, can help the network focus on vital parts of wafer maps. 4) Generative Adversarial Network-Based Classification * _GAN Design Evolution:_ Innovations in GAN architectures like StyleGANs and BigGANs could enhance wafer defect detection. Customizing GANs to specific tasks can significantly boost performance. * _Conditional GAN Use:_ Conditional GANs, which provide extra information like defect nature or location to the generator and discriminator, could refine defect detection. * _CycleGAN for Data Augmentation:_ Employing CycleGANs can benefit defect detection, especially for underrepresented defects, by widening the defect variety during training. * _Hybrid Model Integration:_ Combining GANs with methodologies like reinforcement learning or attention mechanisms can improve defect detection accuracy. * _Training Stability Enhancement:_ While techniques like gradient penalty and spectral normalization have addressed GANs' stability issues, further enhancements can increase detection accuracy. * _Few-shot and Zero-shot Learning:_ Given that traditional GANs need extensive datasets, future improvements may enable learning from few examples or category descriptions, benefiting rare defect detection. * _Focus on Multi-scale and Hierarchical Features:_ Future GAN advancements may involve using multi-scale and hierarchical features for better defect identification accuracy. 5) Adversarial Training-Based Classification * _Intricate Adversarial Attacks:_ Developing complex adversarial attack strategies like FGSM and PGD can cultivate resilient models by generating varied adversarial examples. * _Improved Adversarial Defence:_ Adversarial defense techniques, including adversarial training, defensive distillation, and feature squeezing, should be refined and tailored for defect detection in semiconductors.
* _Multi-modal Adversarial Training:_ Extending adversarial training to multi-modal data, generating adversarial examples from different data types, increases robustness. * _Uncertainty Quantification:_ Incorporating uncertainty quantification into adversarial training helps gauge the model's prediction confidence, enhancing anomaly detection. * _Robust Optimizers:_ Using robust optimization techniques can enhance the model's extrapolation ability from adversarial examples to new data. * _Generative Adversarial Networks (GANs):_ Leveraging GANs in adversarial training, with the generator creating adversarial examples and the discriminator detecting defects, can be advantageous. * _Active Learning:_ Applying active learning techniques with adversarial training can progressively refine the model by selecting the most valuable instances for labelling. 6) Hopfield Artificial Neural Network-Based Classification * _Exploring Model Upgrades:_ Contemporary updates to Hopfield Networks, including continuous and complex-valued Hopfield Networks, could be scrutinized for their ability to enhance defect detection performance. * _Scalability Enhancements:_ Hopfield Networks' application has been limited to smaller problems due to computational requirements. Investigating efficient training methods and hardware acceleration could facilitate their use on larger, more intricate semiconductor wafer maps. * _Integration with Other Neural Networks:_ Combining Hopfield Networks with other neural network types, such as Convolutional Neural Networks (CNNs), could enhance feature extraction and pattern recognition. * _Reinforcement Learning Infusion:_ Incorporating reinforcement learning could augment Hopfield Networks' pattern recognition prowess, especially useful in dynamically changing environments. * _Hybrid Model Deployment:_ Pairing Hopfield Networks with other AI techniques like swarm intelligence or genetic algorithms enhances the detection of complex defect patterns. * _Active Learning Inclusion:_ Merging active learning strategies with Hopfield Networks could enable the selection of the most informative training samples, improving the model's precision and efficiency. #### VI-B Traditional-Based Classification * XGBoost-Based Classification * _Hyperparameter Fine-Tuning:_ Enhancing XGBoost's performance through methods like Bayesian optimization or AutoML, targeting parameters such as learning rate, max depth, and the number of estimators. * _Addressing Imbalanced Data:_ Resolving imbalanced data using techniques like SMOTE or ADASYN. * _Multimodal Learning:_ Boosting semiconductor data by merging various types (images, time-series sensor data) in a multimodal XGBoost approach. * _Ensemble Methods:_ Boosting performance by pairing XGBoost with other models through stacking, bagging, or boosting, for different defects or manufacturing stages. * _Active Learning:_ Iteratively improving the XGBoost model via active learning, selecting informative samples for labeling. * _Early Stopping:_ Preventing overfitting and saving computational resources by incorporating early stopping. * Decision Tree-Based Classification * _Merging Deep Learning Methods:_ Combining deep learning techniques like CNNs and RNNs with decision trees can enhance their handling of complex data. For example, deep learning can extract features for decision tree classification.
* _Broadening IoT Interactions:_ With increasing digitalization, decision tree algorithms can directly engage with manufacturing machinery, learning from real-time data to quickly predict and identify defects. * _Ensemble Techniques:_ Leveraging ensemble methods like Random Forest, Gradient Boosting, or AdaBoost enhances classification with better generalization and less overfitting. * _Hyperparameter Optimization:_ Sophisticated techniques like Grid Search, Random Search, or Bayesian Optimization improve the effectiveness of decision tree models. * _Deep Learning Integration:_ Combining deep learning algorithms with decision trees enhances handling of high-dimensional data and feature extraction. * _Data Augmentation:_ Techniques like transformations, cropping, or noise addition improve the model's generalization, especially with limited data. * _Decision Tree Improvement:_ Refining splitting criteria, pruning techniques, and managing missing data enhances model accuracy. * _Multi-objective Decision Trees:_ Future work could focus on optimizing accuracy, depth, and interpretability concurrently. * _Hybrid Models:_ Exploring combinations of machine learning strategies like unsupervised and semi-supervised learning enhances defect detection. * _Incremental Learning:_ Adapting to changing real-world data makes the model more suitable for real-time or evolving environments. * Adaptive Boosting (AdaBoost)-Based Classification * _Deep Learning in AdaBoost:_ Using deep learning models as weak learners in AdaBoost enhances performance, especially with high-dimensional data like wafer defect images. * _Hybrid Models:_ Integrating AdaBoost with machine learning techniques like random forests or support vector machines can boost accuracy and robustness. * _Enhanced Robustness:_ AdaBoost's susceptibility to noise and outliers can be mitigated using refined versions like RobustBoost, especially in wafer defect identification. * _Transfer Learning:_ With wafer defects, transfer learning can address variations in data distributions due to different production methods. * _Imbalanced Data:_ Innovative strategies are needed to manage the imbalance typically seen in defect detection data within AdaBoost, marking a valuable research direction. * Random Decision Forests-Based Classification * _Hyperparameter Optimization Enhancement:_ Exploring advanced hyperparameter optimization methods like Bayesian Optimization or Genetic Algorithms could refine hyperparameters like tree count, maximum depth, and feature divisions. * _Time-Series Data Integration:_ Considering temporal relationships in data points could enhance RDFs' performance, especially in fields like wafer manufacturing where data show time-based correlations. * _Model Hybridization:_ Merging RDFs with other machine or deep learning models in a collective model could heighten prediction accuracy. * _Deep Learning Use:_ Applying deep learning methods like CNNs could help identify features from wafer images, aiding RDFs in improving defect detection. * Support Vector Machine (SVM)-Based Classification * _Kernel Function Optimization:_ Tailor kernel functions to wafer map data by designing new kernels or refining existing ones for better SVM performance. * _Hyperparameter Refinement:_ Use automated techniques like grid search, random search, or Bayesian optimization to fine-tune SVM hyperparameters, improving performance.
* _SVM and Deep Learning Fusion:_ Integrate SVMs with deep learning such as CNNs for feature extraction to create more robust models and enhance data compatibility. * _Ensemble Technique Application:_ Construct robust, precise models by building an ensemble of SVMs using techniques like bagging or boosting, each trained on different data subsets or features. * _Data Augmentation Implementation:_ Enhance the training set's diversity and size for better model training and generalization by applying data augmentation methods like rotation, scaling, or flipping. * _Active Learning Utilization:_ In costly data-labeling scenarios, use SVMs within an active learning framework to identify the most informative unlabeled instances for the next training iteration. * Logistic Regression (LR)-Based Classification * _Fusion Models:_ Leveraging logistic regression with other machine learning methods like decision trees or neural networks can boost predictive accuracy by combining simple interpretability with complex predictiveness. * _Regularization Progress:_ Advancements in overfitting-preventing techniques like L1 or L2 may further improve logistic regression models' performance. * _Quantum Computing:_ With the evolution of quantum computing, complex, computationally heavy versions of logistic regression or other machine learning models may become feasible. * _IoT Integration:_ Advancing IoT technology could enable defect identification through real-time data analysis, continually updating and enhancing the logistic regression model's performance. * K-Nearest Neighbor (KNN)-Based Classification * _Weighted KNN:_ Adopt a weighted KNN algorithm to improve defect prediction, weighting neighbors based on their distance from the query point. * _Automated Hyperparameter Tuning:_ Utilize automated methods like Grid Search, Random Search, or Bayesian Optimization for optimal tuning of KNN's parameters. * _Adaptive KNN:_ Implement an adaptive KNN algorithm that adjusts the k-value based on the data density of the region, improving accuracy in sparse regions. * _Integration with Deep Learning:_ Leverage deep learning for feature extraction from wafer maps, then use KNN on this learned feature space for improved performance. * _Incremental KNN:_ Deploy incremental KNN in production to adapt to continuously generated new wafer map data without full retraining, enabling real-time applicability. * _Unsupervised Learning for Anomaly Detection:_ Use unsupervised learning methods with KNN for detection of new, unseen defective patterns. * Learning Vector Quantization-Based Classification * _Deep Learning Fusion:_ By merging deep learning's prowess in identifying complex, high-dimensional patterns with LVQ, we can create a more effective and accurate model. * _Flexible Learning Rates:_ Adjusting the essential hyperparameter, the learning rate, in LVQ as per learning progression can boost performance and hasten training. * _Improved Initialization Methods:_ Utilizing advanced initialization techniques for prototypes can amplify LVQ's learning proficiency, leading to more accurate classification. * _Hybrid Models:_ Integrating LVQ with other machine learning algorithms can result in a combined model that harnesses the advantages of multiple techniques.
* _Scalability Enhancement:_ By boosting LVQ's ability to handle large, high-dimensional datasets, it could better suit large-scale applications like wafer defect detection. ## VII Conclusion ML algorithms have proven highly capable in wafer defect detection, despite the lack of a comprehensive review in this field. In this survey paper, we amalgamate existing studies to highlight the strengths, limitations, and potential applications of different ML classification algorithms in defect detection on wafer maps. We reviewed algorithms utilizing distinct sub-techniques, methods, sub-groups, and groups, providing a classification system to facilitate algorithm comparison and to guide future research. This survey not only presented a detailed framework for categorizing wafer defect algorithms but also included _empirical_ and _experimental_ evaluations to measure the effectiveness of different approaches. Our _empirical evaluation_ focused on ML classification techniques for identifying defect patterns in wafer maps based on four criteria. Through _experimental evaluation_, we compared and ranked various methodology categories and techniques, including those utilizing the same sub-technique, different sub-techniques within the same technique, different techniques within the same sub-category, different sub-categories within the same category, and different categories. Based on our experimental results, CNN-based classification was superior, especially with large datasets. It excelled at interpreting wafer surface imperfections and recognizing patterns regardless of their image location due to its hierarchical learning ability and translational invariance.
2304.03180
Exceptional Point Perspective of Periodic Leaky-Wave Antennas
Over the past decade, the issue of gain degradation at broadside in periodic leaky-wave antennas (P-LWAs) has been resolved, using a circuit modeling approach, by introducing proper asymmetry in the unit cell of the antenna structure. This paper provides a more fundamental and insightful perspective of the problem by showing, using a simple coupled-mode analysis, that the optimal level of structural asymmetry corresponds to an exceptional point of the coupling parameter between the two eigenmodes of the P-LWA. This contribution represents a key step towards the development of a full electromagnetic resolution of the broadside issue.
Amar Al-Bassam, Dirk Heberling, Christophe Caloz
2023-03-06T15:16:50Z
http://arxiv.org/abs/2304.03180v1
# Exceptional Point Perspective of Periodic Leaky-Wave Antennas ###### Abstract Over the past decade, the issue of gain degradation at broadside in periodic leaky-wave antennas (P-LWAs) has been resolved, using a circuit modeling approach, by introducing proper asymmetry in the unit cell of the antenna structure. This paper provides a more fundamental and insightful perspective of the problem by showing, using a simple coupled-mode analysis, that the optimal level of structural asymmetry corresponds to an exceptional point of the coupling parameter between the two eigenmodes of the P-LWA. This contribution represents a key step towards the development of a full electromagnetic resolution of the broadside issue. ## I Introduction Offering the benefits of high directivity and simple scanning, periodic leaky-wave antennas (P-LWAs) represent important, powerful and versatile radiators in modern microwave, terahertz and optical technology [1]. Unfortunately, they have been plagued by the issue of gain degradation at broadside [1]. Following a preliminary resolution that consisted of a transmission-line network procedure for unit-cell impedance matching [2], a more general solution was developed, based on the fulfillment of the twofold condition of frequency-balancing and \(Q\)-balancing, with the latter being related to the transverse asymmetry of the unit cell structure [3]. The solution introduced in [3] resolves the gain degradation issue, but it is based on circuit modeling that is intricate and that offers little insight into the physics of the problem. This paper presents a more fundamental perspective based on coupled-mode theory, connecting optimal asymmetry with an exceptional point [4]. ## II \(\mathcal{PT}\)-Symmetry and Exceptional Points Exceptional points are singularities in the parameter space of a non-Hermitian system where complex eigensolutions coalesce [5] as a result of some fundamental symmetry condition. They typically appear in \(\mathcal{PT}\)-symmetric problems, which involve a specific balance between loss and gain. They may alternatively occur in problems involving eigensolutions with distinct levels of dissipation or radiation, which are related to the gain-loss problems by a simple gauge transformation [6], as will be shown later to be the case for P-LWAs. ## III Exceptional Point Derivation Given their periodicity, P-LWAs are conveniently analyzed in terms of periodic or Floquet-Bloch boundary conditions, as illustrated in Fig. 1. The enforcement of these conditions with a unit-cell phase shift (\(\Phi\)) spanning the Brillouin zone provides dispersion diagrams in terms of complex eigenfrequencies (\(\Omega\)). If the P-LWA unit cell is symmetric with respect to its transverse axis (see Fig. 1), it supports two orthogonal eigenmodes, a series mode, with complex eigenfrequency \(\Omega_{\text{se}}\), and a shunt mode, with eigenfrequency \(\Omega_{\text{sh}}\), whose electric fields are directed along the longitudinal (propagation) and transverse axes of the structure, respectively, and the broadside gain degradation issue (\(\Phi=0\)) unavoidably occurs [3].
Breaking the transversal symmetry couples the series and shunt modes into new, coupled modes, \(\psi_{1}\) and \(\psi_{2}\), which may be described by the coupled-mode equations [7] \[\frac{d\psi_{1}}{dt}=j\Omega_{\text{se}}\psi_{1}+j\kappa_{12}\psi_{2}, \tag{1a}\] \[\frac{d\psi_{2}}{dt}=j\kappa_{21}\psi_{1}+j\Omega_{\text{sh}}\psi_{2}, \tag{1b}\] where \(\kappa_{12}\) and \(\kappa_{21}\) are the coupling coefficients between the two modes, which are related by reciprocity as \(\kappa_{12}=\kappa_{21}^{*}=\kappa\). The system (1) admits solutions of the time-harmonic form \(\psi_{1,2}\propto\exp(j\Omega t)\). In addition, under the usual frequency-balancing condition [1, 3], the uncoupled-mode eigenfrequencies take the form \(\Omega_{\text{se}}=\Omega_{0}+j\Im\{\Omega_{\text{se}}\}\) and \(\Omega_{\text{sh}}=\Omega_{0}+j\Im\{\Omega_{\text{sh}}\}\). Substituting all these expressions into (1) and solving for \(\Omega\) yields the coupled eigenfrequency solutions \[\Omega_{1,2}=\Omega_{0}+j\frac{\Im\left\{\Omega_{\text{se}}\right\}+\Im\left\{\Omega_{\text{sh}}\right\}}{2}\pm\sqrt{\kappa^{2}-\left(\frac{\Im\left\{\Omega_{\text{se}}\right\}-\Im\left\{\Omega_{\text{sh}}\right\}}{2}\right)^{2}} \tag{2a}\] and the corresponding coupled \(Q\)-factors \[Q_{1,2}=\frac{\Re\left\{\Omega_{1,2}\right\}}{2\Im\left\{\Omega_{1,2}\right\}}. \tag{2b}\] According to Sec. II, the exceptional point of our problem corresponds to the value of \(\kappa\) for which the two eigensolutions (2) coalesce into a single eigensolution, i.e., \[\kappa_{\text{opt}}=\frac{\Im\left\{\Omega_{\text{se}}\right\}-\Im\left\{\Omega_{\text{sh}}\right\}}{2}, \tag{3}\] where we have introduced the subscript "opt", for optimal. Finally, inserting (2a) with (3) into (2b) leads to \[Q_{\text{opt}}=\frac{\Omega_{0}}{\Im\{\Omega_{\text{se}}\}+\Im\{\Omega_{\text{sh}}\}}=\frac{2}{\frac{1}{Q_{\text{se}}}+\frac{1}{Q_{\text{sh}}}}, \tag{4}\] which is exactly the \(Q\)-balancing formula derived by transmission-line circuit modeling in [3]. This reveals that the solution to the broadside gain degradation issue in P-LWAs coincides with the exceptional point of their level of structural transverse asymmetry. ## IV Validation Results Figure 2 validates the exceptional point theory of Sec. III for the particular case of the series-fed patch P-LWA in Fig. 1, with Figs. 2(a) and 2(b) plotting the coupled eigenfrequencies and \(Q\)-factors, respectively. The curves are obtained as follows. Starting with a symmetric structure, \(w_{1}=w_{2}=w_{0}\) (and therefore \(\kappa=0\)), adjust the lengths of the patch, \(\ell_{\text{p}}\), and of the inter-connecting lines, \(\ell_{\text{f}}\), in a full-wave eigenfrequency solver (here CST Microwave Studio) so as to frequency-balance the structure, i.e., close up the gap at \(\Phi=0\), hence obtaining \(\Re\{\Omega_{\text{se}}\}=\Re\{\Omega_{\text{sh}}\}=\Omega_{0}\) and \(\Im\{\Omega_{\text{se,sh}}\}\). Inserting these parameters into (2) and plotting the result as a function of \(\kappa\) already provides the analytical curves. Then introduce transversal asymmetry in the full-wave eigenfrequency solver by making the widths of the two feeding lines
different from each other, specifically setting \(w_{1}=w_{0}+w_{\text{d}}/2\) and \(w_{2}=w_{0}-w_{\text{d}}/2\), with \(w_{\text{d}}\) increasing from zero, and plot the corresponding full-wave complex eigenfrequencies and \(Q\)-factors. Fig. 1: Unit cell of a P-LWA within periodic boundary condition walls (phase shift of \(\Phi\) between the two walls). While the structure shown here (and used in the numerical results of Sec. IV) is a series-fed patch structure, the theoretical results of the paper are general and apply to any P-LWA. The full-wave results in Fig. 2 closely match the analytical predictions1, with the exceptional point appearing at the junction of the two forks formed by the real and imaginary eigenfrequencies and at the junction of the corresponding quality factors, hence validating the central thesis of the paper. Footnote 1: The slight observed discrepancy, increasing with increasing level of asymmetry, is due to the fact that the frequency-balancing condition, assumed to be fixed in the analytical model, is progressively altered as \(w_{\text{d}}\) is increased. Fig. 2: Validation of the exceptional point theory established in Sec. III for the series-fed patch P-LWA in Fig. 1. (a) Complex eigenfrequencies. (b) Corresponding \(Q\)-factors. The optimal coupling factor, given by Eq. (3), corresponds indeed to the exceptional point, highlighted by the dash-dotted green line. The unit-cell dimensions are, in mm: \(\ell_{\text{p}}=3.05\), \(\ell_{\text{f}}=3.5\), \(w_{\text{p}}=3.3\) and \(w_{0}=0.3\).
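For readers who want to see the fork structure of Eqs. (2)-(4) directly, here is a small numerical sketch; the center frequency and loss values are illustrative assumptions of ours, not the parameters of the full-wave simulation above.

```python
# Numerical sketch of Eqs. (2)-(4): coupled eigenfrequencies and Q-factors
# versus the coupling kappa; parameter values below are illustrative only.
import numpy as np

omega0 = 2 * np.pi * 24e9       # assumed center frequency (rad/s)
im_se, im_sh = 4e8, 1e8         # assumed Im{Omega_se}, Im{Omega_sh}

kappa = np.linspace(0, 4e8, 401)
mean_loss = (im_se + im_sh) / 2
split = np.sqrt((kappa + 0j) ** 2 - ((im_se - im_sh) / 2) ** 2)  # complex sqrt

omega1 = omega0 + 1j * mean_loss + split   # Eq. (2a)
omega2 = omega0 + 1j * mean_loss - split
q1 = omega1.real / (2 * omega1.imag)       # Eq. (2b)
q2 = omega2.real / (2 * omega2.imag)

kappa_opt = (im_se - im_sh) / 2            # Eq. (3): exceptional point
q_opt = omega0 / (im_se + im_sh)           # Eq. (4): Q-balancing value
print(f"kappa_opt = {kappa_opt:.3e}, Q_opt = {q_opt:.1f}")
```

Below the exceptional point the two branches share the same real frequency and split in loss; above it they split in frequency and share the same loss, which is exactly the fork structure seen in Fig. 2.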
2308.12358
An introduction to infinite projected entangled-pair state methods for variational ground state simulations using automatic differentiation
Tensor networks capture large classes of ground states of phases of quantum matter faithfully and efficiently. Their manipulation and contraction has remained a challenge over the years, however. For most of the history, ground state simulations of two-dimensional quantum lattice systems using (infinite) projected entangled pair states have relied on what is called a time-evolving block decimation. In recent years, multiple proposals for the variational optimization of the quantum state have been put forward, overcoming accuracy and convergence problems of previously known methods. The incorporation of automatic differentiation in tensor networks algorithms has ultimately enabled a new, flexible way for variational simulation of ground states and excited states. In this work we review the state-of-the-art of the variational iPEPS framework, providing a detailed introduction to automatic differentiation, a description of a general foundation into which various two-dimensional lattices can be conveniently incorporated, and demonstrative benchmarking results.
Jan Naumann, Erik Lennart Weerda, Matteo Rizzi, Jens Eisert, Philipp Schmoll
2023-08-23T18:03:14Z
http://arxiv.org/abs/2308.12358v4
# variPEPS - a versatile tensor network library for variational ground state simulations in two spatial dimensions ###### Abstract Tensor networks capture large classes of ground states of phases of quantum matter faithfully and efficiently. Their manipulation and contraction have remained a challenge over the years, however. For most of the history, ground state simulations of two-dimensional quantum lattice systems using (infinite) projected entangled pair states have relied on what is called a time-evolving block decimation. In recent years, multiple proposals for the variational optimization of the quantum state have been put forward, overcoming accuracy and convergence problems of previously known methods. The incorporation of automatic differentiation in tensor network algorithms has ultimately enabled a new, flexible way for variational simulation of ground states and excited states. In this work, we review the state of the art of the variational iPEPS framework. We present and explain the functioning of an efficient, comprehensive and general tensor network library for the simulation of infinite two-dimensional systems using iPEPS, with support for flexible unit cells and different lattice geometries. ###### Contents * I Introduction * II Variational iPEPS * II.A iPEPS setup * II.B CTMRG backbone * II.B.1 Absorption of iPEPS tensors * II.B.2 Calculation of projectors * II.B.3 Convergence and CTMRG fixed-points * II.C Energy expectation values * II.D Automatic differentiation * II.E Calculation of the gradient at the CTMRG fixed-point * II.F Optimization * II.G Pitfalls and practical hints * II.G.1 Iterative SVD algorithm * II.G.2 Stability of the CTMRG routine * II.G.2.1 Prevention of local minima * II.G.2.2 Recycling of environments * II.G.2.3 Analysing iPEPS data at finite bond dimensions * II.G.2.4 Degenerate singular values * III Extension to other lattices * III.1 Honeycomb lattice * III.2 Kagome lattice * III.3 Square-Kagome lattice * III.4 Triangular lattice * III.5 Comments about different structures * IV Benchmarks and discussions * IV.1 Comments on lower bounds in variational principles * IV.2 Honeycomb lattice * IV.3 Kagome lattice * IV.4 Square-Kagome lattice * IV.5 Triangular lattice * IV.6 Comments on excited states * IV.7 Comments on fermionic systems * V Conclusion and prospects * V.1 Code release * V.2 CO\({}_{2}\)-emissions table * Acknowledgments * A Appendix: Background on automatic differentiation * A.1 Adjoint functions and variables * A.2 Automatic differentiation for complex variables * A.3 The implicit function theorem and its use at the CTMRG fixed-point * A.4 Automatic differentiation in the language of differential geometry ## I Introduction Tensor networks are at the basis of a wealth of methods that are able to efficiently capture systems with many degrees of freedom, primarily in the context of interacting quantum systems, but also in a wide range of other fields. They have a long history: The beginnings can be seen [1] as originating from work on transfer matrices [2] for two-dimensional classical Ising models and methods of corner transfer matrices, again in the context of classical spin models [3]. In more recent times, the rise of tensor networks to describe interacting quantum many-body systems can be traced back to at least two strands of research. On the one hand, the now famous _density matrix renormalization group_ (DMRG) approach [4; 5] can be regarded as a variational principle over _matrix product states_ [6; 7; 8], a particularly common class of one-dimensional tensor network states.
What are called _finitely-correlated states_ [9] have later been understood as a Heisenberg picture variant of essentially the same family of states. These families of quantum states can further be interpreted as basically parametrizing gapped phases of matter in one spatial dimension. In a separate development, _tensor trains_ became a useful tool in numerical mathematics [10]. These strands of research had been developing independently for quite a while before being unified in a common language of _tensor networks_ (TN), which now stands as a pillar of research on numerical and mathematical quantum many-body physics [11; 12; 13; 14; 15]. Two-dimensional tensor networks, now known as _projected entangled pair states_ [16], again have a long history. The intuition why they provide a good ansatz class for describing ground states of gapped quantum many-body Hamiltonians [17; 18] - as well as other families of states - is the same as for matrix product states: Such states are expected to be part of what is called the _"physical corner"_ of the Hilbert space. These states feature only local entanglement, compared to the degree of entanglement unstructured states would exhibit. Ground states of gapped phases of matter are thought to satisfy _area laws for the entanglement entropy_ [15]. Even though some of the rigorous underpinning of this mindset is less developed in two spatial dimensions compared to the situation in one spatial dimension, there is solid evidence that projected entangled pair states provide an extraordinarily good and powerful ansatz class for meaningful states of two-dimensional quantum systems. There is a new challenge arising in such two-dimensional tensor networks. In contrast to matrix product states, they cannot be contracted both exactly and efficiently: On general grounds, there are complexity-theoretic obstructions against the efficient contraction of projected entangled pair states in worst-case [19] - and even in average-case [20] - complexity. The burden can be lessened by acknowledging that projected entangled pair states can be contracted in quasi-polynomial time [21]. These conceptual insights underpin a quite practically minded point: Developing ways of efficiently and feasibly approximating tensor network contractions in two spatial dimensions is at the heart of the method development in the field. Consequently, over the years, several numerical methods for approximately contracting projected entangled pair states have been developed. In fact, much of the method development has been along these lines. In the focus of attention in this work are projected entangled pair states directly in the thermodynamic limit, commonly referred to as _infinite projected entangled pair states_ (iPEPS) [22; 23; 24]. The contraction necessary to compute expectation values of local observables gives rise to the challenge of approximately calculating effective environments. Over the years, several methods have been introduced and pursued, including methods based on boundary matrix product states [22], corner transfer matrix methods [25; 26; 24] - particularly important for the method development presented here - and tensor coarse-graining techniques [27; 28; 29; 30].
Variational optimization algorithms for uniform matrix product states have been developed that combine density matrix renormalization group methods with matrix product state tangent space concepts to find ground states of one-dimensional quantum lattices in the thermodynamic limit [31; 32], building on earlier steps of devising geometrically motivated variational principles for tensor network states [33; 34]. The pursuit of such variational optimization has been particularly fruitful in the two-dimensional case of iPEPS. Initially proposed methods constructed the gradient of the energy explicitly using specialized environments [35; 36]. Recently, as an element of major method development, the programming technique called _automatic differentiation_, widely used in the machine learning community, has been utilized for the task of calculating the gradient [37] in tensor network optimization. This step drastically simplifies the programming involved and allows one to use variational ground state search on, e.g., more exotic lattice geometries with little additional effort. Such variational approaches for iPEPS constitute the basis for this work. Automatic differentiation has also been employed in further fashions in the tensor network context in several works recently [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49], some of which are accompanied by publicly available code libraries [50; 51; 52; 53]. As a novel programming paradigm, automatic differentiation composes parameterized algorithmic components in such a way that the program becomes differentiable and its components can be optimized using gradient search. It is a sophisticated way to evaluate the derivative of a function specified by a computer program, specifically by applying the chain rule to elementary arithmetic operations. Again, it has only recently been appreciated how extremely powerful such tools are in the study of interacting quantum matter by means of tensor networks. This work is intended to complement and extend this emergent body of work, and at the same time to present, for the first time comprehensively, the ideas of the variational iPEPS method based on automatic differentiation. It provides a detailed description of the methodology with practical insights for implementations. The key contribution is to introduce a versatile framework that is general enough to allow for arbitrary unit cells and lattices, exploiting the tools of automatic differentiation. At the same time, this work is a documentation of a numerical library. We refer to this library as _variPEPS_ - a versatile tensor network library for variational ground state simulations in two spatial dimensions. The content of this work is organised in three main sections. In Sec. II, we describe the main building blocks of the variational iPEPS algorithm, including remarks on technical issues like the contraction and renormalization of environment tensors, and the use of automatic differentiation for the ground-state search. In Sec. III, we then turn to explaining how to conveniently map generic lattice structures to a square one, over which the variPEPS setup naturally operates. Following up on this, in Sec. IV, we present numerical benchmarks of variPEPS in comparison to other customary methods, like exact diagonalization, iPEPS imaginary-time evolution and variational Monte Carlo methods.
## II Variational iPEPS

We seek to find the TN representation of the state vector \(\ket{\psi}_{\mathrm{TN}}\) that best approximates the true ground state vector \(\ket{\psi_{0}}\) of a Hamiltonian of the form \[H=\sum_{j\in\Lambda}T_{j}(h)\,, \tag{1}\] where \(T_{j}\) is the translation operator on the lattice \(\Lambda\), and \(h\) is a generic \(k\)_-local Hamiltonian_, i.e., it includes an arbitrary number of operators acting on lattice sites at most at a (lattice) distance \(k\) from a reference lattice point. Such a situation is very common in condensed matter physics, to say the least. To this aim, we employ the variational principle \[\frac{\bra{\psi}H\ket{\psi}}{\langle\psi|\psi\rangle}\geq E_{0}\quad\forall\,\ket{\psi}\,, \tag{2}\] and use an energy gradient with respect to the tensor coefficients to search for the minimum - the precise optimization strategy being discussed later. Such an energy gradient is accessed by means of tools from _automatic differentiation_ (AD), a set of techniques to evaluate the derivative of a function specified by a computer program that will be summarized below. Since we directly target systems in the thermodynamic limit, a _corner transfer matrix renormalization group_ (CTMRG) procedure constitutes the backbone of the algorithm, and also will come in handy for AD purposes. This is used to compute the approximate contraction of the infinite lattice, which is crucial in order to compute accurate expectation values in the first place. Importantly, the CTMRG routine is _always_ performed on a regular square lattice, for which it can be conveniently defined. Support for other lattices, also non-bipartite ones, is possible via different lattice mappings, as we will demonstrate. The method we will present in this section gives rise to an upper bound of the ground state energy in the sense of the variational principle as stated in Eq. (2). We wish to point out, however, that for this to be strictly true it would be necessary to choose the CTMRG refinement parameter \(\chi_{E}\), introduced in detail in Sec. II.2, to be \(\chi_{E}\rightarrow\infty\). In practice, we instead increase this refinement parameter \(\chi_{E}\) until all observables are converged.

### iPEPS setup

As introduced in the last section, we aim to simulate quantum many-body systems directly in the thermodynamic limit. To this end, we consider a unit cell of lattice sites that is repeated periodically over the infinite two-dimensional lattice. Reflecting this, the general configurations of the iPEPS ansatz are defined with an arbitrary unit cell of size \((L_{x},L_{y})\) on the square lattice. The lattice setup, denoted by \(\mathcal{L}\), can be specified by a single matrix, which uniquely determines the different lattice sites as well as their arrangement. Let us consider concrete examples of \((L_{x},L_{y})=(2,2)\) states with only two and with all four individual tensors, denoted by \[\mathcal{L}_{1}=\begin{pmatrix}A&B\\ B&A\end{pmatrix},\qquad\mathcal{L}_{2}=\begin{pmatrix}A&C\\ B&D\end{pmatrix}. \tag{3}\] The corresponding iPEPS ansätze are visualized in Fig. 1. Here, the rows/columns of \(\mathcal{L}\) correspond to the \(x\)/\(y\) lattice directions. The unit cell \(\mathcal{L}\) is repeated periodically to generate the full two-dimensional system. As usual, the bulk bond dimension of the iPEPS tensors, denoted by \(\chi_{B}\), controls the accuracy of the ansatz.
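To make the bookkeeping concrete, the unit-cell matrix \(\mathcal{L}\) can be represented as a small integer array indexing into a list of distinct site tensors. The following Python sketch is purely illustrative; the names `structure`, `tensors` and `site` are our own and not necessarily the variPEPS API:

```python
import numpy as np

p, chi_B = 2, 4  # physical dimension and bulk bond dimension

# Two distinct site tensors; index order, e.g., (phys, up, left, down, right)
A = np.random.rand(p, chi_B, chi_B, chi_B, chi_B)
B = np.random.rand(p, chi_B, chi_B, chi_B, chi_B)
tensors = [A, B]

# Pattern matrix L_1 of Eq. (3); rows/columns correspond to x/y directions
structure = np.array([[0, 1],
                      [1, 0]])

def site(x, y):
    """Tensor at site (x, y) of the infinite lattice, periodic in the unit cell."""
    L_x, L_y = structure.shape
    return tensors[structure[x % L_x, y % L_y]]
```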
An iPEPS state with \(N\) different tensors in the unit cell consists of \(Np\chi_{B}^{4}\) variational parameters, which we aim to optimize such that the iPEPS wave function represents an approximation of the ground state of a specific Hamiltonian. The parameter \(p\) denotes the dimension of the physical Hilbert space, e.g., \(p=2\) for a system of spin-\(1/2\) particles. The right choice of the unit cell is crucial in order to capture the structure of the targeted state. A mismatch of the ansatz could not only lead to a bad estimate of the ground state, but also to no convergence in the CTMRG routine at all. Different lattice configurations have to be evaluated for specific problems to find the correct pattern.

### CTMRG backbone

One major drawback of two-dimensional TNs such as iPEPS is that the contraction of the full lattice can only be computed approximately. This is due to complexity-theoretic obstructions [19; 20] and - practically speaking - the lack of a canonical form, which can only be found in loop-free tensor networks, for instance in matrix product states [8]. In order to circumvent the unfeasible exact contraction of the infinite 2d lattice, we employ an approximation scheme, the directional _corner transfer matrix renormalization group_ (CTMRG) routine for iPEPS states with arbitrary unit cells of size \((L_{x},L_{y})\). The CTMRG method approximates the calculation of the norm \(\langle\psi|\psi\rangle\) of the quantum state on the infinite square lattice by a set of effective environment tensors. This is achieved by an iterative coarse-graining procedure, in which all (local) iPEPS tensors in the unit cell \(\mathcal{L}\) are successively absorbed into the environment tensors towards all lattice directions, until the environment converges to a fixed-point. We will present a summary of the directional CTMRG methods for an arbitrary unit cell, following the state-of-the-art procedure [54; 55; 56]. The effective environment is displayed in Fig. 2, here for simplicity for a square lattice with a single-site unit cell \(\mathcal{L}=\big{(}A\big{)}\). It consists of a set of eight fixed-point tensors, four corner tensors \(\{C_{1},C_{2},C_{3},C_{4}\}\) as well as four transfer tensors \(\{T_{1},T_{2},T_{3},T_{4}\}\), the latter sometimes also called edge tensors. In case of a larger unit cell, such a set of eight environment tensors is computed for each individually specified iPEPS tensor in the unit cell. The unavoidable approximations in the environment calculations are controlled by a second refinement parameter, the environment bond dimension \(\chi_{E}\). In one full CTMRG step, the complete iPEPS unit cell is absorbed into the four lattice directions, such that the eight CTMRG tensors are updated for every iPEPS tensor. This is done column-by-column or row-by-row, depending on the direction. In each absorption step the environment bond dimension \(\chi_{E}\) grows by a factor of \(\chi_{B}^{2}\). To avoid an exponential increase in memory consumption and computation time, we need a method to truncate the bond dimension back to \(\chi_{E}\). In order to do this, we calculate renormalization projectors for each row or column. Projectors are computed from a suitable patch of the iPEPS state including the effective environments, to find a best-possible truncation of the bond dimension. Figure 1: iPEPS ansätze with a unit cell of size \((L_{x},L_{y})=(2,2)\) and only two (left) and four (right) different tensors as defined in Eq. (3).
Different approaches for their calculation have been proposed in the literature, which we will discuss in detail below, especially in the context of AD. In the following description of the CTMRG procedure we focus on a left absorption move, which grows all left environment tensors \(\{C_{4},T_{4},C_{1}\}\). The main steps of insertion, absorption and renormalization are shown in Fig. 3. In Sec. II.2.1, we will explain the full absorption procedure including renormalization, as it is done in practice. Although projectors need to be calculated before the absorption, their motivation and the calculation of different projectors are discussed later, in Sec. II.2.2.

#### ii.2.1 Absorption of iPEPS tensors

In order to generate the CTMRG environment tensors such that they eventually converge to a fixed-point, the iPEPS tensors are absorbed into them. To this end, we start with the network of one iPEPS tensor in the unit cell and its accompanying environment tensors. This is depicted in Fig. 3 in the top left. As shown on the top right of this figure, the network is extended by inserting one column, consisting of an iPEPS tensor and the top and bottom transfer tensors. While we depict the case of a single-site unit cell in Fig. 3, we note that the column of tensors to be inserted is generally dictated by the unit cell structure of the iPEPS ansatz, i.e., the left neighbor with the corresponding environment tensors for a left move. This crucial positional information for multi-site unit cells is specified by the coordinate superscripts in the descriptions below. As indicated by the dashed line in Fig. 3, we absorb the inserted column into the left environment tensors by contracting all left pointing edges. This yields new environment tensors whose bond dimensions have grown by a factor \(\chi_{B}^{2}\) due to the virtual iPEPS indices, thus we need a way to truncate the dimension back to the CTMRG refinement parameter \(\chi_{E}\). This is done using the projectors we will discuss and compute in the next section. For now we introduce them as abstract objects labeled \(P\) that implement the dimensional reduction (i.e., the renormalization step) in an approximate but numerically feasible way. The updated tensor \(C_{1}^{\prime}\) is then given by the contraction in Fig. 4. As discussed before, the correct tensors and projectors have to be used in accordance with the periodicity of the unit cell. The iPEPS tensor is now absorbed into the left transfer matrix \(T_{4}^{\prime}\), where two projectors are needed to truncate the enlarged environment bond dimension. This is visualized in Fig. 5. Finally, the lower corner tensor \(C_{4}^{\prime}\) is updated, by absorbing a transfer matrix \(T_{3}\) and using another projector. The three absorption steps in Figs. 4, 5 and 6 are performed for all rows \(x\) at a fixed column \(y\), before moving to the next column \(y+1\). Figure 2: The norm of an iPEPS (here with a single-site unit cell) at a bulk bond dimension \(\chi_{B}\) is approximated by a set of eight fixed-point environment tensors. The environment bond dimension \(\chi_{E}\) controls the approximations in the CTMRG routine. Figure 3: Main steps of a left CTMRG move. One column of tensors is inserted into the network. Upon absorption of these tensors, the environment bond dimension grows rapidly, requiring a renormalization step. Figure 4: Update of the corner tensor \(C_{1}\) in a left CTMRG step.
The process of computing projectors and growing the environment tensors is repeated for each column of the iPEPS unit cell, until the complete unit cell of \(L_{x}\times L_{y}\) tensors has been absorbed into the left environment. This yields updated tensors \(C_{1}^{\prime}\), \(T_{4}^{\prime}\) and \(C_{4}^{\prime}\) for all \([x,y]\). The absorption of a full unit cell is then performed for the other three directions. In a top move the tensors \(C_{1}\), \(T_{1}\) and \(C_{2}\) are grown, in a right move the tensors \(C_{2}\), \(T_{2}\) and \(C_{3}\) and in a bottom move the tensors \(C_{3}\), \(T_{3}\) and \(C_{4}\). This completes a _single_ CTMRG step, which is then repeated in the directional procedure until convergence is reached; a schematic driver loop is sketched below. In Sec. II.2.3 we discuss appropriate convergence measures.
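Put together, the directional procedure has the structure of a simple power-method driver. The following Python sketch is purely illustrative: the actual CTMRG step (the insertion, absorption and renormalization of Figs. 3-6) is passed in as a function, and the toy step at the end merely stands in for it so that the driver is runnable.

```python
import numpy as np

def converge_ctmrg(ctmrg_step, env0, tol=1e-8, max_steps=500):
    """Generic power-method driver: iterate a full CTMRG step (env -> env)
    until the environment tensors are converged element-wise."""
    env = env0
    for _ in range(max_steps):
        new_env = ctmrg_step(env)
        dist = max(np.max(np.abs(a - b)) for a, b in zip(new_env, env))
        if dist < tol:
            return new_env
        env = new_env
    raise RuntimeError("CTMRG did not converge within max_steps")

# Toy usage: a contracting map standing in for a real CTMRG step
toy_step = lambda env: [0.5 * t + 1.0 for t in env]
fixed_env = converge_ctmrg(toy_step, [np.zeros((2, 2))])  # converges to 2s
```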
#### ii.2.2 Calculation of projectors

In order to avoid an exponential increase of the bond dimension while growing the environment tensors, projectors are introduced to keep the bond dimension at a maximal value of \(\chi_{E}\). Here, we will describe a common scheme to compute those projectors [55] and discuss some properties of their use in combination with AD [42]. The task of finding good projectors essentially comes down to finding a basis for the virtual space, whose bond dimension we aim to reduce, that can be used to distinguish between "more and less important" sub-spaces. This way, we can ideally reduce the dimension while keeping the most important sub-space. In what follows, we consider the lattice environment of the virtual space that we aim to truncate using the CTMRG environment tensors. To this end, we use a _singular value decomposition_ (SVD) to identify the basis in which the bond is optimally truncated, such that we keep the most relevant information of this lattice environment. The lattice environment that we consider is shown in Fig. 7, where the red dotted line identifies the bonds that we aim to optimally truncate, illustrated for the example of a left absorption step. The arrangement of the tensors in the network of Fig. 7 follows the unit cell definition \(\mathcal{L}\). For the trivial, single-site unit cell \(\mathcal{L}=\big{(}A\big{)}\), all four iPEPS tensors are the same. We note that for a larger unit cell, cf. Fig. 1, the iPEPS tensors and their adjacent environments have to be chosen according to its periodicity. This setup for the arrangement is favorable, since it incorporates the (approximated) effect of the infinite environment by including all CTM tensors for the different lattice directions. The projectors are used to renormalize the three left open tensor indices with combined bond dimension \(\chi_{E}\chi_{B}^{2}\) back to the environment bond dimension \(\chi_{E}\) in a left absorption step. In order to compute them, we start by defining the matrix \[\mathcal{M}=\rho_{\text{B}}\cdot\rho_{\text{T}} \tag{4}\] that represents the lattice environment of the virtual bond that we would like to truncate, as visualized in Fig. 8. The procedure outlined here aims to find projectors \(P_{LT}\) and \(P_{LB}\), such that the truncated matrix \[\mathcal{M}_{\text{trunc}}=\rho_{\text{B}}\cdot P_{LT}\cdot P_{LB}\cdot\rho_{\text{T}}, \tag{5}\] is an optimal approximation to \(\mathcal{M}\). To achieve this, we perform a singular value decomposition on \(\mathcal{M}\), i.e., \[\mathcal{M}=U_{L}S_{L}V_{L}^{\dagger}. \tag{6}\] This factorization introduces a basis which allows for a separation of more relevant and less relevant sub-spaces. To this end, we choose the largest \(\chi_{E}\) singular values and their corresponding singular vectors for the construction of the projectors. Furthermore, we define \[S_{L}^{+}=\operatorname{inv}\left(\sqrt{S_{L}}\right), \tag{7}\] where a pseudo-inverse with a certain tolerance is used. To increase the numerical stability, a threshold of typically \(10^{-6}\) (corresponding to a threshold of \(10^{-12}\) for the singular values) is used. Smaller singular values are set to zero. The use of a pseudo-inverse in the generation of the projectors is equivalent to the construction of a projector with lower environment bond dimension. Finally, the projectors to renormalize the left absorption step are constructed as \[\begin{split} P_{LT}&=\rho_{T}\cdot V_{L}\cdot S_{L}^{+},\\ P_{LB}&=S_{L}^{+}\cdot U_{L}^{\dagger}\cdot\rho_{B}.\end{split} \tag{8}\] Here \(\rho_{T}\) and \(\rho_{B}\) again denote the top and bottom part of \(\mathcal{M}\) as introduced in Fig. 7. We would like to point out that without a truncation in the SVD above, the product of the projectors we create in this way assembles the identity, \[\begin{split} P_{LT}\cdot P_{LB}&=\rho_{T}\cdot V_{L}\cdot S_{L}^{-1}\cdot U_{L}^{\dagger}\cdot\rho_{B}\\ &=\rho_{T}\cdot(\rho_{B}\cdot\rho_{T})^{-1}\cdot\rho_{B}=\mathds{1}.\end{split} \tag{9}\] We stress again that the choice of truncation in the calculation of the projectors is optimal in order to approximate the lattice environment \(\mathcal{M}\). A graphical representation of these projectors is given in Fig. 9. During a left move, described in the previous section, we absorb the iPEPS tensors in the unit cell column-by-column into the left environments. A renormalization step is required for each of those moves, resulting in projectors that are specific to every bond. We therefore label them by the positions in the unit cell, i.e., \(P_{LT}^{[x,y]}\) and \(P_{LB}^{[x,y]}\). The process to generate the projectors described above uses the full lattice environment \(\mathcal{M}\), and thus we call them _full projectors_. It should be noted that Fishman et al. have proposed a scheme to calculate equivalent projectors in a fashion that is numerically more stable, at the cost of being computationally more expensive [56]. Their method is particularly useful in the case of a singular value spectrum of \(\mathcal{M}\) that decays very fast. Finally, different lattice environments of the virtual bond in question can be used to generate projectors. A very practical version is given by the so-called _half projectors_. For those we choose a lattice environment as illustrated in Fig. 10. These projectors are computationally less costly, as they require a smaller network to be contracted. They only take into account correlations within one half of the network; however, this proves to be sufficient in many different applications. Figure 5: Update of the transfer matrix \(T_{4}\) in a left CTMRG step. Here the projectors generally belong to different subspaces, unless the system is one-site translational invariant. Figure 6: Update of the corner tensor \(C_{4}\) in a left CTMRG step. Figure 7: Network of \(2\times 2\) iPEPS tensors and the corresponding CTMRG tensors, used as a starting point to compute the truncation projectors. For a left CTMRG step the top and bottom part is contracted into the matrices \(\rho_{\text{T}}\) and \(\rho_{\text{B}}\) with dimension \((\chi_{E}\chi_{B}^{2})\times(\chi_{E}\chi_{B}^{2})\). The red dashed line indicates the bonds that are renormalized back to a bond dimension \(\chi_{E}\).
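In code, Eqs. (4)-(8) boil down to a truncated SVD combined with a pseudo-inverse of the square-rooted singular values. A minimal NumPy sketch, assuming the top and bottom halves \(\rho_{\text{T}}\) and \(\rho_{\text{B}}\) of Fig. 7 have already been contracted into \((\chi_{E}\chi_{B}^{2})\times(\chi_{E}\chi_{B}^{2})\) matrices (illustrative only, not the variPEPS source):

```python
import numpy as np

def left_projectors(rho_T, rho_B, chi_E, tol=1e-6):
    """Projectors P_LT and P_LB of Eq. (8) for a left CTMRG move."""
    M = rho_B @ rho_T                                   # Eq. (4)
    U, S, Vh = np.linalg.svd(M)                         # Eq. (6)
    U, S, Vh = U[:, :chi_E], S[:chi_E], Vh[:chi_E, :]   # keep chi_E largest

    # Pseudo-inverse of sqrt(S) with a relative threshold, Eq. (7);
    # small singular values are set to zero for numerical stability
    sqrt_S = np.sqrt(S)
    keep = sqrt_S / sqrt_S[0] > tol
    S_plus = np.zeros_like(sqrt_S)
    S_plus[keep] = 1.0 / sqrt_S[keep]

    P_LT = (rho_T @ Vh.conj().T) * S_plus               # Eq. (8)
    P_LB = S_plus[:, None] * (U.conj().T @ rho_B)
    return P_LT, P_LB
```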
Lately, there have been proposals for even cheaper alternatives of lattice environments and projector calculations [57], which have yet to be tested in the context of automatic differentiation and variational iPEPS optimization.

#### ii.2.3 Convergence and CTMRG fixed-points

The CTMRG routine as described above is a power method that eventually converges to a fixed-point. At this fixed-point, the set of environment tensors describes the contraction of the infinite lattice with an approximation controlled by the environment bond dimension \(\chi_{E}\). Convergence of the CTMRG tensors to the fixed-point can be monitored in different ways. In regular applications (those that do not involve automatic differentiation and gradients) the singular value spectrum of the corner tensors is typically a good quantity to track. Once the norm difference of the spectrum between two successive CTMRG steps drops below a certain threshold, the environment tensors are assumed to be converged. One peculiarity that is not incorporated in this convergence check, however, are sign or phase fluctuations for real or complex tensor entries, respectively. This means that, while projectors and hence the CTMRG tensors converge in absolute value, their entries can have different signs/phases in consecutive CTMRG steps. For reasons that become clear in Sec. II.5 it is however required to reach _element-wise convergence_ in the environment tensors for them to represent an actual fixed-point [42]. Those fluctuations originate from the gauge freedom in the SVD performed in Eq. (6). This is reflected in the freedom of introducing a unitary (block-)diagonal matrix \(\Gamma\) in an SVD, \[\mathcal{M}=USV^{\dagger}=\left(U\Gamma\right)S\left(\Gamma^{\dagger}V^{\dagger}\right), \tag{10}\] which leaves the expression invariant. The gauge freedom from the SVD directly affects the calculation of the projectors, such that we aim to fix the phases while computing these projectors. By eliminating this gauge freedom, at the true fixed-point, both projectors and environment tensors should be converged element-wise. Figure 8: Matrix \(\mathcal{M}\) as defined by Eq. (4) in graphical TN notation. The red dashed line indicates the bonds that are renormalized back to a bond dimension \(\chi_{E}\). Figure 9: Calculation of top and bottom projectors for a left CTMRG absorption step. The red dashed line indicates the bonds that are renormalized back to a bond dimension \(\chi_{E}\). Figure 10: Network of \(2\times 1\) iPEPS tensors and the corresponding CTMRG tensors, which is used as a reduced network to calculate the half projectors for a left CTMRG step. The red dashed line indicates the bonds that are renormalized back to a bond dimension \(\chi_{E}\). To fix the gauge, we introduce a diagonal unitary matrix \(\Gamma\) that redefines the phase of the largest entry (in absolute value) of every left singular vector to place it on the positive real axis [42]. To avoid instabilities of this gauge-fixing procedure due to numerical quasi-degeneracies, we always pick the first of such largest elements in basis order. Other choices, like addressing the first element with magnitude above a fixed threshold, are also possible.
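A minimal sketch of this phase gauge fixing on top of a standard SVD (an illustration of the prescription above, not the library source):

```python
import numpy as np

def gauge_fixed_svd(M):
    """SVD with the gauge freedom of Eq. (10) fixed: the largest-magnitude
    entry of each left singular vector is rotated onto the positive real axis."""
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    # np.argmax picks the *first* occurrence of the largest |entry| per column,
    # matching the stability convention described in the text
    idx = np.argmax(np.abs(U), axis=0)
    largest = U[idx, np.arange(U.shape[1])]
    gamma = np.conj(largest) / np.abs(largest)          # diagonal unitary Gamma
    return U * gamma, S, np.conj(gamma)[:, None] * Vh   # M = U' S Vh' unchanged
```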
### Energy expectation values

Computing the energy expectation value required for the energy minimization is straightforward using the CTMRG environment tensors. Assuming a Hamiltonian with only nearest-neighbour interaction terms, individual bond energies can be computed as shown in Fig. 11. The full energy expectation value, \(\left\langle\psi|H|\psi\right\rangle/\left\langle\psi|\psi\right\rangle\), is obtained by collecting all different energy contributions, i.e., all different terms in the Hamiltonian. Longer-range interactions can be treated as well, by simply enlarging the diagrams of Fig. 11 and performing more expensive contractions, which however occur only once per optimization step. In order to formulate a variational optimization of the tensor coefficients parametrizing the wave function, a gradient of the energy expectation value - including the foregone fixed-point CTMRG routine - is required. This is achieved by the concept of automatic differentiation, as we will describe next.

### Automatic differentiation

_Automatic differentiation_ (AD), sometimes also referred to as _algorithmic differentiation_ or _automated differentiation_, is a method for taking the derivative of a complicated function which is evaluated by some computer algorithm. It has been an important tool for optimization tasks in machine learning for many years. An introduction can be found in, e.g., Ref. [58]. After its initial introduction in a foundational work [37], AD has found increasing applications in numerical TN algorithms in recent years [38; 39; 41; 42; 45; 46]. For the sake of simplicity, let us consider a function \(E:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) for which we would like to evaluate the derivative. Notably, extensions to complex numbers are possible, and we provide some additional comments in Appendix B. We have the particular use-case of the energy expectation value \(E(\left|\psi\right\rangle)=\left\langle\psi|H|\psi\right\rangle/\left\langle\psi|\psi\right\rangle\) of an iPEPS in mind, in which case the co-domain of the function \(E\) is \(\mathbb{R}\). As we explain below, this has some important consequences for the use of AD. Automatic differentiation makes use of the fact that many functions and algorithms are fundamentally built by concatenating elementary operations and functions like addition, multiplication, projection, exponentiation and taking powers, whose derivatives are known. The central insight is now that we can build up the gradient of a more complicated function from the derivatives of its elementary constituents by the _chain rule of differentiation_. In principle this even allows for a computation of the gradient to machine precision. It should be noted, however, that it is neither necessary nor useful to deconstruct every function into its most elementary parts. Rather, it is advantageous to deconstruct the function at hand only into a minimal number of constituent functions for which a derivative can be determined. These functions are often referred to as the _primitives_ of the function of interest \(E\). Primitives might themselves be a composition of many constituents, but the derivative of the primitives themselves is known as a whole. An illustrative example of a primitive is a function that takes two matrices as input and outputs their product. On an elementary level this function is composed out of many multiplications and additions, but one can write down the derivative with respect to its inputs immediately. The choice of primitives describes the level of coarseness on which the AD process needs to know the details of the function \(E\) to compute the desired gradient.
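In an AD framework, such a primitive is registered together with its hand-written derivative. A sketch in JAX, using the standard adjoint rule for the matrix product \(C=AB\) (for real matrices, \(\bar{A}=\bar{C}B^{\mathsf{T}}\) and \(\bar{B}=A^{\mathsf{T}}\bar{C}\)); this is our own illustration of the mechanism, not code from the library:

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def matmul(A, B):
    # the whole product is treated as a single primitive
    return A @ B

def matmul_fwd(A, B):
    # forward pass; the inputs are stored as residuals for the backward pass
    return A @ B, (A, B)

def matmul_bwd(res, C_bar):
    # vector-Jacobian product of the primitive: A_bar = C_bar B^T, B_bar = A^T C_bar
    A, B = res
    return C_bar @ B.T, A.T @ C_bar

matmul.defvjp(matmul_fwd, matmul_bwd)

# The primitive composes with the rest of the AD machinery as usual:
f = lambda A, B: jnp.sum(matmul(A, B) ** 2)
dA, dB = jax.grad(f, argnums=(0, 1))(jnp.ones((3, 2)), jnp.ones((2, 4)))
```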
Defining large primitives of a function can reduce memory consumption, as well as increase performance and numerical stability of the AD process, e.g., by avoiding spurious divergences. Once the high-level function \(E\) has been decomposed into its minimal number of primitives, we can represent this decomposition by a so-called _computational graph_. The computational graph is a directed, acyclic graph whose vertices represent the data generated as intermediate results by the primitives and whose edges represent the primitives themselves, which transform the data from input to output. As an example let us suppose we are able to decompose the function \(E\) into three primitives \(f_{1}\), \(f_{2}\) and \(f_{3}\), such that \(E=f_{3}\circ f_{2}\circ f_{1}\). The primitives are maps between intermediate spaces \[E:\mathbb{R}^{n_{1}}\overset{f_{1}}{\longmapsto}\mathbb{R}^{n_{2}}\overset{f_{2}}{\longmapsto}\mathbb{R}^{n_{3}}\overset{f_{3}}{\longmapsto}\mathbb{R}^{n_{4}} \tag{11}\] and we refer to the variables in these spaces as \(\vec{x}_{i}\in\mathbb{R}^{n_{i}}\). The computational graph illustrating this situation is shown in Fig. 12. AD can be performed in two distinct schemes, often called _forward_- and _backward-mode_ AD. In the following we will demonstrate the two AD modes with the example of our previously introduced function \(E\) and its primitives. This will also serve to illustrate the computational cost of these AD schemes for the iPEPS use-case. Since \(f_{1}\), \(f_{2}\) and \(f_{3}\) are primitives, their Jacobians \[J^{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i+1}\times n_{i}},\qquad J^{i}(\vec{x}_{i}^{0})=\left.\frac{\partial f_{i}}{\partial\vec{x}_{i}}\right|_{\vec{x}_{i}=\vec{x}_{i}^{0}} \tag{12}\] are known. An AD evaluation of the gradient of \(E\) at a specific point \(\vec{x}_{1}^{0}\) is then given by the chain rule, the concatenation of the Jacobians of the primitives, \[\nabla E(\vec{x}_{1}^{0})=J^{3}(\vec{x}_{3}^{0})\cdot J^{2}(\vec{x}_{2}^{0})\cdot J^{1}(\vec{x}_{1}^{0}), \tag{13}\] with \(f_{i}(\vec{x}_{i}^{0})=\vec{x}_{i+1}^{0}\). The difference between _forward-_ and _backward-mode_ AD essentially comes down to the question from which side we perform the multiplication of the Jacobians above. In the _forward-mode_ AD scheme, the gradient is built up simultaneously with the evaluation of the primitives \(f_{1}\), \(f_{2}\) and \(f_{3}\), according to the following prescription for the \(i\)-th step: \[\begin{split} f_{i}(\vec{x}_{i}^{0})&=\vec{x}_{i+1}^{0},\\ G_{i}&=J^{i}(\vec{x}_{i}^{0})\cdot G_{i-1},\end{split} \tag{14}\] with the starting condition \(G_{0}:=\mathds{1}_{n_{1}\times n_{1}}\) and the final result \(G_{3}=\nabla E(\vec{x}_{1}^{0})\in\mathbb{R}^{n_{4}\times n_{1}}\). We see that in this case we build up Eq. (13) from right to left or "along the computational graph", as illustrated in Fig. 13. At first sight, such a procedure offers the potential advantage of not requiring the storage of intermediate results of the primitives in memory. Figure 11: Expectation values of a (horizontal) nearest-neighbour Hamiltonian term \(\left\langle\psi|h_{i,j}|\psi\right\rangle/\left\langle\psi|\psi\right\rangle\) in tensor network notation, using the fixed-point CTMRG environments. Figure 12: Example of a computational graph for the function decomposition in Eq. (11).
However, if the dimension of the input (domain of \(E\)) is much larger than the dimension of the output (co-domain of \(E\)) - as is the case in our iPEPS use-case - this procedure becomes computationally very heavy. Indeed, saving and multiplying the large Jacobians in Eq. (14) is often impractical. Thus, it is common to split up the starting condition \(G_{0}:=\mathds{1}_{n_{1}\times n_{1}}\) into the \(n_{1}\) canonical basis vectors \(\{\vec{e}_{i}\}_{i=1,\dots,n_{1}}\). The procedure to generate the gradient from Eq. (14) is then repeated \(n_{1}\) times, each iteration generating a single component \(i\). In this case, each step of the process of generating a component of the gradient is done by calculating a _Jacobian-vector product_ (JVP), so that only the resulting vector has to be stored. In order to create the full gradient in this way we need to repeat the procedure \(n_{1}\) times, and the cost of calculating the full gradient scales as \(\mathcal{O}(n_{1})\times\mathcal{O}(E)\), where \(\mathcal{O}(E)\) is the cost of evaluating \(E\). The _backward-mode_ AD scheme works instead by first evaluating the function \(E\) and storing all intermediate results of the primitives along the way, and by then applying the iterative prescription \[\bar{G}_{i}=\bar{G}_{i+1}\cdot J^{i}(\vec{x}_{i}^{0}) \tag{15}\] with the starting condition \(\bar{G}_{4}=\mathds{1}_{n_{4}\times n_{4}}\) and the final result \(\bar{G}_{1}=\nabla E(\vec{x}_{1}^{0})\in\mathbb{R}^{n_{4}\times n_{1}}\). In the AD literature the objects \(\bar{G}_{i}\) are called adjoint variables and the functions that map the adjoint variables onto each other, defined by Eq. (15), are called adjoint functions. We refer to Appendix A for more details on the adjoint functions and adjoint variables. In some parts of the literature the adjoint functions are also called pullbacks, which can be understood by looking at AD in the language of differential geometry, cf. Appendix D. We see that in this case we build up Eq. (13) from left to right, as graphically illustrated in Fig. 14. This scheme has the advantage of being computationally much cheaper if the output (co-domain) dimension is smaller than the input (domain) dimension - precisely the situation of our iPEPS setup, with \(n_{1}=Np\chi_{B}^{4}\) and \(n_{4}=1\). We indeed only need to compute _vector-Jacobian products_ (VJP) when evaluating the gradient, and, moreover, the full gradient is computed at once, instead of just a single element at a time as in the forward-mode AD scheme. This is why the cost of calculating the gradient of the energy expectation value with _backward-mode_ AD is \(\mathcal{O}(1)\times\mathcal{O}(E)\), which is superior to the cost of _forward-mode_ AD. However, since we need to save all intermediate results of the primitives along the way in order to compute the gradient, the memory requirement for this scheme is in principle unbounded. Fortunately, the fixed-point condition for the iPEPS environments can be used to guarantee that the memory remains bounded in our calculations, as we illustrate in the following section. Figure 13: Illustration of forward-mode AD as described in Eq. (14) for the function decomposition in Eq. (11). Figure 14: Illustration of backward-mode AD as described in Eq. (15) for the function decomposition in Eq. (11).
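Both modes are exposed directly in common AD frameworks. The JAX snippet below contrasts them for a many-to-one function like our energy: each forward-mode JVP yields a single gradient component, whereas one reverse-mode VJP with cotangent \(1\) returns the full gradient at once (an illustrative toy function, not the iPEPS energy):

```python
import jax
import jax.numpy as jnp

n = 1000                                  # input dimension; output is a scalar
E = lambda x: jnp.sum(jnp.sin(x) ** 2)    # stand-in for the energy function
x0 = jnp.linspace(0.0, 1.0, n)

# Forward mode: one JVP per canonical basis vector -> n passes for the gradient
e0 = jnp.zeros(n).at[0].set(1.0)
_, g0 = jax.jvp(E, (x0,), (e0,))          # only the first gradient component

# Reverse mode: a single VJP yields the complete gradient
_, vjp_fun = jax.vjp(E, x0)
(grad,) = vjp_fun(jnp.asarray(1.0))       # cotangent 1 for the scalar output

assert jnp.allclose(grad[0], g0)          # both modes agree component-wise
```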
### Calculation of the gradient at the CTMRG fixed-point

Computationally, the CTMRG routine represents the bottleneck of the full iPEPS energy function. It involves many expensive contractions and SVDs. Moreover, it requires an a priori unknown number of CTMRG iterations to reach convergence of the environment tensors. This would be especially disadvantageous for the gradient evaluation using plain-vanilla backward-mode AD, since this would require unrolling all the performed CTMRG iterations and paying a memory consumption linear in their number. However, this can be avoided by leveraging the fact that the CTMRG iteration eventually converges to a fixed point, and this is precisely the condition under which the energy evaluation is then performed. As soon as this fixed point is reached, all CTMRG iterations are identical, i.e., they reproduce the converged environment tensors. We can, in this situation, get away with only saving intermediate results from such a converged CTMRG iteration. This reduces the memory requirements by a factor of the number of CTMRG iterations that we perform [37]. We stress here that, for this approach to work, we must make sure that the CTMRG procedure reaches an actual fixed point, meaning that all CTMRG environment tensors are converged element-wise as discussed in Sec. II.2.3. The fixed-point equation can be written as \[e^{*}(A)=c(A,e^{*}(A)), \tag{16}\] where the function \(c\) is one full CTMRG iteration, \(A\) are the iPEPS tensors, which are constant during the CTMRG procedure, and \(e^{*}(A)\) represents the CTMRG environment tensors at the fixed-point. \(\mathcal{E}\) is the function that maps the iPEPS tensors, together with the fixed-point environment tensors and the Hamiltonian operators, to the energy expectation value. The computational graph for the ground state energy is illustrated in Fig. 15. From it we can construct the form of the gradient of the energy expectation value with respect to the parameters of the iPEPS tensors \(A\), \[\frac{\partial\langle H\rangle}{\partial A}=\frac{\partial\mathcal{E}}{\partial A}+\frac{\partial\mathcal{E}}{\partial e^{*}}\sum_{n=0}^{\infty}\left(\frac{\partial c}{\partial e^{*}}\right)^{n}\frac{\partial c}{\partial A}. \tag{17}\] In practice this infinite sum is evaluated to finite order, until the resulting gradient is converged to finite accuracy. An alternative viewpoint on the gradient at the fixed-point of the CTMRG procedure is presented in Appendix C. Figure 15: Computational graph of the CTMRG procedure for calculating the energy density at the fixed point.
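As a sketch of how Eq. (17) can be evaluated with reverse-mode building blocks, the geometric series can be summed term by term with repeated VJPs through a single CTMRG iteration, truncated once the contributions become negligible. The JAX code below is illustrative and assumes, for simplicity, that the iPEPS tensors `A` and the environment `e_star` are plain arrays, with `c` one CTMRG step and `E` the energy function \(\mathcal{E}\):

```python
import jax
import jax.numpy as jnp

def fixed_point_gradient(A, e_star, c, E, tol=1e-10, max_terms=500):
    """Evaluate Eq. (17) at the fixed point e_star = c(A, e_star)."""
    grad = jax.grad(E, argnums=0)(A, e_star)   # explicit part dE/dA
    w = jax.grad(E, argnums=1)(A, e_star)      # adjoint dE/de*, starts the series

    _, vjp_c = jax.vjp(c, A, e_star)           # VJPs through one CTMRG step
    for _ in range(max_terms):
        dA, w = vjp_c(w)                       # w·(dc/dA) and w·(dc/de*)
        grad = grad + dA                       # add the n-th term of the series
        if jnp.linalg.norm(dA) < tol:          # truncate at finite order
            break
    return grad
```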
### Optimization

As discussed in the introduction of Sec. II, we seek to find the iPEPS approximation \(\ket{\psi}_{\text{TN}}\) of the ground state vector \(\ket{\psi_{0}}\). Employing the methods discussed in the last sections we can describe this energy calculation as a function \(E(\ket{\psi}_{\text{TN}})\), consisting of the CTMRG power method and the expectation value approximation using the resulting CTMRG environment tensors. Since we can calculate the gradient \(\nabla E(\ket{\psi}_{\text{TN}})\) of this real scalar function, it is straightforward to use well-known optimization methods to find the energy minimum. We would like to stress that the state vector \(\ket{\psi}_{\text{TN}}\), and thus the energy function, only depends on the tensors defining the iPEPS ansatz and not on the environment tensors, since they are implicitly calculated from the ansatz. In this discussion we focus on two types of methods based on the gradient: the _(nonlinear) conjugate gradient_ (CG) methods [59; 60; 61; 62; 63] and the quasi-Newton methods [64; 65; 66; 67; 68; 69]. A naive approach to find the minimum of a function \(E(\ket{\psi_{i}})\), of which the gradient \(\nabla E(\ket{\psi_{i}})\) is known, is to shift the input parameters \(\ket{\psi_{i}}\) sufficiently along the negative gradient so that we find a new position \(\ket{\psi_{i+1}}\) where the function value is reduced. At the end of this section we discuss what a sufficient step size means in this context. Iterating this procedure to a point where the gradient of the function vanishes (within a pre-defined tolerance) yields a solution to the optimization problem; at such a point, either a saddle point or a (local) minimum has been reached. This method is called steepest gradient descent. Although it is one of the simplest methods to find a descent direction, it is known to have very slow convergence for difficult problems, e.g., for functions with narrow valleys [70]. Therefore, in practice we use more sophisticated methods to determine the descent direction. The family of nonlinear conjugate gradient methods, a generalization of the linear conjugate gradient method, modifies this approach. Instead of using the negative gradient as a direction in each iteration step, it uses a descent direction which is conjugated to the previous ones. For the linear conjugate gradient method there is a known factor \(\beta_{i}\) to calculate the new descent direction \(d_{i}=g_{i}+\beta_{i}d_{i-1}\) from the gradient \(g_{i}\) of the current step and the descent direction \(d_{i-1}\) of the last step. In the generalization to nonlinear functions this parameter is not uniquely determined anymore; however, there are different approaches to estimate this parameter in the literature [60; 61; 62]. In our implementation we chose the nonlinear conjugate gradient method in the formulation suggested by Hager and Zhang [63], \[\begin{split}\tilde{\beta}_{i}^{\text{HZ}}&=\frac{1}{d_{i-1}^{\mathsf{T}}y_{i}}\left(y_{i}-2d_{i-1}\frac{\left\|y_{i}\right\|^{2}}{d_{i-1}^{\mathsf{T}}y_{i}}\right)^{\mathsf{T}}g_{i},\\ \eta_{i}&=\frac{-1}{\left\|d_{i-1}\right\|\min(\eta,\left\|g_{i-1}\right\|)},\\ \beta_{i}^{\text{HZ}}&=\max(\tilde{\beta}_{i}^{\text{HZ}},\eta_{i}),\end{split} \tag{18}\] with \(\left\|\cdot\right\|\) the Euclidean norm, \(y_{i}=g_{i}-g_{i-1}\) and \(\eta>0\) a numerical control parameter, which has been set to \(\eta=0.01\) in the work by Hager and Zhang. In our tests and benchmarks this choice for \(\beta_{i}\) has proven to be numerically stable.
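For concreteness, Eq. (18) translates into a few lines of Python operating on flattened real parameter vectors (a sketch of the update rule only, without the surrounding CG iteration and line search):

```python
import numpy as np

def beta_hager_zhang(g, g_prev, d_prev, eta=0.01):
    """Hager-Zhang parameter of Eq. (18) from the current and previous
    gradients g, g_prev and the previous descent direction d_prev."""
    y = g - g_prev
    dy = d_prev @ y
    beta_tilde = (y - 2.0 * d_prev * (y @ y) / dy) @ g / dy
    eta_i = -1.0 / (np.linalg.norm(d_prev) * min(eta, np.linalg.norm(g_prev)))
    return max(beta_tilde, eta_i)
```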
The other family of optimization methods we use in our implementation are the quasi-Newton methods, concretely the _Broyden-Fletcher-Goldfarb-Shanno_ (BFGS) algorithm [66; 67; 68; 69] and its _low-memory_ (L-BFGS) variant [71; 65]. These methods are based on the Newton method, where the descent direction is calculated using not only the gradient but also the second derivative (the Hessian matrix). Unfortunately, it is computationally expensive to calculate the Hessian for large sets of input parameters, which makes this method only feasible for small parameter sets (i.e., iPEPS ansätze with a small number of variational parameters). Quasi-Newton methods solve this problem by not calculating the full Hessian but an approximation of it. To this end, the gradient information from successive iteration steps is used to update the approximation in each step. The BFGS algorithm stores the full approximated Hessian matrix, including the information from all previous steps. In contrast, the L-BFGS method calculates the effective descent direction in an iterative manner from the last \(N\) optimization steps. This way, not the full (approximated) Hessian has to be stored in memory but only the gradients of the last \(N\) steps, which reduces the memory consumption by an order of magnitude. The disadvantage is that not the full information of all previous steps is considered, but only a fraction of it. Nevertheless, due to the memory requirements for storing the full approximated Hessian in the standard BFGS method for larger iPEPS bond dimensions, we use L-BFGS as the default quasi-Newton method. As noted before, we would like to shift the variational parameters \(x_{i}\) along the descent direction \(d_{i}\) determined by the different algorithms discussed above. With this shift we aim to find a new ansatz \(x_{i+1}=x_{i}+\alpha_{i}d_{i}\), with \(\alpha_{i}\) the step size along the descent direction. Ideally, we would like to find the optimal step size \(\alpha_{i}=\min_{\alpha}E(x_{i}+\alpha d_{i})\) minimizing the function value along the descent direction. However, determining this optimal value is computationally expensive and thus, in practice, we settle for a sufficient step size fulfilling certain conditions. The procedure to find this step size is called _line search_ [72; 73; 74; 75]. In our implementation we use the Wolfe conditions [73; 74; 75], since they guarantee properties which are particularly useful for the (L-)BFGS method and its iterative update of the approximate Hessian.

### Pitfalls and practical hints

#### ii.7.1 Iterative SVD algorithm

We also advertise the use of iterative algorithms for the calculation of the SVD in the CTMRG procedure. This can be quite advantageous computationally, since only \(\chi_{E}\) singular values are needed for a matrix of size \((\chi_{E}\chi_{B}^{2})\times(\chi_{E}\chi_{B}^{2})\) during the CTMRG. To this end we use the _Golub-Kahan-Lanczos_ (GKL) bidiagonalization algorithm with additional orthogonalization of the Krylov vectors. This algorithm is available, e.g., in packages like KrylovKit.jl [76] or IterativeSolvers.jl [77] in the Julia programming language. We highlight the utility of this type of algorithm for the calculation of the SVD with the comparison of the computational times of the different algorithms in the iPEPS use case in Fig. 16. Figure 16: Comparison of the computational time for the calculation of the first \(\chi_{E}\) singular values/vectors of a matrix of dimension \((\chi_{E}\chi_{B}^{2})\times(\chi_{E}\chi_{B}^{2})\) obtained in a CTMRG procedure with bond dimension \(\chi_{B}=6\). The conventional SVD (blue), which is truncated only after calculating the full SVD spectrum, is substantially slower than the iterative GKL methods. The GKL algorithm in the CTMRG use case showed comparable performance when constructing the \(\chi_{E}\chi_{B}^{2}\) matrix explicitly (orange) or when just implementing its action on a vector (green). While constructing the matrix explicitly is usually faster at moderate \(\chi_{B}\) and \(\chi_{E}\), at larger \(\chi_{B}\) and \(\chi_{E}\) it can become advantageous to only implement the action of the matrix.
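In a Python setting, an analogous partial SVD is available through, e.g., `scipy.sparse.linalg.svds`, which accepts a `LinearOperator` so that - as for the green curve in Fig. 16 - only the action of the matrix on a vector has to be implemented. A sketch with random stand-in data (not the KrylovKit.jl-based variPEPS code):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

chi_E, chi_B = 40, 6
n = chi_E * chi_B**2
rho_B, rho_T = np.random.rand(n, n), np.random.rand(n, n)

# Only the action of M = rho_B @ rho_T (and of its transpose) is implemented
M_op = LinearOperator(
    shape=(n, n),
    matvec=lambda v: rho_B @ (rho_T @ v),
    rmatvec=lambda v: rho_T.T @ (rho_B.T @ v),
    dtype=np.float64,
)

# Leading chi_E singular triplets from an iterative Krylov-type solver
U, S, Vh = svds(M_op, k=chi_E)
order = np.argsort(S)[::-1]           # sort descending; svds returns ascending
U, S, Vh = U[:, order], S[order], Vh[order, :]
```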
#### ii.7.2 Stability of the CTMRG routine

One of the basic prerequisites for a stable variational iPEPS optimization is a robust CTMRG routine fulfilling the convergence requirements discussed in Sec. II.2.3. Obviously, there is the environment bond dimension \(\chi_{E}\) to control the accuracy of the approximation of the environment. If the environment bond dimension is chosen too low, the approximation is invalid and the CTMRG routine can yield an inaccurate result for the expectation value. This could further lead to an unstable variational update. To check heuristically whether the refinement parameter \(\chi_{E}\) is chosen sufficiently high, one can check the singular value spectrum obtained during the projector calculation as described in Sec. II.2.2. As a reliable criterion for the amount of information loss, we compute the truncation error \(\varepsilon_{T}\), given by the norm of the discarded singular values of the normalized spectrum [78]. If the truncation error is larger than some threshold (e.g., \(\varepsilon_{T}>10^{-5}\)), one can assume that the environment bond dimension is chosen too low and has to be increased. Employing this procedure, the bond dimension can automatically be increased during the variational optimization if necessary. A sufficiently large \(\chi_{E}\) is crucial, as the AD optimization can otherwise exploit the inaccuracies of the CTMRG procedure, leading to false ground states with artificially low energy.

#### ii.7.3 Prevention of local minima

An ideal iPEPS optimization finds the global energy minimum of the input Hamiltonian within the iPEPS ansatz class of fixed unit cell and bond dimension. In practice, however, it is possible - and likely - that the algorithm gets stuck in local minima. In order to avoid local minima and reach the global optimum, there are a number of possible tricks. The naive way is to start several simulations with different random initial states. This is typically a practicable solution, although it is not well controllable and requires large computational resources. An optimization of a system with a tendency for local minima might still be successful if a suitable initial state is provided. One possibility are initial states obtained by imaginary-time evolution methods (simple update, full update [79; 23; 22]). While this is typically a convenient solution, it is sometimes necessary to perturb the input tensors with a small amount of noise (e.g., \(10^{-2}\) in relative amplitude) to actually avoid local minima. As an alternative, one can input a converged state obtained from the energy minimization of a different TN ansatz, provided there is a suitable mapping between the different structures. Examples for this technique are provided for benchmarks on different lattices in Sec. IV. Finally, the method of perturbing a suitable initial state with a small amount of random noise could of course also be applied to the result of one optimization run. As suggested in the literature [80], this could help to escape possible local minima. Therefore, one could retry this method a few times and keep the best result of all runs.

#### ii.7.4 Recycling of environments

The calculation of the environment tensors with the CTMRG routine is expensive and time consuming. During an optimization process one can reuse the environment tensors of the previous optimization step as input for the next. This is advisable in the advanced stages of the optimization, in which the gradient is already small. In this scenario the iPEPS tensors usually only change minutely, such that starting the CTMRG routine from the environments of the last iPEPS tensors can reduce the number of CTMRG steps required for convergence substantially.
#### ii.7.5 Analysing iPEPS data at finite bond dimensions

Data generated with the variational iPEPS setup inevitably carries finite iPEPS bond dimension \(\chi_{B}\) (or even finite environment bond dimension \(\chi_{E}\)) effects. Several schemes are available to utilize the correlation length of the optimal tensors at a certain value of \(\chi_{B}\) to extrapolate the values of observables [81; 82; 83]. Additionally, an extrapolation scheme using data of an optimized iPEPS state at finite \(\chi_{B}\) and finite but suboptimal \(\chi_{E}\) has been proposed and shown to be useful [84].

#### ii.7.6 Degenerate singular values

Although very rare, a degenerate singular value spectrum in the calculation of the projectors can be an obstacle. The gradient of the SVD becomes ill-defined in this case, due to terms \(F_{i,j}=1/(s_{j}^{2}-s_{i}^{2})\) in the derivative [46], where \(s_{i}\) are the singular values. Naturally, it would be desirable to remove the degeneracy by constraining the system to the correct physical symmetry, thereby grouping the degenerate singular values into common multiplets of the underlying symmetry group. If this is not possible or the degeneracies appear independently of a symmetry ("accidental" degeneracy), workarounds have to be used. One possibility is to add a small amount of noise in the form of a diagonal matrix \(XX^{-1}\) on the CTMRG environment links, with the elements of \(X\) drawn from a tiny interval \([1-\varepsilon,1+\varepsilon]\). This can space out the singular value spectrum and stabilize the SVD derivative [85].

## III Extension to other lattices

The directional CTMRG routine on the square lattice is very convenient due to its orthogonal lattice vectors and the definition of the effective environments. It is therefore natural to exploit the implemented routines for different kinds of lattices that can be mapped back to the square lattice. This can typically be achieved by a suitable coarse-graining, in which a collection of lattice sites on the original lattice is mapped into an effective site on the square lattice. Energy expectation values can then be directly evaluated in the coarse-grained picture as well. This is even advantageous for the AD optimization procedure, since the energy can often be computed with a smaller number of individual terms. In this section we will present the mapping for four types of lattices frequently found in condensed matter systems - the honeycomb, Kagome, square-Kagome and triangular lattice. Naturally, the framework can be extended to other suitable two-dimensional lattices, such as dice, square-octagon, maple-leaf and others.

### Honeycomb lattice

The honeycomb, hexagonal or brick-wall lattice is of broad interest in material science and often appears in the context of quantum many-body systems. For instance, the _Kitaev honeycomb model_ is a paradigmatic example hosting different kinds of phases supporting different types of anyons, both Abelian and non-Abelian [86]. We will now describe the general technical framework to simulate honeycomb lattices with the backbone CTMRG procedure described in Sec. II.2. To this end we consider an elementary unit cell of the honeycomb lattice. Here we choose to define it along so-called \(x\)-links, for reasons that become clear soon. Alternatively and equivalently, it could as well be defined along \(y\)- or \(z\)-links. An example with eight different tensors on the honeycomb lattice, corresponding to four elementary unit cells, is shown in Fig. 18.
Coarse-graining the two lattice sites along \(x\)-links of the honeycomb lattice directly results in a square lattice, as shown in Fig. 19. Here, the (mapped) unit cell has size \((L_{x},L_{y})=(2,2)\) with an arrangement as in Eq. (3) and Fig. 1. The green color is used to highlight the coarse-graining along \(x\)-links. In contrast to the regular square lattice, each coarse-grained tensor has two physical indices, which can be reshaped to a single, combined index before feeding it into the CTMRG procedure. A trivial unit cell on the square lattice, consisting of only a single-site tensor, results in two different tensors on the honeycomb lattice. The CTMRG routine can then be run as described above, just with a larger physical dimension. This does not change anything in the contractions, it is just computationally more expensive. Expectation values can now be evaluated accurately using the CTMRG environment tensors. Assuming nearest-neighbour terms again, expectation values along \(x\)-links can be computed by a single-site TN, while \(y\)- and \(z\)-bonds remain two-site TNs, similarly to Fig. 11.

### Kagome lattice

Another important and often encountered lattice in condensed matter physics is the Kagome lattice. It is of special interest due to its corner-sharing triangles, which lead to strong geometric frustration for antiferromagnetic models. Using a simple mapping of the Kagome lattice to a square lattice, we can directly incorporate it into our variational PEPS library. The Kagome lattice is shown in Fig. 20. Naturally, we can define a unit cell of tensors that is repeated periodically over the whole two-dimensional lattice. In our setting we consider an upward triangle on the Kagome lattice as an elementary unit cell, highlighted by the gray dotted area in Fig. 20. By choosing a coarse-graining, we can represent the three lattice sites in the unit cell by a single iPEPS tensor, which connects to its neighbours by four virtual indices. This direct mapping is shown in Fig. 21. Nearest-neighbour links in the Kagome lattice get mapped to nearest-neighbour or second-nearest-neighbour links in the square lattice. Every iPEPS site on the square lattice has a physical dimension of \(p^{3}\). As an alternative mapping, which results in the same coarse-grained TN structure, we can move from the Kagome lattice to its dual, the honeycomb lattice. Here the spins live on the links instead of the vertices. The honeycomb mapping presented in Sec. III.1 is therefore not directly applicable and additional simplex tensors are necessary to connect the lattice sites. This TN structure is shown in Fig. 22; it is commonly known as the infinite _projected entangled simplex state_ (iPESS) [87]. Figure 17: Honeycomb and topologically equivalent brick-wall lattice. Figure 18: iPEPS ansatz on the honeycomb lattice with four elementary unit cells, resulting in eight different lattice sites. \(x\)-, \(y\)- and \(z\)-links denote the three types of inequivalent links in the lattice. Coarse-graining this state to a square lattice results in a \((L_{x},L_{y})=(2,2)\) configuration, with an arrangement as in Eq. (3) / Fig. 1. Figure 19: Using a mapping, the brick-wall lattice is transformed to the square lattice. The green color of the tensors is just to highlight the coarse-graining along \(x\)-links, while \(y\)- and \(z\)-links remain in the network. Figure 20: Regular Kagome lattice. The elementary unit cell, an upward triangle of three spins, is highlighted by the gray dotted area. Figure 21: Regular Kagome lattice mapped to a square lattice by coarse-graining of the three spins in each unit cell.
22, which is commonly known as the infinite _projected entangled simplex state_ (iPESS) [87]. Figure 22: Honeycomb lattice (dual to the Kagome lattice) with spins residing on the lattice links and additional simplex tensors on the lattice sites. Unit cells are highlighted by the gray dotted areas. Upon coarse-graining of the unit cells, the dual honeycomb lattice is mapped to the regular square lattice. Physical indices of the corresponding TN states are not shown. Due to this particular mapping, three Kagome lattice sites (along with two simplex tensors) are coarse-grained into a single iPEPS site on the square lattice. While the mappings in Fig. 21 and Fig. 22 result in the same square lattice TN, they differ in the number of variational parameters in the ansatz. In the direct iPEPS ansatz, every unit cell tensor has \(p^{3}\chi_{B}^{4}\) parameters, while there are only \((3p\chi_{B}^{2}+2\chi_{B}^{3})\) parameters for the iPESS ansatz. Moreover, quantum correlations between lattice sites are exactly captured within the coarse-grained cluster for the iPEPS, whereas they are limited by the bulk bond dimension for the iPESS. In the latter case, however, there is no bias between lattice sites within one cluster and sites belonging to different clusters. The nearest-neighbour interactions on the Kagome lattice are mapped to on-site, nearest-neighbour and next-nearest-neighbour interactions on the square lattice. As a concrete mapping example, which has particular use in the study of the regular Heisenberg model in a magnetic field, we consider the iPEPS configuration \[\mathcal{L}=\begin{pmatrix}A&B&C\\ B&C&A\\ C&A&B\end{pmatrix} \tag{19}\] on the square lattice. This configuration results in the Kagome lattice structure shown in Fig. 23. Figure 23: Kagome lattice structure corresponding to a square lattice unit cell according to Eq. (19). ### Square-Kagome lattice A third lattice that has gained a lot of interest in recent times is the square-Kagome lattice. Similar to the regular Kagome lattice, it features corner-sharing triangles, and it is expected to host exotic quantum phases due to the geometric frustration for antiferromagnetic spin models. The square-Kagome lattice structure is shown in Fig. 24. Naturally, a coarse-graining of the six spins in the elementary unit cell can be used, which directly maps the square-Kagome lattice to a square lattice, as depicted in Fig. 25. Following the same construction as for the regular Kagome lattice, we can generalize the iPESS ansatz to the dual of the square-Kagome lattice, the so-called \((4,8^{2})\) Archimedean lattice. This results in an ansatz with four simplex tensors and six lattice-site tensors per elementary unit cell, as illustrated in Fig. 26. Counting the number of variational parameters in both TN ansatze, we again find a drastic reduction for the iPESS ansatz. Here the iPEPS has \(p^{6}\chi_{B}^{4}\) parameters, while the iPESS only has \((6p\chi_{B}^{2}+4\chi_{B}^{3})\) parameters per elementary unit cell. In Table 1, we illustrate the difference for typical iPEPS bond dimensions, which has a strong influence on the expressivity and optimization of the different TN structures. \begin{table} \begin{tabular}{c c c c} \(\chi_{B}\) & \(p^{6}\chi_{B}^{4}\) & \((6p\chi_{B}^{2}+4\chi_{B}^{3})\) & ratio \\ \hline 2 & 1024 & 80 & 12.8 \\ 3 & 5184 & 216 & 24.0 \\ 4 & 16384 & 448 & 36.6 \\ 5 & 40000 & 800 & 50.0 \\ 6 & 82944 & 1296 & 64.0 \\ 7 & 153664 & 1960 & 78.4 \\ 8 & 262144 & 2816 & 93.1 \\ \end{tabular} \end{table} Table 1: Number of variational parameters (per elementary unit cell) in the iPEPS and iPESS TN ansatz of the square-Kagome lattice for \(p=2\), assuming real tensor elements. Figure 24: Square-Kagome lattice. Similarly to the regular Kagome lattice, it features corner-sharing triangles. The elementary unit cell consists of six sites, as shown in Fig. 25. Figure 25: Regular square-Kagome lattice mapped to a square lattice by coarse-graining the six spins in each elementary unit cell. Figure 26: Square-octagon lattice (dual to the square-Kagome lattice) with spins residing on the lattice links and additional simplex tensors on the lattice sites. Unit cells are highlighted by the gray dotted areas. Upon coarse-graining of the unit cells, the square-octagon lattice is mapped to the regular square lattice. Physical indices of the corresponding TN states are not shown. As in the case of the Kagome lattice, the first coarse-graining captures quantum correlations within the cluster exactly. While this is not the case for the iPESS mapping, it does not introduce a bias for the different lattice sites within and across clusters. Both mappings result in a large physical bond dimension of \(p^{6}\), with \(p\) the Hilbert space dimension of the original degrees of freedom (e.g., \(p=2\) for a spin-\(1/2\)). This makes especially the CTMRG routine computationally expensive. As an example we consider a two-site checkerboard pattern (\((L_{x},L_{y})=(2,2)\) with only two different tensors) on the
square lattice, given by \[\mathcal{L}=\begin{pmatrix}A&B\\ B&A\end{pmatrix}. \tag{20}\] This results in a square-Kagome state with twelve different lattice sites, as shown in Fig. 27. Figure 27: Square-Kagome lattice structure for a square lattice unit cell according to Eq. (20). The ansatz has twelve different lattice sites with two-site translation invariance in both \(x\)- and \(y\)-direction. Assuming nearest-neighbour interactions in the Hamiltonian, the ground state energy can be computed by single-site as well as horizontal and vertical two-site expectation values. ### Triangular lattice The triangular lattice, shown in Fig. 28, is another two-dimensional lattice variant that appears frequently in condensed matter systems. Due to its large connectivity of six nearest neighbours, it is a typical playground for frustrated systems, hosting a variety of different quantum phases. At the same time, the large connectivity makes it more challenging for numerical simulations. The triangular lattice can be directly interpreted as a square lattice with additional diagonal interactions. The entanglement between diagonal sites is then mediated by the regular virtual links in the square lattice tensor network. Nearest-neighbour interactions on the triangular lattice are again mapped to nearest-neighbour and next-to-nearest-neighbour interactions on the coarse-grained square lattice. An alternative TN representation of the triangular lattice can be constructed using again the iPESS ansatz. In contrast to the iPESS for the Kagome and square-Kagome lattices, here the lattice sites have three virtual indices, too. The mapping is visualized in Fig. 29, with the iPESS ansatz forming a honeycomb lattice. Similarly to the first interpretation, this iPESS honeycomb ansatz can be mapped to a regular square lattice with additional next-to-nearest-neighbour interactions. While the first approach has \(p\chi_{B}^{4}\) parameters per unit cell tensor, the iPESS mapping only has \((p\chi_{B}^{3}+\chi_{B}^{3})\) coefficients. Finally, and as an alternative to the previous mappings, a reverse transformation could be used, which involves a fine-graining of the lattice sites [88]. Figure 28: Regular triangular lattice with a connectivity of six, i.e., every lattice site is connected to six nearest neighbours. Figure 29: iPESS ansatz for the triangular lattice consisting of only two tensors per triangular lattice site. When one lattice site and one simplex tensor are combined, the triangular lattice is directly mapped onto a regular square lattice. ### Comments about different structures In general there is no unique way to map a given lattice structure to the square lattice. The different approaches mainly differ in the number of variational parameters. While the energy for an ansatz with fewer parameters can be optimized with fewer resources, an ansatz with a higher variational freedom might be able to capture the physical system more accurately. At the same time, the optimization becomes
more complex due to the need to calculate larger gradients. In practice, choosing the right ansatz depends on the spatial structures of the quantum state, the amount of entanglement present in the system and the required accuracy. One strategy that works well is a step-wise optimization. In the first step one can choose, e.g., an iPESS ansatz with fewer variational parameters. Once an optimized wave function has been found, the iPESS ansatz is coarse-grained into a TN with a higher number of variational parameters, e.g., a direct iPEPS ansatz. A second optimization of this more expressive ansatz might then result in lower ground state energies. In the following sections we will present benchmarks, where several of the lowest data points have been obtained with such a two-step procedure. ## IV Benchmarks and Discussions In this section, we will present benchmarks for a challenging and paradigmatic model on the different currently supported lattices. Due to its prominence and the availability of benchmarks from different numerical techniques, we generally focus on the spin-\(1/2\) Heisenberg anti-ferromagnet. The Heisenberg Hamiltonian is given by \[H=J\sum_{\langle i,j\rangle}\vec{S}_{i}\cdot\vec{S}_{j}\,, \tag{21}\] where \(\langle i,j\rangle\) denotes nearest neighbours and \(\vec{S}_{i}\) are the spin-\(1/2\) operators on the lattice sites. We consider isotropic anti-ferromagnetic interactions at \(J=1.0\) throughout the benchmark section. Variational energies obtained with our implementation are denoted by _"variational update"_ (VU). Where applicable, we include different TN variants (e.g., iPESS and iPEPS) in the numerical benchmarks, to highlight the effect of different numbers of variational parameters. Imaginary time-evolution in the form of a _"simple update"_ (SU) on the different lattice structures can provide initial states for the variational update, as discussed in Sec. II.7.3. Whenever we use initial tensors from the SU, we add a small amount of random noise to the input tensors prior to the variational update, in order to circumvent possible local minima in the imaginary time evolution. In the plots of this section we include the energies calculated by the mean-field environment (MF) used in the simple update. Using this approximation much larger iPEPS bond dimensions are computationally feasible, but we would like to point out that this method is not guaranteed to be variational in the sense that the energy is an upper bound to the ground state energy. Thus, it is only sensible to rigorously compare results for which energy expectation values are computed by CTMRG. We include the non-variational MF energies for higher iPEPS bond dimensions for a rough comparison.
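For concreteness, the two-site bond term of Eq. (21) can be assembled from the spin-\(1/2\) matrices. The following is a minimal numpy sketch of our own (not part of the library interface); the singlet eigenvalue \(-3/4\) serves as a sanity check and is precisely the per-patch quantity \(\lambda_{\min}(h_{j})\) entering the lower-bound discussion below:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
Sx = np.array([[0.0, 0.5], [0.5, 0.0]])
Sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def heisenberg_bond(J=1.0):
    """Two-site term J * S_i . S_j of Eq. (21) as a 4x4 matrix."""
    h = sum(np.kron(S, S) for S in (Sx, Sy, Sz))
    return (J * h).real  # imaginary parts cancel in the S_y (x) S_y term

# Sanity check: the lowest eigenvalue of a single bond is the singlet
# energy -3/4 for J = 1.
assert np.isclose(np.linalg.eigvalsh(heisenberg_bond()).min(), -0.75)
```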
We add for each benchmark a table with the comparison of the results obtained by the simple update simulations and the best result throughout all variational updates for a fixed iPEPS bond dimension \(\chi_{B}\). Both expectation values have been calculated by CTMRG. ### Comments on lower bounds in variational principles As a further conceptual point, it is important to stress that variational principles can be benchmarked as well by resorting to lower bounds to ground state energies. Such lower bounds can be efficiently computed and hold in the thermodynamic limit up to a small constant error in the energy density [89]. If the Hamiltonian \(H\) is seen as being written as a sum of terms \[H=\sum_{j}h_{j} \tag{22}\] where each \(h_{j}\) is a patch that contains as many unit cells as can be accommodated in an exact diagonalization, then \[\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}\geq E_{0}\quad \forall\ |\psi\rangle\,,\ \ \ \ E_{0}\geq\sum_{j}\lambda_{\min}(h_{j}), \tag{23}\] where \(\lambda_{\min}(h_{j})\) denotes the smallest eigenvalue of the patch \(h_{j}\) with open boundary conditions. In this way, the quality of the variational principle giving rise to upper bounds to the ground state energy can be certified by lower bounds. ### Honeycomb lattice For the simulations of the Heisenberg model on the honeycomb lattice we choose a single-site unit cell, consisting of only two different tensors on the honeycomb lattice. A mapping to the square lattice yields a fully translationally invariant iPEPS with a local Hilbert space dimension of \(p^{2}=4\). We optimize the ground states on both TN structures with \(2p\chi_{B}^{3}\) and \(p^{2}\chi_{B}^{4}\) variational parameters, respectively (assuming real tensor coefficients). The model is known to be in a gapless Néel ordered phase [92; 93; 94]. Therefore, high environment bond dimensions \(\chi_{E}\) are required to capture the large correlation lengths of the critical state. Ground state energies are reported in Fig. 30. The critical property of the ground state is already nicely reflected in the significant difference between simple update MF and CTMRG expectation values. The CTMRG environments treat quantum correlations much more carefully, which leads to improved energies for the infinite TN state. The VU provides lower energies than the SU with CTMRG, and our results using the VU are compatible with previous results using variational iPEPS with a different CTMRG procedure [90] as well as with extrapolated and thus non-variational results from the coupled cluster method [91]. Figure 30: Benchmarking results for the isotropic spin-\(1/2\) Heisenberg model on the honeycomb lattice. For comparison we include the variational result obtained by an iPEPS study in Ref. [90]. Additionally, the result calculated by the coupled cluster method in Ref. [91] is shown, which is due to extrapolation not variational either. ### Kagome lattice The Heisenberg model on the Kagome lattice can be considered one of the most enigmatic and well-studied models in the field of frustrated magnetism [96]. While a spin liquid ground state is well established, the actual type of ground state is still under debate, with different methods supporting different states (e.g., \(\mathbb{Z}_{2}\) gapped spin liquid [97; 98], \(U(1)\) gapless spin liquid [95; 99]).
Since the ground state is known to be a spin liquid state that does not form any magnetic ordering down to zero temperature while preserving lattice translation and rotation symmetry, we use the smallest unit cell of only three sites in our simulations. The SU then works on the three-site iPESS ansatz. The VU is performed both on the honeycomb iPESS and on a coarse-grained, fully translationally invariant iPEPS state. The numbers of variational parameters are hence \((3p\chi_{B}^{2}+2\chi_{B}^{3})\) for the iPESS and \(p^{3}\chi_{B}^{4}\) for the iPEPS. Again, the iPEPS state is more expressive and produces lower variational energies that follow a smoother convergence with bond dimension \(\chi_{B}\), see Fig. 31. The ED energy provides a lower bound for the energy, as argued in Sec. IV.1. Our energies are compatible with other state-of-the-art numerical methods, such as the extrapolated iPESS result from Ref. [95], but we would like to point out that the authors noted that their results are not variational and hence the comparison is slightly tainted. Our result showcases the purpose of variational iPEPS optimization for highly frustrated systems to obtain a real upper bound to the ground state energy. Figure 31: Benchmarking results for the isotropic spin-\(1/2\) Heisenberg model on the Kagome lattice. For comparison, we show the outcome obtained by extrapolated iPESS results in Ref. [95], which, to be strict, is not variational as the authors noted. Additionally, we include the result computed by exact diagonalization in Ref. [96]. ### Square-Kagome lattice As a third benchmark model, we simulate the Heisenberg model on the square-Kagome lattice, a lattice that has gained attention as a class of promising quantum spin liquid materials [102]. It consists of corner-sharing triangles that generate a high geometric frustration similar to the Kagome lattice. Its ground state has been found to be non-magnetic; however, the subtle competition between different types of _valence bond crystal_ (VBC) states has only been resolved recently in a TN study [101], in favor of a VBC with loop-six resonances. Simulations of the model are performed for a twelve-site checkerboard unit cell, as shown in Fig. 27. Results for the ground state energy are presented in Fig. 32. Due to the VBC ground state with a small correlation length and an energy gap in the model, the simple update MF and CTMRG energies are nearly identical. The variational update is performed on a so-called semi-PEPS structure as described in Ref. [101] and also on a coarse-grained iPEPS TN as introduced in Fig. 25, a structure that is unfeasible for SU simulations due to the large imaginary time evolution operators. Although the VU cannot significantly improve the ground state energy for the semi-PEPS ansatz, the VU on the full coarse-grained iPEPS structure improves the energies at the same bond dimension \(\chi_{B}\). This is connected to the larger expressivity of the coarse-grained structure. Our results outperform variational Monte-Carlo simulations in Ref. [100] and are comparable to state-of-the-art iPEPS results in Ref. [101].
We emphasize that the latter result is, due to the extrapolation, strictly speaking not variational, so that the comparison is slightly tainted. Figure 32: Benchmarking results for the isotropic spin-\(1/2\) Heisenberg model on the square-Kagome lattice. For comparison, we include the variational Monte-Carlo results presented in Ref. [100]. Additionally, we show the extrapolated iPEPS result obtained in Ref. [101], which, to be strict, is not variational. We stress that the mean-field energies are not variational either, as discussed in Sec. IV. ### Triangular lattice As a last benchmark model we consider the Heisenberg model on the triangular lattice. Due to its connectivity of six, the triangular lattice exhibits a large amount of geometric frustration. The ground state is believed to be a three-sublattice \(120^{\circ}\) magnetically ordered state [104; 105]. The ground state of the Heisenberg model on the triangular lattice is computed using a three-sublattice unit cell arranged in an \(ABC\)-\(BCA\)-\(CAB\) structure. The simple update data has been produced by an iPESS ansatz with the simplices sitting in the upward triangles (see Fig. 29). The VU is performed in two steps, using the converged iPESS state as input for a second, coarse-grained optimization run. The results of our benchmark are shown in Fig. 33. In the case of the triangular lattice it generally helps to add some noise to the SU input state to reach better ground states and energies. We compare against a recent iPESS study based on the simple update [103], which predicts a zero-temperature magnetisation consistent with previous Monte Carlo studies [106], and additionally against a result obtained by the extrapolated, thus non-variational coupled cluster method [91]. We would like to point out that the iPESS result was extrapolated and is, strictly speaking, not variational. Figure 33: Benchmarking results for the isotropic spin-\(1/2\) Heisenberg model on the triangular lattice with an \(ABC\)-\(BCA\)-\(CAB\)\(3\times 3\) unit cell structure. For comparison, we include the extrapolated, thus non-variational coupled cluster results presented in Ref. [91]. Additionally, we show the extrapolated iPESS result obtained in Ref. [103], which, to be strict, is not variational. ### Comments on excited states In this work, we have primarily focused on providing a comprehensive discussion of the use of AD for the study of ground state properties of interacting quantum lattice models. It should go without saying, however, that excited states can be included in a straightforward manner. The study of excited states was first initiated in the realm of matrix product states [107], but has later been generalized to iPEPS [108; 109; 110], allowing for the construction of variational ansatzes for elementary excitations on PEPS ground states that facilitate computing gaps, dispersion relations, and spectral weights in the thermodynamic limit. More recently, automatic differentiation has also found its way into the optimisation of excited states [42]. The central idea is to construct the excited state with momentum \(\vec{k}=(k_{x},k_{y})\) as a superposition of the ground state vector, perturbed by a single tensor \(B\) at position \(\vec{x}=(x,y)\) and appropriate phase factors according to \[\ket{\phi(B)_{\vec{k}}}=\sum_{\vec{x}}\mathrm{e}^{i\vec{k}\cdot\vec{x}}\ket{\phi(B)_{\vec{x}}}. \tag{24}\] The coefficients of tensor \(B\) are then determined by energy minimisation of the excited state, for which AD can again be used [42; 111].
In contrast to the regular ground state optimisation, here the CTMRG routine must be extended to include the appropriate phase factors in the directional absorption. Moreover, instead of only eight environment tensors per iPEPS tensor in the unit cell, the action of \(B\), \(B^{\dagger}\) and the product of \(B\) and \(B^{\dagger}\) has to be tracked in three additional sets of eight tensors. The excited state approach can be directly extended to different lattice geometries. To this end, we have to generalize the absorption of iPEPS tensors (growing the CTMRG transfer tensors \(T_{1}\), \(T_{2}\), \(T_{3}\) and \(T_{4}\)) to include the basis of the lattice, respecting relative phase factors of the basis vectors. Depending on the actual structure of the basis, a separate tensor \(B_{n}\) is chosen as a perturbation for each of the basis sites. Our implementation already contains the main building blocks: a robust and flexible CTMRG routine, the calculation of gradients using AD at the fixed-point, and the minimisation of an energy cost function. The extension of the framework to include excited states is therefore natural. It is planned as a future feature. ### Comments on fermionic systems As a final comment we stress that, for clarity and to be concise, we have focused in our presentation on quantum spin models. It should be clear, however, that the machinery developed here readily carries over to the study of _interacting fermionic systems_, with little modification. Naively, one might think that the simulation of two-dimensional fermionic models is marred by substantial overheads that emerge when invoking a fermion-to-spin mapping. This is, however, not the case, and the respective book-keeping of the signs can be done with negligible overhead [112; 113]. On the formal level, such tensor networks involve a particular choice of what is called a spin structure [114; 115]. Practically speaking, one can modify much of the bosonic code for PEPS to the fermionic setting, readily incorporating the relevant signs to capture interacting fermions, in what is called _fermionic PEPS_ [112; 116; 117]. This insight is important as some of the most compelling test cases of interacting quantum many-body systems are of a fermionic nature. ## V Conclusion and prospects In this work we present a comprehensive introduction to automatic differentiation in the context of two-dimensional tensor networks, leading to the recently emerging variational iPEPS framework. We extend the literature on obstacles that arise in practice, as well as on techniques to mitigate them. At the same time, we coherently present ideas that have to date only been mentioned in a fragmented fashion in the literature. We hope that the present work can serve as a useful reference and review in the variational study of 2d tensor networks. This work accompanies the variational iPEPS library variPEPS, a comprehensive and versatile code base for optimizing iPEPS in a general setting. We expect this library to be a helpful tool for performing state-of-the-art tensor network analyses for a wide range of physical models, featuring multiple two-dimensional lattices. It should be a powerful tool for studying the properties of strongly interacting quantum many-body systems. We have incorporated into the library techniques such as iterative sparse solvers in the context of AD, in particular the Golub-Kahan-Lanczos (GKL) bidiagonalization algorithm.
Additionally, we propose an improved mechanism that heuristically and automatically determines the choice of the refinement parameters. This allows us to further stabilize the numerically challenging iPEPS simulations, thus pushing the boundaries of the feasible problems we can study. The project is intended as a code base for further development. As such, the code is prepared to incorporate extended applications of automatic differentiation. For instance, the calculation of low energy excitations on different lattices would be a valuable addition. In addition, the calculation of structure factors could be included by summing \(n\)-point correlation functions or using generating functions for iPEPS. Finally, the incorporation of fermions would be a significant improvement in the future. ### Code release As the code is still in its final development, it is initially only available on request and/or in collaboration. It will be fully openly released within the CRC 183 funding period. ### CO\({}_{2}\)-emissions table For the sake of completeness and for promoting carbon footprint awareness, we display an estimated lower bound of the carbon emissions generated during the course of this work in Table 2. ###### Acknowledgements. We acknowledge inspiring discussions with Ji-Yao Chen, Andreas Haller, Juraj Hasik, Augustine Kshetrimayum, Alexander Nietner and Niklas Tausendpfund. We would like to particularly thank Boris Ponsioen, who has shared valuable insights, and Frederik Wilde, who has helped us to get the details of the automatic differentiation and the custom fixed-point derivative correct. E. L. W. thanks the Studienstiftung des deutschen Volkes for support. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 277101999 - CRC 183 (project B01), for which this constitutes an inter-node publication involving both Cologne and Berlin, and by the BMBF (MUNIQC-Atoms, FermiQP). It has also received funding from the Cluster of Excellence MATH+ and from the Quantum Flagship (PasQuanS2). We would like to thank the ZEDV (IT support) of the physics department, Freie Universität Berlin, for computing time and their technical support; particularly we thank Jörg Behrmann and Jens Dreger. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at the Jülich Supercomputing Centre (JSC) (Grant NeTeNeSyQuMa) and the FZ Jülich for JURECA (institute project PGI-8) [119]. For the Python version we thank the developers of the JAX framework [120; 121] for their work to provide an AD framework with optimized numerical operations, and for their technical support during the development of this work. We also acknowledge the use of the TensorKit package [122] in the Julia version of the code and wish to advertise the open source libraries of the Quantum Ghent group in this context [123]. We make use of the Zygote [124] package for AD in the Julia programming language. The work has been discussed and refined during the workshop "Tensor Networks: Mathematical Structures and Novel Algorithms (2022)" at the Erwin Schrödinger International Institute for Mathematics and Physics in Vienna and the workshop "Entanglement in Strongly Correlated Systems (2023)" at the Centro de Ciencias de Benasque Pedro Pascual. We thank the organizers for their hospitality and work.
\begin{table} \begin{tabular}{l c} \hline \hline **Numerical simulations** & \\ \hline Total Kernel Hours [h] & \(\geq 255276\) \\ Thermal Design Power Per Kernel [W] & \(12\) \\ Total Energy Consumption Simulations [kWh] & \(\geq 3063\) \\ Average Emission Of CO\({}_{2}\) In Germany [kg\(/\)kWh] & 0.441 \\ Total CO\({}_{2}\)-Emission For Numerical Simulations [kg] & \(\geq 1351\) \\ Were The Emissions Offset? & **Yes** \\ \hline **Air Travel** & \\ \hline Total CO\({}_{2}\)-Emission For Air Travel [kg] & 924 \\ Were The Emissions Offset? & **Yes** \\ \hline Total CO\({}_{2}\)-Emission [kg] & \(\geq 2275\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the estimated lower bound of the carbon cost generated during the development of this work. The estimations have been calculated using the examples of the Scientific CO\({}_{2}\)nduct project [118] and include the costs of the numerical calculations and air travel for collaborations. ## Appendix A Adjoint functions and variables In the literature it is common to use so-called _adjoint functions_ and _adjoint variables_ when using backwards-mode AD. These adjoint functions map the adjoint variables onto each other, as in Eq. (15), when building up the gradient. In this section, we will briefly introduce the basic notation of adjoint functions and variables following Ref. [125]. Explicit constructions of adjoint functions, which are vector-Jacobian products in the practical implementation, for a large number of useful operations, including those for the iPEPS use-case, can be found in Refs. [125; 126; 127]. As an example throughout this section, we consider the function \(h\), composed out of two primitive functions \(h_{1}\) and \(h_{2}\) which are concatenated as \[\begin{split}& h=h_{2}\circ h_{1},\\ & h_{1}:M_{n\times n}\times M_{n\times n}\to M_{n\times n},\\ & h_{2}:M_{n\times n}\to\mathbb{R},\end{split} \tag{17}\] with variables \((A,B)\in M_{n\times n}\times M_{n\times n}\), \(C\in M_{n\times n}\) and \(x\in\mathbb{R}\). We start by examining the differential of the output variable \(x\), \[dx=\frac{\partial h_{2}}{\partial C}dC=:\sum_{i,j}\bar{C}_{i,j}dC_{i,j}=\mathrm{Tr}(\bar{C}^{\mathsf{T}}dC). \tag{18}\] In the first equality, we have suppressed the sum over the indices of \(C\). Eq. (18) defines the adjoint variable \(\bar{C}\) of \(C\). We see that the adjoint variable \(\bar{C}\) is the derivative of the scalar output of the function \(h_{2}\) w.r.t. \(C\). Thus, for the case of a scalar output, the variable \(C\) and the adjoint variable \(\bar{C}\) have the same dimension. Now, in order to get the gradient \(\nabla h\) we are interested in the derivative of the output w.r.t. the input variables \((A,B)\). To this end we consider the differential of the intermediate variable \[dC=\frac{\partial h_{1}}{\partial A}dA+\frac{\partial h_{1}}{\partial B}dB. \tag{19}\] Inserting this into Eq. (18), we obtain \[dx=\mathrm{Tr}\bigg{(}\underbrace{\bar{C}^{\mathsf{T}}\frac{\partial h_{1}}{\partial A}}_{\bar{A}^{\mathsf{T}}}dA\bigg{)}+\mathrm{Tr}\bigg{(}\underbrace{\bar{C}^{\mathsf{T}}\frac{\partial h_{1}}{\partial B}}_{\bar{B}^{\mathsf{T}}}dB\bigg{)}.
\tag{20}\] Here we have already implicitly used the adjoint function \(\bar{h}_{1}\) that maps the adjoint variable \(\bar{C}\) to the adjoint variables \(\bar{A}\) and \(\bar{B}\) according to \[\bar{h}_{1}:\bar{C}^{\mathsf{T}}\mapsto(\bar{A}^{\mathsf{T}},\bar{B}^{\mathsf{T}})^{\mathsf{T}}=\bigg{(}\bar{C}^{\mathsf{T}}\frac{\partial h_{1}}{\partial A},\bar{C}^{\mathsf{T}}\frac{\partial h_{1}}{\partial B}\bigg{)}^{\mathsf{T}}. \tag{21}\] Given the fact that we are dealing with a scalar output variable \(x\), we recall that \(\bar{C}\) can be considered a vector, such that the adjoint function is a vector-Jacobian product (vJP). We can see that this mapping of the adjoint variables with adjoint functions eventually produces the gradient \[\begin{split}\nabla h=\big{(}\bar{A},\bar{B}\big{)}&=\bigg{(}\frac{\partial h_{1}}{\partial A}\bar{C},\frac{\partial h_{1}}{\partial B}\bar{C}\bigg{)}\\ &=\bigg{(}\frac{\partial h_{1}}{\partial A}\frac{\partial h_{2}}{\partial C},\frac{\partial h_{1}}{\partial B}\frac{\partial h_{2}}{\partial C}\bigg{)}\,.\end{split} \tag{22}\] ## Appendix B Automatic differentiation for complex variables Some extra attention has to be given to the case in which the primitive functions are complex-valued. This is because not all functions one might want to consider are complex-differentiable (holomorphic), and as such the derivative depends on the direction in which we move in the complex plane when taking the limit for the derivative. In such a case one needs to resort to the calculus of two sets of independent real variables. For a generic function \(f:\mathbb{C}\to\mathbb{C}\) this can be done by treating \(x\) and \(y\) in \(z=x+iy\) as independent variables or, alternatively, by choosing \(z\) and \(z^{*}\) and making use of Wirtinger calculus. However, we should also note that in the iPEPS use case we deal with a function \(E:\mathbb{C}^{n}\to\mathbb{R}\), which removes the necessity to think about holomorphy. ## Appendix C The implicit function theorem and its use at the CTMRG fixed-point In this section, we are going to present an alternative approach to taking the derivative of the energy function by utilizing the fixed point of the CTMRG procedure. To this end, we can make use of the implicit function theorem [128] to calculate the derivative of the full fixed-point routine. Our discussion will follow the description of Refs. [129; 130]. Differentiating Eq. (16) on both sides we end up with \[\partial_{A}e^{*}(A)=\partial_{A}c(A,e^{*})+\partial_{e^{*}}c(A,e^{*})\partial_{A}e^{*}(A). \tag{23}\] Introducing the shorthand writing for the Jacobians \(L=\partial_{A}c(A,e^{*}(A))\) and \(K=\partial_{e^{*}}c(A,e^{*}(A))\) and rearranging the equation, we find \[\begin{split}\partial_{A}e^{*}(A)&=L+K\,\partial_{A}e^{*}(A)\\ &=\left(\sum_{n=0}^{\infty}K^{n}\right)L=(\mathds{1}-K)^{-1}L.\end{split} \tag{24}\] As discussed in Appendix A, we aim at finding the adjoint function of the CTMRG iteration at the fixed point, which is a _vector-Jacobian product_ (vJP) \(\mathbf{v}^{\mathsf{T}}\partial_{A}e^{*}(A)\). Inserting Eq. (24) yields \[\mathbf{v}^{\mathsf{T}}\partial_{A}e^{*}(A)=\mathbf{v}^{\mathsf{T}}(\mathds{1}-K)^{-1}L=\mathbf{w}^{\mathsf{T}}L, \tag{25}\] where we have introduced \(\mathbf{w}^{\mathsf{T}}:=\mathbf{v}^{\mathsf{T}}(\mathds{1}-K)^{-1}\). The second equality in the equation above can be rearranged into another fixed-point equation \[\mathbf{w}^{\mathsf{T}}=\mathbf{v}^{\mathsf{T}}+\mathbf{w}^{\mathsf{T}}K.
\tag{26}\] Here \(\mathbf{w}^{\mathsf{T}}K\) is another vJP, but this time only dependent on the derivative of a single absorption step evaluated at the fixed-point of the CTMRG routine. Solving Eq. (26) we can find \(\mathbf{w}^{\mathsf{T}}\), which allows us to calculate the vJP of the CTMRG routine from Eq. (25). In the end, we have reduced the naive effort of unrolling the fixed-point iterations to calculating the derivative of a single CTMRG iteration plus another fixed-point iteration, both of which are much less memory intensive. ## Appendix D Automatic differentiation in the language of differential geometry In order to unify the different frameworks for thinking about forward- and backwards-mode AD, we will briefly introduce a mathematical notation for AD. It also serves to give some more precise meaning to the terms "push-forward" and "pullback", which are sometimes used in forward- and backwards-mode AD discussions, respectively. For this we first recall the general concept of a push-forward and a pullback for the simple case of functions and distributions. Imagine two functions \(f:M\to N\) and \(g:N\to\mathbb{R}\). The _pullback_ of \(g\) along \(f\) allows us to construct a function \(f^{*}g:M\to\mathbb{R}\) for which the domain of the function \(g\) is "pulled back" to the domain of the function \(f\). This is done by a simple concatenation of \(f\) and \(g\), \[f^{*}g(\underbrace{m}_{\in M})=(g\circ f)(m)=g(f(m)). \tag{101}\] This construction can now be used to define a _push-forward_ on the dual objects of the functions under integration. These dual objects are distributions. With a distribution, we can integrate a function, \[\begin{split}\int_{M}\bullet\mu:\mathcal{F}(M)&\to\mathbb{R},\\ f&\mapsto\int_{M}f\mu,\end{split} \tag{102}\] where \(\mathcal{F}(M)\) are just the functions on \(M\) and \(\mu\) is the distribution. Given such a distribution on \(M\) we can now integrate functions on \(M\). The push-forward \(f_{*}\mu\) of \(\mu\) allows us to integrate functions on \(N\) by defining a distribution that is "pushed forward" to \(N\). This works as \[\int_{N}h(f_{*}\mu)=:\int_{M}(f^{*}h)\mu, \tag{103}\] where \(h\) is a function on \(N\). This type of construction for the pullback and push-forward generalizes to many mathematical objects that have a _pairing dual_. The relevant mathematical objects for AD are the derivative \(\partial/\partial x_{i}\) and its pairing dual, the differential \(dx_{i}\). It might be useful, beyond the conceptual clarity of this notation, to look at AD in this way because one can easily imagine situations where the intermediate data of a function is restricted by constraints such that the "data-space" becomes geometrically non-trivial. An example could be vectors in \(\mathbb{R}^{n}\) restricted to unit length or matrices in \(M_{n,m}\) restricted to be unitary. We note that an optimisation in these situations requires some additional concepts, like finding a path on the given space from a tangent vector. This requires some extra care and is not discussed here. We now introduce the mathematical notation that we need in order to talk about AD in this language. We will not be particularly rigorous in this endeavour and leave out all details that are not explicitly needed. We start with a manifold \(M\) on which we can consider points \(p\in M\), as well as functions \(f:M\to\mathbb{R}\). For each point \(p\in M\) we can define a vector space \(T_{p}M\) (call it the tangent-space at \(p\)) of tangent-vectors at that point.
The elements in \(T_{p}M\) act like derivatives on functions on \(M\), \[\text{e.g.:}\ \ \frac{\partial}{\partial x_{i}}=e_{i}\in T_{p}M,\ \ \frac{\partial}{\partial x_{i}}(f)=\frac{\partial f}{\partial x_{i}}.\] Here we have assumed that we have equipped the manifold \(M\) with coordinates via a chart \(\phi:M\to\mathbb{R}^{m}\) around the point \(p\), where \(m=\dim(M)\). Our tangent-space \(T_{p}M\) has dimension \(m\) and we can choose a canonical basis \[\left\{\frac{\partial}{\partial x_{1}},\ldots,\frac{\partial}{\partial x_{m}}\right\}=\{e_{1},\ldots,e_{m}\}.\] One further defines the dual vector space \(T_{p}^{*}M\) of the tangent vector space, called the cotangent-space. This cotangent-space contains the dual vectors to the derivatives \(\frac{\partial}{\partial x_{i}}\). These cotangent vectors from the cotangent-space are the differentials \(dx_{i}\). The cotangent-space also has dimension \(m\) and we can choose the canonical basis \[\{dx_{1},\ldots,dx_{m}\}.\] Obviously, given the canonical bases for the tangent-space and cotangent-space, we can expand arbitrary vectors in these spaces in the basis. Taking \(v\in T_{p}M\) and \(df\in T_{p}^{*}M\), we can expand as \[v=\sum_{i}v_{i}\frac{\partial}{\partial x_{i}}=\sum_{i}v_{i}e_{i}, \tag{104}\] \[df=\sum_{i}\frac{\partial f}{\partial x_{i}}dx_{i}. \tag{105}\] We have a pairing between the derivatives that live in the tangent-space \(T_{p}M\) and the differentials that live in \(T_{p}^{*}M\), \[dx_{j}\left(\frac{\partial}{\partial x_{i}}\right):=\frac{\partial x_{j}}{\partial x_{i}}=\delta_{i,j}. \tag{106}\] Note that by this pairing relation we see that tangent and cotangent vectors are "pairing duals" and we can use an analogous construction for pullbacks and push-forwards as we did for functions and distributions above. Since \(T_{p}M\) and \(T_{p}^{*}M\) are isomorphic, we can introduce a correspondence transformation between the canonical bases of the two spaces \[\bullet^{\flat}:T_{p}M\to T_{p}^{*}M,\ \ e_{i}\mapsto dx_{i}=e_{i}^{\flat}, \tag{107}\] \[\bullet^{\sharp}:T_{p}^{*}M\to T_{p}M,\ \ dx_{i}\mapsto e_{i}=dx_{i}^{\sharp}. \tag{108}\] We now have assembled all necessary tools to formulate what a "gradient" is in this language. It is given by \[\nabla f:=(df)^{\sharp}, \tag{108}\] which matches the common formula \[\nabla f=\left(\sum_{i}\frac{\partial f}{\partial x_{i}}dx_{i}\right)^{\sharp}=\sum_{i}\frac{\partial f}{\partial x_{i}}e_{i} \tag{109}\] \[=\left(\frac{\partial f}{\partial x_{1}},\dots,\frac{\partial f}{\partial x_{m}}\right),\] where we have taken \(e_{i}\) just as the \(i\)-th unit vector of \(T_{p}M\). Now it is easy to construct the pullbacks and push-forwards in this context analogous to our treatment of functions and distributions. For this we start from manifolds \(M\) and \(N\) with points \(p\in M\) and \(q\in N\), and with the two functions \(f:M\to N\) and \(g:N\to\mathbb{R}\). We can consider a differential \(dg\in T_{q}^{*}N\) which we want to "pull back" along the function \(f\) and associate it with an element of \(T_{f^{-1}(q)}^{*}M\), where \(f^{-1}(q)\in M\). We do this with the familiar definition \[\underbrace{f^{*}dg}_{\in T_{f^{-1}(q)}^{*}M}:=d(g\circ f), \tag{110}\] which uses a concatenation of \(f\) and \(g\) just as in the first example. For a tangible example, consider \(g=x_{i}\) to be a coordinate function. We then get \(f^{*}dx_{i}=d(x_{i}\circ f)=d(f_{i})\). As before, the push-forward can be defined via the pullback just as we had done for functions and distributions.
In this case, we start with a tangent vector \(\frac{\partial}{\partial x_{i}}\) in \(T_{p}M\) and want to "push it forward" along \(f\) into \(T_{f(p)}N\). This works as \[\underbrace{f_{*}\left(\frac{\partial}{\partial x_{i}}\right)}_{\in T_{f(p)}N}(g):=\frac{\partial}{\partial x_{i}}(f^{*}g)=\frac{\partial}{\partial x_{i}}(g\circ f). \tag{111}\] Now that we are equipped with the pullback and push-forward of differentials and derivatives, we see how the gradient is calculated in forward- and backwards-mode AD. For this we will go back to our neat example from Sec. II.4 and slightly generalize it. Say we would like to take the gradient \(\nabla E\) of a function that is composed of three primitive functions, \(E=f_{3}\circ f_{2}\circ f_{1}\). We say these primitive functions map between manifolds \[E:M_{1}\overset{f_{1}}{\longmapsto}M_{2}\overset{f_{2}}{\longmapsto}M_{3}\overset{f_{3}}{\longmapsto}\mathbb{R}. \tag{112}\] Let's first look at what happens when we build the gradient using backwards-mode AD. In this case we start with the differential \(df_{3}\) of the last primitive function of \(E\). This differential lives in \(T_{k}^{*}M_{3}\), where \(k\in M_{3}\) is a point in \(M_{3}\). We can now use the pullback along the functions \(f_{2}\) and then \(f_{1}\) to pull this differential back to \(M_{1}\), \[df_{3}\overset{\text{pullback}}{\longmapsto}f_{2}^{*}(df_{3})\overset{\text{pullback}}{\longmapsto}f_{1}^{*}(f_{2}^{*}(df_{3})). \tag{113}\] With the definitions above we see that in this way we construct the gradient \[f_{1}^{*}(f_{2}^{*}(df_{3}))=f_{1}^{*}(d(f_{3}\circ f_{2}))=d(f_{3}\circ f_{2}\circ f_{1})=dE. \tag{114}\] With our identification between tangent and cotangent vectors we finalize to \(\nabla E=(dE)^{\sharp}\). If we express the differential that we start from, \(df_{3}\), in coordinates, we straightforwardly obtain the product of Jacobians as a result for the gradient. This also establishes the connection to the adjoint functions we talked about in the previous section and the vector-Jacobian product as discussed in Sec. II.4. In the case of forward-mode AD we start from a tangent vector \(\frac{\partial}{\partial x_{i}}\), which lives in \(T_{l}M_{1}\), where \(l\in M_{1}\) is a point in \(M_{1}\). We can now push this tangent vector forward into a tangent space of \(M_{3}\) with successive push-forwards along \(f_{1}\) followed by \(f_{2}\), \[\frac{\partial}{\partial x_{i}}\overset{\text{push-forward}}{\longmapsto}f_{1*}\left(\frac{\partial}{\partial x_{i}}\right)\overset{\text{push-forward}}{\longmapsto}f_{2*}\left(f_{1*}\left(\frac{\partial}{\partial x_{i}}\right)\right). \tag{115}\] With the definitions for the push-forward we see that the gradient we obtain in this way is given by \[\begin{split}\sum_{i}f_{2*}\left(f_{1*}\left(\frac{\partial}{\partial x_{i}}\right)\right)(f_{3})\;e_{i}&=\sum_{i}f_{1*}\left(\frac{\partial}{\partial x_{i}}\right)(f_{3}\circ f_{2})\;e_{i}\\ &=\sum_{i}\frac{\partial}{\partial x_{i}}(\underbrace{f_{3}\circ f_{2}\circ f_{1}}_{=E})\;e_{i}\\ &=\nabla E.\end{split} \tag{116}\]
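These two chains can be reproduced directly in an AD framework. The following JAX sketch (our own illustration; the concrete functions \(f_{1},f_{2},f_{3}\) are placeholders) computes \(\nabla E\) once via successive vJPs, i.e., pullbacks, and once via JvPs, i.e., push-forwards:

```python
import jax
import jax.numpy as jnp

# E = f3 o f2 o f1 as in Eq. (112); the concrete functions are placeholders.
f1 = lambda x: jnp.sin(x)            # M1 -> M2
f2 = lambda y: y ** 2                # M2 -> M3
f3 = lambda z: jnp.sum(jnp.cos(z))   # M3 -> R

E = lambda x: f3(f2(f1(x)))
x = jnp.arange(3.0)

# Backwards mode: pull the differential df3 back along f2, then f1 (vJPs).
y1, pullback_f1 = jax.vjp(f1, x)
y2, pullback_f2 = jax.vjp(f2, y1)
_, pullback_f3 = jax.vjp(f3, y2)
(ct3,) = pullback_f3(1.0)        # df3, expressed in coordinates
(ct2,) = pullback_f2(ct3)        # f2*(df3)
(grad_bwd,) = pullback_f1(ct2)   # f1*(f2*(df3)) = dE, cf. Eq. (114)

# Forward mode: push each basis tangent vector d/dx_i forward (JvPs),
# assembling the gradient component-wise, cf. Eq. (116).
grad_fwd = jnp.stack([jax.jvp(E, (x,), (e,))[1] for e in jnp.eye(3)])

assert jnp.allclose(grad_bwd, jax.grad(E)(x))
assert jnp.allclose(grad_fwd, grad_bwd)
```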
2303.07427
Forming and Controlling Hitches in Midair Using Aerial Robots
The use of cables for aerial manipulation has been shown to be a lightweight and versatile way to interact with objects. However, fastening objects using cables is still a challenge and human intervention is required. In this work, we propose a novel way to secure objects using hitches. The hitch can be formed and morphed in midair using a team of aerial robots with cables. The hitch's shape is modeled as a convex polygon, making it versatile and adaptable to a wide variety of objects. We propose an algorithm to form the hitch systematically. The steps can run in parallel, allowing hitches with a large number of robots to be formed in constant time. We develop a set of actions to change the shape of the hitch. We demonstrate our methods using a team of aerial robots via simulation and actual experiments.
Diego S. D'Antonio, Subhrajit Bhattacharya, David Saldaña
2023-03-13T19:05:18Z
http://arxiv.org/abs/2303.07427v1
# Forming and Controlling Hitches in Midair Using Aerial Robots ###### Abstract The use of cables for aerial manipulation has been shown to be a lightweight and versatile way to interact with objects. However, fastening objects using cables is still a challenge and human intervention is required. In this work, we propose a novel way to secure objects using hitches. The hitch can be formed and morphed in midair using a team of aerial robots with cables. The hitch's shape is modeled as a convex polygon, making it versatile and adaptable to a wide variety of objects. We propose an algorithm to form the hitch systematically. The steps can run in parallel, allowing hitches with a large number of robots to be formed in constant time. We develop a set of actions to change the shape of the hitch. We demonstrate our methods using a team of aerial robots via simulation and actual experiments. ## I Introduction From ancient times, humans have been familiar with the use of ropes to secure and transport objects. There is an old trace indicating that Neanderthals used twisted fiber to tie objects up [1]. Ropes have thus been used since even before the invention of the wheel, and nowadays we can see ropes, cables, and strings everywhere, in all types of applications. Although humans have widely used them, their use in robotics has been very limited due to their high complexity. A cable has infinitely many possible shapes, also known as configurations, which offers high versatility but at the same time complicates its analysis and computation. In aerial manipulation, external mechanisms, such as lightweight grippers [2, 3, 4] and robot arms [5, 6, 7, 8], have been added to interact with objects and the environment. However, the attachment of an external mechanism on an aerial vehicle increases its system complexity [9], changing its inertia, center of mass, and overall weight. In contrast to those types of mechanisms, ropes are lightweight and low-cost. The use of cables in aerial manipulation has existed for more than a decade, and there are significant contributions to the state of the art [10, 11]. Specifically, cables attached to quadrotors have received considerable research attention due to their capabilities and versatility. For instance, a quadrotor is constrained by its maximum thrust in object transportation, but multiple quadrotors with cables can combine forces and increase the actual capacity [12]. Suspended load transportation has been studied with a single cable and with multiple cables [13, 14, 15, 16, 17]. Although there is a significant amount of existing research on suspended load transportation, most approaches assume that the connection between the quadrotor and the load is made in a previous stage. That is a bottleneck in autonomous transportation because it requires human intervention. Humans interlace cables to form hitches and knots in their daily life for multiple purposes, especially to tie, hold, and carry objects. Cowboys use hitches to tie horses or hold objects with multiple interconnected ropes [18]. A wide classification of hitches can be found in [19]. We highlight interesting physical hitches, such as the single diamond and the Marline hitch; both use multiple cable intersections to hold objects. In this manuscript, we do not make any distinction between ropes and cables. The seminal work that introduced aerial robots weaving multiple cables to create hitches was presented in [20]. The authors focused on forming tensile structures such as bridges.
In our previous work, we introduced the catenary robot, a pair of quadrotors that control a hanging cable that describes a catenary curve. This vehicle is used for non-prehensile manipulation with hook-shaped objects [21] and cuboid objects [22, 23]. Since some objects require fastening for transportation purposes, we developed an algorithm to create knots in midair [24]. While tying a knot can effectively secure the object, the autonomous knot release is still difficult. Consequently, a fully autonomous transportation system using cables remains an open area of research. In this work, we propose a novel type of hitch and a set of actions to form it and morph its shape in midair using multiple catenary robots (see Fig. 1). Fig. 1: Six quadrotors forming a triangular polygonal hitch. Video available at: [https://youtu.be/gBVJPY7ilzc](https://youtu.be/gBVJPY7ilzc) The hitch is defined by a polygon, making it versatile and adaptable to a wide variety of objects. Depending on the cross-sectional shape of the object, we can choose a suitable convex polygon (some polygonal hitches are illustrated in Fig. 2). For instance, a box can be transported with a square hitch. The main contribution of this paper is twofold. _i)_ We introduce and formalize a new type of hitch that can be formed using aerial robots. The maximum size of the hitch is scalable by increasing the number of catenary robots that form it. _ii)_ We propose a new set of actions that includes an algorithm to form the hitch and actions to change its shape. Our algorithm to form the hitch can run in parallel, allowing hitches with a large number of robots to be formed in constant time. Additionally, our solution is computationally efficient and runs on actual robots. ## II Problem statement Consider a team of _catenary robots_ [21], where each robot is composed of two quadcopters attached to the ends of a flexible non-stretchable cable (see Fig. 3(a)). The catenary robots can interlace their cables by passing along each other (see Fig. 3(b)). The interaction of multiple catenary robots forming a hitch creates a manipulation tool for aerial robots that is lightweight, versatile, and adjustable (see Fig. 3(c)). We define a world frame in \(\mathbb{R}^{3}\), denoted by \(\{\mathcal{W}\}\), which is fixed, and its \(z\)-axis points upwards. Although our analysis uses a plane as the workspace, notice that the robots can move freely in three dimensions, and the planar workspace can be placed anywhere in the three-dimensional space. **Polygonal-hitch:** Hitches have been studied for many years [19, 25, 26] and there are several types, but in this paper, we focus on a specific type of hitch that can be formed using aerial robots, which we call the _polygonal hitch_. Based on a convex polygon, defined by a set of \(n\) vertices on the Euclidean plane, i.e., \(\mathcal{P}=\{\mathbf{p}_{k}\in\mathbb{R}^{2},\,k=1,\ldots,n\}\), we want to use \(n\) catenary robots to interlace their cables and form a hitch with the shape of the polygon \(\mathcal{P}\). The vertices in \(\mathcal{P}\) are numbered in increasing, clockwise order (see Fig. 4). The \(k\)th edge of the polygon \(\mathcal{P}\) goes from vertex \(\mathbf{p}_{k}\) to vertex \(\mathbf{p}_{k+1}\). The cable of each catenary robot is used to form an edge of the polygon, and therefore, we enumerate the robots according to the edge to which they belong.
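To make these conventions concrete, the following minimal Python sketch (our own illustration with hypothetical helper names, not the authors' implementation) constructs a clockwise polygon and checks the cable-length condition \(\|\mathbf{p}_{k+1}-\mathbf{p}_{k}\|<L_{k}\) that is assumed below:

```python
import numpy as np

def regular_polygon(n, radius=1.0):
    """Vertices p_1, ..., p_n of a regular n-gon, in clockwise order."""
    angles = -2.0 * np.pi * np.arange(n) / n  # negative sign -> clockwise
    return radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def cables_long_enough(P, L):
    """Check ||p_{k+1} - p_k|| < L_k for every edge (indices cyclic in k)."""
    edge_len = np.linalg.norm(np.roll(P, -1, axis=0) - P, axis=1)
    return bool(np.all(edge_len < np.asarray(L)))

P = regular_polygon(3, radius=0.5)               # triangular hitch as in Fig. 1
print(cables_long_enough(P, L=[1.0, 1.0, 1.0]))  # True
```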
Assuming that each end of the cable is attached to the center of mass of a quadrotor, we abstract the two quadrotors of a catenary robot as points, denoted by \(\mathbf{q}_{k}\) and \(\mathbf{r}_{k}\). In this way, the points \((\mathbf{q}_{k},\,\mathbf{r}_{k})\) also represent the end points of the cable. The length of the cable is \(L_{k}\) and we assume that the polygon satisfies \(\|\mathbf{p}_{k+1}-\mathbf{p}_{k}\|<L_{k}\). Notice that vertex \(\mathbf{p}_{k}\) has two adjacent quadrotors located at \(\mathbf{r}_{k-1}\) and \(\mathbf{q}_{k}\). Due to the cyclic nature of the system, any index greater than \(n\) will be reduced modulo \(n\) (plus \(1\) in order to make the indices start at \(1\) rather than at \(0\)). Thus, for notational convenience, whenever we refer to an index \(k+1\), we mean the index (\(k\mod n\)) \(+1\). **Hitch configuration:** The configuration of a polygonal-hitch is determined by the location of the quadrotors \(\big{(}\mathbf{q}_{k},\mathbf{r}_{k}\big{)}\), and the vertices of the polygon \(\mathbf{p}_{k}\) for \(k=1,\ldots,n\), denoted by the set \[\mathcal{C}=\{(\mathbf{q}_{k},\mathbf{r}_{k},\mathbf{p}_{k}),\,k=1,\ldots,n\}.\] Each cable is always in tension and can be represented by three straight lines. Assuming negligible friction at the interaction between the cables, the tension is uniform over the entirety of each cable. We denote the tension on the \(k\)-th cable by \(T_{k}\). The polygonal-hitch offers a new versatile way to interact with and manipulate objects, since its geometric representation allows rotation, translation and shape adaptation. In this work, we focus on finding a class of polygonal hitches that can be formed and controlled systematically. **Problem 1**.: _Given a polygon \(\mathcal{P}\) with \(n\) vertices and a team of \(n\) catenary robots, design a strategy to form a polygonal-hitch and a set of actions to be able to change its shape on the fly._ Fig. 2: Polygonal hitches. Fig. 3: Stages of forming a hitch with aerial robots: (a) free catenary robots, (b) weaving a cable, and (c) controlling the hitch shape. Fig. 4: Notations involved in describing a general polygonal-hitch. ## III Polygonal-Hitches in Equilibrium In this section, we propose a class of polygonal-hitches that can maintain their shape in midair. The key is to analyze the configurations that lead the tensions of the cables to an equilibrium. Based on a static analysis, we design a practical solution to find a configuration \(\mathcal{C}\) of a polygonal-hitch for a given polygon \(\mathcal{P}\). Examples of this class of flying hitches can be seen in Fig. 2. ### _Equilibrium Analysis of a Static Hitch_ A vertex is formed by interlacing two cables, forming an x-like shape with four tensions (see Fig. 5). We analyze the tensions at the \(k\)-th vertex to be in force equilibrium. The direction of the \(k\)th edge of the polygon, \((\mathbf{p}_{k},\mathbf{p}_{k+1})\), is denoted by the unit vector \[\mathbf{u}_{k}=\frac{\mathbf{p}_{k+1}-\mathbf{p}_{k}}{\left\|\mathbf{p}_{k+1}-\mathbf{p}_{k}\right\|}.
\tag{1}\] The \(k\)-th vertex at location \(\mathbf{p}_{k}\) is in equilibrium when all the tensions add up to zero, meaning that the following equation has to be satisfied, \[T_{k}\mathbf{\widetilde{q}}_{k}+T_{k}\mathbf{u}_{k}+T_{k-1}\mathbf{\widetilde{r}}_{k-1}-T_{k-1}\mathbf{u}_{k-1}=0, \tag{2}\] where the unit vectors are \[\mathbf{\widetilde{q}}_{k}=\frac{\mathbf{q}_{k}-\mathbf{p}_{k}}{\left\|\mathbf{q}_{k}-\mathbf{p}_{k}\right\|},\quad\text{and}\quad\mathbf{\widetilde{r}}_{k-1}=\frac{\mathbf{r}_{k-1}-\mathbf{p}_{k}}{\left\|\mathbf{r}_{k-1}-\mathbf{p}_{k}\right\|}.\] Fig. 5: Tensions at the \(k\)-th vertex of a polygonal-hitch. In general, the whole system has \(2n\) equations (there are \(n\) vertices, and each vertex has to satisfy the vector equation in (2) for the \(x\) and \(y\) coordinates). The positions of all vertices are part of the input, so the vectors \(\mathbf{u}_{k}\) and \(\mathbf{u}_{k-1}\) are known, but the tensions \(T_{k}\) and cable orientations \((\mathbf{\widetilde{q}}_{k},\mathbf{\widetilde{r}}_{k-1})\) are unknown. Since each cable orientation is unitary and can be determined by a single parameter, the total number of unknowns is \(3n\). Therefore, the system is underdetermined, and there are in general infinitely many solutions. ### _A Specific Solution for Equilibrium_ In order to find a practical solution for the equilibrium equation in (2), we consider a special case where all the cables have the same tension \(T>0\), _i.e._, \(T_{1}=...=T_{n}=T\), and the cables are aligned with the polygon edges, _i.e._, \(\mathbf{\widetilde{r}}_{k-1}=-\mathbf{u}_{k}\) and \(\mathbf{\widetilde{q}}_{k}=\mathbf{u}_{k-1}\), for all \(k=1,\ldots,n\). Then, we can easily verify that our specific solution \[\mathbf{\widetilde{q}}_{k}\!=\!\mathbf{u}_{k-1},\ \ \mathbf{\widetilde{r}}_{k-1}\!=\!-\!\mathbf{u}_{k}\ \ \text{and}\ \ T_{k}\!=\!T, \tag{3}\] satisfies the equilibrium equation in (2) for any constant \(T>0\). In the rest of the paper, we will focus on this special class of solutions for simplicity. Now that the orientation of the cables is defined, we only need to compute the location of the end points to find a configuration \(\mathcal{C}\) for a hitch in equilibrium. ### _Determining Robot Positions_ For a given polygon shape \(\mathcal{P}\), the length of the \(k\)th edge is \(l_{k}=\left\|\mathbf{p}_{k+1}-\mathbf{p}_{k}\right\|\). Suppose the \(k\)-th cable has a total length of \(L_{k}>l_{k}\). Using this constraint, and the solutions for the unit vectors \(\mathbf{\widetilde{q}}_{k}\) and \(\mathbf{\widetilde{r}}_{k-1}\) in (3), we can compute appropriate positions for the robots \(\left(\mathbf{q}_{k},\mathbf{r}_{k}\right)\) as follows. For each \(k=1,2,\ldots,n\), choose a distance \(d_{k}<L_{k}-l_{k}\) (or, \(e_{k}<L_{k}-l_{k}\)) at which to place the robot \(\mathbf{q}_{k}\) from the desired polygon vertex \(\mathbf{p}_{k}\) (or, the robot \(\mathbf{r}_{k}\) from the desired polygon vertex \(\mathbf{p}_{k+1}\)), and define \(e_{k}=L_{k}-l_{k}-d_{k}\) (or, \(d_{k}=L_{k}-l_{k}-e_{k}\)) so that \(d_{k}+e_{k}+l_{k}=L_{k}\). See Fig. 6(a) for an illustration. Then the position vectors of the robots are given by \[\mathbf{q}_{k} = \mathbf{p}_{k}\ +\ d_{k}\,\mathbf{u}_{k-1},\] \[\mathbf{r}_{k} = \mathbf{p}_{k+1}\ -\ e_{k}\,\mathbf{u}_{k+1}. \tag{4}\] **Balanced configuration:** A special case is when the cable distances \(d_{k}\) and \(e_{k}\) are equal.
**Balanced configuration:** A special case is when the cable distances \(d_{k}\) and \(e_{k}\) are equal. We can compute them as \[d_{k}=e_{k}=\frac{L_{k}-l_{k}}{2}.\] This special case is useful for finding an initial equilibrium configuration for a given polygon \(\mathcal{P}\). The balanced configuration is then denoted by the set \[\bar{\mathcal{C}}=\{(\bar{\mathbf{q}}_{k},\bar{\mathbf{r}}_{k},\mathbf{p}_{k}),\,k=1,\ldots,n\}, \tag{5}\] where \(\bar{\mathbf{q}}_{k}=\mathbf{p}_{k}+\frac{L_{k}-l_{k}}{2}\mathbf{u}_{k-1}\) and \(\bar{\mathbf{r}}_{k}=\mathbf{p}_{k+1}-\frac{L_{k}-l_{k}}{2}\mathbf{u}_{k+1}\).

## IV Actions for Hitch Manipulation

In this section, we present four actions that allow the robots to form a hitch and change its shape. Assuming quasistatic motion, we compute robot positions and trajectories, and then use a classical position- and trajectory-tracking controller for quadrotors [27, 28].

### _Action 1: Forming a hitch_

The objective of this action is to take \(n\) catenary robots, initially disconnected as illustrated in Fig. 7(a), and interlace them to form a hitch configuration \(\bar{\mathcal{C}}\). We can do this in three steps:

* _Step 1:_ For each catenary robot \(k=1,\ldots,n\), move the end points of the cables, \((\mathbf{q}_{k},\mathbf{r}_{k})\), to the positions specified by the configuration \(\bar{\mathcal{C}}\) (see Fig. 7(a)).
* _Step 2:_ Each pair of quadrotors at locations \((\mathbf{q}_{k},\mathbf{r}_{k-1})\) swap their places following a circular trajectory (or any collision-free trajectory), as illustrated in Fig. 7(b).
* _Step 3:_ The same quadrotors swap their places again, interconnecting the cables and forming the \(k\)-th vertex of the polygon \(\mathbf{p}_{k}\) (see Fig. 7(c)).

Fig. 5: Tensions at the \(k\)-th vertex of a polygonal-hitch.

We highlight that Steps 1, 2, and 3 can be performed in parallel, so the time to create a polygonal-hitch of \(n\) vertices is independent of \(n\), i.e., it takes constant time.

### _Action 2: Moving a vertex_

This action is focused on changing the shape of the polygon by moving a single vertex. We start with a configuration that forms a polygon \(\mathcal{P}\), which is transformed into a polygon \(\mathcal{P}^{\prime}\). The two polygons differ in a single vertex \(\mathbf{p}_{k}\in\mathcal{P}\) that we denote by \(\mathbf{p}_{k}^{\prime}\in\mathcal{P}^{\prime}\). Fig. 6(a) illustrates \(\mathcal{P}\) in solid lines and the difference with \(\mathcal{P}^{\prime}\) in dashed lines. From (4), it can be noted that, for a given \(d_{k}\), the computation of \(\mathbf{q}_{k}\) depends only on the positions of the vertices \(\mathbf{p}_{k}\) and \(\mathbf{p}_{k-1}\). Similarly, for a given \(e_{k}\), \(\mathbf{r}_{k}\) depends only on the positions of the vertices \(\mathbf{p}_{k+1}\) and \(\mathbf{p}_{k+2}\) (based on (1)). As a consequence, it is easy to check that, for given values of \(e_{k}\) and \(d_{k-1}\), \(\mathbf{p}_{k}\) is involved only in the expressions of \(\mathbf{q}_{k},\mathbf{r}_{k-1},\mathbf{q}_{k+1}\) and \(\mathbf{r}_{k-2}\). Thus, in order to change the position of a single vertex of the polygon, from \(\mathbf{p}_{k}\) to \(\mathbf{p}_{k}^{\prime}\), while keeping the positions of all other vertices fixed, it is sufficient to recompute the end points (using (4)) and change the positions of the robots \(\mathbf{q}_{k},\mathbf{r}_{k-1},\mathbf{q}_{k+1}\) and \(\mathbf{r}_{k-2}\) only. We can see in Fig. 6(a) that the vertex \(\mathbf{p}_{k}\) can be moved to \(\mathbf{p}_{k}^{\prime}\) by changing only \(\mathbf{q}_{k},\mathbf{r}_{k-1},\mathbf{q}_{k+1}\) and \(\mathbf{r}_{k-2}\).
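This locality can be captured in a one-line helper (a sketch using 0-based indices, whereas the text counts vertices from 1):

```python
def affected_robots(k, n):
    """Action 2 locality: 0-based indices of the only four robots that
    must move when vertex p_k is relocated, namely q_k, q_{k+1},
    r_{k-1} and r_{k-2}, with cyclic (mod n) index arithmetic."""
    return {"q": [k % n, (k + 1) % n], "r": [(k - 1) % n, (k - 2) % n]}
```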
### _Action 3: Moving an edge_

This action is focused on changing the shape of the polygon \(\mathcal{P}\) by moving a single edge. The \(k\)-th edge, \((\mathbf{p}_{k},\mathbf{p}_{k+1})\), of the polygon \(\mathcal{P}\) is moved to a new location \((\mathbf{p}_{k}^{\prime},\mathbf{p}_{k+1}^{\prime})\), forming a new polygon \(\mathcal{P}^{\prime}\). Both polygons share the same vertices, except for the vertices of the moved edge, \((\mathbf{p}_{k}^{\prime},\mathbf{p}_{k+1}^{\prime})\). Similar to Action 2, only four robots need to be moved to perform this action. In this case, we can move the edge \(k\) by translating the two endpoints of the \(k\)-th catenary robot, \(\mathbf{r}_{k}\) and \(\mathbf{q}_{k}\), along the directions \(\mathbf{\widetilde{r}}_{k}\) and \(\mathbf{\widetilde{q}}_{k}\), respectively. At the same time, the end points \(\mathbf{r}_{k-1}\) and \(\mathbf{q}_{k+1}\) need to be moved to maintain the vertices at the new locations \(\mathbf{p}_{k}^{\prime}\) and \(\mathbf{p}_{k+1}^{\prime}\). As illustrated in Fig. 6(b), we only need to move four quadrotors, \(\mathbf{q}_{k},\mathbf{r}_{k},\mathbf{r}_{k-1}\), and \(\mathbf{q}_{k+1}\), to move an edge.

### _Action 4: Adjusting the cable_

In the balanced configuration \(\bar{\mathcal{C}}\), the cable distances \(e_{k}\) and \(d_{k}\) are the same for each edge, but this property is not always maintained after applying Actions 2 and 3. The problem is that small values of \(e_{k}\) and \(d_{k}\) limit the potential changes for a new polygon \(\mathcal{P}^{\prime}\). Therefore, after performing Actions 2 and 3, we adjust the cable to create a balanced configuration \(\bar{\mathcal{C}}\) for the new polygon \(\mathcal{P}^{\prime}\). Fig. 6(b) illustrates a configuration where \(e_{k}\neq d_{k}\) and \(e_{k-2}\neq d_{k-2}\); cables \(k\) and \(k-2\) therefore need to be adjusted to achieve a balanced configuration, as illustrated in Fig. 6(c). In order to adjust cable \(k\), the endpoints \(\mathbf{r}_{k}\) and \(\mathbf{q}_{k}\) are moved the same distance but in opposite directions along the cable lines \(\mathbf{\widetilde{r}}_{k}\) and \(-\mathbf{\widetilde{q}}_{k}\), respectively. The new positions of the end points after the adjustment are therefore the balanced positions \(\bar{\mathbf{q}}_{k}^{\prime}\) and \(\bar{\mathbf{r}}_{k}^{\prime}\) of (5), computed for the new polygon \(\mathcal{P}^{\prime}\).

## V Experiments

We evaluate each of the four actions discussed in Section IV. First, the catenary robots start flying and form polygonal-hitches in mid-air. Second, we show that we can control a single vertex of the polygonal-hitch by manipulating the robots' positions. Third, we demonstrate that our system can move an edge. Finally, we show that we can adjust the cable during flight. We validate our method for polygonal-hitches in simulations and on actual robots (see the attached multimedia video).

Fig. 6: Multiple cables forming a section of a polygonal-hitch. The dashed lines represent the actions of moving a vertex, moving an edge, and adjusting cables.

Fig. 7: Action 1: Forming a hitch in three steps.

### _Simulations_

Using a realistic 3D simulator, we are able to quickly implement and test different types of maneuvers that involve cables. We performed experiments with the Obi Rope Unity package version 6.3, which is based on an advanced particle-physics engine.
The Obi Rope is optimized to deal with the infinitely many states of a rope. However, it becomes unstable once we include more than five ropes. We implemented a polygonal-hitch with three cables, as shown in Fig. 8. In the simulation, we are able to analyze the effect of the friction between cables and the performance of our quasi-static approach before running experiments with actual robots. Our simulation framework for polygonal-hitches is open-source and publicly available1. Footnote 1: The source code for simulations and actual robots is available at

### _Experiments with actual robots_

In our experimental testbed, we used the geometric controller for quadrotors [28] on the Crazyswarm framework [29], which allows us to control eight robots simultaneously. Every quadrotor has the same components and dimensions; its weight is \(132\) g, and its cable has length \(L_{k}=2\) m. We use a single Crazyradio PA 2.4 GHz USB dongle for communication between the computer and the robots. The localization of the quadrotors is obtained using a motion-capture system (OptiTrack) operating at 120 Hz. Although we placed markers on the cables for performance analysis, our method works open-loop and only uses the locations of the quadrotors as feedback. To estimate the intersections of the cables, we placed three markers in the middle of each cable. Then, we apply a linear regression to approximate each cable section. Furthermore, we can compare the intersections between the fitted lines to the desired intersections. Our evaluation metric is based on the error between the estimated intersection and the desired intersection. In the following four experiments, we validate each of the four proposed actions.

**Experiment 1 - Action 1, forming a hitch:** We propose a method to form a hitch by interlacing cables in mid-air without human intervention. For a given polygon \(\mathcal{P}\), we compute a balanced configuration following the procedure in Section III-C, including the desired quadrotor positions \(\mathbf{q}_{k}\) and \(\mathbf{r}_{k}\) for \(k=1,\ldots,n\). Then, we follow the algorithm in Section IV-A for a polygon \(\mathcal{P}=\{\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}\}\), where \(\mathbf{p}_{1}=[0.,-1.,0.5]^{\mathsf{T}}\), \(\mathbf{p}_{2}=[-0.9,0.4,0.5]^{\mathsf{T}}\), and \(\mathbf{p}_{3}=[0.9,0.4,0.5]^{\mathsf{T}}\). In Step 1, the quadrotors move to the starting points, as illustrated in Fig. 7(a). In Step 2, they swap positions with a circular trajectory between \(\mathbf{q}_{k}\) and \(\mathbf{r}_{k}\), creating an intersection (see Fig. 7(b)). In Step 3, the robots complete the polygonal-hitch (see Fig. 7(c)). We performed the experiment multiple times and found that a convex regular polygon was a reliable polygon for forming a hitch. Otherwise, if the convex polygon has a wide angle between the edges \((\mathbf{p}_{k},\mathbf{p}_{k+1})\) and \((\mathbf{p}_{k},\mathbf{p}_{k-1})\), the swapping trajectory has a bigger radius; this is easy to check, because the radius is half of the Euclidean distance between \(\mathbf{p}_{k}\) and \(\mathbf{q}_{k}\). We found that our success rate for the hitch-forming experiments is 8 out of 10. The downwash can affect the robots' trajectories during the location-swapping step; however, this could be improved using a trajectory that maintains a higher vertical distance between the robots [30]. We also successfully formed a hitch with four vertices, as shown in Fig. 9(b).
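The intersection estimation described above (three markers per cable section, a line fit per section, then a line-line intersection) can be sketched as follows; the function names are hypothetical, and we assume each straight cable section yields roughly collinear markers:

```python
import numpy as np

def fit_line(markers):
    """Least-squares 3D line through an (m, 3) array of marker positions:
    returns a point on the line and a unit direction (first principal
    component of the centered markers)."""
    c = markers.mean(axis=0)
    _, _, vt = np.linalg.svd(markers - c)
    return c, vt[0]

def line_intersection(c1, u1, c2, u2):
    """Midpoint of the shortest segment between two 3D lines, used as the
    estimated cable intersection (vertex)."""
    a, b, e = u1 @ u1, u1 @ u2, u2 @ u2
    d = c2 - c1
    denom = a * e - b * b          # nonzero for non-parallel lines
    t1 = (e * (u1 @ d) - b * (u2 @ d)) / denom
    t2 = (b * (u1 @ d) - a * (u2 @ d)) / denom
    return 0.5 * ((c1 + t1 * u1) + (c2 + t2 * u2))
```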
**Experiment 2 - Action 2, moving a vertex:** This action is focused on changing the shape of the polygon by moving a single vertex. To demonstrate that a polygonal-hitch is able to control an intersection point, we perform an experiment where the point \(\mathbf{p}_{1}\) moves along a trajectory described by \[\mathbf{p}_{1}(t)=\left\{\begin{array}{cl}(0,-0.1)&\text{if }t<25\,s\\ (0,0.05t-0.1)&\text{if }t>25\,s\end{array}\right.\] The input is the initial polygon \(\mathcal{P}=\{\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}\}\), where \(\mathbf{p}_{1}=[0.,-1.,0.5]^{\mathsf{T}}\), \(\mathbf{p}_{2}=[-0.4,0.5,0.5]^{\mathsf{T}}\), and \(\mathbf{p}_{3}=[0.4,0.5,0.5]^{\mathsf{T}}\). We compute the Euclidean distance between the current intersection points and the desired intersection points; see the results in Fig. 10. The average errors in position are \(\mu_{p_{1}}=0.0819\), \(\mu_{p_{2}}=0.168\), and \(\mu_{p_{3}}=0.762\), and the standard deviations are \(\sigma_{p_{1}}=0.056\), \(\sigma_{p_{2}}=0.145\), and \(\sigma_{p_{3}}=0.088\). Since our current implementation does not use any feedback from the actual positions of the intersection points, there are some offsets in their positions. Interestingly, the moving point \(\mathbf{p}_{1}\) has a smaller error. We observed that the motion of the vertex helps to overcome the frictional resistance between the cables. We tested two types of ropes, nylon and leather. The nylon rope has higher friction, which makes it more difficult to move the vertex.

Fig. 8: Realistic simulation; the red spheres are the desired intersections, and the point robots are shown in white.

Fig. 9: Polygonal hitches.

**Experiment 3 - Action 3, moving an edge:** This experiment shows the ability to move an edge. The initial polygon, \(\mathcal{P}=\{\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}\}\), is given by \(\mathbf{p}_{1}=[0.,-1.,0.5]^{\mathsf{T}}\), \(\mathbf{p}_{2}=[-0.4,0.5,0.5]^{\mathsf{T}}\), and \(\mathbf{p}_{3}=[0.4,0.5,0.5]^{\mathsf{T}}\). We computed the distance between the current intersection points and the desired intersection points (see the results in Fig. 11). However, we observed an increase in the position error of \(e_{1}\) while \(e_{2}\) and \(e_{3}\) were moving during the second ten seconds. This increase in error is due to the friction of the interlaced cable. The average errors in position are \(\mu_{p_{1}}=0.0819\), \(\mu_{p_{2}}=0.168\), and \(\mu_{p_{3}}=0.762\), and the standard deviations are \(\sigma_{p_{1}}=0.056\), \(\sigma_{p_{2}}=0.145\), and \(\sigma_{p_{3}}=0.088\).

**Experiment 4 - Action 4, adjusting a cable:** The last action that we discuss is adjusting the cable length. This action is helpful after moving a vertex or an edge, as the resulting configuration will generally have \(e_{k}\neq d_{k}\); adjusting the cable can then recreate a balanced configuration with \(e_{k}=d_{k}\). We computed the distance between the current intersection points and the desired intersection points (see the results in Fig. 12). Notably, this experiment shows the ability to adjust the cable while maintaining a small position error. The average errors in position are \(\mu_{p_{1}}=0.0363\), \(\mu_{p_{2}}=0.026\), and \(\mu_{p_{3}}=0.043\), and the standard deviations are \(\sigma_{p_{1}}=0.036\), \(\sigma_{p_{2}}=0.084\), and \(\sigma_{p_{3}}=0.987\).

## VI Conclusion and future work

In this work, we propose a novel class of hitches that can be formed and morphed in midair using a team of aerial robots with cables.
We introduce the concept of a _polygonal-hitch_, which consists of multiple catenary robots forming a cyclic sequence by interlacing their cables. The cables of two consecutive catenary robots are linked, forming a convex polygonal shape. We propose an algorithm to form the hitch systematically without any human intervention. The steps can run in parallel, allowing hitches with a large number of robots to be formed in constant time. We develop a set of actions to change the hitch's shape: moving a vertex, moving an edge, and adjusting the cable. We analyzed and controlled the hitch with a quasi-static approach, and demonstrated the successful functionality of our system both in simulation and on actual robots. In future work, we aim to transport objects and to include the cable dynamics in our system.

Fig. 10: Results of Experiment 2: Action moving a vertex. The plots show the error between the desired and actual intersections. The green line shows the error for the moving point \(\mathbf{p}_{1}\). The red and blue lines show the errors of the fixed points \(\mathbf{p}_{2}\) and \(\mathbf{p}_{3}\), respectively.

Fig. 11: Results of Experiment 3: Action moving an edge. The plots show the error with respect to the desired intersections. The static point \(\mathbf{p}_{1}\) is represented by the green line. The moving points \(\mathbf{p}_{2}\) and \(\mathbf{p}_{3}\) are the red and blue lines, respectively.

Fig. 12: Results of Experiment 4: Action adjusting the cable. The plots show the errors with respect to the desired intersections for \(\mathbf{p}_{1}\), \(\mathbf{p}_{2}\) and \(\mathbf{p}_{3}\).
2302.10272
Is Autoencoder Truly Applicable for 3D CT Super-Resolution?
Featured by a bottleneck structure, autoencoder (AE) and its variants have been largely applied in various medical image analysis tasks, such as segmentation, reconstruction and de-noising. Despite their promising performance in the aforementioned tasks, in this paper we claim that AE models are not applicable to single image super-resolution (SISR) for 3D CT data. Our hypothesis is that the bottleneck architecture that resizes feature maps in AE models degrades the details of input images, and thus can sabotage the performance of super-resolution. Although U-Net proposed skip connections that merge information from different levels, we claim that the degrading impact of feature resizing operations can hardly be removed by skip connections. By conducting large-scale ablation experiments and comparing the performance between models with and without the bottleneck design on a public CT lung dataset, we have discovered that AE models, including U-Net, fail to achieve a comparable SISR result ($p<0.05$ by Student's t-test) compared to the baseline model. Our work is the first comparative study investigating the suitability of the AE architecture for 3D CT SISR tasks, and it provides a rationale for researchers to re-think the choice of model architecture, especially for 3D CT SISR tasks. The full implementation and trained models can be found at: https://github.com/Roldbach/Autoencoder-3D-CT-SISR
Weixun Luo, Xiaodan Xing, Guang Yang
2023-01-23T12:48:08Z
http://arxiv.org/abs/2302.10272v2
# Is Autoencoder Truly Applicable for 3D CT Super-Resolution?

###### Abstract

Featured by a bottleneck structure, autoencoder (AE) and its variants have been largely applied in various medical image analysis tasks, such as segmentation, reconstruction and de-noising. Despite their promising performance in the aforementioned tasks, in this paper we claim that AE models are not applicable to single image super-resolution (SISR) for 3D CT data. Our hypothesis is that the bottleneck architecture that resizes feature maps in AE models degrades the details of input images, and thus can sabotage the performance of super-resolution. Although U-Net proposed skip connections that merge information from different levels, we claim that the degrading impact of feature resizing operations can hardly be removed by skip connections. By conducting large-scale ablation experiments and comparing the performance between models with and without the bottleneck design on a public CT lung dataset, we have discovered that AE models, including U-Net, fail to achieve a comparable SISR result (\(p<0.05\) by Student's \(t\)-test) compared to the baseline model. Our work is the first comparative study investigating the suitability of the AE architecture for 3D CT SISR tasks, and it provides a rationale for researchers to re-think the choice of model architecture, especially for 3D CT SISR tasks. The full implementation and trained models can be found at: [https://github.com/Roldbach/Autoencoder-3D-CT-SISR](https://github.com/Roldbach/Autoencoder-3D-CT-SISR)

Weixun Luo\({}^{\star\dagger}\) Xiaodan Xing\({}^{\dagger}\) Guang Yang\({}^{\dagger\ddagger}\) \({}^{\star}\) Department of Bioengineering, Imperial College London, London, UK \({}^{\dagger}\) National Heart and Lung Institute, Imperial College London, London, UK \({}^{\ddagger}\) Cardiovascular Research Centre, Royal Brompton Hospital, London, UK

Autoencoder, super-resolution, CT

## 1 Introduction

High resolution (HR) volumetric data generated by Computed Tomography (CT) can capture small structures and provide detailed textural information about human anatomy and pathology, thus facilitating the diagnostic procedure. However, the acquisition of HRCT data requires exposure to high-dose radiation, which can bring potential health risks to patients. More importantly, HRCT data are often downsampled by increasing the slice interval to reduce the intrinsically high storage requirement. Unfortunately, the downsampled data are less likely to be re-used in subsequent image analysis that requires high-quality input. To address these dilemmas, single image super-resolution (SISR) has attracted increasing attention, as it requires only one low resolution (LR) instance to reconstruct the HR counterpart without affecting the raw data acquisition.

Compared with 2D SISR, 3D SISR is considerably more challenging. First, the size of 3D volumetric data easily leads to memory bottlenecks and prolonged training time. Moreover, 3D data contain vastly more contextual and structural details that impose additional difficulties on 3D SISR model training. Finally, the use of 3D convolutional layers inevitably necessitates a much higher number of parameters than in 2D, so the size of the model must be very carefully considered. Among 3D super-resolution models, a popular memory-efficient solution is the utilization of the autoencoder (AE) architecture [1], where feature maps can be substantially downsampled in the middle of the model.
By downsampling feature maps, the number of parameters and the time required for optimization are largely reduced. Common downsampling layers include pooling [2], strided convolution [3] and interpolation [4]. Besides its applications to SR tasks, the AE structure has also been successfully applied to other tasks such as segmentation [5] and detection [6]. In this paper, however, we show that **models utilizing AE, including U-Net, have substantial limitations for 3D CT SISR.** Specifically, we compare various AE models with the baseline model, and provide statistically significant evidence that AE structures cause an unrecoverable loss of information during data processing and potentially increase the training difficulty. We also demonstrate that the skip connections and feature-map concatenations in U-Nets may mitigate the negative effect caused by the feature-map downsampling, but they cannot fully compensate for the information loss.

## 2 Methods

In this section, we describe the implementation of each of the models used in the experiments. We first build the baseline model, "Plain CNN", upon the simplest backbone to avoid any possible benefits brought by the architecture itself.
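To make the architectural contrast concrete, the following PyTorch-style sketch (illustrative only; the exact models used in this study are in the linked repository) places a resizing-free plain 3D CNN next to a minimal AE whose strided convolution halves the feature maps, the operation we argue is harmful for SISR:

```python
import torch.nn as nn

class PlainCNN3D(nn.Sequential):
    """Baseline without any feature-map resizing: the spatial size is
    preserved through every layer (illustrative, not the exact model)."""
    def __init__(self, ch=32, n_layers=4):
        layers = [nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(n_layers - 2):
            layers += [nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv3d(ch, 1, 3, padding=1)]
        super().__init__(*layers)

class TinyAE3D(nn.Sequential):
    """Minimal bottlenecked counterpart: the stride-2 convolution halves
    the feature maps and the transposed convolution upsamples them back,
    i.e. the resizing that we claim degrades 3D SISR detail."""
    def __init__(self, ch=32):
        super().__init__(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1),       # downsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1),  # upsample
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )
```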
2301.09273
Constraining SMEFT BSM scenarios with EWPO and $Δ_{CKM}$
Precision observables are well known for constraining most of the Beyond Standard Model (BSM) scenarios tightly. We present here a simple and comprehensive fitting framework for various BSM scenarios to these observables. We start with the fit of $S$, $T$ and $V$ parameter and their correlations using the Electroweak Precision Observables (EWPO) including the recent $m_W$ measurement from CDF-II. Utilizing these observables, we also fit various New Physics (NP) scenarios consisting of different subsets of dimension-6 Standard Model Effective Field Theory (SMEFT) operators in the Warsaw basis out of a total of 10 appearing at tree level in EWPO. To further constrain these scenarios, we augment these observables with $\Delta_{CKM}$ measurement using 1-loop matching of the Low Energy Effective Field Theory (LEFT) to SMEFT operators at the Z-pole. We show that the inclusion of $\Delta_{CKM}$ constraint indeed results in stronger bounds on the SMEFT Wilson Coefficients. We also constrain the UV parameters of BSM extensions like Vectorlike leptons (VLL) and find out that such a minimal extension is in tension with the forward-backward asymmetry in $b$-sector ($A_b^{FB}$) and the recent measurement of $M_W$. In order to lift the two blind directions, which one encounters while fitting all the 10 SMEFT WCs at tree-level, we also include the LEP-II observables pertaining to the $WW$ production and present the results for the fits with and without $\Delta_{CKM}$ constraint.
Mathew Thomas Arun, Kuldeep Deka, Tripurari Srivastava
2023-01-23T05:05:26Z
http://arxiv.org/abs/2301.09273v2
###### Abstract

Precision observables are well known for constraining most Beyond Standard Model (BSM) scenarios tightly. We present here a simple and comprehensive framework for fitting various BSM scenarios to these observables. We start with the fit of the \(S\), \(T\) and \(V\) parameters and their correlations using the Electroweak Precision Observables (EWPO), including the recent \(m_{W}\) measurement from CDF-II. Utilizing these observables, we also fit various New Physics (NP) scenarios consisting of different subsets of dimension-6 Standard Model Effective Field Theory (SMEFT) operators, out of a total of 10 appearing at tree level in the EWPO. To further constrain these scenarios, we augment these observables with the \(\Delta_{CKM}\) measurement, using 1-loop matching of the Low Energy Effective Field Theory (LEFT) to SMEFT operators at the Z-pole. We show that the inclusion of the \(\Delta_{CKM}\) constraint indeed results in stronger bounds on the SMEFT Wilson Coefficients. We also constrain the UV parameters of BSM extensions like vector-like leptons (VLL) and find that such a minimal extension is in tension with the forward-backward asymmetry in the \(b\)-sector (\(A_{b}^{FB}\)) and the recent measurement of \(M_{W}\). In order to lift the two blind directions, which one encounters while fitting all the 10 SMEFT WCs at tree level, we also include the LEP-II observables pertaining to \(WW\) production and present the results for the fits with and without the \(\Delta_{CKM}\) constraint.

**Constraining SMEFT BSM scenarios with EWPO and \(\Delta_{CKM}\)** \({}^{1}\)Mathew Thomas Arun 1, \({}^{2}\)Kuldeep Deka 2, \({}^{2}\)Tripurari Srivastava 3 Footnote 1: [email protected] Footnote 2: [email protected] Footnote 3: [email protected] \({}^{1}\)_School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, Vithura, Kerala, 695551, India_ \({}^{2}\)_Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India_

## 1 Introduction

The discovery of the Higgs at the LHC in 2012 [1] completes the Standard Model of particle physics (SM). On the other hand, the absence of any heavy resonances in Run 2 of the Large Hadron Collider (LHC) reveals that any physics beyond the Standard Model (BSM) with SM-like couplings might be significantly heavier than the electroweak scale. Interestingly, however, there have been a few discrepancies in low-energy experiments, like the \((g-2)\) of the muon [2, 3, 4, 5, 6], flavor anomalies [7, 8, 9, 10], the Cabibbo anomaly [11] and the recent CDF anomaly in the W boson mass [12]. Given the elusiveness of New Physics at collider experiments, there is a strong motivation to understand the nature of these anomalies in a model-independent Effective Field Theory formalism. Here, the New Physics effects are included by extending the SM with higher-dimensional operators which are invariant under the SM gauge group [13, 14, 15, 16, 17, 18]. Having been precisely measured at run 1 and run 2 of LEP [19, 20, 21, 22, 23], the electroweak observables strongly constrain the scales of these SMEFT operators. The recent measurement of the \(W\) boson mass [12], \(M_{W}=80,433.5\pm 9.4\) MeV, at CDF II has put particle physicists in a precarious position, since it disagrees with the SM prediction, \(M_{W}=80,360\pm 6\) MeV, at the \(7\sigma\) level. The new combined world average now lies at \(M_{W}=80,413.3\pm 9\) MeV, which is about \(5.3\sigma\) away from the Standard Model value.
Several studies have been conducted to address the impact of this new measurement on the global EW fit in terms of the oblique parameters [24, 25, 26, 27], assuming the Standard Model Effective Field Theory (SMEFT) framework [28, 29, 30, 31, 32, 33, 34, 35]. Models such as Technicolor, extra dimensions, composite Higgs or little Higgs scenarios, SU(2) triplet scalar extensions [36], models with VLQs [37] and two-Higgs-doublet models [24], among many others [38, 39, 40, 41], have also been proposed to address this, including models with additional Higgs and gauge bosons (\(Z^{\prime}\)) [25]. Moreover, there have been anomalies reported from low-energy measurements. These are studied within the Low-energy Effective Field Theory (LEFT) description of the Standard Model. In particular, in this paper, we have considered the beta decay and other semileptonic processes that contribute to the \(\Delta_{CKM}\) anomaly in the LEFT basis. Since \(\Delta_{CKM}\) and the electroweak precision tests crucially depend on the measured value of \(G_{F}\), though at different scales, these measurements are connected and have to be taken together while constraining New Physics. To that end, we match these LEFT operators with the SMEFT at 1-loop at the \(Z\) boson scale and obtain experimental constraints on the Wilson Coefficients (WCs) of the dimension-6 SMEFT operators. In this article, we augment the electroweak fit, including the recent CDF-II \(M_{W}\) mass measurement, with the \(\Delta_{CKM}\) constraint and explore various BSM scenarios. Along with these two observables, \(A_{b}^{FB}\) and \(A_{l}^{SLD}\) play a significant role in these fits, since both have a long-standing discrepancy of more than \(2\sigma\) with respect to the SM predictions. One New Physics parametrisation which we use here is the \(S,\ T\) and \(V\) one [42, 43, 44]. Inclusion of the \(V\) parameter along with the \(S\) and \(T\) parameters captures New Physics that enters via the weak coupling constant \(G_{F}\). Moreover, this analysis is also crucial in understanding the Cabibbo anomaly, as it solely depends on the \(V\) parameter. We then switch to the SMEFT parametrisation, where we consider various operator subsets affecting the observables of our interest and perform fits to constrain their WCs. Depending on the UV-complete scenario, there may be only one subset of all the WCs that contributes significantly at low energies. One such fit looks at the scenario where the operators common to \(M_{W}\) and \(\Delta_{CKM}\) are the ones carrying the imprint of New Physics. Another New Physics scenario considered here is the set of WCs which appear in the expression for \(M_{W}\) at tree level. We also perform a fit for the vector-like lepton models, a popular extension of the SM because of its ability to address the discrepancy in the muon \((g-2)\) and to provide mass to the SM neutrinos through the see-saw mechanism, which contains the WCs carrying imprints of these new leptons at the electroweak scale. It also enables us to constrain the UV parameters of the model. Finally, we also perform an all-parameter fit which takes into account all the 10 WCs affecting the EWPO at tree level. However, we encounter two blind directions, as the EWPO can only constrain 8 independent combinations of the WCs. The addition of \(\Delta_{CKM}\) also does not help in lifting the blind directions. Rather, this forces us to use LEP-II data, where the presence of the \(W\)-pair production channel breaks the two blind directions, enabling us to constrain all the 10 WCs of our interest.
This paper is organized as follows: In the next section, we revisit the contributions of dimension-6 operators to the shifts in the fermion couplings to gauge bosons at leading order. In section [3], we briefly discuss near-Z-pole observables (LEP-I) and the observables arising from \(WW\) production (LEP-II). We then discuss the contributions of SMEFT operators to muon and beta decay, obtained by matching with the LEFT at one loop, in section [4], and discuss the fitting framework in section [5]. Further, we analyze various model-independent frameworks, using \(S,\ T,\ V\) and the subsets of dimension-6 operators affecting the EWPO and \(\Delta_{CKM}\). Finally, in section [6] we summarize the results.

## 2 SMEFT contributions to the Electroweak Fit

The low-energy effects of massive BSM particles can be approximated by integrating them out to obtain higher-dimensional interactions between the SM fields. In such an approach, the SM can be regarded as an effective theory whose known renormalizable interactions are supplemented by higher-order terms scaled by inverse powers of the BSM mass scale. These higher-dimensional operators can then be written in the form \[{\cal L}={\cal L}_{SM}+\sum_{d=5}^{\infty}\sum_{i=1}^{n}\frac{{\cal C}_{i}^{d}}{\Lambda^{d-4}}O_{i}^{d}\,, \tag{1}\] where \(d\) represents the dimension of the operator and \(i\) runs over the full set of independent operators at a particular dimension. The operators \(O_{i}^{d}\) are all \(SU(3)\times SU(2)_{L}\times U(1)_{Y}\) invariant, and all of the effects of the BSM physics reside in the WCs \({\cal C}_{i}^{d}\). We assume the WCs to be real and use the Warsaw basis [15] to parametrise them. The Electroweak Precision Observables (EWPO) are the \(Z\) and \(W\) pole observables coming from LEP studies, and there are 10 dimension-six operators which contribute to these observables at tree level. We collect these operators in Table 1, where \(H\) is the \(SU(2)_{L}\) Higgs doublet, \(\tau^{a}\) are the Pauli matrices, \(D_{\mu}=\partial_{\mu}+ig_{s}T^{A}G_{\mu}^{A}+ig_{2}\frac{\tau^{a}}{2}W_{\mu}^{a}+ig_{1}YB_{\mu}\), \(q\) is the \(SU(2)_{L}\) quark doublet with \(q^{T}=(u_{L},d_{L})\), \(l\) is the \(SU(2)_{L}\) lepton doublet with \(l^{T}=(\nu_{L},e_{L})\), \(W_{\mu\nu}^{a}\) is the \(SU(2)_{L}\) field strength with \(W_{\mu\nu}^{a}=\partial_{\mu}W_{\nu}^{a}-\partial_{\nu}W_{\mu}^{a}-g_{2}\epsilon^{abc}W_{\mu}^{b}W_{\nu}^{c}\), and \(B_{\mu\nu}\) is the \(U(1)_{Y}\) field strength with \(B_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}\). The other definitions include: \(H^{\dagger}i\overleftrightarrow{D_{\mu}}H=iH^{\dagger}(D_{\mu}H)-i(D_{\mu}H)^{\dagger}H\), and \(H^{\dagger}i\overleftrightarrow{D_{\mu}^{a}}H=iH^{\dagger}\tau^{a}D_{\mu}H-i(D_{\mu}H)^{\dagger}\tau^{a}H\).

In order to get theoretical predictions for the Electroweak Precision Data pertaining to the pole observables, we first fix our choice of input parameters, namely the fine-structure constant \(\hat{\alpha}_{e}\) from the low-energy limit of electron Compton scattering, the Fermi constant in muon decays \(\hat{G}_{F}\), and the measured \(Z\) mass \(\hat{m}_{Z}\). At tree level, one can then define the effective _measured_ mixing angle \[s_{W}^{2}=\frac{1}{2}-\frac{1}{2}\sqrt{1-\frac{4\,\pi\hat{\alpha}_{e}}{\sqrt{2}\,\hat{G}_{F}\,\hat{m}_{Z}^{2}}}. \tag{2}\] The value of the SU(2)\({}_{\rm L}\) gauge coupling can be taken as \[\hat{g}_{2}\,s_{W}=2\,\sqrt{\pi}\,\hat{\alpha}_{e}^{1/2}. \tag{3}\] The effective measured vacuum expectation value (vev) in the SM can be defined as \(\hat{v}^{2}=1/(\sqrt{2}\,\hat{G}_{F})\).
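As a quick numerical check of this input scheme, Eqs. (2)-(3) can be evaluated in a few lines of Python (a sketch, not the fitting code used in this work):

```python
import numpy as np

# Input parameters of the (G_F, m_Z, alpha) scheme.
alpha_e = 1 / 137.035999139       # fine-structure constant
G_F = 1.1663787e-5                # Fermi constant [GeV^-2]
m_Z = 91.1876                     # Z mass [GeV]

# Eq. (2): tree-level "measured" mixing angle.
sW2 = 0.5 * (1 - np.sqrt(1 - 4 * np.pi * alpha_e / (np.sqrt(2) * G_F * m_Z**2)))

g2 = 2 * np.sqrt(np.pi * alpha_e) / np.sqrt(sW2)   # Eq. (3)
v = 1 / np.sqrt(np.sqrt(2) * G_F)                  # vhat^2 = 1/(sqrt(2) G_F)

print(sW2, g2, v)   # ~0.2122, ~0.657, ~246.2 GeV
```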
\begin{table} \begin{tabular}{|c|c||c|c|} \hline \({\cal O}_{HD}\) & \(\left(H^{\dagger}D^{\mu}H\right)^{*}\left(H^{\dagger}D_{\mu}H\right)\) & \({\cal O}_{HWB}\) & \(\left(H^{\dagger}\tau^{a}H\right)W_{\mu\nu}^{a}B^{\mu\nu}\) \\ \hline \({\cal O}_{ll}\) & \((\overline{l}\gamma_{\mu}l)(\overline{l}\gamma^{\mu}l)\) & \({\cal O}_{He}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}}H\right)(\overline{e}\gamma^{\mu}e)\) \\ \hline \({\cal O}_{Hu}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}}H\right)(\overline{u}\gamma^{\mu}u)\) & \({\cal O}_{Hd}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}}H\right)(\overline{d}\gamma^{\mu}d)\) \\ \hline \({\cal O}_{Hq_{3}}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}^{a}}H\right)(\overline{q}\tau^{a}\gamma^{\mu}q)\) & \({\cal O}_{Hq_{1}}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}}H\right)(\overline{q}\gamma^{\mu}q)\) \\ \hline \({\cal O}_{Hl_{3}}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}^{a}}H\right)(\overline{l}\tau^{a}\gamma^{\mu}l)\) & \({\cal O}_{Hl_{1}}\) & \(\left(H^{\dagger}i\overleftrightarrow{D_{\mu}}H\right)(\overline{l}\gamma^{\mu}l)\) \\ \hline \end{tabular} \end{table} Table 1: Dimension-6 operators contributing to the \(Z\) and \(W\) pole observables of this study at tree level.

Once we allow for these new SMEFT operators, the gauge sector gets modified, resulting in the requirement of barred fields and couplings in order to obtain canonical kinetic terms. These transformations are [45]: \[\overline{W}^{a}_{\mu} \equiv (1-{\cal C}_{HW}v^{2}/\Lambda^{2})W^{a}_{\mu}\] \[\overline{B}_{\mu} \equiv (1-{\cal C}_{HB}v^{2}/\Lambda^{2})B_{\mu}\] \[\overline{g}_{2} \equiv (1+{\cal C}_{HW}v^{2}/\Lambda^{2})g_{2}\] \[\overline{g}_{1} \equiv (1+{\cal C}_{HB}v^{2}/\Lambda^{2})g_{1}\,, \tag{4}\] such that \(\overline{W}_{\mu}\overline{g}_{2}=W_{\mu}g_{2}\) and \(\overline{B}_{\mu}\overline{g}_{1}=B_{\mu}g_{1}\). The masses of the W and Z fields to \({\cal O}\bigl(\frac{1}{\Lambda^{2}}\bigr)\) are [45] \[M^{2}_{W} = \frac{\overline{g}_{2}^{2}v^{2}}{4},\] \[M^{2}_{Z} = \frac{(\overline{g}_{1}^{2}+\overline{g}_{2}^{2})v^{2}}{4}+\frac{v^{4}}{\Lambda^{2}}\left(\frac{1}{8}(\overline{g}_{1}^{2}+\overline{g}_{2}^{2}){\cal C}_{HD}+\frac{1}{2}\overline{g}_{1}\overline{g}_{2}{\cal C}_{HWB}\right). \tag{5}\] The 4-fermion operators contributing to the decay of the \(\mu\) change the relation between the vev, \(v\), and the Fermi constant \(\hat{G}_{F}\), \[\hat{G}_{F}\equiv\frac{1}{\sqrt{2}v^{2}}-\frac{1}{\sqrt{2}\Lambda^{2}}{\cal C}_{ll}+\frac{\sqrt{2}}{\Lambda^{2}}{\cal C}_{Hl_{3}}\,. \tag{6}\] The shift in \(M^{2}_{Z}\) can then be written as \[\delta M^{2}_{Z}\equiv\frac{1}{2\,\sqrt{2}}\,\frac{\hat{m}_{Z}^{2}}{\hat{G}_{F}}C_{HD}+\frac{2^{1/4}\sqrt{\pi}\,\sqrt{\hat{\alpha}_{e}}\,\hat{m}_{Z}}{\hat{G}_{F}^{3/2}}C_{HWB}, \tag{7}\]
where \(\hat{m}_{Z}\) is the input parameter. The kinetic mixing introduced by the operator with Wilson coefficient \(C_{HWB}\) leads to a redefinition of the usual \(s_{\theta}=\sin\theta\) mixing angle of the SM, given by \[\delta s_{W}^{2}=-\frac{v^{2}}{\Lambda^{2}}\frac{s_{W}c_{W}}{c_{W}^{2}-s_{W}^{2}}\left[2s_{W}c_{W}\left(\delta v+\frac{1}{4}{\cal C}_{HD}\right)+{\cal C}_{HWB}\right]. \tag{8}\] We relate the Lagrangian parameters \(\overline{g}_{2},\overline{g}_{1}\) to the input parameters at tree level via \[\overline{g}_{1}^{2}+\overline{g}_{2}^{2}=4\,\sqrt{2}\,\hat{G}_{F}\,\hat{m}_{Z}^{2}\left(1-\sqrt{2}\,\delta G_{F}-\frac{\delta M_{Z}^{2}}{\hat{m}_{Z}^{2}}\right), \tag{9}\] \[\overline{g}_{2}^{2}=\frac{4\,\pi\,\hat{\alpha}_{e}}{s_{W}^{2}}\left[1+\frac{\delta s_{W}^{2}}{s_{W}^{2}}+\frac{c_{W}}{s_{W}}\frac{1}{\sqrt{2}\,\hat{G}_{F}}\,C_{HWB}\right]. \tag{10}\] Expressing \(\overline{M}_{W}^{2}\) in terms of the input parameters we get \[\overline{M}_{W}^{2}=\hat{m}_{W}^{2}\left(1+\frac{\delta s_{W}^{2}}{s_{W}^{2}}+\frac{c_{W}}{s_{W}\sqrt{2}\hat{G}_{F}}C_{HWB}+\sqrt{2}\delta G_{F}\right)=\hat{m}_{W}^{2}-\delta M_{W}^{2}, \tag{11}\] where \(\delta M_{W}^{2}=-\hat{m}_{W}^{2}\left(\frac{\delta s_{W}^{2}}{s_{W}^{2}}+\frac{c_{W}}{s_{W}\sqrt{2}\hat{G}_{F}}C_{HWB}+\sqrt{2}\delta G_{F}\right)\) and \(\hat{m}_{W}^{2}=c_{W}^{2}\hat{m}_{Z}^{2}\).

The effective axial and vector couplings of the SMEFT \(Z\) boson are defined as follows: \[{\cal L}_{Z,eff}=g_{Z,eff}\,\left(J_{\mu}^{Z\ell}Z^{\mu}+J_{\mu}^{Z\nu}Z^{\mu}+J_{\mu}^{Zu}Z^{\mu}+J_{\mu}^{Zd}Z^{\mu}\right), \tag{12}\] where \(g_{Z,eff}=-\,2\,2^{1/4}\,\sqrt{\hat{G}_{F}}\,\hat{m}_{Z}\) and \((J_{\mu}^{Zx})^{ij}=\overline{\psi}_{i}\,\gamma_{\mu}\left[(\overline{g}_{V}^{x})_{eff}^{ij}-(\overline{g}_{A}^{x})_{eff}^{ij}\,\gamma_{5}\right]\psi_{j}\) for \(\psi=\{u,d,\ell,\nu\}\). In general, these currents are matrices in flavour space. When we restrict our attention to the case of a minimal linear MFV scenario, \((J_{\mu}^{Zx})_{ij}\simeq(J_{\mu}^{Zx})\delta_{ij}\). In the standard basis, the effective axial and vector couplings are modified from their SM values by a shift defined as \[\delta(g_{V,A}^{x})_{ij}=(\overline{g}_{V,A}^{x})_{ij}^{eff}-(g_{V,A}^{x})_{ij}^{SM}. \tag{13}\] The tree-level couplings are the usual SM relations, \[g_{R}^{Zf} = -s_{W}^{2}Q_{f}\quad\text{and}\quad g_{L}^{Zf}=T_{3}^{f}-s_{W}^{2}Q_{f}, \tag{14}\] with \(T_{3}^{f}=\pm\dfrac{1}{2}\). The full SMEFT contributions to the effective couplings are shown in Table 2. For the charged currents, we define \[\mathcal{L}_{W,eff}=-\dfrac{\sqrt{2\,\pi\,\hat{\alpha}_{e}}}{s_{W}}\left[(J_{\mu}^{W_{\pm},\ell})_{ij}W_{\pm}^{\mu}+(J_{\mu}^{W_{\pm},q})_{ij}W_{\pm}^{\mu}\right], \tag{15}\] where in the SM one has \[(J_{\mu}^{W_{+},\ell})_{ij} = \overline{\nu}_{i}\,\gamma^{\mu}\,\left(\overline{g}_{V}^{W_{+},\ell}-\overline{g}_{A}^{W_{+},\ell}\gamma_{5}\right)\,\ell_{j}, \tag{16}\] \[(J_{\mu}^{W_{-},\ell})_{ij} = \overline{\ell}_{i}\,\gamma^{\mu}\,\left(\overline{g}_{V}^{W_{-},\ell}-\overline{g}_{A}^{W_{-},\ell}\gamma_{5}\right)\nu_{j}. \tag{17}\] The contributions of these shifts to the observables of our interest can then be calculated [46, 47, 48, 49, 50, 51, 52].

## 3 \(W\) and \(Z\) pole observables of interest

This section is dedicated to reviewing the contributions from the SMEFT to the observables considered.
Our list of EWPO includes [51, 52, 19, 28]: \[M_{W},\Gamma_{W},\Gamma_{Z},\sigma_{h},R_{l},A_{l}^{FB},R_{b},R_{c},A_{b}^{FB},A_{c}^{FB},A_{b},A_{c},A_{l},A_{l}^{SLD},BR_{W\to\nu l}\,. \tag{18}\]

\begin{table} \begin{tabular}{||c||c||} \hline \hline \(\delta(g_{V}^{\ell})\) & \(\delta\overline{g}_{Z}\,(g_{V}^{\ell})^{SM}-\frac{1}{4\sqrt{2}G_{F}}\left(C_{He}+C_{Hl_{1}}+C_{Hl_{3}}\right)-\delta s_{W}^{2}\) \\ \hline \(\delta(g_{A}^{\ell})\) & \(\delta\overline{g}_{Z}\,(g_{A}^{\ell})_{pr}^{SM}+\frac{1}{4\sqrt{2}G_{F}}\left(C_{He}-C_{Hl_{1}}-C_{Hl_{3}}\right)\) \\ \hline \(\delta(g_{V}^{\nu})\) & \(\delta\overline{g}_{Z}\,(g_{V}^{\nu})^{SM}-\frac{1}{4\sqrt{2}G_{F}}\left(C_{Hl_{1}}-C_{Hl_{3}}\right)\) \\ \hline \(\delta(g_{A}^{\nu})\) & \(\delta\overline{g}_{Z}\,(g_{A}^{\nu})_{pr}^{SM}-\frac{1}{4\sqrt{2}G_{F}}\left(C_{Hl_{1}}-C_{Hl_{3}}\right)\) \\ \hline \(\delta(g_{V}^{u})\) & \(\delta\overline{g}_{Z}\,(g_{V}^{u})_{pr}^{SM}+\frac{1}{4\sqrt{2}G_{F}}\left(-C_{Hq_{1}}+C_{Hq_{3}}-C_{Hu}\right)+\frac{2}{3}\delta s_{W}^{2}\) \\ \hline \(\delta(g_{A}^{u})\) & \(\delta\overline{g}_{Z}\,(g_{A}^{u})_{pr}^{SM}-\frac{1}{4\sqrt{2}G_{F}}\left(C_{Hq_{1}}-C_{Hq_{3}}-C_{Hu}\right)\) \\ \hline \(\delta(g_{V}^{d})\) & \(\delta\overline{g}_{Z}\,(g_{V}^{d})_{pr}^{SM}-\frac{1}{4\sqrt{2}G_{F}}\left(C_{Hq_{1}}+C_{Hq_{3}}+C_{Hd}\right)-\frac{1}{3}\delta s_{W}^{2}\) \\ \hline \(\delta(g_{A}^{d})\) & \(\delta\overline{g}_{Z}\,(g_{A}^{d})_{pr}^{SM}+\frac{1}{4\sqrt{2}G_{F}}\left(-C_{Hq_{1}}-C_{Hq_{3}}+C_{Hd}\right)\) \\ \hline \(\delta(g_{V}^{W_{\pm},\ell})=\delta(g_{A}^{W_{\pm},\ell})\) & \(\frac{1}{2\sqrt{2}G_{F}}\left(C_{Hl_{3}}+\frac{1}{2}\frac{c_{W}}{s_{W}}\,C_{HWB}\right)+\frac{1}{4}\frac{\delta s_{W}^{2}}{s_{W}^{2}}\) \\ \hline \(\delta(g_{V}^{W_{\pm},q})=\delta(g_{A}^{W_{\pm},q})\) & \(\frac{1}{2\sqrt{2}G_{F}}\left(C_{Hq_{3}}+\frac{1}{2}\frac{c_{W}}{s_{W}}\,C_{HWB}\right)+\frac{1}{4}\frac{\delta s_{W}^{2}}{s_{W}^{2}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Anomalous fermion couplings at LO.

In the SMEFT, at tree level the decay width of the \(Z\) boson to fermions is given by \[\overline{\Gamma}\left(Z\to f\overline{f}\right) = \frac{\sqrt{2}\,\hat{G}_{F}\hat{m}_{Z}^{3}\,N_{c}}{3\pi}\left(|\overline{g}_{V}^{f}|^{2}+|\overline{g}_{A}^{f}|^{2}\right), \tag{19}\] \[\overline{\Gamma}\left(Z\to{\rm Had}\right) = 2\,\overline{\Gamma}\left(Z\to u\overline{u}\right)+3\,\overline{\Gamma}\left(Z\to d\overline{d}\right), \tag{20}\] with our chosen normalization \(\overline{g}_{V}^{x}=T_{3}/2-Q^{x}\,\overline{s}_{\theta}^{2}\), \(\overline{g}_{A}^{x}=T_{3}/2\), where \(T_{3}=1/2\) for \(u_{i},\nu_{i}\), \(T_{3}=-1/2\) for \(d_{i},\ell_{i}\), and \(Q^{x}=\{-1,2/3,-1/3\}\) for \(x=\{\ell,u,d\}\).
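For orientation, Eq. (19) with this normalization already reproduces the correct size of the partial widths. The sketch below (our own illustration, with an assumed effective \(s_{W}^{2}\simeq 0.2315\)) gives about 0.083 GeV per charged-lepton flavour and about 1.67 GeV for the tree-level hadronic width, which QCD corrections push towards the measured values in Table 3:

```python
import numpy as np

G_F, m_Z = 1.1663787e-5, 91.1876   # GeV^-2, GeV
s2 = 0.2315                        # assumed effective mixing angle, for illustration

def gamma(T3, Q, Nc):
    """Tree-level partial width of Eq. (19), with the normalization
    g_V = T3/2 - Q s^2 and g_A = T3/2 used in the text."""
    gV, gA = T3 / 2 - Q * s2, T3 / 2
    return np.sqrt(2) * G_F * m_Z**3 * Nc / (3 * np.pi) * (gV**2 + gA**2)

print(gamma(-0.5, -1.0, 1))                               # Z -> l+ l-: ~0.083 GeV
print(2 * gamma(0.5, 2/3, 3) + 3 * gamma(-0.5, -1/3, 3))  # Eq. (20): ~1.67 GeV
```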
The modification of the decay widths in the SMEFT compared to the situation in the SM introduces corrections of the form [46, 47, 48, 49, 53] \[\delta\Gamma_{Z\to\ell\overline{\ell}} = \frac{\sqrt{2}\,\hat{G}_{F}\hat{m}_{Z}^{3}}{6\pi}\,\left[-\delta g_{A}^{\ell}+\left(-1+4s_{W}^{2}\right)\delta g_{V}^{\ell}\right], \tag{21}\] \[\delta\Gamma_{Z\to\nu\overline{\nu}} = \frac{\sqrt{2}\,\hat{G}_{F}\hat{m}_{Z}^{3}}{6\pi}\,\left[\delta g_{A}^{\nu}+\delta g_{V}^{\nu}\right],\] (22) \[\delta\Gamma_{Z\to Had} = 2\,\delta\Gamma_{Z\to u\overline{u}}+3\,\delta\Gamma_{Z\to d\overline{d}},\] (23) \[= \frac{\sqrt{2}\,\hat{G}_{F}\hat{m}_{Z}^{3}}{\pi}\left[\delta g_{A}^{u}-\frac{1}{3}\left(-3+8s_{W}^{2}\right)\delta g_{V}^{u}-\frac{3}{2}\delta g_{A}^{d}+\frac{1}{2}\left(-3+4s_{W}^{2}\right)\delta g_{V}^{d}\right],\] \[\delta\Gamma_{Z} = 3\delta\Gamma_{Z\to\ell\overline{\ell}}+3\delta\Gamma_{Z\to\nu\overline{\nu}}+\delta\Gamma_{Z\to Had},\] \[= \frac{\sqrt{2}\,\hat{G}_{F}\hat{m}_{Z}^{3}}{2\,\pi}\left[\delta g_{A}^{\nu}+\delta g_{V}^{\nu}-\delta g_{A}^{\ell}+\left(-1+4s_{W}^{2}\right)\delta g_{V}^{\ell}\right.\] \[\left.\hskip 56.905512pt+2\delta g_{A}^{u}-\frac{2}{3}\left(-3+8s_{W}^{2}\right)\delta g_{V}^{u}-3\delta g_{A}^{d}+\left(-3+4s_{W}^{2}\right)\delta g_{V}^{d}\right],\] so that \(\overline{\Gamma}\left(Z\to f\overline{f}\right)=\Gamma_{Z\to f\overline{f}}+\delta\Gamma_{Z\to f\overline{f}}\) for all \(f\), and the same kind of relation holds for \(\overline{\Gamma}_{Z}\). When considering partial widths extracted from LEP data in the SM _at_ the \(Z\) pole, \(\sigma_{e^{+}e^{-}\to had}\) has the theoretical expression \[\overline{\sigma}_{h}^{0}=12\pi\,\frac{\overline{\Gamma}_{Z\to e\overline{e}}\overline{\Gamma}_{Z\to Had}}{|\overline{\omega}(M_{Z}^{2})|^{2}}, \tag{27}\] with \(\overline{\Gamma}_{Z\to e\overline{e}}\) and \(\overline{\Gamma}_{Z\to Had}\) being the decay widths in the SMEFT. With the choice \(\overline{\omega}(M_{Z}^{2})=\overline{M}_{Z}\,\overline{\Gamma}_{Z}\), and the partial widths taking on SM values, this expression simplifies to the well-known SM result. The SMEFT contribution is relegated to the appendix. The shift of the ratios of decay rates, defined in the SM as \(R_{f}^{0}=\frac{\Gamma_{had}}{\Gamma_{Z\to f\overline{f}}}\) where \(f\) can be a charged lepton \(\ell\) or a neutrino, follows from \[\delta R_{f}^{0}=\frac{1}{(\Gamma(Z\to f\overline{f})^{2})_{SM}}\left[\delta\Gamma_{Z\to Had}(\Gamma(Z\to f\overline{f}))_{SM}-\delta\Gamma_{Z\to f\overline{f}}(\Gamma\left(Z\to{\rm Had}\right)_{SM})\right], \tag{28}\] and we can then write \(\overline{R}_{f}^{0}=R_{f}^{0}+\delta R_{f}^{0}\). For an identified quark, the inverse ratio is used. The forward-backward asymmetry for \(2\to 2\) scattering is defined as \[A_{FB}=\frac{\sigma_{F}-\sigma_{B}}{\sigma_{F}+\sigma_{B}}. \tag{29}\] If \(\theta\) is the angle between the incoming lepton \(\ell\) and the outgoing fermion \(f\), then \(\sigma_{F}\) is the cross-section in the region \(\theta\in[0,\pi/2]\) and \(\sigma_{B}\) is the cross-section in the region \(\theta\in[\pi/2,\pi]\). In terms of the left-right asymmetries (\(A\)) of the incoming lepton and the outgoing fermion, \[A_{e}=2\frac{g_{V}^{\ell}g_{A}^{\ell}}{(g_{V}^{\ell})^{2}+(g_{A}^{\ell})^{2}},\quad A_{f}=2\frac{g_{V}^{f}g_{A}^{f}}{(g_{V}^{f})^{2}+(g_{A}^{f})^{2}}, \tag{30}\] we can express the forward-backward asymmetry as \[A_{FB}^{0,f}=\frac{3}{4}A_{e}A_{f}. \tag{31}\]
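Eqs. (30)-(31) are straightforward to evaluate; with the same assumed couplings as in the previous sketch, one recovers the size of \(A_{l}\) and \(A_{b}^{FB}\) quoted in Table 3:

```python
def asymmetry(gV, gA):
    """Left-right asymmetry of Eq. (30)."""
    return 2 * gV * gA / (gV**2 + gA**2)

# Same normalization (g_V = T3/2 - Q s^2, g_A = T3/2) and s^2 = 0.2315:
A_e = asymmetry(-0.25 + 0.2315, -0.25)        # ~0.147, cf. A_l in Table 3
A_b = asymmetry(-0.25 + 0.2315 / 3, -0.25)    # ~0.936
A_FB_b = 0.75 * A_e * A_b                     # Eq. (31): ~0.103, cf. Table 3
```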
Once we include the SMEFT operators, the \(Z\) couplings receive corrections, which in turn bring corrections to these observables. The expressions for these corrections are shown in detail in the appendix. The partial \(W\) width in the SMEFT is given by [49, 54]: \[\overline{\Gamma}_{W\to\overline{f}_{i}f_{j}}= \Gamma^{SM}_{W\to\overline{f}_{i}f_{j}}+\delta\Gamma_{W\to\overline{f}_{i}f_{j}}, \tag{32}\] \[\Gamma^{SM}_{W\to\overline{f}_{i}f_{j}}= \frac{N_{C}\,|V_{ij}^{f}|^{2}\sqrt{2}\hat{G}_{F}\,\hat{m}_{W}^{3}}{12\pi},\] (33) \[\delta\Gamma_{W\to\overline{f}_{i}f_{j}}= \frac{N_{C}\,|V_{ij}^{f}|^{2}\sqrt{2}\hat{G}_{F}\,\hat{m}_{W}^{3}}{12\pi}\left(4\delta g_{V/A}^{W_{\pm},f}+\frac{1}{2}\frac{\delta m_{W}^{2}}{\hat{m}_{W}^{2}}\right). \tag{34}\] Here, \(N_{C}\) is 3 for quarks and 1 for leptons, and \(V_{ij}^{f}\) is the CKM or PMNS mixing-matrix element. Summing over all the modes, we can compute the SMEFT contribution to the total decay width.

### LEP-II data

One issue with using just the LEP-I data for the EWPO is that they can only constrain 8 linear combinations of the WCs out of the total of 10 WCs which affect the EWPO at tree level, resulting in two blind directions. The origin of this can be traced to a reparametrisation invariance of the LEP-I data under which the fields and couplings transform as \(\mathcal{F}\to\mathcal{F}(1+\epsilon)\), \(g\to g(1-\epsilon)\) [54, 55, 56] for the \(2\to 2\) scattering processes. In order to break this invariance, one has to go to \(2\to 4\) scattering processes, which can be realised at LEP-II due to the possibility of resonant production of \(W^{+}\) and \(W^{-}\). Pair-produced \(W\)'s can then decay leptonically to \((\ell,\nu_{\ell})\) or hadronically to \((j,j)\). In addition to breaking the invariance, the measurement of these four-fermion final states also provides additional observables for the electroweak precision study. Recently, the SMEFT contributions to these processes have been computed assuming both resonant and non-resonant processes. Details of these observables and their chi-squared analysis have been studied in detail in [57].

## 4 Going Beyond EWPO: \(\Delta_{CKM}\)

The aim of this section is to understand the SMEFT operators that enter LEFT observables like muon decay and \(\beta\)-decay, which are relevant to the electroweak precision study. In [58], the authors studied the maximal deviation of \(\Delta_{CKM}\) allowed by electroweak precision measurements. Their study was specific to the combination of the \(\mathcal{O}_{ll},\mathcal{O}_{lq_{3}},\mathcal{O}_{Hl_{3}}\) and \(\mathcal{O}_{Hq_{3}}\) SMEFT operators that contribute at tree level to the \(\Delta_{CKM}\) constraint, in the limit of \(U(3)^{5}\) invariance. Since the SMEFT operators \({\cal O}_{HD}\) and \({\cal O}_{HWB}\), among others given in Table 1, are crucial to the electroweak study, here we ask how much they influence \(\Delta_{CKM}\) at 1-loop order. Moreover, studying purely bosonic operators is crucial: direct LHC bounds on these operators are not as stringent as those on the leptonic ones, and in a scenario where New Physics couples only to bosons it becomes important to understand these operators and their contributions to low-energy observables. Hence, in this section, we match the muon decay and \(\beta\)-decay to 1-loop order, thus including the SMEFT operators given in Table 1 in the \(\Delta_{CKM}\) constraint. Assuming \(U(3)^{5}\) flavour symmetry [58], the correction to CKM unitarity is given by \[\Delta_{CKM}=|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}-1. \tag{35}\] The CKM matrix element \(|V_{ud}|\) is precisely measured from the study of superallowed \(0^{+}\to 0^{+}\) beta decays. A crucial input for its correct determination is the precision of \(G_{F}\). To understand the New Physics contribution to \(G_{F}\), let us study the muon decay process.
Hence, in this section, we match the muon decay and \(\beta-\)decay to 1-loop order, thus including the SMEFT operators given in Table 1 in the \(\Delta_{CKM}\) constraint. Assuming \(U(3)^{5}\) flavour symmetry [58], the correction to CKM unitarity is given by \[\Delta_{CKM}=|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}-1. \tag{35}\] The CKM matrix element \(|V_{ud}|\) is precisely measured from the study of super allowed \(0^{+}\to 0^{+}\) beta decay. A crucial input for the correct determination is the precision of \(G_{F}\). To understand the New Physics contribution to \(G_{F}\), lets study the muon decay process. ### Muon Decay At low energies, the Lagrangian density that leads to the process \(\mu\to e\overline{\nu}_{e}\nu_{\mu}\) is given by \[{\cal L}_{\mu}=L_{\nu e}^{V,LL}(\overline{\nu}_{L\mu}\gamma^{\mu}\nu_{Le})( \overline{e}_{L}\gamma_{\mu}\mu_{L})+L_{\nu e}^{V,LR}(\overline{\nu}_{L\mu} \gamma^{\mu}\nu_{Le})(\overline{e}_{R}\gamma_{\mu}\mu_{R})\, \tag{36}\] where, all the WCs are measured at the muon mass scale \(\mu=m_{\mu}\). At tree-level, the matching of the LEFT Lagrangian with the SMEFT one gives, \[L_{\nu e\ tree}^{V,LL} = -\frac{2}{v^{2}}+\Big{(}C_{ll(1,2,2,1)}+C_{ll(2,1,1,2)})-2(C_{Hl _{3}(2,2)}+C_{Hl_{3}(1,1)})\Big{)}U_{(1,1)}U_{(2,2)}^{\dagger}. \tag{37}\] At 1-loop level, the matching of LEFT Lagrangian with the SMEFT Lagrangian given, \[L_{\nu e\ 1-loop}^{V,LL} = -1.48946\log\left(\mu_{W}^{2}\right)-6.8206\log\left(\mu_{W}^{2} \right)\Big{(}0.024(C_{Hl_{3}(1,1)}+C_{Hl_{3}(2,2)}) \tag{38}\] \[+ 0.252418C_{Hq_{3}(1,1)}-0.0029(C_{ll(1,1,2,2)}+C_{ll(2,2,1,1)})\] \[+ 0.001(C_{ll(1,2,2,1)}+C_{ll(2,1,1,2)})+0.463C_{HD}+0.0449C_{HWB} \overline{g}\Big{)}\] \[- 0.0025(C_{Hl_{1}(1,1)}+C_{Hl_{1}(2,2)})+0.0473(C_{Hl_{3}(1,1)}+C _{Hl_{3}(2,2)})+1.0119C_{Hq_{3}(1,1)}\] \[- 0.0206(C_{ll(1,1,2,2)}+C_{ll(2,2,1,1)})+0.0067(C_{ll(1,2,2,1)}+C _{ll(2,1,1,2)})\] \[+ 2.0729C_{HD}+0.3125C_{HWB}\overline{g}\] Note that Standard Model only has the left handed currents, and hence the \(L^{V,LR}\) are generated by new physics beyond the cut-off scale \(\Lambda\). Thus all square terms of this WCs are \(\frac{v^{4}}{\Lambda^{7}}\) suppressed in comparison with its interference term with \(L^{V,LL}\). Moreover, since the electron in \(L^{V,LL}\) and \(L^{V,LR}\) have opposite hierarchies, the interference term will be helicity suppressed. Moreover, since the tree and 1-loop matching of \(L_{\nu e}^{V,LR}\) operator with SMEFT Lagrangian consists of only \(C_{le(2,1,1,2)}\) operator, electro-weak precision study will not play any role in constraining them. Hence we assume that this SMEFT operator vanishes and neglect the corresponding LEFT operator in further analysis. This means that, \[L_{\nu e}^{V,LL}=L_{\nu e\ tree}^{V,LL}+L_{\nu e\ 1-loop}^{V,LL}=-\frac{4G_{F} }{\sqrt{2}}\, \tag{39}\] evaluated at \(\mu_{W}=M_{Z}\) and is fixed by muon decay measurement. 
### Beta Decay

The low-energy effective Lagrangian for the process \(d\to u\,e^{-}\overline{\nu}_{e}\) is given by \[{\cal L}_{n} = L_{\nu edu}^{V,LL}(\overline{\nu}_{L}\gamma^{\mu}e_{L})(\overline{u}_{L}\gamma_{\mu}d_{L})+L_{\nu edu}^{V,LR}(\overline{\nu}_{L}\gamma^{\mu}e_{L})(\overline{u}_{R}\gamma_{\mu}d_{R})+L_{\nu edu}^{S,RR}(\overline{\nu}_{L}e_{R})(\overline{u}_{L}d_{R}) \tag{40}\] \[+ L_{\nu edu}^{S,RL}(\overline{\nu}_{L}e_{R})(\overline{u}_{R}d_{L})+L_{\nu edu}^{T,RR}(\overline{\nu}_{L}\sigma^{\mu\nu}e_{R})(\overline{u}_{L}\sigma_{\mu\nu}d_{R}),\] where \(L_{\nu edu}^{V,LL},L_{\nu edu}^{V,LR},L_{\nu edu}^{S,RR},L_{\nu edu}^{S,RL},L_{\nu edu}^{T,RR}\) are the LEFT WCs. At tree level, the matching of the above Lagrangian with the SMEFT Lagrangian gives \[L_{\nu edu\ tree}^{V,LL} = \Big{(}-\frac{2}{v^{2}}-2(C_{Hl_{3}(1,1)}+C_{Hq_{3}(1,1)})\Big{)}U_{(1,1)}^{\dagger}V_{(1,1)}^{\dagger}. \tag{41}\] Matching this Lagrangian with the SMEFT Lagrangian at 1-loop level, we get \[L_{\nu edu\ 1-loop}^{V,LL} = U_{11}^{\dagger}V_{11}^{\dagger}\Big{(}(0.0241\log\left(\mu_{W}^{2}\right)+0.0595)C_{Hl_{3}(1,1)} \tag{42}\] \[- (0.0139\log\left(\mu_{W}^{2}\right)+0.0712)C_{Hq_{1}(1,1)}+(0.1007\log\left(\mu_{W}^{2}\right)+0.3931)C_{Hq_{3}(1,1)}\] \[+ (0.1338\log\left(\mu_{W}^{2}\right)+0.592)C_{HD}-(0.0115\log\left(\mu_{W}^{2}\right)+0.0715)C_{HWB}\overline{g_{1}}\] \[+ 0.0007C_{Hl_{1}(1,1)}\Big{)}.\] The contributions from \(L_{\nu edu}^{V,LR},L_{\nu edu}^{S,RR}\) and \(L_{\nu edu}^{T,RR}\) do not contain any SMEFT coefficients that contribute to the EWPO. Then, up to 1-loop order, \[L_{\nu edu}^{V,LL}=L_{\nu edu\ tree}^{V,LL}+L_{\nu edu\ 1-loop}^{V,LL}=\frac{4G_{F}}{\sqrt{2}}V_{ud}\,. \tag{43}\] Using Eq. 39, we get \[V_{ud}=\frac{L_{\nu edu}^{V,LL}}{L_{\nu e}^{V,LL}} = \frac{L_{\nu edu\ tree}^{V,LL}+L_{\nu edu\ 1-loop}^{V,LL}}{L_{\nu e\ tree}^{V,LL}+L_{\nu e\ 1-loop}^{V,LL}} \tag{44}\] \[= V_{11}(1+\delta_{SMEFT\ tree}+\delta_{SMEFT\ 1-loop})\,,\] \[\Delta_{CKM}=\delta_{SMEFT\ tree}+\delta_{SMEFT\ 1-loop}, \tag{45}\] where \[\delta_{SMEFT\ tree} = 2(C_{Hl_{3}(1,1)}+C_{Hq_{3}(1,1)})\] \[+ (-2C_{Hl_{3}(2,2)}+C_{ll(1,2,2,1)}+C_{ll(2,1,1,2)}-2C_{Hl_{3}(1,1)}),\] \[\delta_{SMEFT\ 1-loop} = -0.0029C_{Hl_{1}(1,1)}-(0.0026\log\left(\mu_{W}^{2}\right)+0.0172)C_{Hl_{3}(1,1)}\] \[+ (0.0139\log\left(\mu_{W}^{2}\right)+0.0711)C_{Hq_{1}(1,1)}\] \[- (0.0255\log\left(\mu_{W}^{2}\right)+0.0917)C_{Hq_{3}(1,1)}\] \[+ (0.004\log\left(\mu_{W}^{2}\right)+0.0252)C_{HD}\] \[+ (0.0249\log\left(\mu_{W}^{2}\right)+0.1645)C_{HWB}\overline{g_{1}}. \tag{46}\] Assuming flavour-universal scenarios, and hence dropping the flavour indices, the SMEFT contributions can be written as: \[\delta_{SMEFT\ tree} = 2(C_{Hl_{3}}+C_{Hq_{3}})+(-2C_{Hl_{3}}+C_{ll}+C_{ll}-2C_{Hl_{3}})\] \[= (2C_{ll}-2C_{Hl_{3}}+2C_{Hq_{3}}),\] \[\delta_{SMEFT\ 1-loop} = -0.0029C_{Hl_{1}}-(0.0026\log\left(\mu_{W}^{2}\right)+0.0172)C_{Hl_{3}}\] \[+ (0.0139\log\left(\mu_{W}^{2}\right)+0.0711)C_{Hq_{1}}\] \[- (0.0255\log\left(\mu_{W}^{2}\right)+0.0917)C_{Hq_{3}}\] \[+ (0.004\log\left(\mu_{W}^{2}\right)+0.0252)C_{HD}\] \[+ (0.0249\log\left(\mu_{W}^{2}\right)+0.1645)C_{HWB}\overline{g_{1}}. \tag{47}\] Note that \(V_{ud}=V_{11}\) in the limit where all the SMEFT coefficients vanish, and the tree-level result matches that of [58].
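For reference, Eq. (47) can be transcribed into a small evaluator (a sketch; the overall \(v^{2}/\Lambda^{2}\) normalization of the WCs is left implicit, exactly as in the expressions above):

```python
import numpy as np

L = np.log(91.1876**2)   # log(mu_W^2) at the matching scale mu_W = M_Z

def delta_ckm(C_Hl1, C_Hl3, C_Hq1, C_Hq3, C_HD, C_HWB_g1, C_ll):
    """Flavour-universal Delta_CKM of Eq. (47): tree level plus 1-loop."""
    tree = 2 * C_ll - 2 * C_Hl3 + 2 * C_Hq3
    loop = (-0.0029 * C_Hl1
            - (0.0026 * L + 0.0172) * C_Hl3
            + (0.0139 * L + 0.0711) * C_Hq1
            - (0.0255 * L + 0.0917) * C_Hq3
            + (0.0040 * L + 0.0252) * C_HD
            + (0.0249 * L + 0.1645) * C_HWB_g1)
    return tree + loop
```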
## 5 Fitting procedure

The results of fits involving the EWPO have a strong dependence on the input scheme [49]. In our analyses, we choose the \((G_{F},M_{Z},\alpha)\) scheme, and the input parameters we use are as follows: \[G_{F} = 1.1663787(6)\times 10^{-5}\,\mbox{GeV}^{-2}\] \[M_{Z} = 91.1876\pm 0.0021\,\mbox{GeV}\] \[\frac{1}{\alpha_{e}} = 137.035999139(31)\] Let the contribution of the higher-dimensional operators, along with the SM, to the electroweak observables be represented as \[\mathcal{O}_{i}^{SMEFT} = \mathcal{O}_{i}^{SM}+\delta\mathcal{O}_{i}^{SMEFT}\,, \tag{49}\] where the \(\delta\mathcal{O}_{i}^{SMEFT}\)'s are functions of the WCs \(C_{i}\). For the \(\mathcal{O}_{i}^{SM}\) in Eq. 49, we have collected the most precisely calculated values, which are given in Table 3. The form of the \(\chi^{2}\) is given by \[\chi^{2} = \Sigma_{i,j}(\mathcal{O}_{i}^{exp}-\mathcal{O}_{i}^{SMEFT})\sigma_{ij}^{-2}(\mathcal{O}_{j}^{exp}-\mathcal{O}_{j}^{SMEFT})\,, \tag{50}\] where the covariance matrix is \(\sigma_{ij}^{2}=\Delta_{i}^{exp}\rho_{ij}^{exp}\Delta_{j}^{exp}+\Delta_{i}^{th}\rho_{ij}^{th}\Delta_{j}^{th}\). Here \(\rho_{ij}^{exp}\) is the experimental correlation matrix obtained from [19] and \(\rho_{ij}^{th}\) is the identity. \(\Delta_{i}^{exp}\) and \(\Delta_{i}^{th}\) are the experimental and theory errors of the \(i\)-th observable. Inclusion of the theoretical uncertainties and the correlations among the observables leads to better constraints on the parameters of interest. In Table 3, we summarize the current status of the SM theory and the experimental results, including \(\Delta_{CKM}\). First we compute the \(\chi^{2}\) including the relevant observables from Table 3. The \(\chi^{2}\) thus becomes a function of the WCs which affect the observables that we are interested in. The fit can be carried out by including all the relevant WCs or by taking only a subset. The \(\chi^{2}\) function is then minimized to get the best-fit values of the WCs, and the \(1\sigma\) range of each WC is calculated by profiling over the other parameters. This is done in the following way: for the parameter \(c_{i}\) of our interest, we choose a random value and then define a partial \(\chi^{2}\) as \[\chi^{2}_{par}=\chi^{2}(c_{i}=x_{0},c_{j}),\] where \(x_{0}\) is one value of the random variable \(c_{i}\) and \(c_{j}\) are the other parameters present in the \(\chi^{2}\). We then define the following test statistic, which is the difference between the minimum of the partial \(\chi^{2}\) and the minimum of the total \(\chi^{2}\): \[\min_{c_{j}}\chi^{2}(c_{i}=x_{0},c_{j})-\min_{c_{i},c_{j}}\chi^{2}(c_{i},c_{j}).\] For various random values of \(c_{i}\), we evaluate this test statistic and use the results to obtain an interpolating function. The marginalized result is then obtained by demanding that the value of the interpolating function be smaller than the critical value of the \(\chi^{2}\) distribution for a single variable. The correlation between two WCs affecting the fit can be calculated in a similar way by treating the two WCs at hand as random variables and then marginalizing over the others.
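The profiling procedure just described can be sketched as follows, assuming a user-supplied prediction function for \(\mathcal{O}^{SMEFT}\) and an inverse covariance matrix (illustrative Python, not the code used for the fits in this work):

```python
import numpy as np
from scipy.optimize import minimize

def chi2(c, O_exp, O_smeft, cov_inv):
    """Eq. (50): residual^T Cov^-1 residual, for WC vector c."""
    r = O_exp - O_smeft(c)
    return r @ cov_inv @ r

def profile_1d(i, x0, c_init, O_exp, O_smeft, cov_inv):
    """Test statistic min_{c_j} chi2(c_i = x0, c_j) - min chi2:
    the WC c_i is held fixed at x0 while all others are profiled."""
    free = [j for j in range(len(c_init)) if j != i]

    def restricted(cf):
        c = np.array(c_init, float)
        c[i] = x0
        c[free] = cf
        return chi2(c, O_exp, O_smeft, cov_inv)

    best_fixed = minimize(restricted, np.array(c_init, float)[free]).fun
    best_all = minimize(lambda c: chi2(c, O_exp, O_smeft, cov_inv), c_init).fun
    return best_fixed - best_all   # compare to 1 for a 68% (1 sigma) interval
```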
Along with them, we append the \(V\) parameter, which parametrises the change in \(G_{F}\). This parameter is crucial if we are to make the quarks and leptons interact with BSM resonances. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Measurement & Experiment & Precise Theory & Pull \\ \hline \hline \(\Gamma_{Z}\)(GeV) & \(2.4955\pm 0.0023\) & \(2.4945\pm 0.0006\) & 0.42 \\ \hline \(\sigma_{h}\)(nb) & \(41.481\pm 0.033\) & \(41.482\pm 0.008\) & -0.29 \\ \hline \(R_{l}\) & \(20.767\pm 0.025\) & \(20.749\pm 0.009\) & 0.67 \\ \hline \(R_{b}\) & \(0.21629\pm 0.00066\) & \(0.21582\pm 0.00002\) & 0.71 \\ \hline \(R_{c}\) & \(0.1721\pm 0.0030\) & \(0.17221\pm 0.00003\) & -0.03 \\ \hline \(R_{uc}\) & \(0.166\pm 0.009\) & \(0.172227\pm 0.000032\) & -0.69 \\ \hline \(A_{l}\) & \(0.1465\pm 0.0033\) & \(0.1468\pm 0.0003\) & -0.09 \\ \hline \(A_{l}(SLD)\) & \(0.1513\pm 0.0021\) & \(0.1468\pm 0.0003\) & 2.12 \\ \hline \(A_{b}\) & \(0.923\pm 0.020\) & \(0.92699\pm 0.00006\) & -0.19 \\ \hline \(A_{c}\) & \(0.670\pm 0.027\) & \(0.6677\pm 0.0001\) & 0.08 \\ \hline \(A_{s}\) & \(0.895\pm 0.020\) & \(0.9356\pm 0.00004\) & -0.44 \\ \hline \(A_{l,FB}\) & \(0.0171\pm 0.0010\) & \(0.01617\pm 0.00007\) & 0.92 \\ \hline \(A_{b,FB}\) & \(0.0996\pm 0.0016\) & \(0.1029\pm 0.0002\) & -2.04 \\ \hline \(A_{c,FB}\) & \(0.0707\pm 0.0035\) & \(0.0735\pm 0.0002\) & -0.79 \\ \hline \(M_{W}\)(GeV) & \(80.4133\pm 0.008\) & \(80.360\pm 0.006\) & 5.33 \\ \hline \(\Gamma_{W}\)(GeV) & \(2.085\pm 0.042\) & \(2.0904\pm 0.0003\) & -0.13 \\ \hline \(BR_{W\rightarrow\nu l}\) & \(0.1086\pm 0.0009\) & \(0.108271\pm 0.000024\) & 0.36 \\ \hline \hline \(\Delta_{CKM}\) & \(-0.0015\pm 0.0007\) & \(0\pm 0\) & -2.14 \\ \hline \end{tabular} \end{table} Table 3: Experimental and theoretical values and uncertainties of the observables. Pulls of the measurements with respect to the best theory are also provided in the last column. As shown in Eq. 6, this shift in \(G_{F}\) can be mapped to the combination \(2C_{Hl_{3}}-C_{ll}\). The expressions for the electroweak observables in terms of this S, T and V parametrisation are given in Table 4. For \(\Delta_{CKM}\), we infer the result by considering only the tree-level matching, where the dependence on V comes from the muon decay parameters sitting in the denominator of Eq. 44. As can be seen from Table 4, the V parameter is particularly important for \(M_{W}\), which has the highest sensitivity to it, as can be inferred from its large coefficient of V compared to the other observables. In Figure 1, we have plotted the change in the S, T and V parameters with EWPO data and EWPO + \(\Delta_{CKM}\) data in a 2-D plane by marginalising over the third parameter. Note that the inclusion of \(\Delta_{CKM}\) constrains the Peskin-Takeuchi parameters better in comparison with just the EWPO data. The results of the fit are presented in Table 5. The posterior values of the observables after the (S,T,V) fit can be read off from Table 6. Even without including the \(\Delta_{CKM}\) constraint, we can see that this parametrisation has very good agreement with the experimental results of \(M_{W}\) and \(\Delta_{CKM}\) simultaneously. 
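As a quick numerical cross-check of this statement, the following minimal sketch propagates the EWPO+\(\Delta_{CKM}\) best-fit values of Table 5 through the linearized expressions of Table 4:

```python
# Best-fit (Delta S, Delta T, Delta V) from Table 5 (EWPO + CKM column).
dS, dT, dV = -0.0292, 0.1631, 0.0013
MW_SM = 80.360                 # SM prediction of M_W from Table 3 (GeV)

# Linearized expressions from Table 4.
MW_post = MW_SM - 0.25649 * dS + 0.404143 * dT - 14.9135 * dV
dCKM_post = -dV

print(f"M_W posterior      : {MW_post:.4f} GeV (measured 80.4133 +- 0.0080)")
print(f"Delta_CKM posterior: {dCKM_post:+.4f}  (measured -0.0015 +- 0.0007)")
# Gives M_W ~ 80.414 GeV and Delta_CKM ~ -0.0013, reproducing the posteriors
# of Table 6 and the simultaneous agreement noted above.
```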
The discrepancy in \(A_{l}^{SLD}\) and \(A_{b}^{FB}\) also moves below \(2\sigma\), thereby making this a \begin{table} \begin{tabular}{|l|l|} \hline Observables & Expression (S, T, V) \\ \hline \(\Gamma_{\rm Z}\) & \(\Gamma_{\rm Z}^{\rm SM}-0.009131\,\Delta\)S + 0.024158\(\,\Delta\)T - 3.31062\(\,\Delta\)V \\ \hline \(\sigma_{\rm h}\) & \(\sigma_{\rm h}^{\rm SM}-0.047145\,\Delta\)S + 0.03152\(\,\Delta\)T - 4.31945\(\,\Delta\)V \\ \hline \(\rm R_{e}\) & \(R_{\rm e}^{\rm SM}-0.020222\,\Delta\)S + 0.013519\(\,\Delta\)T - 1.85268\(\,\Delta\)V \\ \hline \(\rm R_{b}\) & \(R_{\rm b}^{\rm SM}+0.000163\,\Delta\)S - 0.000109\(\,\Delta\)T + 0.014965\(\,\Delta\)V \\ \hline \(\rm R_{c}\) & \(R_{\rm c}^{\rm SM}-0.000245\,\Delta\)S + 0.000163\(\,\Delta\)T - 0.022448\(\,\Delta\)V \\ \hline \(\rm A_{l}\) & \(A_{\rm l}^{\rm SM}-0.023672\,\Delta\)S + 0.015827\(\,\Delta\)T - 2.16886\(\,\Delta\)V \\ \hline \(\rm A_{b}\) & \(A_{\rm b}^{\rm SM}-0.00179\,\Delta\)S + 0.001197\(\,\Delta\)T - 0.163999\(\,\Delta\)V \\ \hline \(\rm A_{c}\) & \(A_{\rm c}^{\rm SM}-0.009707\,\Delta\)S + 0.006489\(\,\Delta\)T - 0.889361\(\,\Delta\)V \\ \hline \(\rm A_{e}^{FB}\) & \(A_{\rm eFB}^{\rm SM}-0.010511\,\Delta\)S + 0.007027\(\,\Delta\)T - 0.962983\(\,\Delta\)V \\ \hline \(\rm A_{b}^{FB}\) & \(A_{\rm bFB}^{\rm SM}-0.017214\,\Delta\)S + 0.011508\(\,\Delta\)T - 1.5771\(\,\Delta\)V \\ \hline \(\rm A_{c}^{FB}\) & \(A_{\rm cFB}^{\rm SM}-0.015128\,\Delta\)S + 0.01011\(\,\Delta\)T - 1.38607\(\,\Delta\)V \\ \hline \(\rm M_{W}\) & \(M_{W}^{\rm SM}-0.25649\,\Delta\)S + 0.404143\(\,\Delta\)T - 14.9135\(\,\Delta\)V \\ \hline \(\rm\Gamma_{w}\) & \(\Gamma_{w}^{\rm SM}-0.01985\,\Delta\)S + 0.031278\(\,\Delta\)T - 3.24223\(\,\Delta\)V \\ \hline \(\rm BR_{W\to\nu l}\) & \(\rm BR_{W\to\nu l}^{\rm SM}+0.000702\,\Delta\)S - 0.001106\(\,\Delta\)T + 0.040704\(\,\Delta\)V \\ \hline \(\Delta_{CKM}\) & \(-\Delta\)V \\ \hline \end{tabular} \end{table} Table 4: Expressions for observables in terms of \(\Delta S\), \(\Delta T\) and \(\Delta V\). \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline WC & B.F(EWPO) & Correlation & B.F(EWPO+CKM) & \multicolumn{2}{|c|}{Correlation} \\ \hline & \multicolumn{4}{|c|}{(\(\chi_{\rm fit}^{2}/\chi_{\rm SM}^{2}=11.41/40.08\))} & \multicolumn{4}{|c|}{(\(\chi_{\rm fit}^{2}/\chi_{\rm SM}^{2}=11.68/44.67\))} \\ \hline \(\Delta S\) & \(-0.0032\pm 0.1077\) & 1.00 & & \(-0.0292\pm 0.0965\) & 1.00 & & \\ \(\Delta T\) & \(0.1646\pm 0.0649\) & 0.79 & 1.00 & & \(0.1631\pm 0.06484\) & 0.86 & 1.00 & \\ \(\Delta V\) & \(-0.001\pm 0.0007\) & \(-0.59\) & \(-0.07\) & 1.00 & \(0.0013\pm 0.0005\) & \(-0.44\) & \(-0.04\) & 1.00 \\ \hline \end{tabular} \end{table} Table 5: Global fit of the (S,T,V) parametrisation at tree-level to EWPO and \(\Delta_{CKM}\). parametrisation worth considering when looking for BSM physics. Inclusion of \(\Delta_{CKM}\) in the fit improves its agreement with the measured value, while slightly deteriorating \(A_{l}^{SLD}\) and \(A_{b}^{FB}\). ### Tree level study: \(C_{Hl_{3}}\) and \(C_{ll}\) As already seen from the expressions of the recent anomalies in terms of the SMEFT WCs, at tree level \(C_{Hl_{3}}\) and \(C_{ll}\) are the ones common to both \(m_{W}\) and \(\Delta_{CKM}\). Incidentally, these WCs are also the ones which affect the weak coupling constant \(G_{F}\). 
However, this goes beyond just the shift in \(G_{F}\), \begin{table} \begin{tabular}{|l|c|c|c|} \hline Observables & Posterior(EWPO) & Pull & Posterior(EWPO+\(\Delta_{\text{CKM}}\)) & Pull \\ \hline \(\Gamma_{\text{Z}}\) & \(2.49534\pm 0.00199\) & 0.05 & \(2.49452\pm 0.001021\) & 0.39 \\ \hline \(\sigma_{\text{h}}\) & \(41.4832\pm 0.006398\) & \(-0.07\) & \(41.483\pm 0.004735\) & \(-0.06\) \\ \hline \(\text{R}_{\text{e}}\) & \(20.7495\pm 0.0027444\) & 0.69 & \(20.7494\pm 0.002031\) & 0.7 \\ \hline \(\text{R}_{\text{b}}\) & \(0.215816\pm 0.000022\) & 0.72 & \(0.215816\pm 0.000016\) & 0.72 \\ \hline \(\text{R}_{\text{c}}\) & \(0.172216\pm 0.000033\) & \(-0.04\) & \(0.172215\pm 0.000025\) & \(-0.04\) \\ \hline \(\text{R}_{\text{uc}}\) & \(0.172233\pm 0.000033\) & \(-0.69\) & \(0.172232\pm 0.000024\) & \(-0.69\) \\ \hline \(\text{A}_{\text{l}}\) & \(0.147408\pm 0.003213\) & \(-0.19\) & \(0.147313\pm 0.002378\) & \(-0.19\) \\ \hline \(\text{A}_{\text{l}}^{\text{SLD}}\) & \(0.147407\pm 0.003212\) & 1.01 & \(0.147313\pm 0.002378\) & 1.26 \\ \hline \(\text{A}_{\text{b}}\) & \(0.927036\pm 0.000243\) & \(-0.2\) & \(0.927029\pm 0.000179\) & \(-0.2\) \\ \hline \(\text{A}_{\text{c}}\) & \(0.667949\pm 0.001317\) & 0.07 & \(0.66791\pm 0.000975\) & 0.08 \\ \hline \(\text{A}_{\text{s}}\) & \(0.935646\pm 0.000243\) & \(-0.45\) & \(0.935639\pm 0.000179\) & \(-0.45\) \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.016439\pm 0.001426\) & 0.38 & \(0.016398\pm 0.001056\) & 0.48 \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.103342\pm 0.002336\) & \(-1.32\) & \(0.103273\pm 0.001728\) & \(-1.56\) \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.073888\pm 0.002053\) & \(-0.78\) & \(0.073828\pm 0.001519\) & \(-0.82\) \\ \hline \(\text{M}_{\text{W}}\) & \(80.4131\pm 0.013023\) & 0.01 & \(80.4143\pm 0.006242\) & \(-0.09\) \\ \hline \(\Gamma_{\text{w}}\) & \(2.09251\pm 0.002631\) & \(-0.18\) & \(2.09197\pm 0.001559\) & \(-0.16\) \\ \hline \(\text{BR}_{\text{W}\to\nu\text{l}}\) & \(0.108126\pm 0.000036\) & 0.53 & \(0.108122\pm 0.000017\) & 0.53 \\ \hline \(\Delta_{\text{CKM}}\) & \(-0.0009561\pm 0.000778\) & \(-0.52\) & \(-0.00126043\pm 0.0005156\) & \(-0.27\) \\ \hline \end{tabular} \end{table} Table 6: Observables posteriors using the STV fit. Figure 1: S, T and V parameters with EWPO and EWPO + \(\Delta_{CKM}\) data because of the additional dependences on \(C_{Hl_{3}}\) in the shifts in the couplings of the observables, as shown in Table 2. Similar to the (S,T,V) fit, we present two distinct cases: first we show the fit to the EWPO provided by LEP-I; we then augment the set of observables by including \(\Delta_{CKM}\), and present the corresponding pulls of the observables after each step. As seen from Figure 2, the inclusion of \(\Delta_{CKM}\) has a more constraining effect on the two WCs. Table 7 presents the results of the fit and the correlation among the WCs. The posterior results for the observables are given in Table 8. For the first case, with only the EWPO observables in the fit, the discrepancy in \(M_{W}\) comes down to 2.85\(\sigma\). The discrepancy in \(A_{b}^{FB}\), however, deteriorates to -2.9\(\sigma\), while the discrepancy in \(\Delta_{CKM}\) comes down to a mere 0.14\(\sigma\). Inclusion of the \(\Delta_{CKM}\) constraint in the fit does not constrain it significantly further. Thus one can argue that any New Physics effect coming through these two parameters (or \(\delta G_{F}\)) cannot fully satisfy the EWPO data. 
This also strengthens the case for the (S,T,V) parametrisation, where the presence of the extra parameters S and T (or equivalently \(C_{HD}\) and \(C_{HWB}\)) helps us to explain all the observables within a 2\(\sigma\) discrepancy. ### Tree level study: \(C_{HD}\), \(C_{HWB}\), \(C_{Hl_{3}}\) and \(C_{ll}\) With the result from CDF impacting the world-average of the W-mass, the discrepancy now stands at around 5.3\(\sigma\) from the SM value. Taking it as a sign of BSM physics, we look at a fit with just the SMEFT operators contributing to the W-mass at tree-level. The shift in the W mass in SMEFT can be written as: \[\frac{\delta m_{W}^{2}}{m_{W}^{2}}=v^{2}\ \frac{s_{W}c_{W}}{s_{W}^{2}-c_{W}^{2}} \left[2\,C_{HWB}+\frac{c_{W}}{2s_{W}}\,C_{HD}+\frac{s_{W}}{c_{W}}\left(2\,C_{ Hl}^{(3)}-C_{ll}\right)\right]\,. \tag{51}\] This can also be thought of as augmenting the previous fit by including two more parameters, \(C_{HD}\) and \(C_{HWB}\). The results of the fit and the 2-D marginalised plots are shown in Table 9 and Figure 3. The plot here shows that the inclusion of the \(\Delta_{CKM}\) constraint has a significant impact on the results of the \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline WC & B.F(LEP-I) & Correlation & B.F(LEP-I+CKM) & Correlation \\ \hline & \((\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=24.27/40.08)\) & \multicolumn{3}{|c|}{\((\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=105.36/126.37)\)} \\ \hline \(C_{Hl_{3}}\) & -0.0138 \(\pm\) 0.0065 & 1.00 & & -0.0152 \(\pm\) 0.0058 & 1.00 \\ \(C_{ll}\) & -0.0008 \(\pm\) 0.0114 & 0.86 & 1.00 & -0.0034 \(\pm\) 0.0084 & 0.88 & 1.00 \\ \hline \end{tabular} \end{table} Table 7: Global fit of the WCs common to \(W\)-mass and \(\Delta_{CKM}\) at tree-level. Figure 2: Marginalised 2-D plot for WCs common to \(W\)-mass and \(\Delta_{CKM}\) at tree-level. 
This study thus indicates that we need to go beyond these four WCs in order to satisfy the preces \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Observables & Posterior(EWPO) & Pull & Posterior(EWPO+\(\Delta_{\text{CKM}}\)) & Pull \\ \hline \(\Gamma_{Z}\) & \(2.49861\pm 0.001182\) & -1.2 & \(2.49851\pm 0.000902\) & -1.2 \\ \hline \(\sigma_{h}\) & \(41.454\pm 0.015843\) & 0.74 & \(41.453\pm 0.014626\) & 0.77 \\ \hline \(\text{R}_{e}\) & \(20.7907\pm 0.018776\) & -0.76 & \(20.7917\pm 0.017961\) & -0.8 \\ \hline \(\text{R}_{b}\) & \(0.215795\pm 6.2\times 10^{-6}\) & 0.75 & \(0.215796\pm 5.7\times 10^{-6}\) & 0.75 \\ \hline \(\text{R}_{c}\) & \(0.172247\pm 9.2\times 10^{-6}\) & -0.05 & \(0.172246\pm 8.5\times 10^{-6}\) & -0.05 \\ \hline \(\text{R}_{\text{uc}}\) & \(0.172264\pm 9.3\times 10^{-6}\) & -0.69 & \(0.172263\pm 8.5\times 10^{-6}\) & -0.69 \\ \hline \(\text{A}_{\text{t}}\) & \(0.149023\pm 0.000795\) & -0.74 & \(0.148943\pm 0.000547\) & -0.73 \\ \hline \(\text{A}_{\text{t}}^{\text{StD}}\) & \(0.149022\pm 0.000795\) & 1.01 & \(0.148943\pm 0.000547\) & 1.08 \\ \hline \(\text{A}_{\text{b}}\) & \(0.92726\pm 0.000068\) & -0.21 & \(0.927256\pm 0.000062\) & -0.21 \\ \hline \(\text{A}_{\text{c}}\) & \(0.669162\pm 0.000367\) & 0.03 & \(0.669143\pm 0.000337\) & 0.03 \\ \hline \(\text{A}_{\text{s}}\) & \(0.93587\pm 0.000067\) & -0.45 & \(0.935866\pm 0.000062\) & -0.45 \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.017157\pm 0.000353\) & -0.05 & \(0.017121\pm 0.000243\) & -0.02 \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.104539\pm 0.000575\) & -2.9 & \(0.104481\pm 0.000398\) & -2.96 \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.075043\pm 0.000498\) & -1.23 & \(0.074995\pm 0.000355\) & -1.22 \\ \hline \(\text{M}_{W}\) & \(80.3845\pm 0.006159\) & 2.85 & \(80.3842\pm 0.005656\) & 2.97 \\ \hline \(\text{F}_{\text{w}}\) & \(2.09455\pm 0.001162\) & -0.23 & \(2.09445\pm 0.000905\) & -0.22 \\ \hline \(\text{Br}_{\text{W}\to\text{M}}\) & \(0.108079\pm 0.000069\) & 0.58 & \(0.108077\pm 0.000068\) & 0.58 \\ \hline \(\Delta_{\text{CKM}}\) & \(-0.0016499\pm 0.0007857\) & 0.14 & \(-0.0015679\pm 0.0005242\) & 0.08 \\ \hline \end{tabular} \end{table} Table 8: Posterior results and Pull for WCs common to \(M_{W}\) and \(\Delta_{CKM}\), once just with EWPO observables and once including the \(\Delta_{CKM}\) constraint. \begin{table} \begin{tabular}{|c|c c|c c|c c c|} \hline WC & B.F(EWPO) & \multicolumn{2}{c|}{Correlation} & B.F(EWPO+CKM) & \multicolumn{2}{c|}{Correlation} \\ \hline & \multicolumn{4}{c|}{\((\chi_{\text{ll}}^{2}/\chi_{\text{SM}}^{2}=11.07/40.08)\)} & \multicolumn{4}{c|}{\((\chi_{\text{ll}}^{2}/\chi_{\text{SM}}^{2}=14.96/44.67)\)} \\ \hline \(C_{HD}\) & \(-0.0376\pm 0.0159\) & 1.00 & & \(-0.0392\pm 0.0159\) & 1.00 & & \\ \(C_{HWB}\) & \(-0.00007\pm 0.0079\) & \(-0.76\) & 1.00 & & \(0.0085\pm 0.0066\) & \(-0.87\) & 1.00 & \\ \(C_{Hl_{3}}\) & \(-0.0042\pm 0.0072\) & \(-0.21\) & \(-0.04\) & 1.00 & & \(-0.0048\pm 0.0072\) & \(-0.21\) & \(-0.01\) & 1.00 & \\ \(C_{ll}\) & \(-0.0198\pm 0.0146\) & \(-0.15\) & 0.51 & 0.47 & 1.00 & \(0.0028\pm 0.0091\) & \(-0.19\) & 0.16 & 0.81 & 1.00 \\ \hline \end{tabular} \end{table} Table 9: Global fit of the WCs affecting the W-boson mass at tree-level ### VLL inspired study Inspired by various see-saw like scenarios along with possibility to solve other shortcomings of the SM like muon \((g-2)\), neutrino mass etc., we take a closer look at various Vectorlike leptons multiplets by identifying the leading dimension-6 operators which gets affected. 
The list of leptons, along with their corresponding Yukawa couplings \(\lambda_{l}\) (where \(l\) represents the corresponding vectorlike lepton) and the masses of the heavy states \(M_{l}\), is shown in Table 11. As seen from the table, the tree-level imprints of the model parameters on the SMEFT WCs are restricted to \(C_{He}\), \(C_{Hl}^{(1)}\) and \(C_{Hl}^{(3)}\). Figure 4 and Table 12 represent the results of the fit, first for the LEP observables alone and second for the case including \(\Delta_{CKM}\) along with the LEP observables. The posterior values of the observables for these two cases can be read off from Table 13. We can see that even without including the \(\Delta_{CKM}\) constraint, we can satisfy both \(M_{W}\) and \(\Delta_{CKM}\) within \(2\sigma\). \(A_{l}^{SLD}\) improves to 1.56\(\sigma\) while \(A_{b}^{FB}\) deteriorates to -2.18\(\sigma\). Inclusion of the \(\Delta_{CKM}\) constraint improves the posterior value of \(\Delta_{CKM}\) only marginally, while pushing \(M_{W}\) beyond \(2\sigma\). We can put limits on the ratio of the coupling and the mass for these scenarios. For example, if we consider only \(N^{\prime}\), we get a limit of \(v\lambda_{N}/M_{N}<0.074\), where \(v\) is the vev. Similarly for \(E^{\prime}\), we get a limit of \(v\lambda_{E}/M_{E}<0.05\). If we look at \(L^{\prime}\oplus E^{\prime}\), the bounds come out to be \(v\lambda_{L}/M_{L}<0.045\) and \(v\lambda_{E}/M_{E}<0.066\). ### LEP-I 8 parameter fit Considering the ten WCs that enter the EWPO at tree level, LEP-I data can constrain only eight independent combinations of the corresponding WCs, obtained by rotating away the two purely bosonic contributions (\(C_{HD}\) and \(C_{HWB}\)) to get eight independent parameters which are finally constrained. When analyzed in that fashion, we showed that our results are in good agreement with those in the literature. We also found out that similar constraints can also be achieved by setting those two purely bosonic WCs simply to zero. Phenomenologically, there can be frameworks where these bosonic operators are highly suppressed, and so this procedure can directly provide limits for such scenarios. The results of this analysis are presented in Table 14. The second column of the table shows the best fit and deviation for the WCs given in the first column. 
Deviations are computed using the marginalization procedure \begin{table} \begin{tabular}{|l c|c|c|c|} \hline Observables & Posterior(EWPO) & Pull & Posterior(EWPO+\(\Delta_{\text{CKM}}\)) & Pull \\ \hline \(\Gamma_{\text{Z}}\) & \(2.4956\pm 0.002319\) & \(-0.03\) & \(2.49943\pm 0.001285\) & \(-1.49\) \\ \hline \(\sigma_{\text{h}}\) & \(41.4734\pm 0.016867\) & \(0.2\) & \(41.4728\pm 0.016849\) & \(0.22\) \\ \hline \(\text{R}_{\text{e}}\) & \(20.7615\pm 0.020472\) & \(0.17\) & \(20.7636\pm 0.020442\) & \(0.1\) \\ \hline \(\text{R}_{\text{b}}\) & \(0.215813\pm 9.2\times 10^{-6}\) & \(0.72\) & \(0.215809\pm 9.2\times 10^{-6}\) & \(0.73\) \\ \hline \(\text{R}_{\text{c}}\) & \(0.17222\pm 0.000014\) & \(-0.04\) & \(0.172225\pm 0.000014\) & \(-0.04\) \\ \hline \(\text{R}_{\text{uc}}\) & \(0.172237\pm 0.000014\) & \(-0.69\) & \(0.172242\pm 0.000014\) & \(-0.69\) \\ \hline \(\text{A}_{\text{l}}\) & \(0.147395\pm 0.001155\) & \(-0.26\) & \(0.147823\pm 0.001149\) & \(-0.38\) \\ \hline \(\text{A}_{\text{l}}^{\text{SLD}}\) & \(0.147395\pm 0.001155\) & \(1.63\) & \(0.147823\pm 0.001149\) & \(1.45\) \\ \hline \(\text{A}_{\text{b}}\) & \(0.927065\pm 0.000101\) & \(-0.2\) & \(0.927102\pm 0.0001\) & \(-0.2\) \\ \hline \(\text{A}_{\text{c}}\) & \(0.668109\pm 0.000546\) & \(0.07\) & \(0.668309\pm 0.000543\) & \(0.06\) \\ \hline \(\text{A}_{\text{s}}\) & \(0.935675\pm 0.000101\) & \(-0.45\) & \(0.935712\pm 0.0001\) & \(-0.45\) \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.016434\pm 0.000513\) & \(0.59\) & \(0.016624\pm 0.00051\) & \(0.42\) \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.103339\pm 0.00084\) & \(-2.07\) & \(0.10365\pm 0.000836\) & \(-2.24\) \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.073917\pm 0.000739\) & \(-0.89\) & \(0.074196\pm 0.000736\) & \(-0.98\) \\ \hline \(\text{M}_{\text{W}}\) & \(80.4131\pm 0.009697\) & \(0.02\) & \(80.4075\pm 0.009661\) & \(0.46\) \\ \hline \(\Gamma_{\text{w}}\) & \(2.09271\pm 0.001599\) & \(-0.18\) & \(2.09525\pm 0.000984\) & \(-0.24\) \\ \hline \(\text{BR}_{\text{W}\to\nu\text{l}}\) & \(0.108088\pm 0.000069\) & \(0.57\) & \(0.108098\pm 0.00007\) & \(0.56\) \\ \hline \(\Delta_{\text{CKM}}\) & \(0.001861\pm 0.001554\) & \(-1.97\) & \(-0.0009359\pm 0.0006392\) & \(-0.59\) \\ \hline \hline \end{tabular} \end{table} Table 10: Posterior results and Pull for the WCs affecting \(M_{W}\), once just with EWPO observables and once including the \(\Delta_{CKM}\) constraint. 
\begin{table} \begin{tabular}{|c c c c c c|} \hline **VLF** & **(_SU_(2)\({}_{L}\),_Y_)** & **Interaction** & **\(\mathbf{C_{He}}\)** & **\(\mathbf{C_{H}^{(1)}}\)** & **\(\mathbf{C_{H}^{(3)}}\)** \\ \hline \hline \(N^{a}\) & \((3,0)\) & \(N^{a}(H\epsilon\pi^{a}L)\) & \(0\) & \(+\lambda_{N}^{2}/4M_{N}^{2}\) & \(-\lambda_{N}^{2}/4M_{N}^{2}\) \\ \hline \(N^{\prime}\) & \((1,0)\) & \(N^{\prime}LH\) & \(0\) & \(+3\lambda_{N}^{2}/4M_{N}^{2}\) & \(+\lambda_{N}^{2}/4M_{N}^{2}\) \\ \hline \(L^{\prime}\) & \((2,-1/2)\) & \(EL^{\prime}H^{*}\) & \(-\lambda_{L}^{2}/2M_{L}^{2}\) & \(0\) & \(0\) \\ \hline \(L_{3/2}\) & \((\overline{2},-3/2)\) & \(E(L_{3/2}\epsilon H)\) & \(+\lambda_{L}^{2}/2M_{L}^{2}\) & \(0\) & \(0\) \\ \hline \(E^{\prime}\) & \((1,1)\) & \(E^{\prime}LH^{*}\) & \(0\) & \(-\lambda_{E}^{2}/4M_{E}^{2}\) & \(-\lambda_{E}^{2}/4M_{E}^{2}\) \\ \hline \(E^{a}\) & \((3,1)\) & \(E^{a}(H^{*}\pi^{a}L)\) & \(0\) & \(-3\lambda_{E}^{2}/4M_{E}^{2}\) & \(+\lambda_{E}^{2}/4M_{E}^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 11: List of new leptons that can couple to the SM lepton doublet \(L=(\nu_{\mu},l_{L})\) or singlet \(E=l_{R}\) (with the same gauge quantum numbers as \(L^{\prime}\) and \(E^{\prime}\)) and to the Higgs doublet \(H=(0,v+h/\sqrt{2})\) (an SU(2) doublet with \(Y=1/2\)). discussed in Section 5. We also tabulated the correlations between the coefficients in the last column. As the shifts in the gauge couplings due to SMEFT depend on \(\delta G_{F}\), which is a combination of \(C_{Hl_{3}}\) and \(C_{ll}\), these two are expected to be highly correlated, with a value of 0.94. In a similar manner, the leptonic WCs have higher correlations among each other: e.g., the correlation coefficient for \(C_{Hl_{1}}\) and \(C_{ll}\) is 0.81 but negative, and for \(C_{Hl_{1}}\) and \(C_{Hl_{3}}\) it is \(-0.67\). From the shifts in the gauge couplings shown in Table 2, one can expect very small correlations between the quark and leptonic WCs, \(\sim\mathcal{O}(0.01)\). The value of the SMEFT \(\chi^{2}\) after the fit comes out to be 3.24, compared to the SM \(\chi^{2}\) of 40.08, implying a very good quality of the fit. The 2-D marginalized distributions of the WCs are shown in Figure 5 in purple colour. The posterior values of the EWPO observables show very good agreement with the experimental data, with all of them within 1\(\sigma\), as shown in Table 16. Notable improvements among them are \(M_{W}\) at -0.2\(\sigma\), \(A_{b}^{FB}\) at -0.19\(\sigma\) and \(A_{l}^{SLD}\) at 0.84\(\sigma\). \(\Delta_{CKM}\), however, becomes worse, as the pull with the EWPO fitted parameters increases its discrepancy to -2.7\(\sigma\). Inclusion of the CKM anomaly (\(\Delta_{CKM}\)), along with EWPO, shifts the best-fit values of the WCs, constraining them better. The results of the fit can be read off from Table 15, and the 2-D marginalized distributions of the WCs are given in Figure 5 in green colour. \(\Delta_{CKM}\) has the largest dependence on \(C_{Hl_{1}}\), \(C_{ll}\), \(C_{Hl_{3}}\) and \(C_{Hq_{3}}\); therefore there is a significant change in the correlations involving these WCs. \(C_{Hq_{3}}\) and \(C_{ll}\) become highly correlated, with a value of \(-0.83\). As \(\Delta_{CKM}\) depends only on left-chiral operators, the correlation of their WCs with that of the right-chiral ones decreases. 
\(\Delta_{CKM}\) also introduces high correlations between the WCs corresponding to leptonic and quark operators; e.g., the correlation factor of \(C_{Hq_{3}}\) with \(C_{Hl_{1}}\) and \(C_{Hl_{3}}\) now stands at 0.78 and -0.64 respectively. On the other hand, with the inclusion of \(\Delta_{CKM}\), this observable reduces its discrepancy to a mere -0.32\(\sigma\). However, the discrepancies in \(A_{c}^{FB}\), \(A_{b}^{FB}\) and \(A_{l}^{SLD}\) go beyond 1\(\sigma\). The other observables also show good agreement with the experimental observations. \begin{table} \begin{tabular}{|c|c|c c|c|c c|} \hline WC & B.F(EWPO) & Correlation & B.F(EWPO+CKM) & \multicolumn{2}{|c|}{Correlation} \\ \hline & \multicolumn{2}{|c|}{\((\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=20.53/40.08)\)} & \multicolumn{2}{|c|}{\((\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=21.32/44.67)\)} \\ \hline \(C_{He}\) & \(-0.0148\pm 0.0075\) & 1.00 & & \(-0.0125\pm 0.0073\) & 1.00 & \\ \(C_{Hl_{1}}\) & \(-0.0037\pm 0.0043\) & 0.57 & 1.00 & & \(-0.0027\pm 0.0045\) & 0.56 & 1.00 \\ \(C_{Hl_{3}}\) & \(-0.0184\pm 0.0039\) & 0.55 & 0.21 & 1.00 & \(-0.0158\pm 0.0027\) & 0.47 & 0.17 & 1.00 \\ \hline \end{tabular} \end{table} Table 12: Global fit of the WCs affecting VLLs at tree-level to EWPO and \(\Delta_{CKM}\). Figure 4: 2-D marginalised plot for the WCs impacting the VLLs Figure 5: 2-D marginalised plot for the fit with 8 WCs ### LEP-I + LEP-II 10 parameter fit As discussed in the previous subsection, EWPO can only constrain eight of the 10 WCs with Z-pole LEP-I data. This can be overcome by including LEP-II data. In the second run of LEP, the energy exceeded the threshold for the production of a pair of on-shell \(W\) bosons. This provides an opportunity to test four-fermionic final states at the collider. Precise measurements of cross-sections and angular distributions at various energies enhance the list of precision observables. Refs. [54, 55, 57] explored this in the context of SMEFT and showed that it lifts the flat directions present in the case of LEP-I. Charged-current production of a pair of \(W\)'s is sensitive to the triple gauge coupling (TGC); this breaks the redundancy and provides the possibility to constrain all 10 WCs of the EWPO. 
In addition to the 10 WCs, the introduction of LEP-II data brings the anomalous triple gauge operator \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{Observables} & \multicolumn{1}{c|}{Posterior(EWPO)} & \multicolumn{1}{c|}{Pull} & \multicolumn{1}{c|}{Posterior(EWPO+\(\Delta_{\text{CKM}}\))} & \multicolumn{1}{c|}{Pull} \\ \hline \(\Gamma_{\text{Z}}\) & \(2.50052\pm 0.001405\) & -1.86 & \(2.49977\pm 0.001135\) & -1.66 \\ \hline \(\sigma_{\text{h}}\) & \(41.4738\pm 0.026172\) & 0.17 & \(41.4745\pm 0.025418\) & 0.16 \\ \hline \(\text{R}_{\text{e}}\) & \(20.7822\pm 0.013789\) & -0.53 & \(20.7787\pm 0.013098\) & -0.41 \\ \hline \(\text{R}_{\text{b}}\) & \(0.215787\pm 7.4\times 10^{-6}\) & 0.76 & \(0.215791\pm 5.9\times 10^{-6}\) & 0.76 \\ \hline \(\text{R}_{\text{c}}\) & \(0.172259\pm 0.000011\) & -0.05 & \(0.172253\pm 8.9\times 10^{-6}\) & -0.05 \\ \hline \(\text{R}_{\text{uc}}\) & \(0.172276\pm 0.000011\) & -0.69 & \(0.17227\pm 8.9\times 10^{-6}\) & -0.69 \\ \hline \(\text{A}_{\text{l}}\) & \(0.147563\pm 0.001161\) & -0.3 & \(0.147568\pm 0.001142\) & -0.31 \\ \hline \(\text{A}_{\text{l}}^{\text{SLD}}\) & \(0.147563\pm 0.001161\) & 1.56 & \(0.147568\pm 0.001142\) & 1.56 \\ \hline \(\text{A}_{\text{b}}\) & \(0.92735\pm 0.000081\) & -0.22 & \(0.927307\pm 0.000065\) & -0.21 \\ \hline \(\text{A}_{\text{c}}\) & \(0.669653\pm 0.000438\) & 0.01 & \(0.669418\pm 0.000352\) & 0.02 \\ \hline \(\text{A}_{\text{s}}\) & \(0.93596\pm 0.000081\) & -0.45 & \(0.935917\pm 0.000065\) & -0.45 \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.016509\pm 0.000516\) & 0.52 & \(0.016511\pm 0.000507\) & 0.52 \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.103522\pm 0.000825\) & -2.18 & \(0.103516\pm 0.000811\) & -2.18 \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.074352\pm 0.000645\) & -1.03 & \(0.074302\pm 0.000632\) & -1.01 \\ \hline \(\text{M}_{\text{W}}\) & \(80.3928\pm 0.007349\) & 1.89 & \(80.3888\pm 0.005898\) & 2.46 \\ \hline \(\Gamma_{\text{w}}\) & \(2.09599\pm 0.001254\) & -0.26 & \(2.09532\pm 0.00101\) & -0.24 \\ \hline \(\text{BR}_{\text{W}\to\nu\text{l}}\) & \(0.108019\pm 0.000056\) & 0.64 & \(0.108049\pm 0.000045\) & 0.61 \\ \hline \(\Delta_{\text{CKM}}\) & \(-0.0022691\pm 0.0005091\) & 0.89 & \(-0.0019954\pm 0.0004086\) & 0.61 \\ \hline \end{tabular} \end{table} Table 13: Posterior results and Pull for the VLL inspired study, once just with EWPO observables and once including the \(\Delta_{CKM}\) constraint. \begin{table} \begin{tabular}{|c|c|c c c c c c c c|} \hline \multicolumn{1}{|c|}{WC} & \multicolumn{2}{c|}{B.F(EWPO)} & \multicolumn{1}{c|}{Correlation} \\ \hline & \multicolumn{6}{c|}{(\(\chi^{2}_{\text{fit}}/\chi^{2}_{\text{SM}}=3.24/40.08\))} \\ \hline \(C_{Hd}\) & -0.4646 \(\pm\) 0.1715 & 1.00 & & & & & \\ \(C_{He}\) & -0.0099\(\pm\) 0.0085 & -0.36 & 1.00 & & & & \\ \(C_{Hl_{1}}\) & -0.0031\(\pm\) 0.011 & -0.12 & 0.42 & 1.00 & & & & \\ \(C_{Hl_{3}}\) & -0.0398\(\pm\)0.0159 & -0.05 & 0.08 & -0.67 & 1.00 & & & \\ \(C_{Hq_{1}}\) & 0.0101\(\pm\) 0.0341 & 0.21 & -0.08 & 0.01 & -0.04 & 1.00 & & \\ \(C_{Hq_{3}}\) & -0.0989\(\pm\) 0.0310 & 0.59 & -0.17 & 0.03 & 0.06 & -0.42 & 1.00 & & \\ \(C_{Hu}\) & 0.1162\(\pm\) 0.1195 & -0.14 & 0.10 & 0.07 & -0.03 & 0.44 & -0.76 & 1.00 & \\ \(C_{ll}\) & -0.0204\(\pm\) 0.0284 & -0.06 & -0.10 & -0.81 & 0.94 & -0.04 & -0.03 & -0.01 & 1.00 \\ \hline \end{tabular} \end{table} Table 14: Global fit to EWPO for the case with 8 WCs. 
\(\mathcal{O}_{\mathcal{W}}\) in the set of operators contributing at tree-level. As this is not part of the EWPO or \(\Delta_{CKM}\), we set \(C_{W}=0\) for our analyses. Including the LEP-II data increases the \(\chi^{2}\) to \(\sim 121\) due to the increase in the number of observables. The results of the fit without including \(\Delta_{CKM}\) are shown in Table 17. The minimum value of the \(\chi^{2}\) now decreases to 83.8. But the best-fit values of \(C_{HD}\), \(C_{HWB}\) and \(C_{He}\) are slightly on the larger side. The correlations among the WCs change drastically due to the presence of the new LEP-II observables. The correlation between \(C_{Hl_{3}}\) and \(C_{ll}\) becomes almost negligible, while many pairs of WCs become highly correlated. To show the strength of the correlations, we also plot a colour density matrix plot in Figure 6(a). The colour coding decreases from blue to green, showing a decrease in the strength of the correlation. It can be easily observed that many of them are dark blue with \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \hline Observable & EWPO Posterior & Pull & Posterior (EWPO + \(\Delta_{\text{CKM}}\)) & Pull \\ \hline \(\Gamma_{\text{Z}}\) & \(2.49556\pm 0.002291\) & -0.019 & \(2.49808\pm 0.0022\) & -0.81 \\ \hline \(\sigma_{h}\) & \(41.4822\pm 0.03162\) & -0.027 & \(41.4600\pm 0.031\) & 0.45 \\ \hline \(\text{R}_{\text{e}}\) & \(20.7668\pm 0.02582\) & 0.0036 & \(20.7540\pm 0.025\) & 0.36 \\ \hline \(\text{R}_{\text{b}}\) & \(0.216334\pm 0.0006486\) & -0.048 & \(0.216049\pm 0.00063\) & 0.26 \\ \hline \(\text{R}_{\text{c}}\) & \(0.171437\pm 0.0009729\) & 0.20 & \(0.171866\pm 0.00095\) & 0.074 \\ \hline \(\text{R}_{\text{uc}}\) & \(0.171454\pm 0.0009729\) & -0.60 & \(0.171883\pm 0.00095\) & -0.65 \\ \hline \(\text{A}_{\text{l}}\) & \(0.149194\pm 0.001314\) & -0.75 & \(0.148307\pm 0.0013\) & -0.50 \\ \hline \(\text{A}_{\text{l}}^{\text{SLD}}\) & \(0.149194\pm 0.001314\) & 0.84 & \(0.148307\pm 0.0013\) & 1.2 \\ \hline \(\text{A}_{\text{b}}\) & \(0.906442\pm 0.007754\) & 0.77 & \(0.917793\pm 0.0065\) & 0.24 \\ \hline \(\text{A}_{\text{c}}\) & \(0.654988\pm 0.01351\) & 0.49 & \(0.683002\pm 0.0089\) & -0.45 \\ \hline \(\text{A}_{\text{s}}\) & \(0.915052\pm 0.007754\) & -0.21 & \(0.926403\pm 0.0065\) & -0.34 \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.0172330\pm 0.0005837\) & -0.11 & \(0.0168394\pm 0.00057\) & 0.22 \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.100039\pm 0.001538\) & -0.19 & \(0.101929\pm 0.0013\) & -1.1 \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.071990\pm 0.002980\) & -0.28 & \(0.0777235\pm 0.0021\) & -1.7 \\ \hline \(\text{M}_{\text{W}}\) & \(80.4136\pm 0.009928\) & -0.024 & \(80.4099\pm 0.0098\) & 0.26 \\ \hline \(\Gamma_{\text{w}}\) & \(2.08199\pm 0.005137\) & 0.071 & \(2.09552\pm 0.0015\) & -0.25 \\ \hline \(\text{BR}_{\text{W}\to\nu\text{l}}\) & \(0.108653\pm 0.0003049\) & -0.056 & \(0.108119\pm 0.00023\) & 0.51 \\ \hline \(\Delta_{\text{CKM}}\) & \(0.0096343\pm 0.003966\) & -2.7 & \(0.001176\pm 0.00069\) & -0.32 \\ \hline \end{tabular} \end{table} Table 16: Posterior values and pulls for the EWPO observables with 8 WCs, once without \(\Delta_{CKM}\) and once with \(\Delta_{CKM}\) \begin{table} \begin{tabular}{|l|c|c c c c c c c c|} \hline \hline WC & B.F(EWPO+\(\Delta_{CKM}\)) & \multicolumn{8}{c|}{Correlation} \\ \hline & \multicolumn{8}{c|}{(\(\chi^{2}_{\text{fit}}/\chi^{2}_{\text{SM}}=10.88/44.67\))} \\ \hline \(C_{Hd}\) & -0.2125\(\pm\) 0.1453 & 1.00 & \multicolumn{8}{c|}{} & \multicolumn{8}{c|}{} \\ \(C_{He}\) & -0.0166\(\pm\) 0.0083 & -0.24 & 1.00 & \multicolumn{8}{c|}{} & \multicolumn{8}{c|}{} 
\\ \(C_{Hl_{1}}\) & -0.0136\(\pm\) 0.0107 & 0.08 & 0.35 & 1.00 & \multicolumn{8}{c|}{} & \multicolumn{8}{c|}{} \\ \(C_{Hl_{3}}\) & -0.0238\(\pm\) 0.0148 & -0.31 & 0.19 & -0.63 & 1.00 & \multicolumn{8}{c|}{} \\ \(C_{Hq_{1}}\) & -0.0291\(\pm\) 0.0311 & 0.55 & -0.23 & -0.16 & 0.12 & 1.00 & \multicolumn{8}{c|}{} \\ \(C_{Hq_{3}}\) & -0.0221\(\pm\) 0.0139 & 0.30 & 0.18 & 0.78 & -0.64 & -0.13 & 1.00 & \multicolumn{8}{c|}{} \\ \(C_{Hu}\) & -0.1206\(\pm\) 0.0832 & 0.40 & -0.15 & -0.27 & 0.37 & 0.23 & -0.37 & 1.00 & \multicolumn{8}{c|}{} \\ \(C_{ll}\) & 0.0076\(\pm\) 0.0265 & -0.32 & 0.01 & -0.79 & 0.93 & 0.13 & -0.83 & 0.38 & 1.00 \\ \hline \end{tabular} \end{table} Table 15: Global fit to EWPO and \(\Delta_{CKM}\) for the case with 8 WCs. correlation \(\sim 0.9\). The correlation of all the WCs with \(C_{ll}\) is small, \(\sim 0.3\), and there is almost no correlation with \(C_{Hl_{3}}\) and \(C_{Hq_{3}}\). The correlation of \(C_{Hl_{3}}\) and \(C_{Hq_{3}}\) is almost 1, whereas both of them are less correlated with the others. The posterior results of the fit for the precision observables are shown in Table 19. The discrepancies in all of these observables are within 1\(\sigma\), with the exception of \(\Delta_{CKM}\), where the pull is -2.7\(\sigma\). Incorporation of \(\Delta_{CKM}\) as an observable in the fit makes it more constraining, as shown in Table 18. The absolute best-fit values of the WCs lie well within 1, with \(C_{ll}\) becoming very small. There is no significant change found in the 1\(\sigma\) deviations of the WCs, and the correlations among the pairs of WCs also do not show much change. The density plot of the correlation matrix, using the magnitudes of the elements, is also shown in Figure 6(b). Posterior results and pulls of the EWPO and \(\Delta_{CKM}\) are collected in the fourth and fifth columns respectively of Table 19. All the pulls are again within 1\(\sigma\). As expected, the pull for \(\Delta_{CKM}\) improves to -0.36\(\sigma\). The other observables continue to remain well within 1\(\sigma\), making the fit highly competitive with respect to the experimental data. ## 6 Summary and conclusions Electroweak precision measurements place some of the most stringent constraints on all New Physics extensions of the Standard Model. We explore dimension-6 operator subsets of the SMEFT in the context of LEP-I & II data, along with the Cabibbo anomaly hinting at the need for physics beyond the Standard Model. In addition to the EWPO, we analyze the status of the Cabibbo anomaly within SMEFT using the most recent data. 
We compute the contributions to \begin{table} \begin{tabular}{|c|c|c c c c c c c c c c|} \hline & Result & \multicolumn{8}{c|}{Correlation} \\ \hline & \multicolumn{8}{c|}{(\(\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=83.82/121.79\))} \\ \hline \(C_{Hd}\) & -0.0841\(\pm\) 0.3937 & 1.00 & & & & & & & & & \\ \(C_{HD}\) & -2.1541\(\pm\) 2.2945 & -0.90 & 1.00 & & & & & & & \\ \(C_{He}\) & 1.0669\(\pm\) 1.1474 & 0.90 & -0.99 & 1.00 & & & & & & \\ \(C_{Hl_{1}}\) & 0.5383\(\pm\) 0.5713 & 0.90 & -0.99 & 0.99 & 1.00 & & & & & \\ \(C_{Hl_{3}}\) & 0.01423\(\pm\) 0.4149 & 0.58 & -0.60 & 0.60 & 0.61 & 1.00 & & & & \\ \(C_{Hq_{1}}\) & -0.1717\(\pm\) 0.1912 & -0.87 & 0.98 & -0.98 & -0.98 & -0.60 & 1.00 & & & \\ \(C_{Hq_{3}}\) & -0.0346\(\pm\) 0.41 & 0.57 & -0.58 & 0.58 & 0.58 & 0.99 & -0.58 & 1.00 & & \\ \(C_{Hu}\) & -0.6179\(\pm\) 0.7532 & -0.90 & 0.99 & -0.99 & -0.99 & -0.61 & 0.98 & -0.59 & 1.00 & & \\ \(C_{HWB}\) & 1.0071\(\pm\) 0.9923 & 0.88 & -0.98 & 0.98 & 0.98 & 0.46 & -0.97 & 0.43 & -0.97 & 1.00 & \\ \(C_{ll}\) & -0.0295\(\pm\) 0.0273 & 0.27 & -0.29 & 0.29 & 0.27 & 0.005 & -0.30 & -0.028 & -0.30 & 0.32 & 1.00 \\ \hline \end{tabular} \end{table} Table 17: Best fit with 1\(\sigma\) deviation of the WCs after fitting using EWPO observables. The correlation matrix of the coefficients is also shown in the third column. \begin{table} \begin{tabular}{|c|c|c c c c c c c c c c c|} \hline & Result & \multicolumn{8}{c|}{Correlation} \\ \hline & \multicolumn{8}{c|}{(\(\chi^{2}_{\rm fit}/\chi^{2}_{\rm SM}=90.93/126.37\))} \\ \hline \(C_{Hd}\) & -0.1266\(\pm\) 0.3934 & 1.00 & & & & & & & & & & \\ \(C_{HD}\) & -0.5455\(\pm\) 2.2138 & -0.93 & 1.00 & & & & & & & & \\ \(C_{He}\) & 0.2564\(\pm\) 1.1066 & 0.93 & -0.99 & 1.00 & & & & & & & \\ \(C_{Hl_{1}}\) & 0.125\(\pm\) 0.5498 & 0.93 & -0.99 & 0.99 & 1.00 & & & & & & \\ \(C_{Hl_{3}}\) & -0.3029\(\pm\) 0.3974 & 0.59 & -0.57 & 0.57 & 0.58 & 1.00 & & & & & \\ \(C_{Hq_{1}}\) & -0.07295\(\pm\) 0.1876 & -0.88 & 0.98 & -0.99 & -0.99 & -0.58 & 1.00 & & & & \\ \(C_{Hq_{3}}\) & -0.2994\(\pm\) 0.403 & 0.57 & -0.55 & 0.55 & 0.56 & 0.99 & -0.56 & 1.00 & & & \\ \(C_{Hu}\) & -0.2947\(\pm\) 0.7434 & -0.90 & 0.99 & -0.99 & -0.99 & -0.60 & 0.98 & -0.58 & 1.00 & & \\ \(C_{HWB}\) & 0.4054\(\pm\) 0.9663 & 0.90 & -0.98 & 0.98 & 0.98 & 0.42 & -0.97 & 0.39 & -0.97 & 1.00 & \\ \(C_{ll}\) & -0.00004\(\pm\) 0.0249 & 0.31 & -0.44 & 0.44 & 0.43 & 0.14 & -0.42 & 0.08 & -0.41 & 0.47 & 1.00 \\ \hline \end{tabular} \end{table} Table 18: Best fit with 1\(\sigma\) deviation of the WCs after fitting using EWPO observables and \(\Delta_{CKM}\). The correlation matrix of the coefficients is also shown in the third column. the beta decay and muon decay using the SMEFT operators that enter the LEFT at one-loop matching. We classify the precision study of SMEFT into various frameworks. 
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \hline Observable & Posterior(EWPO) & Pull & Posterior (EWPO+\(\Delta_{\text{CKM}}\)) & Pull \\ \hline \(\Gamma_{\text{Z}}\) & \(2.49521\pm 0.007001\) & 0.038 & \(2.49766\pm 0.014513\) & \(-0.14\) \\ \hline \(\sigma_{\text{h}}\) & \(41.4875\pm 0.126318\) & \(-0.049\) & \(41.4649\pm 0.094436\) & 0.16 \\ \hline \(\text{R}_{\text{e}}\) & \(20.7731\pm 0.056579\) & \(-0.098\) & \(20.7588\pm 0.135190\) & 0.059 \\ \hline \(\text{R}_{\text{b}}\) & \(0.216314\pm 0.000653\) & \(-0.025\) & \(0.216059\pm 0.000658\) & 0.24 \\ \hline \(\text{R}_{\text{c}}\) & \(0.171468\pm 0.000979\) & 0.19 & \(0.171851\pm 0.000987\) & 0.078 \\ \hline \(\text{R}_{\text{uc}}\) & \(0.171485\pm 0.000979\) & \(-0.60\) & \(0.171868\pm 0.000987\) & \(-0.64\) \\ \hline \(\text{A}_{\text{l}}\) & \(0.149064\pm 0.008602\) & \(-0.27\) & \(0.148263\pm 0.013819\) & \(-0.12\) \\ \hline \(\text{A}_{\text{l}}^{\text{SLD}}\) & \(0.149064\pm 0.008602\) & 0.25 & \(0.148263\pm 0.013819\) & 0.21 \\ \hline \(\text{A}_{\text{b}}\) & \(0.907419\pm 0.007661\) & 0.72 & \(0.917576\pm 0.006745\) & 0.25 \\ \hline \(\text{A}_{\text{c}}\) & \(0.656948\pm 0.013504\) & 0.43 & \(0.682121\pm 0.011477\) & \(-0.41\) \\ \hline \(\text{A}_{\text{s}}\) & \(0.916029\pm 0.007661\) & \(-0.23\) & \(0.926186\pm 0.006745\) & \(-0.34\) \\ \hline \(\text{A}_{\text{e}}^{\text{FB}}\) & \(0.017175\pm 0.00382\) & \(-0.019\) & \(0.0168195\pm 0.006136\) & 0.045 \\ \hline \(\text{A}_{\text{b}}^{\text{FB}}\) & \(0.100164\pm 0.006418\) & \(-0.085\) & \(0.101849\pm 0.010298\) & \(-0.21\) \\ \hline \(\text{A}_{\text{c}}^{\text{FB}}\) & \(0.072354\pm 0.006192\) & \(-0.23\) & \(0.0775032\pm 0.009421\) & \(-0.67\) \\ \hline \(\text{M}_{\text{W}}\) & \(80.4144\pm 0.08616\) & \(-0.013\) & \(80.4106\pm 0.181024\) & 0.014 \\ \hline \(\Gamma_{\text{w}}\) & \(2.08263\pm 0.011235\) & 0.054 & \(2.09490\pm 0.0198145\) & \(-0.21\) \\ \hline \(\text{BR}_{\text{W}\to\nu\text{l}}\) & \(0.108559\pm 0.000487\) & 0.039 & \(0.108101\pm 0.000812\) & 0.41 \\ \hline \(\Delta_{\text{CKM}}\) & \(0.009629\pm 0.0040001\) & \(-2.7\) & \(-0.00118290\pm 0.0005185\) & \(-0.36\) \\ \hline \hline \end{tabular} \end{table} Table 19: Observables' posterior values for the all-parameter fit with 10 WCs Figure 6: Density plot of the correlation matrix: (a) LEP-I+LEP-II (b) LEP-I+LEP-II+\(\Delta_{CKM}\) fitting. In the first study, we parameterize the impact of BSM heavy states in the oblique parameters \(S\), \(T\) and \(V\) and compute the posterior fit of the EWPO and \(\Delta_{CKM}\). The results are impressive, as the pulls for almost all the observables (except \(A_{l}^{SLD}\) and \(A_{b}^{FB}\)) are within \(1\sigma\). Another thing to note is that even without including \(\Delta_{CKM}\) in the fit, the pull for the Cabibbo anomaly comes down to \(\sim-0.52\). Including \(\Delta_{CKM}\), the pull improves to \(\sim-0.27\). In the next study, we choose the SMEFT operators \(C_{Hl_{3}}\) and \(C_{ll}\) as the ones which dictate the UV BSM physics, as both of them affect \(m_{W}\), \(G_{F}\) and \(\Delta_{CKM}\) at tree level. We analyze this case for LEP-I and LEP-I+\(\Delta_{CKM}\). Although they can explain the Cabibbo anomaly very well, they fare very badly with \(M_{W}\) and \(A_{b}^{FB}\). Since \(M_{W}\) has the highest pull away from the SM among the observables of our interest, we next considered the BSM case which consists of the four operators that affect \(M_{W}\) at tree level (\(C_{HD}\), \(C_{HWB}\), \(C_{Hl_{3}}\) and \(C_{ll}\)). 
The minimum value of the \(\chi^{2}\) comes down to \(\sim 11\) from \(\sim 40\), and the fit fares very well with \(M_{W}\), with the discrepancy well within \(1\sigma\). The bosonic operators \(C_{HD}\) and \(C_{HWB}\) are highly correlated in this case. Inclusion of \(\Delta_{CKM}\) shifts the best fit towards slightly higher values while simultaneously shrinking the allowed \(2\sigma\) ranges. The discrepancies in \(A_{l}^{SLD}\) and \(A_{b}^{FB}\) continue to remain on the higher side. We then choose the set of operators motivated by the vectorlike lepton (VLL) model. Integrating out the heavy degrees of freedom at tree-level generates the operators \(C_{He},C_{Hl_{1}}\) and \(C_{Hl_{3}}\). The allowed ranges for these WCs, after constraining using EWPO and \(\Delta_{CKM}\), turn out to be \({\cal O}(0.01)\). Inclusion of \(\Delta_{CKM}\) shifts the allowed ranges to slightly larger values. We also explore a few minimal VLL frameworks and set constraints on the ratios of the VLF Yukawas to the scale of the model. The posterior fit of the observables shows that the pull for \(\Gamma_{Z}\) becomes worse, with a discrepancy over \(1.5\sigma\). The new posterior value of \(M_{W}\) is improved in comparison to the SM, but the pulls remain at \(\sim 1.9\sigma\) and \(\sim 2.4\sigma\) for the fits without and with \(\Delta_{CKM}\) respectively. Thus we can conclude that the minimal VLL frameworks are severely constrained by the precision measurements, and a slight tension exists with the recent measurement of \(M_{W}\) and with \(A_{b}^{FB}\). Considering all the dimension-6 SMEFT operators that appear in the EWPO at leading order, it can be seen that only eight of the ten WCs which appear at tree-level can be constrained by the LEP-I observables. We compute their best fit, first with only EWPO and then including \(\Delta_{CKM}\). In this case, the best-fit \(\chi^{2}\) relaxes to \(\sim 3\) from a SM \(\chi^{2}\) value of 40 from EWPO. On the other hand, inclusion of \(\Delta_{CKM}\) relaxes it to \(\sim 10.8\) from a SM \(\chi^{2}\) of 44. We also analyzed the 2D correlations of the pairs of WCs, with the highest correlation being that of \(C_{Hl_{3}}\) and \(C_{ll}\). We find that SMEFT at leading order brings the pulls for all the electroweak observables within \(1\sigma\). The recent \(M_{W}\) measurement also fares exceptionally well, with the pull coming down to \(\sim-0.03\sigma\). However, the EWPO fit worsens \(\Delta_{CKM}\) compared to the SM, as the discrepancy increases to \(-2.7\sigma\). When the \(\Delta_{CKM}\) constraint is also taken into account in the fit, the discrepancies in \(A_{l}^{SLD}\), \(A_{b}^{FB}\) and \(A_{c}^{FB}\) go beyond \(1\sigma\), while the agreement with \(\Delta_{CKM}\) improves to \(-0.3\sigma\). Overall, we can conclude that even after including \(\Delta_{CKM}\) in the fit, the pulls for all the observables are within \(2\sigma\), with most of them within \(1\sigma\). Interestingly, the LEP-II data lifts the two blind directions which are present for LEP-I. These new observables correspond to the pair production of \(W\)'s, which subsequently leads to four-fermion final states, resulting in a number of angular observables at various energies. For the case without including the \(\Delta_{CKM}\) constraint, the limits on the purely bosonic WCs \(C_{HD}\) and \(C_{HWB}\) are rather weak, and their best-fit values come out to be slightly on the higher side. The correlations among the WCs are also generally on the higher side. 
The pulls, however, on the posterior values of the precision observables are very small, showing excellent agreement with the experimental data. Like the previous 8 parameter fit, this also worsens \(\Delta_{CKM}\) by increasing the pull to -2.7\(\sigma\). The inclusion of the \(\Delta_{CKM}\) constraint in the fit constrains the WCs further, including those of \(C_{HD}\) and \(C_{HWB}\), while simultaneously making their best-fit values smaller. Another peculiar aspect of this fit is that it makes the best-fit value of \(C_{ll}\) very small. The correlations among the WCs still continue to be on the higher side, and the agreement of the posterior values of the precision observables, including \(\Delta_{CKM}\), with the experimental data still remains excellent. Our subsequent aim is to automate this code with the current observables and make it available for public use very soon. Finally, in order to realise a true global fit, we would also be including the LHC observables in the future. ## Acknowledgments The authors would like to thank Dr. Nilanjana Kumar for valuable discussions during the initial days of the work. M.T.A. acknowledges the financial support of the Department of Science and Technology, Government of India (DST), through the INSPIRE Faculty grant DST/INSPIRE/04/2019/002507. K.D. acknowledges the Council for Scientific and Industrial Research (CSIR), India, for a JRF/SRF fellowship with the award letter no. 09/045(1654)/2019-EMR-I. T.S. would like to acknowledge the support from the Dr. D.S. Kothari Postdoctoral fellowship scheme no. F.4-2/2006 (BSR)/PH/20-21/0163. ## Appendix The general correction to \(\hat{\sigma}_{h}^{0}\) _near_ the \(Z\) pole (\(s-M_{Z}^{2}\equiv\Delta\)) in the SMEFT is \[\frac{\delta\sigma_{h}^{0}}{\sigma_{h}^{0}} \simeq \frac{\delta\Gamma_{Z\rightarrow\ell\overline{\ell}}}{\Gamma_{Z \rightarrow\ell\overline{\ell}}}+\frac{\delta\Gamma_{Z\to Had}}{ \Gamma_{Z\to Had}}-\frac{\delta w(M_{Z}^{2})}{w(M_{Z}^{2})}- \frac{\delta w^{\star}(M_{Z}^{2})}{w^{\star}(M_{Z}^{2})}. \tag{52}\] where, for \[\overline{w}(s) = s\frac{\overline{\Gamma}_{Z}}{\overline{M}_{Z}}\mbox{ we get : }\delta w(s)=s\left(\frac{(\Gamma_{Z})_{SM}}{\hat{m}_{Z}}\right)\left(\frac{ \delta\Gamma_{Z}}{(\Gamma_{Z})_{SM}}\right), \tag{53}\] \[\overline{w}(s) = \overline{\Gamma}_{Z}\overline{M}_{Z}\mbox{ we get : }\delta w(s)=(\Gamma_{Z})_{SM}\hat{m}_{Z}\left(\frac{ \delta\Gamma_{Z}}{(\Gamma_{Z})_{SM}}\right). \tag{54}\] For the asymmetries we note the following simplified expressions. In the SMEFT, \(\overline{A}_{f}\) can be written as \[\overline{A}_{f}=\frac{2\overline{r}_{f}}{1+\overline{r}_{f}^{2}}, \tag{55}\] where \(\overline{r}_{f}=\overline{g}_{V}^{f}/\overline{g}_{A}^{f}\). The redefinition of the \(Z\) coupling then leads to a shift of \(\overline{A}_{f}\) such that \(\overline{A}_{f}=(A_{f})_{SM}\left(1+\frac{\delta A_{f}}{(A_{f})_{SM}}\right)\) where \[\frac{\delta A_{f}}{(A_{f})_{SM}}=\delta r_{f}\left(1-\frac{2(r_{f}^{2})_{SM}} {1+(r_{f}^{2})_{SM}}\right). \tag{56}\] Here \(\delta r_{f}\) is defined by \(r_{f}=(r_{f})_{SM}\left(1+\delta r_{f}\right)\) with \(\delta r_{f}=\delta g_{V}^{f}/G_{V}^{f}-\delta g_{A}^{f}/G_{A}^{f}\). We again use \((...)_{SM}\) for leading-order SM predictions and \(G_{A,V}^{f}\) for the leading-order SM predictions of the couplings. Then the corrections to \(A_{FB}^{0,f}\) from the shifts in the effective couplings are \[\delta A_{FB}^{0,f}=\frac{3}{4}\left[\delta A_{\ell}\left(A_{f}\right)_{SM}+ \left(A_{\ell}\right)_{SM}\delta A_{f}\right]. \tag{57}\] The contribution to the total width of the \(W\) is \(\overline{\Gamma}_{W}=\Gamma_{W}^{SM}+\delta\Gamma_{W}\) where \[\Gamma_{W}^{SM}=\frac{3\sqrt{2}\hat{G}_{F}\hat{m}_{W}^{3}}{4\pi}, \delta\Gamma_{W}=\Gamma_{W}^{SM}\left(\frac{4}{3}\delta g^{W_{\pm},\ell}+ \frac{8}{3}\delta g^{W_{\pm},q}+\frac{\delta m_{W}^{2}}{2\hat{m}_{W}^{2}} \right), \tag{58}\] where \(\hat{m}_{W}\) is the SM value.
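As a quick numerical illustration of Eqs. (55)-(58), the following minimal sketch propagates assumed (toy) coupling shifts to \(A_{f}\) and \(\Gamma_{W}\); the leading-order couplings used are rough leptonic values, and none of the shift inputs are fit outputs:

```python
import numpy as np

# Toy propagation of coupling shifts through Eqs. (55)-(58).
GV, GA = -0.038, -0.506            # rough leading-order leptonic Z couplings
d_gV, d_gA = 1e-3, -5e-4           # assumed (toy) shifts delta g_V, delta g_A

r_sm = GV / GA                     # r_f = g_V / g_A
A_sm = 2 * r_sm / (1 + r_sm**2)    # Eq. (55)
d_r = d_gV / GV - d_gA / GA        # relative shift delta r_f
dA_over_A = d_r * (1 - 2 * r_sm**2 / (1 + r_sm**2))   # Eq. (56)
print(f"A_f^SM = {A_sm:.4f}, delta A_f / A_f = {dA_over_A:.4e}")

# Eq. (58): leading-order W width and its shift.
Gf, mW = 1.1663787e-5, 80.360      # GeV^-2, GeV
Gamma_W_sm = 3 * np.sqrt(2) * Gf * mW**3 / (4 * np.pi)
d_gWl, d_gWq, d_mW2_rel = 1e-4, 1e-4, 1e-4            # toy inputs
d_Gamma_W = Gamma_W_sm * (4/3 * d_gWl + 8/3 * d_gWq + d_mW2_rel / 2)
print(f"Gamma_W^SM = {Gamma_W_sm:.4f} GeV, delta Gamma_W = {d_Gamma_W:.2e} GeV")
```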
2306.02381
Sparse Convolution for Approximate Sparse Instance
Computing the convolution $A \star B$ of two vectors of dimension $n$ is one of the most important computational primitives in many fields. For the non-negative convolution scenario, the classical solution is to leverage the Fast Fourier Transform, whose time complexity is $O(n \log n)$. However, the vectors $A$ and $B$ could be very sparse, and we can exploit such a property to accelerate the computation of the result. In this paper, we show that when $\|A \star B\|_{\geq c_1} = k$ and $\|A \star B\|_{\leq c_2} = n-k$ hold, we can approximately recover all indices in $\mathrm{supp}_{\geq c_1}(A \star B)$ with point-wise error of $o(1)$ in $O(k \log (n) \log(k)\log(k/\delta))$ time. We further show that we can iteratively correct the error and recover all indices in $\mathrm{supp}_{\geq c_1}(A \star B)$ correctly in $O(k \log(n) \log^2(k) (\log(1/\delta) + \log\log(k)))$ time.
Xiaoxiao Li, Zhao Song, Guangyi Zhang
2023-06-04T15:31:24Z
http://arxiv.org/abs/2306.02381v1
# Sparse Convolution for Approximate Sparse Instance ###### Abstract Computing the convolution \(A\star B\) of two vectors of dimension \(n\) is one of the most important computational primitives in many fields. For the non-negative convolution scenario, the classical solution is to leverage the Fast Fourier Transform, whose time complexity is \(O(n\log n)\). However, the vectors \(A\) and \(B\) could be very sparse, and we can exploit such a property to accelerate the computation of the result. In this paper, we show that when \(\|A\star B\|_{\geq c_{1}}=k\) and \(\|A\star B\|_{\leq c_{2}}=n-k\) hold, we can approximately recover all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) with point-wise error of \(o(1)\) in \(O(k\log(n)\log(k)\log(k/\delta))\) time. We further show that we can iteratively correct the error and recover all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) correctly in \(O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\) time. ## 1 Introduction Computing the convolution \(A\star B\) of two vectors of dimension \(n\) is one of the most important computational primitives. It has also been widely used in many fields such as computer vision [18, 19, 20, 21], signal processing [23, 24], and graph mining [25]. For example, it has applications in problems like the three number sum (3SUM) problem and all-pairs shortest paths, where the entries of the vectors \(A\) and \(B\) are non-negative integers. In string algorithms, non-negative convolution is employed when computing the Hamming distance between a pattern and each sliding window of a text [13]. The classical algorithm to compute the non-negative convolution leverages the Fast Fourier Transform (FFT), and its running time complexity is \(O(n\log n)\). Algorithms in [18] give \(O(n^{2})\) for 3-SUM and related problems. Thanks to the key techniques mentioned in [18], the celebrated Balog-Szemeredi-Gowers Theorem (BSG Theorem) [17, 20] and FFT, the authors in [18] gave the first truly subquadratic algorithms for various problems associated with 3SUM. The BSG theorem has been improved by [24]; however, the result of [24] did not provide an efficient algorithm (with small running time) for the solution, and only shows the existence of such a solution. However, in many scenarios, the vectors \(A\) and \(B\) could be very sparse, and we can exploit such sparsity to improve the running time complexity compared to the classical algorithm implemented via FFT. [1] studied the exact \(k\)-sparse case and proved that \(k\)-sparse non-negative convolution can be reduced to dense non-negative convolution with an additive \(k\log\log k\) term. We consider the problem of approximately \(k\)-sparse non-negative convolution and state the assumption as follows: **Assumption 1.1** (Approximate sparse non-negative convolution).: _Assume \(A,B\in\mathbb{R}_{+}^{n}\). Additionally, there exist \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\) such that \(\left\|A\star B\right\|_{\geq c_{1}}=k\) and \(\left\|A\star B\right\|_{\leq c_{2}}=n-k\)._ Note that the previous work [1] only considers \(c_{2}=0\). We can handle some error here. 
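To fix ideas, the following minimal sketch (toy sizes and a noise level of our choosing) constructs an instance satisfying Assumption 1.1 and computes the dense \(O(n\log n)\) FFT baseline that sparse algorithms aim to beat:

```python
import numpy as np

# Build non-negative A, B whose convolution has k large entries (>= c1) while
# every remaining entry is tiny noise (<= c2 = o(n^{-2})), as in Assumption 1.1.
rng = np.random.default_rng(0)
n = 1 << 12

A = np.zeros(n)
B = np.zeros(n)
A[rng.choice(n // 2, size=4, replace=False)] = rng.uniform(1.0, 2.0, size=4)
B[rng.choice(n // 2, size=2, replace=False)] = rng.uniform(1.0, 2.0, size=2)
A += rng.uniform(0, 1e-12, size=n)   # entrywise noise far below n^{-2} ~ 6e-8

def conv_fft(a, b):
    """Dense non-negative convolution via FFT, O(n log n) time."""
    m = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)

C = conv_fft(A, B)
c1 = 0.5
big = np.flatnonzero(C >= c1)        # supp_{>= c1}(A * B): the few large entries
small_max = C[C < c1].max()
print(f"k = {len(big)} large entries, largest small entry = {small_max:.3e}")
```

Here the spikes of \(A\) and \(B\) produce at most \(4\times 2=8\) large entries in \(A\star B\), while the noise contributes only \(O(10^{-12})\) mass per coordinate, so the instance is approximately sparse in the sense of Assumption 1.1.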
We summarize our contributions as follows: * We study the approximately \(k\)-sparse non-negative convolution problem and design an approximate sparse convolution algorithm (Algorithm 1) such that it can approximately recover all indices in \(\text{supp}_{\geq c_{1}}(A\star B)\) with point-wise error of \(o(1)\) in \(O(k\log(n)\log(k)\log(k/\delta))\) time. * We further design another algorithm (Algorithm 2) which can iteratively correct the error and recover all indices in \(\text{supp}_{\geq c_{1}}(A\star B)\) correctly in \(O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\) time. **Roadmap** We first discuss the related work in Section 2. We then present some preliminary definitions and lemmas in Section 3. We present our result for approximate sparse convolution in Section 4. We then show how to iteratively correct the error in Section 5. We conclude our paper in Section 6. ## 2 Related Work **Sparse Convolution** There has been a lot of previous work on accelerating sparse convolution computation [22, 23, 24, 25, 26]. Hash functions have wide applications in the sparse convolution problem: for vectors \(u,v\in\mathbb{R}_{+}^{n}\), compute their classical convolution \(\vec{u}\ast\vec{v}=\vec{z}\) (where \(z_{k}=\sum_{i=0}^{k}u_{i}v_{k-i}\)) in "output-sensitive" time, close to \(\|\vec{z}\|_{0}\), the number of nonzeros in \(\vec{z}\). The problem was raised by Muthukrishnan [14] and previously solved by Cole and Hariharan [13]. Cole et al. [13] obtained an \(O(k\log^{2}(n)+\operatorname{polylog}(n))\) time complexity for the sparse non-negative convolution case with a Las Vegas algorithm. Their strategy incorporates a number of concepts, including encoding characters with complex entries before using convolution, and builds on linear hashing and string algorithms to identify \(\operatorname{supp}(A\star B)\). Recent methods [15, 16, 17, 18, 19, 20] rely largely on hashing modulo an arbitrary prime number. This method loses one log factor as a result of the Prime Number Theorem's stipulation on the density of primes, and obtains \(O(k\log k)\) or even \(O(k\log^{2}k)\). Nakos et al. [16] achieve an \(\widetilde{O}(k\log^{2}(n)+\operatorname{polylog}(n))\) time complexity for the sparse general convolution case. There are several implementations of sparse convolution algorithms in [14, 15]. Sparse convolution is also related to the sparse Fourier transform, which has also been widely studied [1, 10]. **Sparse Matrix Multiplication** If most elements in a matrix are zero, we call this matrix a sparse matrix. However, it would be a waste of space and time to store and compute with all entries of such matrices. Therefore, it is important to only save the nonzero elements. In sequential programming languages, sparse matrices are standardly represented using an array with one element per row, each of which comprises a linked list of the nonzero values in that row and their column numbers. Sparse matrix multiplication is a compute kernel used in a variety of application fields. It plays an important role in many areas such as data analytics, graph processing, and scientific computing. In the past decades, there has been much previous work on algorithm-side optimization [11, 12, 13] and hardware acceleration [14, 15]. These algorithms greatly accelerate sparse matrix manipulation. ## 3 Preliminary **Notations** For any natural number \(n\), we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). We use \(A^{\top}\) to denote the transpose of matrix \(A\). 
For a probabilistic event \(f(x)\), we define \(\mathbf{1}\{f(x)\}\) such that \(\mathbf{1}\{f(x)\}=1\) if \(f(x)\) holds and \(\mathbf{1}\{f(x)\}=0\) otherwise. We use \(\Pr[\cdot]\) to denote probability, and use \(\mathbb{E}[\cdot]\) to denote expectation when it exists. For a matrix \(A\), we use \(\operatorname{tr}[A]\) for the trace of \(A\). We use \(\mathcal{T}_{\mathrm{mat}}(a,b,c)\) to denote the time of multiplying an \(a\times b\) matrix with another \(b\times c\) matrix.

We then provide several definitions: the \(\partial\) notation, non-negative convolution, cyclic convolution, generalized norm and support, and rounding.

**Definition 3.1** (The \(\partial\) notation).: Given a vector \(A\in\mathbb{R}^{n}\), we define \(\partial A\) to be the vector of the same dimension such that

\[(\partial A)_{i}:=A_{i}\cdot i. \tag{1}\]

**Example 3.2**.: We can choose \(A\) to be a length-\(7\) vector, \(A=(3,1,2,1,2,1,1)\). Then we can compute \(\partial A\), which becomes \(\partial A=(3,2,6,4,10,6,7)\). A visualization of \(A\) and \(\partial A\) is shown in Figure 1.

**Definition 3.3** (Non-negative Convolution).: Given vectors \(A,B\in\mathbb{N}^{n}\), the vector \(C=A\star B\in\mathbb{N}^{2n-1}\) is defined by

\[C_{k}:=\sum_{i=0}^{k}A_{i}\cdot B_{k-i}.\]

**Example 3.4**.: If \(A=(1,2,4,3,5,0,7)\), \(B=(1,4,3,6,7,8,9)\), then \(C=(1,6,15,31,48,75,93,129,116,109,94,56,63)\). A visualization of \(A\), \(B\) and \(C\) is shown in Figure 2(a).

**Example 3.5**.: If \(A=(0,1,0,1,0,1,0)\), \(B=(0,1,0,1,0,1,0)\), then \(C=(0,0,1,0,2,0,3,0,2,0,1,0,0)\). A visualization of \(A\), \(B\) and \(C\) is shown in Figure 2(b).

**Example 3.6**.: If \(A=(0,1,0,1,0,1,0)\), \(B=(0,1,0,1,1,1,1)\), then \(C=(0,0,1,0,2,1,3,2,2,2,1,1,0)\). A visualization of \(A\), \(B\) and \(C\) is shown in Figure 2(c).

Figure 2: Visualization of \(A\), \(B\) and \(C\).

Given the definition of non-negative convolution, we want to solve the following sparse non-negative convolution problem.

**Definition 3.7**.: Given vectors \(A,B\in\mathbb{R}_{+}^{n}\), we want to recover a vector \(D\in\mathbb{R}^{n}\) such that:

\[\operatorname{supp}(D) =\operatorname{supp}_{\geq c_{1}}(A\star B)\]
\[D_{j} =(A\star B)_{j}+o(1)\ \ \forall j\in\operatorname{supp}_{\geq c_{1}}(A\star B).\]

We state our main results in the following two theorems. Theorem 3.8 shows that we can compute an approximate sparse non-negative convolution of \(A\star B\).

**Theorem 3.8** (Approximate sparse convolution, informal version of Theorem 4.3).: _Let \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\). Suppose that Assumption 1.1 holds. There is an algorithm (Algorithm 1) that runs in time_

\[O(k\log(n)\log(k)\log(k/\delta))\]

_and recovers all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) with pointwise error \(o(1)\), i.e._

* \(\operatorname{supp}(D)=\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_
* \(D_{j}=(A\star B)_{j}+o(1)\) _for all_ \(j\in\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_

_holds with probability at least \(1-\delta\)._

Theorem 3.9 shows that we can iteratively correct the errors in the approximate sparse non-negative convolution.
**Theorem 3.9** (Informal version of Theorem 5.1).: _Let \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\). Suppose that Assumption 1.1 holds. There is an algorithm (Algorithm 2) that runs in time_

\[O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\]

_and recovers all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) exactly, i.e._

* \(\operatorname{supp}(D)=\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_
* \(D_{j}=(A\star B)_{j}\) _for all_ \(j\in\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_

_holds with probability at least \(1-\delta\)._

**Definition 3.10** (Cyclic convolution).: The cyclic convolution of two length-\(n\) vectors \(A,B\) is the length-\(n\) vector \(A\star_{n}B\) with

\[(A\star_{n}B)_{i}:=\sum_{j=0}^{n-1}A_{j}B_{(i-j)\mod n}.\]

We define the support as

\[\operatorname{supp}(A):=\{i\in[n]:A_{i}\neq 0\},\]

the \(\ell_{0}\) norm as

\[\|A\|_{0}:=|\operatorname{supp}(A)|,\]

and the \(\ell_{\infty}\) norm as

\[\|A\|_{\infty}:=\max_{i\in[n]}|A_{i}|.\]

**Definition 3.11** (Generalized norm and support).: For a vector \(A\in\mathbb{R}^{n}\), we define its \(\geq C\)-norm and \(\leq c\)-norm as follows:

* \(\|A\|_{\geq C}:=\sum_{i\in[n]}\mathbf{1}[A_{i}\geq C]\),
* \(\|A\|_{\leq c}:=\sum_{i\in[n]}\mathbf{1}[A_{i}\leq c]\).

We define the \(\geq C\)-support of a vector \(A\in\mathbb{R}^{n}\) as

\[\operatorname{supp}_{\geq C}(A):=\{i\in[n]:A_{i}\geq C\}.\]

**Definition 3.12** (Rounding).: Define the function \(\operatorname{int}(\cdot):\mathbb{R}\mapsto\mathbb{N}\) which rounds a number to its closest integer.

**Definition 3.13** (Affine operator).: We say \(\iota\) is an affine operator if

\[\iota(A)-\iota(B)=\iota(A-B).\]

In the following, when we refer to "isolated" or "non-isolated" elements, we only consider those in \(\operatorname{supp}_{\geq c_{1}}(A)\).

**Definition 3.14** (Isolated index).: Let \(g(x)=x\bmod p\) where \(p\) is a random prime in the range \([m,2m]\). We say that an index \(x\in\operatorname{supp}_{\geq c_{1}}(A\star B)\) is "isolated" if there is no other index \(x^{\prime}\in\operatorname{supp}_{\geq c_{1}}(A\star B)\) with

\[g(x^{\prime})\in(g(x)+\{-2p,-p,0,p,2p\})\mod m.\]

**Lemma 3.15** (Hash function, [1]).: _The product rule \(\partial(A\star B)=\partial A\star B+A\star\partial B\), when combined with the ideal hash function \(\iota\), gives_

\[\iota(\partial(A\star B))=\iota(\partial A)\star_{m}\iota(B)+\iota(A)\star_{m}\iota(\partial B).\]

_The \(b\)-th coordinate of this vector is_

\[\iota(\partial(A\star B))_{b}=\sum_{i:\iota(i)=b}i\cdot(A\star B)_{i},\]

_which can be accessed by computing the length-\(m\) convolutions \(\iota(\partial A)\star_{m}\iota(B)\) and \(\iota(A)\star_{m}\iota(\partial B)\) and adding them together. By setting \(m=O(k)\), we can now infer a constant fraction of the elements \(i\in\operatorname{supp}(A\star B)\) by performing the division_

\[\frac{\iota(\partial(A\star B))_{b}}{\iota((A\star B))_{b}}=\frac{\sum_{i:\iota(i)=b}i\cdot(A\star B)_{i}}{\sum_{i:\iota(i)=b}(A\star B)_{i}}\]

_for all \(b\in[m]\). This yields the locations of all isolated elements in \(\operatorname{supp}(A\star B)\) under \(\iota\)._
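To make the mechanics of Lemma 3.15 concrete, the following sketch (assuming NumPy and SymPy's `randprime`, and using the prime itself as the bucket count \(m\)) folds \(A\) and \(B\) modulo a random prime, takes cyclic convolutions, and recovers isolated indices by the division above. Note the code is 0-indexed, so \((\partial A)_i=i\cdot A_i\), which is exactly what makes the product rule hold:

```python
import numpy as np
from sympy import randprime

def fold(A, p):
    """iota(A)_b = sum of A_i over all i with i mod p == b (affinity folding)."""
    out = np.zeros(p)
    np.add.at(out, np.arange(len(A)) % p, A)
    return out

def cyc_conv(u, v):
    """Length-p cyclic convolution (Definition 3.10) via FFT."""
    return np.fft.irfft(np.fft.rfft(u) * np.fft.rfft(v), len(u))

rng = np.random.default_rng(1)
n, k = 1 << 10, 8
A, B = np.zeros(n), np.zeros(n)
A[rng.choice(n, k, replace=False)] = 1.0
B[rng.choice(n, k, replace=False)] = 1.0
dA, dB = np.arange(n) * A, np.arange(n) * B     # 0-indexed version of Def. 3.1

p = randprime(8 * k * k, 16 * k * k)            # enough buckets to isolate most indices
V = cyc_conv(fold(A, p), fold(B, p))            # = iota(A * B) by affinity
W = cyc_conv(fold(dA, p), fold(B, p)) + cyc_conv(fold(A, p), fold(dB, p))
C = np.convolve(A, B)                           # ground truth, for checking only

recovered = {}
for b in np.flatnonzero(V > 0.5):               # buckets holding some mass
    x = W[b] / V[b]                             # Lemma 3.15: the ratio reveals the index
    if abs(x - round(x)) < 1e-6:                # bucket looks like a singleton
        recovered[int(round(x))] = V[b]
hits = sum(abs(C[i] - v) < 1e-6 for i, v in recovered.items())
print(f"correctly recovered {hits} of {int(np.sum(C >= 0.5))} large entries")
```

We will leverage the following lemma of hashing modulo a random prime during our analysis.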
**Lemma 3.16** (Hashing Modulo a Random Prime, Lemma 8.2 of [1]).: _For modular hashing, we let the hash function be \(g(x)=x\bmod p\), where \(p\) is a random prime in the range \([m,2m]\) and \(g(x)\) is the integer hash code generated from the key \(x\). Then the following properties hold:_

**Universality:**: _For distinct keys \(x,y\in[U]\):_

\[\Pr[g(x)=g(y)]\leq 2\log(U)/m.\]

**Affinity:**: _For arbitrary keys \(x,y\):_

\[g(x)+g(y)=g(x+y)\mod p.\]

We also generalize the definition of \(g(x)\) to \(g(A)\) for a vector \(A\in\mathbb{R}^{n}\). Let \(g(x)=x\mod p\). Then \(g(A)\in\mathbb{R}^{p}\), where

\[g(A)_{i}:=\sum_{j\in[n],\ g(j)=i}A_{j}.\]

A visualization of \(g(x)=x\mod p\) and of Algorithm 1 Line 11 is shown in Figure 3.

Figure 3: Visualization of \(g(x)=x\mod p\). There are seven elements.

We will also use the Hoeffding bound to obtain a high success probability.

**Lemma 3.17** (Hoeffding bound [1]).: _Let \(X_{1},\cdots,X_{n}\) be \(n\) independent bounded variables with \(X_{i}\in[a_{i},b_{i}]\). Let \(X:=\sum_{i=1}^{n}X_{i}\); then we have_

\[\Pr[|X-\mathbb{E}[X]|\geq t]\leq 2\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}\right).\]

## 4 Approximate sparse non-negative convolution

In this section, we present our proposed approximate sparse convolution algorithm as well as the corresponding lemmas and proofs. We first describe our approximate sparse non-negative convolution algorithm in Algorithm 1 and the theorem stating its approximation guarantee and running time.

In the following lemma, we prove that every index \(i\in\operatorname{supp}_{\geq c_{1}}(A\star B)\) is isolated in at least \(L/2\) of the hash functions with high probability.

**Lemma 4.1**.: _Suppose that Assumption 1.1 holds. With probability at least \(1-\delta\), every \(i\in\operatorname{supp}_{\geq c_{1}}(A\star B)\) is isolated in at least \(L/2\) of the hash functions in \(\{g^{(l)}\}_{l=1}^{L}\)._

Proof.: Recall that \(A,B\in\mathbb{R}_{+}^{n}\) and there exist \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\) such that

\[\|A\star B\|_{\geq c_{1}}=k\]

and

\[\|A\star B\|_{\leq c_{2}}=n-k,\]

according to Assumption 1.1. Let \(g(x)=x\) mod \(p\), where \(p\) is a random prime in the range \([m,2m]\). By Lemma 3.16, we have

\[\Pr[i\text{ is non-isolated}] \leq|\operatorname{supp}_{\geq c_{1}}(A\star B)|\cdot\Pr[g(x)=g(y)]\]
\[\leq k\cdot\Pr[g(x)=g(y)]\]
\[\leq k\cdot 2\log(U)/m\]
\[=\frac{2k\log n}{m}\]
\[\leq\frac{1}{C\log k}\]
\[\leq\frac{1}{4}, \tag{2}\]

where the first step follows from the definition of an isolated index, the second step follows from \(|\operatorname{supp}_{\geq c_{1}}(A\star B)|=k\), the third step follows from \(\Pr[g(x)=g(y)]\leq 2\log(U)/m\), the fourth step follows from \(U=n\), the fifth step follows from \(m\geq C\cdot k(\log n)\cdot(\log k)\), and the last step follows from \(C\geq 4\) and \(\log k\geq 1\).

For every fixed \(i\in\operatorname{supp}_{\geq c_{1}}(A\star B)\), since the hash functions are chosen i.i.d., by Lemma 3.17, with probability at least \(1-\delta/k\),

\[\frac{1}{L}\sum_{l=1}^{L}\mathbf{1}[i\text{ is non-isolated in }g^{(l)}] \leq\ \Pr[i\text{ is non-isolated}]+2\sqrt{\frac{\log\left(k/\delta\right)}{L}}\]
\[\leq 1/4+2\sqrt{\frac{\log\left(k/\delta\right)}{L}}\]
\[\leq 1/4+1/4\]
\[\leq 1/2, \tag{3}\]

where the first step follows from the Hoeffding bound (Lemma 3.17), the second step follows from Eq. (2), the third step follows from \(L\geq 100\log(k/\delta)\), and the last step follows from simple algebra.

The claim then follows by a union bound over all \(i\in\operatorname{supp}_{\geq c_{1}}(A\star B)\). This completes the proof.
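A quick empirical check of this isolation argument (a simulation sketch assuming NumPy and SymPy; plain bucket collisions under \(x\bmod p\) are used as a simplified stand-in for the stricter Definition 3.14):

```python
import numpy as np
from sympy import randprime

rng = np.random.default_rng(2)
n, k, L = 1 << 20, 256, 32                     # L = Theta(log(k/delta)) repetitions
support = rng.choice(n, k, replace=False)      # stand-in for supp_{>=c1}(A * B)
m = 4 * k * int(np.log(n)) * int(np.log(k))    # m = Theta(k log(n) log(k)), as above

iso = np.zeros((L, k), dtype=bool)
for l in range(L):
    p = randprime(m, 2 * m)
    vals, inv, cnt = np.unique(support % p, return_inverse=True, return_counts=True)
    iso[l] = cnt[inv] == 1                     # true iff the index collides with nobody
print("per-prime isolation rate:", iso.mean())                    # comfortably above 3/4
print("every index isolated in >= L/2 primes:", bool(np.all(iso.sum(0) >= L / 2)))
```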
The goal of the following lemma is to prove that the value \(V_{i}^{(l)}\) computed in SparseConvolution in Algorithm 1 satisfies \(V_{i}^{(l)}=(A\star B)_{\mathrm{int}(x)}+o(1)\).

**Lemma 4.2**.: _Consider \(l\in[L]\) and its corresponding hash function \(g^{(l)}:[n]\rightarrow[p^{(l)}]\). Let_

\[V^{(l)} :=g^{(l)}(A)\star g^{(l)}(B),\]
\[W^{(l)} :=g^{(l)}(\partial A)\star g^{(l)}(B)+g^{(l)}(A)\star g^{(l)}(\partial B),\]

_as defined in Lines 12 and 13. Let \(i\in\operatorname{supp}_{\geq c_{1}}(A\star B)\) be some coordinate and let_

\[x:=W_{i}^{(l)}/V_{i}^{(l)},\]

_as stated in Line 15. Suppose \(i\) is isolated with respect to \(g^{(l)}\). Then \(x\) satisfies \(|x-\mathrm{int}(x)|=o(1)\), and the value \(V_{i}^{(l)}\) in Algorithm 1 Line 12 satisfies_

\[V_{i}^{(l)}=(A\star B)_{\mathrm{int}(x)}+o(1).\]

Proof.: There exists a unique \(\widehat{x}\in\mathrm{supp}_{\geq c_{1}}(A\star B)\) with \(g^{(l)}(\widehat{x})=i\). In this case, we have

\[V_{i}^{(l)} = \sum_{y:g^{(l)}(y)=i}(A\star B)_{y} \tag{4}\]
\[= (A\star B)_{\widehat{x}}+o(1),\]

where the first step follows from the definition of \(V_{i}^{(l)}\) in Algorithm 1, and the second step follows because every other \(y\) with \(g^{(l)}(y)=i\) satisfies \((A\star B)_{y}\leq c_{2}\), so their total contribution is at most \(n\cdot c_{2}=o(1)\).

Next, we can rewrite \(x\) as follows:

\[x =\frac{W_{i}^{(l)}}{V_{i}^{(l)}}\]
\[=\frac{\iota^{(l)}(\partial(A\star B))_{i}}{\iota^{(l)}(A\star B)_{i}}\]
\[=\frac{\sum_{y:\iota^{(l)}(y)=i}y\cdot(A\star B)_{y}}{\sum_{y:\iota^{(l)}(y)=i}(A\star B)_{y}}\]
\[=\widehat{x}+o(1),\]

where the first step follows from the definition of \(x\), the second step follows from the definitions of \(W_{i}^{(l)}\) and \(V_{i}^{(l)}\) together with Eq. (1) and Lemma 3.15, the third step follows from the definition of \(\iota^{(l)}\), and the last step follows from Eq. (4). Therefore,

\[|x-\widehat{x}|=o(1)\]

and

\[|V_{i}^{(l)}-(A\star B)_{\widehat{x}}|=o(1).\]

This completes the proof.

With Lemma 4.1 and Lemma 4.2 in hand, we can prove the approximation guarantee and time complexity of SparseConvolution in Algorithm 1 in Theorem 4.3.

**Theorem 4.3** (Approximate sparse convolution, formal version of Theorem 3.8).: _Suppose that Assumption 1.1 holds, and let \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\). Algorithm 1 runs in time_

\[O(k\log(n)\log(k)\log(k/\delta))\]

_and recovers all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) with pointwise error \(o(1)\), i.e._

* \(\operatorname{supp}(D)=\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_
* \(D_{j}=(A\star B)_{j}+o(1)\) _for all_ \(j\in\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_

_holds with probability at least \(1-\delta\)._

Proof.: We condition on the high-probability event of Lemma 4.1. In this event, for each \(i\in I\), we have \(L/2\leq|F_{i}|\leq L\), so we must have \(D_{i}=\operatorname{median}(F_{i})=V_{j}^{(l)}\) for some \(l\) and \(j\) such that \(i\) is isolated and \(i=W_{j}/V_{j}\). By Lemma 4.2, this implies \(D_{i}=(A\star B)_{i}+o(1)\) and \(\operatorname{supp}_{\geq c_{1}}(A\star B)\subseteq I\). Hence the first statement is proven.

For the running time, we note the following:

* Lines 12-13 take \(O(k\log(n)\log^{2}(k))\) time.
* Lines 14-17 take one pass over all recoveries, which takes \(O(mL)\) time.
* Lines 21 and 23 take \(O(|C|)=O(mL)\) time in total.
We could build a perfect hash function by scanning through all pairs in \(C\) and extracting all possible indices and their corresponding values in linear time.

* Line 25 takes \(O(mL)\) time in total. For each \(i\in I\), taking the median of the set \(F_{i}\) takes \(O(|F_{i}|)\) time. Because \(|F_{i}|\leq L\) and \(|I|\leq m\), overall this takes \(O(\sum_{i\in I}|F_{i}|)=O(mL)\) time.

Since \(m=O(k\log(n)\log(k))\) and \(L=O(\log(k/\delta))\), we have

\[mL=O(k\log(n)\log(k)\log(k/\delta)).\]

To sum up, the total running time is

\[O(k\log(n)\log^{2}(k)+mL)=O(k\log(n)\log(k)\log(k/\delta)).\]

## 5 Iteratively correcting the errors

The previous section presented the algorithm for approximate convolution computation. In this section, we show how to iteratively correct the errors in Algorithm 2.

**Theorem 5.1** (Formal version of Theorem 3.9).: _Let \(c_{1}=\Omega(1)\) and \(c_{2}=o(n^{-2})\). Suppose Assumption 1.1 holds. There is an algorithm (Algorithm 2) that runs in time_

\[O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\]

_and recovers all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) exactly, i.e._

* \(\operatorname{supp}(D)=\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_
* \(D_{j}=(A\star B)_{j}\) _for all_ \(j\in\operatorname{supp}_{\geq c_{1}}(A\star B)\)_,_

_holds with probability at least \(1-\delta\)._

Proof.: The proof follows from Lemma 5.7 and Lemma 5.8. This completes the proof.

We use the following lemma from [1]. Since this result only depends on the hash function and Lines 12-16, the proof is similar.

**Lemma 5.2** (Lemma 8.3 in [1], Most Indices are Isolated).: _Let \(l\) be any level. If_

\[\|A\star B-C^{l-1}\|_{\geq c_{1}}\leq\frac{2^{-1.5^{l-1}}k}{\log^{2}(k)},\]

_then with probability \(1-\delta/(2L)\), there will be at most_

\[\frac{2^{-1.5^{l}}k}{2\log^{2}(k)}\]

_non-isolated elements at level \(l\)._

```
 1: data structure
 2:   members
 3:     Vectors A, B in R_+^n
 4:     Integer k such that ||A * B||_0 <= k
 5:   end members
 6:
 7:   procedure SparseConvolution(A, B in R_+^n)
 8:     m <- 8k log(n) log^2(k)
 9:     L <- Theta(log log k)
10:     for l <- 1, ..., L do
11:       R_l <- 2 log(2L/delta) / 1.5^(l-1)
12:       for r <- 1, ..., R_l do
13:         Randomly pick a prime p in [m, 2m]
14:         g_r(x) := x mod p
15:         V_r <- g_r(A) * g_r(B) - g_r(C^(l-1))                          using FFT
16:         W_r <- g_r(dA) * g_r(B) + g_r(A) * g_r(dB) - g_r(dC^(l-1))     using FFT
17:       end for
18:       r* <- argmax_{r in [R_l]} |supp_{>= c_1}(V_r)|
19:       g <- g_{r*}, V <- V_{r*}, W <- W_{r*}
20:       C^l <- C^(l-1)
21:       for i in supp_{>= c_1}(V) do
22:         x <- W_i / V_i
23:         if |x - int(x)| = o(1) then
24:           C^l_x <- C^l_x + V_i
25:         end if
26:       end for
27:     end for
28:     C <- C^L
29:     return C
30:   end procedure
31: end data structure
```

**Algorithm 2** Sparse non-negative convolution. Here dA denotes \(\partial A\).
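A compact Python rendering of this procedure (a sketch only: it assumes NumPy and SymPy, reuses the folding idea from Section 3, picks simplified parameters, and omits the careful handling of erroneous commits that the analysis below accounts for):

```python
import numpy as np
from sympy import randprime

def fold(A, p):
    out = np.zeros(p)
    np.add.at(out, np.arange(len(A)) % p, A)
    return out

def cyc_conv(u, v):
    return np.fft.irfft(np.fft.rfft(u) * np.fft.rfft(v), len(u))

def sparse_convolution(A, B, k, c1=0.5, delta=0.1):
    n = len(A)
    dA, dB = np.arange(n) * A, np.arange(n) * B
    C = np.zeros(2 * n - 1)                         # C^0, the running estimate of A * B
    m = 8 * k * max(1, int(np.log2(n)))             # simplified stand-in for Line 8
    L = 3 + int(np.log2(max(2.0, np.log2(k + 2))))  # L = Theta(log log k)
    for l in range(1, L + 1):
        R = max(1, int(2 * np.log(2 * L / delta) / 1.5 ** (l - 1)))
        cands = []
        for _ in range(R):                          # Lines 12-17: R_l random primes
            p = randprime(m, 2 * m)
            dC = np.arange(len(C)) * C
            V = cyc_conv(fold(A, p), fold(B, p)) - fold(C, p)
            W = (cyc_conv(fold(dA, p), fold(B, p))
                 + cyc_conv(fold(A, p), fold(dB, p)) - fold(dC, p))
            cands.append((np.sum(V >= c1), V, W))
        _, V, W = max(cands, key=lambda t: t[0])    # Line 18: largest residual support
        for b in np.flatnonzero(V >= c1):           # Lines 21-26: commit isolated buckets
            x = W[b] / V[b]
            if abs(x - round(x)) < 1e-6:
                C[int(round(x))] += V[b]
    return C

# quick check on a toy instance
n = 1 << 9
A = np.zeros(n); A[[3, 97, 200]] = 1.0
B = np.zeros(n); B[[5, 41]] = 1.0
C = sparse_convolution(A, B, k=6)
print("max error:", np.max(np.abs(C - np.convolve(A, B))))   # typically 0.0
```

On exact sparse inputs this loop typically converges within the \(O(\log\log k)\) levels; wrong commits show up as negative residual entries, which the full algorithm and its analysis handle explicitly.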
**Definition 5.3** (\(L\) and \(R_{l}\)).: We define \(L\) as

\[L:=\Theta(\log\log k).\]

For each \(l\in[L]\), we define

\[R_{l}:=\Theta(\log(L/\delta))/1.5^{l-1}.\]

**Definition 5.4** (Residual and derivative of residual).: Consider \(r\in[R_{l}]\) and its corresponding hash function \(g_{r}:[n]\rightarrow[p_{r}]\). We define \(V_{r}\) and \(W_{r}\) as follows:

\[V_{r} :=g_{r}(A)\star g_{r}(B)-g_{r}(C^{l-1}),\]
\[W_{r} :=g_{r}(\partial A)\star g_{r}(B)+g_{r}(A)\star g_{r}(\partial B)-g_{r}(\partial C^{l-1}).\]

See also Lines 15 and 16 in Algorithm 2.

**Definition 5.5** (Finding the largest residual).: We define

\[r^{*}:=\arg\max_{r\in[R_{l}]}|\mathrm{supp}_{\geq c_{1}}(V_{r})|.\]

**Lemma 5.6** (Isolated Indices are Recovered).: _Denoting the number of non-isolated elements at level \(l\) by \(r\), we have_

\[\|A\star B-C^{l}\|_{\geq c_{1}}\leq 2r.\]

Proof.: Focus on an arbitrary level \(l\), and assume that we have already picked a hash function \(g\) in Algorithm 2, Lines 12-19. In Definition 5.4, we provided the definitions of \(V\) and \(W\). By the affinity of \(g\), it holds that \(V=g(A\star B-C^{l-1})\); by additionally using the product rule, \(W=g(\partial(A\star B-C^{l-1}))\). Now focus on an arbitrary \(i\in[n]\). There are three cases.

**Case 1.** For all \(x\in[n]\) with \(g(x)=i\), we have

\[0\leq(A\star B-C^{l-1})_{x}\leq c_{2}\cdot n^{l-1}.\]

In this case, we have

\[V_{i} =\sum_{x:g(x)=i}(A\star B-C^{l-1})_{x}\]
\[\leq n\cdot c_{2}\]
\[=o(1).\]

Thus \(i\notin\mathrm{supp}_{\geq c_{1}}(V)\).

**Case 2.** There exists a unique \(x\in\mathrm{supp}_{\geq c_{1}}(A\star B-C^{l-1})\) with \(g(x)=i\). In this case, we have

\[V_{i}=\sum_{y:g(y)=i}(A\star B-C^{l-1})_{y}=(A\star B-C^{l-1})_{x}+o(1)\]

and

\[\frac{g(\partial(A\star B-C^{l-1}))_{i}}{g(A\star B-C^{l-1})_{i}}=\frac{\sum_{y:g(y)=i}y\cdot(A\star B-C^{l-1})_{y}}{\sum_{y:g(y)=i}(A\star B-C^{l-1})_{y}}=x+o(1).\]

Therefore we successfully recover

\[C^{l}_{x}=C^{l-1}_{x}+V_{i}=(A\star B)_{x}+o(1).\]

**Case 3.** There exist multiple \(x\in\mathrm{supp}_{\geq c_{1}}(A\star B-C^{l-1})\) with \(g(x)=i\). In this case, we have

\[V_{i}=\sum_{y:g(y)=i}(A\star B-C^{l-1})_{y}\]

and

\[\frac{g(\partial(A\star B-C^{l-1}))_{i}}{g(A\star B-C^{l-1})_{i}}=\frac{\sum_{y:g(y)=i}y\cdot(A\star B-C^{l-1})_{y}}{\sum_{y:g(y)=i}(A\star B-C^{l-1})_{y}}.\]

If \(V_{i}\geq c_{1}\) and \(\frac{g(\partial(A\star B-C^{l-1}))_{i}}{g(A\star B-C^{l-1})_{i}}=\widehat{x}\), then

\[(A\star B-C^{l})_{\widehat{x}}=\Omega(1).\]

Otherwise, this iteration does not recover any coordinate in \(A\star B-C^{l-1}\). Each non-isolated element thus either remains unrecovered or introduces at most one erroneous coordinate, which yields the claimed bound of \(2r\).

**Lemma 5.7** (Correctness of Algorithm 2).: _Algorithm 2 correctly outputs \(C=A\star B\) with probability \(1-\delta\)._

Proof.: We show that, with probability \(1-\delta\), it holds that

\[\|A\star B-C^{\ell}\|_{\geq c_{1}}\leq 2^{-1.5^{\ell}}k\log^{-2}(k)\]

for all levels \(\ell\). At the last level,

\[L=\log_{1.5}\log k=O(\log\log k),\]

we must have

\[\|A\star B-C^{L}\|_{\geq c_{1}}=0,\]

and thus

\[A\star B=C^{L}=C.\]

The proof is by induction on \(\ell\in[L+1]\). For \(\ell=0\), the statement is true assuming that SparseConvolution in Algorithm 2 with parameter \(\delta/2\leq\log^{-2}(k)/2\) succeeds. For \(\ell>1\), we appeal to the previous lemmas. By the induction hypothesis, we assume that

\[\|A\star B-C^{\ell-1}\|_{\geq c_{1}}\leq 2^{-1.5^{\ell-1}}k\log^{-2}(k).\]

Hence, by Lemma 5.2, the algorithm picks a hash function \(g\) under which only \(2^{-1.5^{\ell}}k\log^{-2}(k)/2\) elements are non-isolated at level \(\ell\). By Lemma 5.6, it follows that

\[\|A\star B-C^{\ell}\|_{\geq c_{1}}\leq 2^{-1.5^{\ell}}k\log^{-2}(k),\]

which is exactly what we intended to show.

For \(\ell=0\), the error probability is \(\delta/2\). For any other level, the error probability is \(\delta/(2L)\) by Lemma 5.2, and there are \(L\) such levels in total. Taking a union bound over these levels, we obtain the desired overall error probability of \(\delta\). This completes the proof.
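To see the doubly exponential decay in this induction concretely, here is a quick numeric check of the level-\(\ell\) bound \(2^{-1.5^{\ell}}k\log^{-2}(k)\) (an illustrative \(k=10^{6}\), natural logarithm):

```python
import numpy as np

k = 10 ** 6

def residual_bound(level):
    """Upper bound on ||A*B - C^level||_{>= c1} from the induction in Lemma 5.7."""
    return 2.0 ** (-(1.5 ** level)) * k / np.log(k) ** 2

for level in range(8):
    print(level, residual_bound(level))
# The bound drops below 1 (i.e., the residual support must be empty) after
# roughly log_{1.5} log(k) = O(log log k) levels, matching the choice of L.
```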
**Lemma 5.8** (Time complexity of Algorithm 2).: _There is an algorithm (Algorithm 2) that can compute \(C=A\star B\) in_

\[O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\]

_time._

Proof.: The running time of SparseConvolution in Algorithm 2 can be computed in the following steps:

* Lines 15 and 16 take \(O(k\log(n)\log^{2}(k))\) time per iteration to compute the FFTs. We can bound the total number of iterations by \[\sum_{\ell=1}^{L}\left\lceil\frac{2\log(2L/\delta)}{1.5^{\ell-1}}\right\rceil \leq (2\log(2L/\delta))\sum_{\ell=1}^{L}\frac{1}{1.5^{\ell-1}} \leq (2\log(2L/\delta))\cdot 10 = O(\log(L/\delta)),\] where the second step follows from the sum of an infinite geometric series. Therefore, the total time spent on FFTs is \[O(k\log(n)\log^{2}(k)\log(1/\delta)).\]
* Lines 21 to 26 take \[O(mL)=O(k\log(n)\log^{2}(k)\log\log(k))\] time.

Therefore, the overall time complexity is

\[O(k\log(n)\log^{2}(k)\log(1/\delta))+O(k\log(n)\log^{2}(k)\log\log(k)) = O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k))).\]

This completes the proof.

## 6 Conclusion

The computation of the convolution \(A\star B\) of two vectors of dimension \(n\) is considered one of the most important and fundamental computational primitives in a wide variety of fields. Utilizing the Fast Fourier Transform, which has a time complexity of \(O(n\log n)\), is the traditional way to solve the problem. However, the non-negative convolution may have a very sparse representation, a property that we can use to our advantage to speed up the computation. In this paper, we show that in the approximately \(k\)-sparse case, we can approximately recover all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) with pointwise error \(o(1)\) in \(O(k\log(n)\log(k)\log(k/\delta))\) time. We further show that we can iteratively correct the errors and recover all indices in \(\operatorname{supp}_{\geq c_{1}}(A\star B)\) exactly in \(O(k\log(n)\log^{2}(k)(\log(1/\delta)+\log\log(k)))\) time.
2310.10063
1,1-Diphenyl-2-picrylhydrazyl and superoxide anion radical scavenging activities of heterocyclic 2-oxo-1,2,3,4-tetrahydropyrimidines
To investigate 1,1-Diphenyl-2-picrylhydrazyl (DPPH) and superoxide radical (SOR) scavenging activities of 2-oxo-1,2,3,4-tetrahydropyrimidine derivatives. Free radicals are highly unstable and reactive molecules/atoms. In the body, free radicals form during normal and abnormal metabolism and cause serious damage to other biomolecules by generating oxidative stress (OS). If free radical-induced OS is not neutralized properly, it leads to multiple pathologies, including several types of cancers.
Shahida Perveen, Qurat-ul-Ain, Sarosh Iqbal, Sheeba Wajid, Khalid Muhammad Khan, Muhammad Iqbal Choudhary
2023-10-16T04:57:17Z
http://arxiv.org/abs/2310.10063v1
**1,1-Diphenyl-2-picrylhydrazyl and superoxide anion radical scavenging activities of heterocyclic 2-oxo-1,2,3,4-tetrahydropyrimidines**

## Abstract

### Objective

To investigate 1,1-Diphenyl-2-picrylhydrazyl (DPPH) and superoxide radical (SOR) scavenging activities of 2-oxo-1,2,3,4-tetrahydropyrimidine derivatives. Free radicals are highly unstable and reactive molecules/atoms. In the body, free radicals form during normal and abnormal metabolism and cause serious damage to other biomolecules by generating oxidative stress (OS). If free radical-induced OS is not neutralized properly, it leads to multiple pathologies, including several types of cancers.

### Method

The DPPH and SOR scavenging activities of the 2-oxo-1,2,3,4-tetrahydropyrimidine derivatives were measured employing DPPH and SOR scavenging assays in 96-well plates; ethanol was used as the solvent to dissolve the compounds.

### Results

In the current investigation, 2-oxo-1,2,3,4-tetrahydropyrimidine derivatives (**1-25**) were allowed to react with DPPH and superoxide (SOR) radicals. Promising data were collected, with IC\({}_{50}\) values ranging from \(3.32\pm 0.08\) µM to \(167.31\pm 0.74\) µM, as compared to the positive reference compound quercetin (IC\({}_{50}=94.1\pm 1.2\) µM) in the SOR assay. Compound **13** exhibited significant activity in the DPPH assay (IC\({}_{50}=61.06\pm 0.6\) µM), as compared to the reference compound ascorbic acid (IC\({}_{50}=40.1\pm 1.1\) µM).

### Conclusions

Hence, this preliminary study identifies a potent class of new antioxidant molecules that can serve as a lead against oxidative stress-related pathologies and cancers.

### Keywords

Superoxide anion radicals, nitro blue tetrazolium, pyrimidines, oxidative stress, radical scavenger

## Introduction

Free radicals have been known since ancient times to affect human life in various ways. Chemically, they are highly unstable and very reactive towards other molecules. In the body, not a single cell is spared from such damage. Understanding the mechanisms of free radicals, the related pathologies, and their treatments has attracted great attention from numerous researchers. The superoxide anion radical (O\({}_{2}^{\bullet-}\)) is the most prevalent primary radical of the human body. It is formed by a one-electron transfer/reduction of molecular oxygen (O\({}_{2}\)). The term "superoxide" was coined for O\({}_{2}^{\bullet-}\) as it is extraordinarily reactive and acts as a strong oxidizing agent. Superoxide radicals are continuously generated in cells under normal physiological conditions and lead to the formation of a broad range of other deadly free radicals, collectively known as reactive oxygen species (ROS). Superoxide is also known as a powerful initiator of chain reactions in the body. In the body, mainly two pathways generate superoxide anion radicals. The first is the mitochondrial electron transport chain (ETC): during oxidative ATP production, some electrons "leak" to oxygen prematurely, forming superoxide, which has been implicated in a range of serious health conditions (Valko, M. et al., 2007). The second source is the NADPH oxidase enzyme, which plays a crucial role in the immune system. Once activated, NADPH oxidase produces a burst of ROS that serves in killing invading pathogens. Unneutralized superoxide anion radicals lead to the generation of other secondary reactive species such as OH\({}^{\bullet}\), H\({}_{2}\)O\({}_{2}\), RO\({}^{\bullet}\) and ROO\({}^{\bullet}\), all of which are oxygen-centered species.
If these radicals are not handled properly, they accumulate in the body and attack various biomolecules in proximity, such as proteins, lipids (especially polyunsaturated fatty acids, PUFA), and the nucleic acids DNA/RNA. Oxidative damage to these molecules results in alteration of their structure and function and hence leads to a stressful state in the body, "oxidative stress" (Mohana KN et al., 2013). This condition also exaggerates other pathological conditions such as cancer, diabetes mellitus, rheumatoid arthritis, neurodegenerative and cardiac diseases, and atherosclerosis, and causes early ageing (El-Bahr, 2013; Basu Abhijit et al., 2022). At the same time, our body is endowed with numerous endogenous antioxidant defense systems, both enzymatic and non-enzymatic. This system provides prevention and protection against such unwanted accumulated oxidation and oxidative stress. More precisely, antioxidants work by neutralizing deadly free radicals and relieving oxidative stress. Apart from endogenous antioxidants, exogenous antioxidants have revolutionized the management of free radical-induced bodily damage. They are known to neutralize radicals by accepting or donating electrons (Lobo, V. et al., 2010). Some common exogenous antioxidants include vitamin E, vitamin C, flavonoids, beta-carotene, and some omega fatty acids (Pham-Huy et al., 2008). These strong antioxidants can be taken through the diet (fruits/vegetables) or as supplements. The health and quality of life of an organism can be improved by essentially keeping the balance between oxidants and antioxidants.

Pyrimidines and fused pyrimidines represent a broad class of compounds that have attracted great attention in medicinal chemistry. Extensive research is ongoing to discover new tetrahydropyrimidines due to their close structural features to the clinically important dihydropyridines (calcium-channel blockers) (Fadda, AA et al., 2013). Nitrogen-containing pyrimidine derivatives form a component of a number of useful drugs and are associated with many biological and therapeutic activities (Chaudhary A, et al., 2011; Mohana KN et al., 2013). They are also reported to possess several biological activities, such as anticancer (Steven, et al., 2010; Perveen, S., et al., 2018), anti-inflammatory, analgesic, and ulcerogenic activity (El-Gazzar _et al._, 2007), and anti-HIV activity (Gardelli, C et al., 2007). Antioxidants are molecules that are proven to be important in treating multiple serious pathologies (cancer, diabetes mellitus, etc.) and improving the quality of a patient's life. Recently, our research group reported pyrimidine derivatives against xanthine oxidase (Zafar H et al., 2018). Xanthine oxidase is an oxidative enzyme that generates O\({}_{2}^{\bullet-}\), and excellent results were observed with xanthine oxidase inhibition. In the present study, we evaluated pyrimidone derivatives **1-25** against DPPH and superoxide anion radicals _in vitro_. Derivatives **1-25** exhibited exciting results. Hence, this class of new compounds can serve as strong antioxidants and anti-inflammatory agents.

## 2 Experimental

### Chemistry

A series of twenty-five 2-oxo-1,2,3,4-tetrahydropyrimidines (**1-25**) was synthesized by combining urea, ethyl acetoacetate, and various aldehydes. Copper nitrate trihydrate was used as the catalyst. The synthesis of the 2-oxo-1,2,3,4-tetrahydropyrimidines (**1-25**) was reported previously by our research group (Iqbal S _et al._, 2018). The general procedure for the synthesis of compounds (**1-25**) is presented below (Scheme **1**).
**Scheme 1.** Synthesis of 2-oxo-1,2,3,4-tetrahydropyrimidines (**1-25**) from ethyl acetoacetate, urea, and substituted aldehydes.

### Superoxide radical scavenging (SOR) assay

The absorbance of the reaction mixture decreases in the presence of an active compound; the decrease in absorbance corresponds to the compound's activity. Quercetin dihydrate was used as the positive control. The solutions of NADH, NBT, and PMS were prepared in phosphate buffer. The test and standard samples were dissolved in DMSO (Hazra, B. et al., 2008). Superoxide radical scavenging activity was calculated using the following formula:

% Superoxide radical scavenging activity of test sample = (1 - Abs. of test sample / Abs. of control) x 100

### Statistical analysis

All data are expressed as mean values (n=3) with SEM (standard error of the mean). The 50% inhibitory concentration (IC\({}_{50}\)) of each sample was calculated with the EZ-Fit enzyme kinetics software (Perrella Scientific, Inc., Amherst, U.S.A.).
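As a minimal sketch of this calculation (NumPy only; the absorbance readings and concentrations below are hypothetical placeholders, and the IC\({}_{50}\) is estimated here by simple interpolation rather than by the EZ-Fit software used in the study):

```python
import numpy as np

def percent_rsa(abs_test, abs_control):
    """% radical scavenging activity = (1 - Abs_test / Abs_control) x 100."""
    return (1.0 - abs_test / abs_control) * 100.0

# Hypothetical dose-response readings for one compound.
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])     # concentrations in uM
abs_test = np.array([0.82, 0.70, 0.52, 0.33, 0.18])  # absorbance with compound
abs_control = 0.95                                   # absorbance without compound

rsa = percent_rsa(abs_test, abs_control)             # rises with concentration
ic50 = np.interp(50.0, rsa, conc)                    # concentration giving 50% RSA
print(rsa.round(1), f"-> IC50 ~ {ic50:.1f} uM")
```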
### Results

Twenty-five derivatives of heterocyclic 2-oxo-1,2,3,4-tetrahydropyrimidines were tested for antioxidant activity by employing the two most routinely used radical bioassays, SOR and DPPH. Different patterns of activity were observed due to the presence of the various substitutions at the R group (Table **1**). Interestingly, most of the currently investigated derivatives were highly active in the SOR assay, except compounds **5**, **7**, **12**, **20**, and **21**; however, all derivatives remained inactive against the DPPH radical except compound **13**. This observation depicts the involvement of different mechanisms to cope with different radicals. Pyrimidine derivatives **1-25** are substituted with various R groups, from small, such as R=H (compound **1**), to bulky, such as R= anthracene moiety (compound **16**). Depending on the substituents attached to the basic nucleus, varying degrees of anti-radical activity were observed, with IC\({}_{50}\) values between 3.32 µM and 167.31 µM in the SOR assay.

The number and position of -Cl group substitutions make a drastic change in activity. The -Cl group present at the _para_ position (compound **4**) exhibited remarkable anti-radical activity (IC\({}_{50}\) = 15.65\(\pm\)1.7 µM), as compared to compound **6**, where the addition of another -Cl group at the _ortho_ position reduces the activity by several fold (IC\({}_{50}\) = 167.31\(\pm\)0.7 µM), whereas compounds **5** and **7** remain inactive.
This observation suggests that the _para_ -Cl group seems to enhance anti-radical activity; on the other hand, the _ortho_ -Cl group is not effective, rather declining the activity. Compound **9** (IC\({}_{50}\) = 27.51\(\pm\)0.41 µM) was found to have slightly higher activity than compound **8** (IC\({}_{50}\) = 32.69\(\pm\)1.2 µM); therefore, it can be suggested that the _meta_-NO\({}_{2}\) group is more suitable than the _para_-NO\({}_{2}\) group. Compounds **10-12** have the -OH group at the _meta_, _para_, and _ortho_ positions, respectively. Interestingly, the _o_-OH position was found not suitable for anti-radical activity. The order of activity is _para_ \(>\) _meta_ \(>\) _ortho_ (Table **1**). Compound **13**, where an \(m\), \(p\) di-OH group is present, showed significant superoxide anion radical scavenging activity (IC\({}_{50}\) = 68.91\(\pm\)6.3 µM), although lesser than the mono-OH compounds. Compound **13** also exhibited promising activity against DPPH radicals, with IC\({}_{50}\) = 61.06\(\pm\)0.6 µM as compared to vitamin C (IC\({}_{50}\) = 40.1 \(\pm\) 1.1 µM), which might be due to the presence of two -OH groups on vicinal carbons. Isopropyl substitution at the _para_ position of the phenyl ring (compound **14**) showed significant activity (IC\({}_{50}\) = 55.03\(\pm\)0.76 µM); however, this activity is several fold lower than that of compound **15** (R = _p_-dimethylamino-phenyl), which exhibited the second most potent activity in the series (IC\({}_{50}\) = 13.0\(\pm\)0.8 µM). R = 9-anthracene in compound **16** was found to possess higher antioxidant activity in comparison with compounds **17** (R = 2-naphthalene) and **18** (R = 1-naphthalene), with IC\({}_{50}\) values of 15.37\(\pm\)1.0 µM, 47.51\(\pm\)1.2 µM, and 49.48\(\pm\)1.4 µM, respectively. This observation indicates that polynuclear aromatic substitutions, especially anthracene, are effective in promoting the antioxidant activities of the investigated pyrimidines. Compound **19**, where R = 3-furan, showed similar activity to compounds **17** and **18**. Such potency, as discussed earlier, might be due to resonance stabilization of the resultant free radical by the different electron-donating substituents attached as the R group. Compounds **20** and **21** were inactive, showing percent RSA of less than 50. Compound **22** (R = \(o\), \(m\), _p_-tri-OCH\({}_{3}\)-phenyl) showed lower activity (IC\({}_{50}\) = 77.7\(\pm\)2.56 µM) than compounds **23** (R = _o_-mono-OCH\({}_{3}\)-phenyl) and **24** (R = _p_-mono-OCH\({}_{3}\)-phenyl). The decline in antioxidant activity might be because of steric hindrance caused by the three -OCH\({}_{3}\) groups on vicinal carbons. A slight increase in activity was observed with compound **25**, where a thio-methyl group is present at the _para_ position. In general, it can be concluded that the potent activity of each active compound might be attributed to its ability to form a resonance-stabilized free radical (Khan, KM et al., 2012). Moreover, we also observed that electron-donating substituents are more effective in enhancing the anti-radical activity of this class. This preliminary bio-evaluation discovered **21** derivatives as potent SOR scavengers, with only one found to be active in the DPPH assay. Some more studies are still needed to establish their oxidative stress potential _in vivo_.

## Discussion

In the present study, twenty-five heterocyclic 2-oxo-1,2,3,4-tetrahydropyrimidines (**1-25**) were evaluated for their antioxidant activities by superoxide anion and DPPH radical scavenging assays _in vitro_.
Out of the twenty-five investigated derivatives, twenty-one showed promising potential against superoxide anions, varying from as potent as IC\({}_{50}\) = 3.32 \(\pm\) 0.08 µM to moderate activity (IC\({}_{50}\) = 167.31\(\pm\)0.7 µM), depending on the type of substitution.

**Table 1.** Percent radical scavenging activities and IC\({}_{50}\) values (µM) of compounds **1-25** in the DPPH and SOR assays.

All compounds **1-25** were inactive in the DPPH assay except compound **13**, which exhibited strong DPPH activity. This implies that derivatives **1-25** encounter different radicals through different mechanisms. Conclusively, this preliminary study identifies a new, efficient class of anti-radical compounds. These derivatives have the potential to fight against deadly free radicals. However, more studies are needed to determine their effect against oxidative stress in cellular and animal models.

## Acknowledgement

We are thankful to the Higher Education Commission Pakistan "Indigenous 5000 Fellowship Program" for supporting this work.

## Conflict of Interest

The authors declare no conflict of interest.
2306.04928
Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance
This study presents a multi-modal mechanism for recognizing human intentions while diving underwater, aiming to achieve natural human-robot interactions through an underwater superlimb for diving assistance. The underwater environment severely limits the divers' capabilities in intention expression, which becomes more challenging when they intend to operate tools while keeping control of body postures in 3D with the various diving suits and gears. The current literature is limited in underwater intention recognition, impeding the development of intelligent wearable systems for human-robot interactions underwater. Here, we present a novel solution to simultaneously detect head motion and throat vibrations under the water in a compact, wearable design. Experiment results show that using machine learning algorithms, we achieved high performance in integrating these two modalities to translate human intentions to robot control commands for an underwater superlimb system. This study's results paved the way for future development in underwater intention recognition and underwater human-robot interactions with supernumerary support.
Yuqin Guo, Rongzheng Zhang, Wanghongjie Qiu, Harry Asada, Fang Wan, Chaoyang Song
2023-06-08T04:19:23Z
http://arxiv.org/abs/2306.04928v2
# Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance

###### Abstract

This study presents a multi-modal mechanism for recognizing human intentions while diving underwater, aiming to achieve natural human-robot interactions through an underwater superlimb for diving assistance. The underwater environment severely limits the divers' capabilities in intention expression, which becomes more challenging when they intend to operate tools while keeping control of body postures in 3D with the various diving suits and gears. The current literature is limited in underwater intention recognition, impeding the development of intelligent wearable systems for human-robot interactions underwater. Here, we present a novel solution to simultaneously detect head motion and throat vibrations under the water in a compact, wearable design. Experiment results show that, using machine learning algorithms, we achieved high performance in integrating these two modalities to translate human intentions into robot control commands for an underwater superlimb system. This study's results pave the way for future developments in underwater intention recognition and underwater human-robot interactions with supernumerary support.

## I Introduction

Diving with a Self-Contained Underwater Breathing Apparatus (SCUBA) is a popular activity for exploring the ocean, which involves a series of professional equipment wearable on the human body for life support and body movement [1]. However, the level of intelligence of these diving gears remains primarily mechanical by design. There remains a research gap in introducing robotic solutions toward autonomous, natural interactions between human divers and the underwater environment, where novel designs in wearable robots and interactive mechanisms need further exploration [2].

Before introducing wearable robots to assist human divers, intention recognition underwater becomes a critical issue due to the challenges brought by the aquatic environment. Currently, hand gestures are the most effective method for diver communication [3]. However, when submerged underwater, divers must constantly move all limbs to maintain body postures against the water, making it physically demanding and mentally exhausting to spare extra attention for hand gestures or tool operations. The water greatly limits divers' sense of the environment while restricting regular verbal communications or facial expressions, making it urgently necessary to develop novel solutions for intention recognition underwater for effective human-robot interactions [4].

One way to drive novel designs for aquatic systems is by drawing inspiration from on-land systems for underwater applications, where the integration of head motion and throat vibration seems viable. For example, recent work by Yang [5] demonstrated an artificial throat that senses vocal vibrations to recognize everyday words vaguely spoken by a patient after a laryngectomy. Wang [6] proposed a method of detecting eye motions and throat vibrations to interpret the intention of patients with amyotrophic lateral sclerosis (ALS). Severin [7] developed a system using inertial sensors to detect head movement for intention recognition. Machangpa [8] designed a wheelchair controlled by head gestures for quadriplegic patients.

Fig. 1: **Summary of the underwater intention recognition method using a wearable IMU and throat microphone.** (A) Diver with standard diving equipment, including a diving backpack, diving mask, diving computer, diving suit, flippers, oxygen cylinder, etc.
(B) The IMU headband collects the motion information of the six types of head motion, including extension/flexion, bending left/right, and rotating left/right. (C) The throat microphone acquires the throat vibration, including the first five musical scales ("do", "re", "mi", "fa", and "so"). (D) Based on the servo angle definition of the superlimb, we define five types of motion modes. \(\delta_{1}\) and \(\delta_{2}\) are the angles of the left and right thrusters. \(n_{1},n_{2}\) and \(T_{1},T_{2}\) are the rotational speeds and the thrust forces of the left and right thrusters, which can be controlled continuously via PWM. (E) The five motion modes mapped from the classification tokens of head motion and/or throat vibration.

Although the divers' limbs are busy maintaining body postures and the mouth is filled with the breathing mouthpiece, we can leverage such limitations to use the head and throat to express intentions for controlling wearable robots underwater, such as an underwater superlimb [9].

In this paper, we propose a novel solution for underwater intention recognition by simultaneously detecting the diver's head motion and throat vibration, as shown in Fig. 1, to enable multi-modal human-robot interactions with an underwater supernumerary robotic limb designed to provide propulsion assistance. The design features a customized headband with a waterproof IMU sensor mounted on top and a throat microphone on the neck for hands-free interaction. The system determines the diver's intention by sensing the diver's head motion through the IMU sensor, or confirms the control commands by detecting the diver's vocal vibration through the throat microphone, using learning algorithms. Through commands mapped to the underwater superlimb, the system recognizes the diver's intention for posture control underwater, aiming at reducing the diver's physical load and mental fatigue for natural interactions without using the hands. The contributions of this study are the following:

* Proposed a novel design for underwater intention recognition by sensing the diver's head motion and throat vibration in a compact form factor for diving scenarios.
* Developed a multi-modal, real-time classification algorithm based on five musical scales and six head motion types for intention recognition underwater.
* Verified the feasibility of the proposed method for controlling an underwater superlimb prototype with continuous motion commands for underwater propulsion assistance.

The rest of this paper is organized as follows. Section II presents the diver intention recognition method of the wearable sensing device, including the engineering design and the classification algorithms for head motion and throat vibration. Section III reports the experiment results using head motion, throat vibration, and combined modalities for superlimb control. Section IV discusses the experiment results and implications. The conclusion, limitations, and future work are in the final section.

## II Method

### _Engineering Design_

We designed a multi-modal sensing system, shown in Fig. 2, for underwater intention detection. The IMU sensor can be fixed on the head using the headband, as shown in Figs. 2A&C, picking up Euler angles and accelerations of head motion at up to 500 Hz in 16 bits. The throat microphone is worn on the neck to detect throat vibration at 16 kHz or 60 kHz in 16 bits, as shown in Fig. 2B. To protect the IMU from water erosion, we sealed the waterproof shell with silicone and sealant (epoxy sealant for seawater from ROVMAKER). We modified the mask design (M8038 from SMACO), as shown in Fig. 2D, so that the IMU sensor's angle can be adjusted by turning the knob of the joint on top. The design of the IMU headband is compatible with the full-face diving mask for SCUBA divers, which connects the oxygen tank with a regulator for SCUBA diving.

Fig. 2: **Engineering design of the wearable sensing devices to collect the head motion and throat vibration data underwater.** (A) IMU headband with a waterproof shell integrated with a 9-axis IMU (3DM-GX3-25 from Parker Hannifin), sealed by a silicone layer and a silicone sealant layer. (B) Throat microphone (Z033 from WADSN Corp.) smeared with a polyurethane waterproofing spray (from SKSHU). (C) A test user wearing the waterproof IMU sensor with the headband (on land). (D) The IMU sensor mounted on a full-face diving mask (for underwater).

### _Detecting Head Motions and Throat Vibrations_

We used different methods to process the two time-series data streams, as shown in Fig. 3. We adopted the dynamic time warping (DTW) algorithm to distinguish the six types of head motion (bending left/right, rotating left/right, extension/flexion), as it is commonly used to process IMU data [10]. The raw data from the IMU (accelerations and Euler angles) were smoothed by a low-pass filter. Then, a self-adapting threshold segmentation method extracted the segments carrying the practical meaning of instructions. The DTW algorithm maximizes the difference between different head motion types and minimizes the distance between those of the same kind [11]. Since the Adaptive DTW Barycenter Averaging (ADBA) algorithm can average the motion data sequences in time and space, this time-series averaging template method has a higher recognition accuracy than a randomly averaged method; it was used to generate the data sequence templates for the six head motion types [12]. 1,436 head motion samples were collected from two male and two female participants (bending left/right: 280/288, extension/flexion: 270/266, rotating left/right: 296/306). Half of the dataset was used to generate the head motion templates, and the rest was used for testing. Results show that the head motion recognition accuracy is measured at an average of 94%, as shown in Fig. 4A.
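As an illustration of the template-matching step, here is a minimal DTW sketch (plain NumPy; the 1-D yaw-angle templates below are hypothetical stand-ins for the multi-axis ADBA-averaged templates used in the system):

```python
import numpy as np

def dtw_dist(x, y):
    """Dynamic time warping distance between two 1-D sequences, O(len(x)*len(y))."""
    nx, ny = len(x), len(y)
    D = np.full((nx + 1, ny + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[nx, ny]

def classify(segment, templates):
    """Nearest-template classification of a segmented head-motion signal."""
    return min(templates, key=lambda label: dtw_dist(segment, templates[label]))

templates = {"rotate_left": np.array([0.0, 20, 40, 20, 0]),
             "rotate_right": np.array([0.0, -20, -40, -20, 0])}
print(classify(np.array([0.0, 15, 38, 22, 3]), templates))   # -> rotate_left
```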
However, the results were unsatisfactory when we applied the same method to the throat vibration data. Instead, we adopted Mel-frequency cepstral coefficients (MFCCs) to extract features, as is common in speech recognition [13]. Alternatively, Long Short-Term Memory (LSTM) networks can be used as a candidate algorithm for acoustic modeling of speech [14]. We collected throat vibration signals using the throat microphone and used Mel filter banks to transform the audio signal into MFCCs. After pre-processing the raw data, we obtained a 20 \(\times\) 20 matrix by truncating the MFCCs matrix or zero-padding along its time dimension; this matrix describes the response of the human auditory system to the specific audio signal. Then, we fed the MFCCs matrix as the input to the LSTM to get a classification result for the throat vibration. Ten participants (seven males and three females) were invited to provide throat vibration signals for data acquisition. They were asked to phonate musical scales with the throat microphone shown in Fig. 2B. We collected a dataset of 3,253 musical scale audio segments in WAV format, containing 647 "do", 660 "re", 594 "mi", 668 "fa", and 684 "so", which were then split with 70% for training and 30% for testing. The model's average accuracy on the test set is about 86%. The confusion matrix of the classification results is shown in Fig. 4B.
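A sketch of this pipeline (assuming the `librosa` and PyTorch libraries; the hidden size and single-layer architecture are illustrative choices, as the paper does not specify them):

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def mfcc_20x20(wav_path, sr=16000, n_mfcc=20, n_frames=20):
    """Load audio, compute MFCCs, and truncate/zero-pad to a fixed 20 x 20 matrix."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, T)
    out = np.zeros((n_mfcc, n_frames), dtype=np.float32)
    t = min(n_frames, mfcc.shape[1])
    out[:, :t] = mfcc[:, :t]
    return out

class ScaleLSTM(nn.Module):
    """LSTM over the 20 MFCC frames, followed by a 5-way classifier head."""
    def __init__(self, n_mfcc=20, hidden=64, n_classes=5):   # do/re/mi/fa/so
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, frames, n_mfcc)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])          # logits over the five musical scales

model = ScaleLSTM()
dummy = torch.randn(8, 20, 20)           # a dummy batch standing in for real MFCCs
print(model(dummy).shape)                # torch.Size([8, 5])
```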
## III Results

### _Intention Recognition via Head Motion_

We divided the head motions into two groups to control the speeds and the angles of the two thrusters, respectively, as shown in Table I. Fig. 5 demonstrates the human-robot interaction experiments. The time series of accelerations and Euler angles along the \(x/y/z\) axes recorded by the IMU are shown in Figs. 5A&B. The corresponding control command sequence is shown in Fig. 5C, where the mapping of head motions to control command indices is (bending right/left, extension/flexion, rotating left/right) \(\mapsto\) command index \((1,2,3,4,5,6)\). We executed four actions in each of the three rotational DoFs. The Euler angles range smoothly within [45\({}^{\circ}\),50\({}^{\circ}\)] for flexion, [70\({}^{\circ}\),80\({}^{\circ}\)] for extension, [35\({}^{\circ}\),45\({}^{\circ}\)] for bending left/right, and [65\({}^{\circ}\),75\({}^{\circ}\)] for rotating left/right, respectively. The system recognized all 12 head motions correctly, and the corresponding control commands were sent to the superlimb afterward. Fig. 6 compares the control commands and the actual feedback of the servos and thrusters from the superlimb, indicating that the pipeline can detect human intentions and achieve robot control continuously with low latency (less than one second). However, to achieve precise control of the thrusters through the Euler angles of the head motion, an operator would need training and practice to obtain muscle memories of a finer-grained mapping between head motions and robot control.

Fig. 3: **Underwater interaction method based on the wearable sensing devices integrating the throat microphone and IMU.** (A) The IMU senses the head motion information, including accelerations and Euler angles. After endpoint detection based on adaptive thresholds, segments are matched using the DTW algorithm to distinguish the head motion types based on the head motion templates. (B) The throat microphone acquires the vibration of the throat. After noise reduction, the significant fragments of the raw signal are extracted through endpoint detection. Mel-filter bank analysis transforms these fragments into Mel-frequency cepstral coefficients (MFCCs). After LSTM processing and classification, the command index is mapped to a user-defined motion mode sent to control the superlimb.

Fig. 4: **Accuracy of recognition-related performance in the classification experiments.** (A) Confusion matrix of head motion classification with the IMU. (B) Confusion matrix of throat vibration classification with the throat microphone.

Fig. 5: **Control of the superlimb using head motion.** (A) Accelerations along the \(x-y-z\) axes of the head motion from the IMU during the control process of the superlimb. (B) Euler angles along the \(x-y-z\) axes of the head motion from the IMU during the control process of the superlimb. (C) According to the motion information of the data sequence, the head motion types were classified by the head motion recognition algorithm, and the mapped motion control commands were sent to the control unit of the superlimb.
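As a purely illustrative rendering of the two-group command mapping described above (Table I itself is not reproduced in the text, so the assignments, step sizes, and neutral servo angles below are hypothetical; only the PWM range of \([1100,1900]\) comes from the paper):

```python
# Hypothetical head-motion-to-command mapping in the style of Table I: one group
# of motions adjusts the servo angles, the other adjusts the thruster speeds.
COMMANDS = {"bend_right": 1, "bend_left": 2,
            "extension": 3, "flexion": 4,
            "rotate_left": 5, "rotate_right": 6}

def apply_command(label, state=None, angle_step=5, speed_step=50):
    """Translate a classified head motion into a superlimb control update."""
    state = state or {"servo_l": 90, "servo_r": 90, "pwm_l": 1500, "pwm_r": 1500}
    idx = COMMANDS[label]
    if idx in (1, 2):                    # bending: steer one servo angle
        state["servo_l" if idx == 1 else "servo_r"] += angle_step
    elif idx in (3, 4):                  # extension/flexion: both thrusters up/down
        delta = speed_step if idx == 3 else -speed_step
        state["pwm_l"] += delta
        state["pwm_r"] += delta
    else:                                # rotating: differential thrust to turn
        delta = speed_step if idx == 5 else -speed_step
        state["pwm_l"] += delta
        state["pwm_r"] -= delta
    return state

print(apply_command("extension"))        # both PWMs move from neutral 1500 to 1550
```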
### _Intention Recognition via Throat Vibration_

For the throat vibration signal, we designed a mapping between musical scales and both thrusters' angles and speeds, shown in Table II. Combinations of three musical scales and their lengths distinguish six kinds of control commands. Meanwhile, the amplitude \(A\) of the musical scale signal (within a 64 _ms_ window) is detected continuously in real time and used as a coefficient in each of the six commands. For example, a short "do" is mapped to rotating both thrusters to positive angles (shown in Fig. 1E(v)) determined by its amplitude \(A_{1}\).

We demonstrate the human-robot interaction through throat vibration. Fig. 7 shows the raw signal (solid purple line) and the recognized musical scales. Every intention consists of two sequential waveform segments, the first indicating the type of command (pink shaded areas) and the second the amplitude of the action (blue shaded areas). Although one can express control intentions with throat vibration alone, the user must be trained in vocal control for the system to recognize the intention effectively.

Fig. 7: **Waveform of throat vibration acquired by the throat microphone during superlimb control in the throat vibration recognition experiment.** Marked in pink shaded areas are the different types of throat vibration, and marked in blue shaded areas are the throat vibrations used to continuously control the servo angle and thruster speed of the superlimb according to the amplitude of the vibration signals.

Fig. 8 shows the theoretical and actual feedback of the servo angles and the PWM control commands (ranging within \([1100,1900]\)) sent to the control module of the thrusters. The blue shaded area is the action execution of the superlimb based on the corresponding motion mode shown in Fig. 7.

Fig. 8: **Experimental results of superlimb control with the throat vibration recognition experiment: comparing theoretical output and actual feedback of the superlimb.** (A) The control command and the actual feedback of the left servo. (B) The control command and the actual feedback of the right servo. (C) The PWM command sent to the left and right thrusters based on the classification token output from the throat vibration recognition algorithm.

### _Multi-modal Intention Recognition and Interactions_

In this experiment, we test the feasibility of using head motion and throat vibration simultaneously as a multi-modal mechanism for controlling the underwater superlimb based on intention recognition. Table III defines the action vectors describing the diver's robot control intentions mapped to the throat vibration and head motion. In this experiment, the musical scale "so" was defined as the mode switch from the control of the servo angle to the rotational speed of the thruster. Fig. 9 compares the theoretical output and the actual measurement of the superlimb.
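The decoding step from a recognized (scale, length) pair and a measured amplitude into thruster outputs could look like the minimal sketch below. Only the short-"do" example, the 64 ms amplitude window, and the PWM range [1100, 1900] come from the text; the remaining table entries, the neutral PWM midpoint, and all names are hypothetical placeholders standing in for Table II.

```python
PWM_MIN, PWM_MAX = 1100, 1900
PWM_NEUTRAL = (PWM_MIN + PWM_MAX) // 2   # assumed neutral midpoint

# Hypothetical stand-in for Table II: (scale, length) -> command type.
ACTIONS = {
    ("do", "short"): "rotate_both_positive",   # the example given in the text
    ("do", "long"):  "rotate_both_negative",   # entries below are placeholders
    ("re", "short"): "speed_forward",
    ("re", "long"):  "speed_backward",
    ("mi", "short"): "rotate_left_only",
    ("mi", "long"):  "rotate_right_only",
}

def decode_command(scale: str, length: str, amplitude: float):
    """Map a recognized musical scale and length to a command whose magnitude is
    scaled by the amplitude A measured over a 64 ms window (assumed normalized
    to [0, 1])."""
    action = ACTIONS[(scale, length)]
    pwm = int(PWM_NEUTRAL + amplitude * (PWM_MAX - PWM_NEUTRAL))
    return action, min(max(pwm, PWM_MIN), PWM_MAX)   # clamp to the valid PWM range
```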
We made two observations in the multi-modal integration experiment. The first is that the noticeable latency increased to two seconds when controlling the servos using head motion, because of the different operating frequencies of the two recognition methods. The misalignment in Fig. 9C is caused by the system transforming \((re,long,null)\) into \((so,long,null)\). The other observation is that the extension was not correctly recognized in Fig. 9, which was caused by an unexpected servo angle during the experiment. In this setup, the head motion recognition module has to lower its frequency of data collection and classification to match the frequency of the throat vibration recognition module.

## IV Discussion

### _Towards Underwater Intention Recognition_

This study presents the engineering design and experimental results of an underwater multi-modal interaction mechanism for intention recognition using head motions and throat vibrations. Although the reported system is still a lab prototype that requires further testing underwater, the simple design in a compact form factor makes the proposed solution promising for human-robot interaction underwater. For on-land scenarios, head motion or throat vibration alone has been demonstrated to be effective for intention recognition in different applications, with no need for integration. In this study, due to the demands of aquatic interaction, we propose to combine these two modalities for intention recognition underwater. The high classification performance reported in this study aligns with the literature. Our results further demonstrate that, when combined, these two modalities form an intuitive mechanism for intention expression that learning algorithms can recognize effectively, which could be a practical solution for controlling an underwater superlimb robot.

This study tested only three pairs of head motions and five musical scales for intention expression and recognition. However, one can quickly expand the vocabulary by extending the head motions to all five usable degrees of freedom of the head, including the two translational motions. Some users may need further training before being able to do so fluently. On the other hand, one can also expand the musical scales to a broader range or develop the system to recognize sequences of them. For example, it is easier for people to remember a short tune than a specific musical scale when differentiating meanings. The artificial throat [5] provides excellent inspiration for extending this work towards a more natural expression of intentions that the system can recognize effectively, which we intend to explore further using sensors of more compact sizes [15].

### _Learning Intentions via Head Motion & Throat Vibration_

In this study, we proposed a multi-modal learning framework integrating head motion and throat vibration for intention recognition. For on-land scenarios with a clear voice, one could directly use conventional methods to differentiate the volume, pitch, and tune of voice signals, a mature technology already used in commercial products. However, the experiments in this study specifically chose a learning approach, as the signal was detected through the throat and carried more noise than signals collected from the mouth. Moreover, when submerged underwater, the noise from the water further reduces the quality of the sound signals, making it challenging to classify different vocal signals clearly.
However, our experimental results show that the learning algorithms effectively classified the different musical notes hummed by the test user. We intend to further test the proposed system by collecting training data underwater to refine the model for a more realistic scenario. On the other hand, the experiments combining both modalities to control the superlimb robot were successful, real-time, and continuous (please refer to the supplementary materials for a demonstration). Throughout the experiment, only head movements and throat vibrations were used, supporting hands-free interaction with the underwater superlimb. Further testing in the aquatic environment is needed in future work to evaluate the proposed system's performance thoroughly.

Fig. 9: **Experimental results of the multi-modal intention recognition and interaction experiment: comparing theoretical output and actual feedback of the superlimb using head motion and throat vibration.** (A) The control command and the actual feedback of the left servo. (B) The control command and the actual feedback of the right servo. (C) The PWM command sent to the left and right thrusters based on the token output from the intention recognition algorithm.

On the other hand, one can easily extend the application of the proposed method to interact with other underwater robots, such as AUVs and robotic fish; on-land robots, such as robotic manipulators, legged robots, aerial robots, or mobile robots; or common Internet-of-Things (IoT) devices in domestic scenarios. Another application area is for people with vocal impairment, where slight modifications could make the system wearable for users with disabilities.

### _Human-Robot Interactions for an Underwater Superlimb_

Humans did not biologically evolve for natural activities underwater, which has shaped the priorities of design considerations when developing diving gear. Current diving devices mainly provide life support and swimming assistance in an aquatic environment while remaining wearable on the diver's body. With the limited space, the complexity of needs, and the waterproofing requirement, two significant challenges remain in introducing robotic intelligence to diving gear. One is the design problem of a wearable robot compatible with existing diving gear while providing meaningful assistance underwater to reduce the physical load on the body limbs. In our previous work [9], we developed a reconfigurable underwater jetpack that enables wearable propulsion with compatible connections to the current buoyancy control device (BCD) system, aiming at sharing the burden of manual posture control underwater so that the diver can spare their hands for tool operation. We named it an underwater superlimb due to its function of providing supernumerary limb support for manual posture control during diving and swimming, similar to other superlimb designs for on-land scenarios [16]. Intention recognition is the other problem yet to be solved when developing an intelligent wearable underwater robot, a problem shared by the other superlimbs for on-land scenarios. The proposed solution, while inspired by many recent works for on-land scenarios, is found to be practical for underwater human-robot interaction. With limited sensory feedback and limited eyesight underwater, divers usually need to turn their heads constantly for better inspection of the surrounding environment, providing an explicit cue for expressing intentions without using their hands.
While the diver's mouth is usually occupied by the breathing tube of the regulator, ordinary spoken language becomes a challenge; even with a full-face mask, there are still difficulties in underwater signal or voice transmission. However, as demonstrated in this study, one can still leverage throat vibration to express a wide range of vocal commands for intuitive and direct interaction.

## V Conclusions

In this study, we proposed a novel mechanism for underwater intention recognition using head motion and throat vibration. Experimental results showed that the system accurately classified the different intention expressions encoded through these two signals. The proposed multi-modal learning algorithm effectively recognized the intentions of the test user and controlled the underwater superlimb robot through various commands. A remaining limitation of this study is the lack of training data collected in underwater environments, which we intend to address once the water pools are open for testing. The current system was tested on a breadboard and needs further integration with the underwater superlimb's controller for an integrated system design. Further refinement of the mapping between these two modalities and the robot control commands could also be pursued. Nevertheless, the results of this study lay the foundation for future development in underwater intention recognition and underwater human-robot interaction with supernumerary support.
2307.14812
Impact of Black Swan Events on Ethereum Blockchain ERC20 Token Transaction Networks
The Ethereum blockchain and its ERC20 token standard have revolutionized the landscape of digital assets and decentralized applications. ERC20 tokens developed on the Ethereum blockchain have gained significant attention since their introduction. They are programmable and interoperable tokens, enabling various applications and token economies. Transaction graphs, representing the flow of value between wallets within the Ethereum network, have played a crucial role in understanding the system's dynamics, such as token transfers and the behavior of traders. Here, we explore the evolution of daily transaction graphs of ERC20 token transactions, which sheds light on trader behavior during the Black Swan events -- the 2018 crypto crash and the COVID-19 pandemic. Using tools from network science and differential geometry, we analyze 0.98 billion ERC20 token transaction records from November 2015 to January 2023. Our analysis reveals that the ERC20 financial ecosystem has evolved from a localized wealth-formation period to a more mature financial ecosystem in which wealth has dispersed among the traders in the network after the crypto crash and during the pandemic period. Before the crash, most sellers only sold tokens and most buyers only bought them. However, after the crash and during the pandemic period, sellers and buyers both performed buying and selling activities. In addition, we observe no significant negative impact of the COVID-19 pandemic on user behavior in the financial ecosystem.
Moturi Pradeep, Uday Kumar Reddy Dyapa, Sarika Jalan, Priodyuti Pradhan
2023-07-27T12:38:48Z
http://arxiv.org/abs/2307.14812v1
# Impact of Black Swan Events on Ethereum Blockchain ERC20 Token Transaction Networks

###### Abstract

The Ethereum blockchain and its ERC20 token standard have revolutionized the landscape of digital assets and decentralized applications. ERC20 tokens developed on the Ethereum blockchain have gained significant attention since their introduction. They are programmable and interoperable tokens, enabling various applications and token economies. Transaction graphs, representing the flow of value between wallets within the Ethereum network, have played a crucial role in understanding the system's dynamics, such as token transfers and the behavior of traders. Here, we explore the evolution of daily transaction graphs of ERC20 token transactions, which sheds light on trader behavior during the Black Swan events -- the 2018 crypto crash and the COVID-19 pandemic. Using tools from network science and differential geometry, we analyze 0.98 billion ERC20 token transaction records from November 2015 to January 2023. Our analysis reveals that the ERC20 financial ecosystem has evolved from a localized wealth-formation period to a more mature financial ecosystem in which wealth has dispersed among the traders in the network after the crypto crash and during the pandemic period. Before the crash, most sellers only sold tokens and most buyers only bought them. However, after the crash and during the pandemic period, sellers and buyers both performed buying and selling activities. In addition, we observe no significant negative impact of the COVID-19 pandemic on user behavior in the financial ecosystem.

keywords: Ethereum Blockchain, Financial Networks, Transaction graph, Forman-Ricci Curvature

## 1 Introduction

The growing enthusiasm worldwide to understand the financial ecosystem is largely due to several Black Swan events, such as the credit crisis of 1772, the Great Depression of \(1929-39\), the OPEC oil price shock of 1973, the Asian crisis of 1997, and the \(2007-2008\) financial crisis [1; 2]. Modeling a financial system as a network has helped us to understand a wide range of phenomena crucial for financial professionals, economists, and researchers [3]. Analysis of a financial network sheds light on underlying salient features that may not be evident without the holistic approach of network science [5], thereby providing a better understanding of how traders interact with each other and how their interactions affect the whole system [6; 7]. However, this approach has not been as successful as hoped, since modeling the underlying networks of traditional financial systems is constrained for many reasons; for example, confidentiality issues may prevent a financial institution or bank from fully providing transaction details, owing to intellectual property restrictions and privacy rules. Consequently, various features of the traditional economy still remain to be explored. We consider Ethereum blockchain transaction data to analyze trader behavior during the 2018 crypto crash and the COVID-19 pandemic [8; 9]. The Ethereum blockchain can be modeled using networks, as entities are connected through transactions of many assets [10; 11; 12]. Initially, transaction graphs within the ERC20 token financial ecosystem were relatively simple and characterized by straightforward transfers between token holders.
However, as the financial ecosystem evolved, transaction graphs became increasingly complex, reflecting the growth and diversification of token-related activities [13]. New patterns, including token swaps [14], lending protocols [15], and decentralized exchanges [16], led to intricate and intertwined transaction graphs. Analyzing and understanding ERC20 transaction graphs has become crucial for researchers, developers, and regulators seeking to comprehend token movements, identify patterns, and assess network health. Tools and techniques, such as graph analysis algorithms and visualization frameworks, have emerged to extract meaningful insights from transaction graphs, aiding in risk assessment, fraud detection, and market analysis within the Ethereum financial ecosystem [17; 18; 19; 20; 21]. A few existing works analyze ERC20 transaction data and the crypto crash [22]. However, the impact of critical events and the behavior of traders still need to be explored, as the system is continuously evolving. This article studies the impact of the crypto crash and the COVID-19 pandemic on the behavior of traders in ERC20 token transactions. We use network methods to model and analyze the structural and dynamic behavior of the blockchain's transaction graphs. To examine the financial ecosystem, we create daily transaction graphs from November 2015 to January 2023 and investigate the evolution of traders' behavior in the Ethereum blockchain. Our analysis reveals that before the crash, most sellers only sold tokens and most buyers only bought them; the few transactions among small traders led to the localization of wealth with individual traders. After the crash and during the pandemic, however, sellers not only sold but also bought tokens, and buyers not only bought but also sold them, leading to the dispersal of wealth among the traders and making the ERC20 financial ecosystem more stable during the pandemic. In addition, we show that there was no significant negative effect of the COVID-19 pandemic on user behavior in the financial system. The article is organized as follows: Section 2 discusses preliminaries of the Ethereum blockchain and ERC20 tokens. Section 3 details the extraction and preprocessing of the ERC20 transaction data and the modeling of the daily transaction networks; it also contains the notation and definitions used in the later discussion. Section 4 explains the results and analysis. Finally, Section 5 summarizes the current study and discusses open problems for further investigation.

## 2 Preliminary

Blockchain is the underlying technology on which the famous cryptocurrency Bitcoin was built; nowadays, blockchain applications are widespread, covering supply chains, financial services, healthcare, and public registers [23; 4; 24; 25]. The core properties of blockchain are transparency and trustlessness, through which transactions are validated and broadcast. In the blockchain financial ecosystem, a block comprises several transactions and is linked to its previous block via a digital link, thus forming a chain of blocks.

### Ethereum Blockchain

In 2015, Ethereum came into existence [8]. Ethereum allows for the creation and direct peer-to-peer exchange of digital assets without intermediaries.
The Ethereum platform is software built on blockchain technology that enables the creation of a cryptocurrency (Ether), crypto-assets (e.g., ERC20 tokens, ERC721 tokens [26]), and Decentralized Applications (DApps) [27]. The Ethereum blockchain is a digital ledger where Ether and crypto-assets can be securely stored and exchanged. Ether is the backbone of the platform: it facilitates transactions and pays for the deployment of smart contracts on the Ethereum blockchain. The primary focus of the platform is to use decentralized blockchain technology for smart contracts [25; 28]. A smart contract is a computer protocol used to create and develop DApps and crypto-assets. Smart contracts are conditional code on the blockchain, executed when the contract's conditions are met. In other words, they are "\(if\ldots then\ldots\)" statements written as code and deployed on the blockchain. For example, a certificate contract may issue a certificate only when a participant has attended the required number of classes of a course and scored at least 60 marks in that course. The usage of smart contracts is very diverse and includes digital identity, real estate [29], insurance, flash loans [15], gaming [30], and decentralized finance [15]. In the Ethereum financial ecosystem, users interact with the Ethereum network through their Ethereum accounts. With the help of accounts, users can transfer assets, create or invoke smart contracts, and interact with DApps [8]. A user account is identified by a public address of 40 hexadecimal characters (20 bytes), analogous to a bank account number, with the prefix "\(0x\)" (e.g., \(0x52d3fbd8fc248c\ldots 25c37c5f5\)), which other users use to transfer assets. A transaction on the Ethereum platform can accomplish various things, such as transferring assets (ERC20 tokens), deploying smart contracts, and triggering smart contracts [8]. To deploy a smart contract, a person uses an Ethereum account and sends a transaction containing the compiled code of the smart contract, without specifying a recipient [31]. This article limits its discussion to transactions involving ERC20 tokens.

### ERC20 Token

The Ethereum blockchain platform gives companies and individuals a more accessible opportunity to develop blockchain products instead of building their own blockchain platforms [13]. The Ethereum Request for Comment 20 (ERC20) standard, introduced in November 2015 [32], allows developers to create smart-contract-enabled tokens that can be used with other products and services, such as DApps on the Ethereum network. Tokens and coins are sometimes referred to interchangeably, but they differ in what they represent and in their functions. In both cases they are digital assets, but a coin is a native asset of the platform that facilitates operations on it, whereas tokens are built on the platform for the creation and flow of wealth. For instance, Ether is the native coin of Ethereum, while Polygon MATIC [33] and USDT [34] are tokens built on the Ethereum platform. On the Ethereum blockchain, the digitalization of the value of a particular asset into tradeable digital units is known as tokenization, and the digital assets are represented as tokens. Tokens allow a seamless, borderless, and almost free flow of value in the form of digital assets across the globe. Once a product is tokenized, its tokens can be managed, tracked, accounted for, and leveraged in the context of incentives that may promote a fair distribution of wealth.
For example, XAUt (Tether Gold) is a token representing gold as a digital asset on the Ethereum platform. One XAUt token equals 31.1035 grams of gold. Hence, XAUt tokens digitally represent the value associated with gold assets so that they can be traded across the globe using the Ethereum platform. The XAUt token above is an asset-backed token; there are various other types of tokens on the Ethereum platform with multiple functions and features [32]. An ERC20 token can be created by any individual or organization, who define the rules governing it, such as its monetary policy, token features, and user incentive systems. The current market cap of Ethereum is approximately $229.56\(B\), and that of ERC20 tokens approximately $112.7\(B\), around 49% of the total Ethereum blockchain [35]. A high market capitalization implies that the market highly values the asset, and our interest lies in studying the behavior of traders involved in ERC20 token transactions.

Figure 1: Illustration of ERC20 token transaction data on the Ethereum blockchain and the associated transaction graph. For simplicity, we assign a unique integer to each entry of the 'from', 'to', and 'tokenAddress' columns. Here, 'from' is the seller's wallet, 'to' is the buyer's wallet, and the edge label shows which token is traded. The edge thickness represents multiple transactions of the same token between a buyer-seller pair; for instance, there are two transactions of 'token 4' between nodes 5 and 7.

## 3 ERC20 Token Transaction Data and Network Modeling

### Transaction Data sets

To analyze the underlying network of ERC20 transactions on the Ethereum blockchain, we use the past 8 years of ERC20 transaction data [36]. We analyzed \(982,119,361\) ERC20 token transaction records from November 2015 to January 2023. The data set consists of 9 columns (Fig. 1); each column gives specific information about an ERC20 transaction and can be summarized as follows.

1. **blockNumber:** the block in which the transaction information is stored.
2. **timeStamp:** the time at which the block was minted; every transaction in a block has the same timestamp.
3. **transactionHash:** a unique identifier that serves as proof of transaction validation.
4. **tokenAddress:** the hash value referring to the actual smart contract address of the ERC20 token, which also acts as an identifier for the token.
5. **from:** the address of the sender of the ERC20 tokens.
6. **to:** the address of the receiver of the ERC20 tokens.
7. **fromisContract:** if this field is 1, the 'from' column holds a smart contract address; otherwise, it is an externally owned account address.
8. **toisContract:** if this field is 1, the 'to' column holds a smart contract address; otherwise, it is an externally owned account address.
9. **value:** the number of tokens transferred.

Each row of the data set provides information about one ERC20 token transaction. The 'from' and 'to' columns are the addresses between which the transaction has taken place (Fig. 1). For our analysis, we use four columns: 'timeStamp', 'tokenAddress', 'from', and 'to'. The 'timeStamp' column is in seconds, which we convert into \((YYYY-MM-DD)\) format. For instance, after transforming the timestamp in Fig. 1, \(1455451585\) becomes \(2016-02-14\), where the base time \((1970-01-01)\) is the standard epoch \(00:00:00\) UTC [37]. The remaining three columns contain hash values, which are very difficult to analyze directly.
For better viewing and analysis of the data, we iterated over the 'from' and 'to' columns and mapped every unique address to a unique integer. The same procedure was carried out for the 'tokenAddress' column. Finally, we split the whole data set day by day.

Figure 2: Evolution of the Ethereum blockchain transaction data in terms of wallets (nodes), transactions (edges), and the number of unique traded tokens; tokens are attributes on the edges. We examine the daily transaction graphs from November 2015 to January 2023. The shaded region reflects the testing period of ERC20 tokens. We observe a rapid increase in all three variables between July 2016 and July 2018. After that, the number of nodes reaches stability while the number of edges gradually increases, showing growing activity between the nodes.

### Transaction Network

To model the ERC20 transaction data, we use a graph model [38]. In the Ethereum financial ecosystem, wallets are the nodes that buy or sell ERC20 tokens, and transactions between two wallets are the edges (links). For instance, if wallet \(A\) makes a transaction sending 1 token to \(B\), the link is directed from \(A\) to \(B\) (\(A\leadsto B\)). Further, if wallet \(A\) makes 2 transactions with 2 other wallets (\(B\) and \(C\)) in a day, there will be two directed edges, \(A\leadsto B\) and \(A\leadsto C\); here, node \(A\) has out-degree 2, and \(B\), \(C\) both have in-degree 1. If a wallet makes 10 transactions with the same wallet in a day using different tokens, there will be 10 parallel edges between them. Therefore, the ERC20 token transaction graph is a multi-edge directed graph consisting of source and target nodes, where source nodes are the wallets that sell ERC20 tokens and target nodes are the wallets that buy them. Tokens can be thought of as attributes on the edges of the transaction graphs (Fig. 1). The transaction graph for a given day \(t\) is represented as \(\mathcal{G}_{t}(V_{t},E_{t})\), where the set of vertices \(V_{t}\) consists of all wallets trading during that day [17],

\[V_{t}=\{\,v\mid\text{wallet }v\text{ buys or sells any asset on day }t\,\} \tag{1}\]

and the set of edges \(E_{t}\subseteq V_{t}\times V_{t}\) is defined as

\[E_{t}=\{\,(u,v)\mid\text{wallet }u\text{ sells any asset to wallet }v\text{ on day }t\,\} \tag{2}\]

We denote the adjacency matrix corresponding to the multi-edge directed graph \(\mathcal{G}_{t}\) as \(\mathbf{A}_{t}\in\mathbb{R}^{n_{t}\times n_{t}}\), defined by \(a_{ij}=l\) if there are \(l\) edges from \(i\) to \(j\) and \(a_{ij}=0\) otherwise. The out-degree of a node \(i\) on day \(t\) is \(k^{out}_{i,t}=\sum_{j=1}^{n_{t}}a_{ij}\) and its in-degree is \(k^{in}_{i,t}=\sum_{j=1}^{n_{t}}a_{ji}\). The average out-degree and in-degree of \(\mathcal{G}_{t}\) are \(\langle k^{out}_{t}\rangle=\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}k^{out}_{i,t}\) and \(\langle k^{in}_{t}\rangle=\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}k^{in}_{i,t}\), respectively. Here, the number of wallets participating on day \(t\) is \(|V_{t}|=n_{t}\), and the number of transactions is \(|E_{t}|=\sum_{i=1}^{n_{t}}k^{out}_{i,t}=\sum_{i=1}^{n_{t}}k^{in}_{i,t}=m_{t}\); thus \(\langle k^{out}_{t}\rangle=\langle k^{in}_{t}\rangle\). Further, the node that sends the maximum number of transactions in a day is the max-out-degree node, represented as \(k^{out}_{max,t}=\max_{i\in V_{t}}k^{out}_{i,t}\).
Similarly, the node that receives the maximum number of transactions in a day is the max-in-degree node, defined as \(k^{in}_{max,t}=\max_{i\in V_{t}}k^{in}_{i,t}\). We can define the set of all nodes with out-degree equal to \(\alpha\) as \(\mathcal{D}^{out}_{\alpha,t}=\{i\in V_{t}\mid k^{out}_{i,t}=\alpha\}\), \(\alpha=1,2,\dots,k^{out}_{max,t}\), and with in-degree equal to \(\beta\) as \(\mathcal{D}^{in}_{\beta,t}=\{i\in V_{t}\mid k^{in}_{i,t}=\beta\}\), \(\beta=1,2,\dots,k^{in}_{max,t}\), where \(N^{out}_{\alpha,t}=|\mathcal{D}^{out}_{\alpha,t}|\) and \(N^{in}_{\beta,t}=|\mathcal{D}^{in}_{\beta,t}|\) are the numbers of elements in the sets [17]. Hence, the sets of all nodes with out-degree and in-degree equal to 1 are \(\mathcal{D}^{out}_{1,t}=\{i\in V_{t}\mid k^{out}_{i,t}=1\}\) and \(\mathcal{D}^{in}_{1,t}=\{i\in V_{t}\mid k^{in}_{i,t}=1\}\), with \(N^{in}_{1,t}=|\mathcal{D}^{in}_{1,t}|\) and \(N^{out}_{1,t}=|\mathcal{D}^{out}_{1,t}|\). From the economic perspective, \(k^{out}_{max,t}\) corresponds to the wallet that is the maximum selling hub, \(k^{in}_{max,t}\) to the wallet that is the maximum buying hub, \(N^{out}_{1,t}\) is the number of wallets that sell once, and \(N^{in}_{1,t}\) the number of wallets that buy once on a given day. Note that in the later discussion, we omit \(t\) from the above notation for convenience.

Figure 3: Dynamic behavior of the average degree (\(\langle k\rangle\)), in-degree (\(\langle k^{in}\rangle\)), and out-degree (\(\langle k^{out}\rangle\)) of the daily transaction graphs. The average degree of a transaction graph gives the average number of transactions a wallet carries out in a day. The average in-degree (out-degree) is around 3. The fluctuations in the inception period arise from a large number of parallel edges (transactions) between pairs of nodes (wallets) during the testing of the ERC20 token standard.

## 4 Results and Discussion

In January 2018, the Ether price reached its record high of $1431, and by the middle of December 2018, the Ether price was down by 94% [39]. This period is known as the 2018 crypto market crash, during which various other cryptocurrencies also hit record lows [40]. On the other hand, in March 2020, COVID-19 was declared a pandemic by the World Health Organization, resulting in severe societal and economic ramifications worldwide [41]. During these events, significant changes occurred in the trading behavior of the Ethereum ERC20 financial ecosystem. To understand them, we analyze the behavior of the daily transaction graphs.

### Dynamics of the System

After the inception of Ethereum ERC20 tokens, the numbers of wallets and transactions were low; after July 2016, however, there is a notable increase in nodes, edges, and the volume of tokens traded over time (Fig. 2). After July 2018, the daily number of wallets (nodes) involved in trading is approximately constant, while the number of daily transactions (edges) increases gradually, indicating growing activity between the wallets of the Ethereum ERC20 financial ecosystem. From the daily transaction graphs, we can also see that, on average, \(10^{5}\) wallets perform around \(10^{5}\) transactions, and around \(10^{3}\) distinct types of tokens are traded each day (Fig. 2). Additionally, in the initial period, the average number of transactions carried out by a wallet per day is around 4, gradually growing to around 6 after July 2020 (Fig. 3). However, looking separately at the average out-degree and in-degree, each is close to 3.
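For concreteness, the daily multigraph construction and the degree summaries defined above could be computed as in the following minimal sketch, assuming a pandas DataFrame with the four columns used in the paper ('timeStamp', 'from', 'to', 'tokenAddress'); the function names are ours.

```python
import pandas as pd
import networkx as nx

def daily_graphs(df: pd.DataFrame):
    """Yield (day, MultiDiGraph) pairs from the transaction DataFrame."""
    df = df.copy()
    df["day"] = pd.to_datetime(df["timeStamp"], unit="s").dt.date  # seconds -> YYYY-MM-DD
    for day, rows in df.groupby("day"):
        g = nx.MultiDiGraph()  # parallel edges keep repeated buyer-seller transactions
        for u, v, tok in zip(rows["from"], rows["to"], rows["tokenAddress"]):
            g.add_edge(u, v, token=tok)  # u sells to v; the token is an edge attribute
        yield day, g

def degree_summary(g: nx.MultiDiGraph) -> dict:
    """Extreme points and averages of the daily degree distributions."""
    k_out, k_in = dict(g.out_degree()), dict(g.in_degree())
    return {
        "k_out_max": max(k_out.values()),                    # maximum selling hub
        "k_in_max": max(k_in.values()),                      # maximum buying hub
        "N1_out": sum(1 for k in k_out.values() if k == 1),  # wallets selling once
        "N1_in": sum(1 for k in k_in.values() if k == 1),    # wallets buying once
        "mean_k_out": g.number_of_edges() / g.number_of_nodes(),  # <k_out> = <k_in> = m/n
    }
```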
On the contrary, Fig. 4 reveals that the max-out-degree (\(k^{out}_{max}\)) is very large compared to the average out-degree (\(\langle k^{out}\rangle\)). We also notice a large number of nodes with out-degree one (\(N_{1}^{out}\)); the behavior of the in-degrees (\(k^{in}_{max}\), \(\langle k^{in}\rangle\), and \(N_{1}^{in}\)) is similar. This suggests that the degree distribution might be heavy-tailed, with \(N_{1}\) and \(k_{max}\) as the extreme points of the distribution [17]. If we randomly pick a daily transaction graph, it indeed shows a heavy-tailed degree distribution for both the out-degrees and the in-degrees. The out-degree distribution is that of the sellers' wallets, and the in-degree distribution that of the buyers' wallets of ERC20 tokens. The distributions clearly show that the daily transaction graphs of the Ethereum ERC20 financial ecosystem follow heavy-tailed distributions, which coincides with numerous previous works showing that the degree distribution of blockchain transaction data is heavy-tailed [13; 17]. To get insight into the buyers' and sellers' behavior before and after the crypto crash, we examine the daily dynamical behavior of the extreme points of the degree distribution: the maximum selling hub (\(k^{out}_{max}\)), the number of wallets that sell once (\(N_{1}^{out}\)), the maximum buying hub (\(k^{in}_{max}\)), and the number of wallets that buy once (\(N_{1}^{in}\)) (Fig. 4). We observe that until July 2018, all four variables grow substantially. After that, the number of wallets buying once daily reaches stability, while the number of wallets selling once still increases gradually but not substantially, and the maximum selling and buying hubs decrease until July 2020 (Fig. 4).

Figure 4: Dynamics of the maximum selling hub (\(k^{out}_{max}\)), the maximum buying hub (\(k^{in}_{max}\)), the number of wallets buying once (\(N_{1}^{in}\)), and the number of wallets selling once (\(N_{1}^{out}\)). We observe that \(N_{1}^{in}\) increases more steeply than \(N_{1}^{out}\) until July 2018; after that, \(N_{1}^{in}\) reaches stability whereas \(N_{1}^{out}\) keeps increasing gradually. After July 2020, we observe strong co-movement between \(N_{1}^{in}\) and \(N_{1}^{out}\).

Furthermore, we calculate the ratios between the extreme points of the degree distribution for each day; through these ratios, we can observe significant changes in the network's global dynamics during the 2018 crypto crash and the COVID-19 pandemic. We define the ratios as follows [17]:

\[R_{in}(\mathcal{G}_{t})=\frac{\log(N_{1,t}^{in})}{\log(k_{max,t}^{in})},\ \ \text{and}\ \ R_{out}(\mathcal{G}_{t})=\frac{\log(N_{1,t}^{out})}{\log(k_{max,t}^{out})} \tag{3}\]

The ratios capture the interplay between the buying and selling behavior in the Ethereum ERC20 token transactions and provide insight into their evolution over time. We observe high volatility in the dynamics of the ratios (Fig. 6). However, close observation of \(R_{in}\) and \(R_{out}\) reveals a change in their dynamical behavior before and after the crypto crash, which suggests a change in the traders' behavior. The moving averages of the ratios, denoted \(\langle R_{in}\rangle\) and \(\langle R_{out}\rangle\), show the behavioral changes of the buyers and sellers more prominently.

Figure 5: Evolution of tokens, showing the dynamics of the addition of new ERC20 tokens to the network. For each day, we extract the count of new tokens added to the network; for instance, on \(7^{th}\) July 2017, 25 new tokens were added, on \(8^{th}\) July 2017, 40 new tokens were added, and so on. We observe that until July 2018, there is an increase in the addition of new tokens to the network, but after that, the count remains approximately constant until July 2020. The token evolution is volatile during the COVID-19 pandemic. The total number of unique tokens traded over the whole period is 301,428.
Figure 6: Dynamical behavior of the buyers' and sellers' ratios, represented by the in-degree ratio (\(R_{in}\)) and the out-degree ratio (\(R_{out}\)), respectively. For each transaction graph, we calculate \(R_{in}\) and \(R_{out}\) using Eq. (3). To observe the evolution of the financial ecosystem's dynamics, we calculate the moving averages of \(R_{in}\) and \(R_{out}\) (\(\langle R_{in}\rangle\) and \(\langle R_{out}\rangle\)) for each day. From July 2016 to July 2018, we observe an anti-phase oscillation between \(R_{in}\) and \(R_{out}\). After July 2018, however, we see a change in the dynamics of \(R_{in}\) and \(R_{out}\), with a co-movement between the two that grows stronger after July 2020 (the COVID-19 period). For a given day \(t\), \(\langle R_{in}\rangle\) is calculated by taking the mean over a window of length \(p+1+s\) that includes the \(R_{in}\) value of day \(t\), where \(p\) is the number of \(R_{in}\) values preceding day \(t\) and \(s\) the number succeeding it. The window is truncated at the initial and final days when there are insufficient \(R_{in}\) values to fill it, and the mean is then taken only over the available values. Here, we take the window size to be 70: for the initial days, \(p\) grows dynamically while \(s\) is kept constant until \(p\) equals 34; for the final days, \(s\) grows dynamically while \(p\) is kept constant when fewer than 35 succeeding days remain. \(\langle R_{out}\rangle\) is calculated in the same way.

We remark that before July 2018, when the buyers' activity (\(\langle R_{in}\rangle\)) was increasing, the sellers' activity (\(\langle R_{out}\rangle\)) was decreasing, and vice versa (Fig. 6). We characterize this day-wise phenomenon in the transaction graphs as anti-phase oscillation [17]. Notably, after July 2018, the buyers' and sellers' activities co-move (Fig. 6). The daily transaction graphs are very large and dynamic, so their internal behavior is difficult to understand directly. Therefore, we use correlation measures and regression analysis among the variables in Eq. (3). The anti-phase oscillation of \(\langle R_{in}\rangle\) and \(\langle R_{out}\rangle\) in the initial period results from a strong correlation between the entities in Eq. (3): the maximum selling hub (\(k_{max}^{out}\)) vs. the number of wallets buying once (\(N_{1}^{in}\)), and the maximum buying hub (\(k_{max}^{in}\)) vs. the number of wallets selling once (\(N_{1}^{out}\)) (Fig. 7(a-b)). Simultaneously, there is only a weak correlation between the number of wallets selling once and the number of wallets buying once, and between the maximum buying hub and the maximum selling hub (Fig. 7(c-d)). The slopes in the regression analysis also show that after July 2018, the slope values decrease (Fig. 7(a-b)). From the correlation and slope analysis, we may conclude that during the initial period, most transactions of small traders were with big traders, with fewer transactions among small traders and, similarly, fewer among big traders.
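Eq. (3) and the moving average described in the caption of Fig. 6 can be sketched as follows, reusing the `degree_summary` output from the earlier sketch; the centered, end-truncated pandas rolling mean is an approximation of the windowing scheme described there.

```python
import numpy as np
import pandas as pd

def degree_ratios(summary: dict) -> tuple:
    """Eq. (3): R_in = log(N1_in)/log(k_in_max), R_out = log(N1_out)/log(k_out_max)."""
    r_in = np.log(summary["N1_in"]) / np.log(summary["k_in_max"])
    r_out = np.log(summary["N1_out"]) / np.log(summary["k_out_max"])
    return r_in, r_out

def smooth(series: pd.Series, window: int = 70) -> pd.Series:
    """Centered moving average, truncated at both ends when fewer values exist."""
    return series.rolling(window, center=True, min_periods=1).mean()
```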
However, after July 2018, \(\langle R_{in}\rangle\) and \(\langle R_{out}\rangle\) co-move, and this co-movement grows stronger over time, especially after July 2020 (the COVID-19 period). The co-movement of the ratios comes with a decrease in the correlations between the maximum selling hub and the number of wallets buying once (Fig. 7(a)), as well as between the maximum buying hub and the number of wallets selling once (Fig. 7(b)). Simultaneously, there is an increase in the correlation between the numbers of wallets selling and buying once, and between the maximum buying and selling hubs (Fig. 7(c-d)). One can notice a decrease of the slopes during the co-movement for the former two relations and an increase for the other two. In other words, the increase in trading activity among small traders and among big traders, together with a simultaneous decrease in trading activity between big and small traders, has resulted in the co-movement of the ratios.

Figure 7: Buyers' and sellers' behavior, illustrating the relation between (a) the largest seller (\(k_{max}^{out}\)) vs. small buyers (\(N_{1}^{in}\)), (b) the largest buyer (\(k_{max}^{in}\)) vs. small sellers (\(N_{1}^{out}\)), (c) small sellers vs. small buyers, and (d) the largest seller vs. the largest buyer. The color bar corresponds to the date. We calculate the slope between the entities for two different periods: the red line refers to the slope from July 2016 to July 2018, and the blue line to the slope from July 2018 to January 2023. We observe a large slope value in the initial period for panels (a-c) (\(k_{max}^{out}\) vs. \(N_{1}^{in}\), \(k_{max}^{in}\) vs. \(N_{1}^{out}\), and \(N_{1}^{out}\) vs. \(N_{1}^{in}\)) and a decrease in the later period. From July 2018 to January 2023, however, the slope values between \(N_{1}^{out}\) vs. \(N_{1}^{in}\) and \(k_{max}^{out}\) vs. \(k_{max}^{in}\) are larger than in the other panels.

From the above analysis of trading activity before and after the crypto crash, there is a clear evolution in the trading behavior of the traders. Before the crash, small traders performed most of their transactions with big traders, but after the crash, small traders made most of their transactions among themselves; there was also an increase in trading activity among the big traders after the crash. Further, from the dynamics of the ratios, we observe a stronger co-movement during the pandemic period, which indicates the absence of a significant impact of COVID-19 on the trading behavior of traders on the Ethereum platform. However, the inclusion of new ERC20 tokens on the platform was volatile during the COVID-19 period, whereas after the crypto crash those dynamics had remained constant until July 2020 (Fig. 5). Note that the key difference between correlation and regression is that correlation measures the degree of the relationship between two variables, i.e., how the two variables are related, whereas regression describes how one variable affects another. Neither measure can tell whether the variables interact with each other directly.

### Forman-Ricci curvature analysis

We now use the discrete Forman-Ricci curvature of networks, introduced by R. Forman [42], to gain better insight into the trading behavior of the system. Forman-Ricci curvature is an edge-based concept that measures how fast edges spread in different directions [42].
Importantly, edges with negative curvature are vital in spreading information in a network. This curvature has previously been used to characterize complex networks, yielding insights into their dynamical structure [43]. Since our networks are directed, we use the Forman-Ricci curvature for directed networks. The curvature of a directed edge \(e\) of weight \(\omega_{e}\), going from \(u\) to \(v\) (\(u\rightsquigarrow v\)), is defined as

\[R(e)=\omega_{e}\left(\frac{\omega_{u}}{\omega_{e}}-\sum_{e_{u}\sim e}\frac{\omega_{u}}{\sqrt{\omega_{e}\omega_{e_{u}}}}\right)+\omega_{e}\left(\frac{\omega_{v}}{\omega_{e}}-\sum_{e_{v}\sim e}\frac{\omega_{v}}{\sqrt{\omega_{e}\omega_{e_{v}}}}\right) \tag{4}\]

where \(e_{u}\) and \(e_{v}\) are the edges connected to nodes \(u\) and \(v\), and \(\omega_{e_{u}}\), \(\omega_{e_{v}}\) are the weights associated with those edges, respectively. Here, we only consider the directed edges that terminate at node \(u\) and those that originate at node \(v\). Since our edges are unweighted, the above expression (Eq. (4)) reduces to

\[R(e)=2-\mathrm{indeg}(u)-\mathrm{outdeg}(v) \tag{5}\]

where \(u\) is the seller wallet, \(v\) is the buyer wallet, and \(e\) is the transaction from \(u\) to \(v\). Note that \(R(e)\leq 2\), as \(\mathrm{indeg}(u)\geq 0\) and \(\mathrm{outdeg}(v)\geq 0\).

Figure 8: From an economic perspective, the Forman-Ricci curvature \(R(e)\) of a transaction \(e\) between two wallets \(u\) and \(v\) is (a) \(R(e)=2\) when wallet \(u\) cannot buy any token and wallet \(v\) cannot sell any token, while wallet \(u\) can sell and wallet \(v\) can buy tokens. (b) Similarly, for \(R(e)=1\) we observe the same economic scenario as for \(R(e)=2\), except that additionally wallet \(u\) can buy once or wallet \(v\) can sell once. (c) For an edge with \(R(e)=0\), wallet \(u\) can buy at most twice while wallet \(v\) cannot sell, or wallet \(u\) cannot buy while wallet \(v\) can sell at most twice; in all cases, wallet \(u\) can sell and wallet \(v\) can buy tokens. (d) An edge has \(R(e)\ll 0\) when wallet \(u\) can buy and wallet \(v\) can sell tokens many times; in other words, \(R(e)\ll 0\) when the in-degree of \(u\) and the out-degree of \(v\) are high (Eq. (5)). The edges in pink contribute to the Forman-Ricci curvature of the edge \(e\) (red) under consideration.

The curvature reflects the structural properties of a network; Fig. 8 shows some examples of edge curvatures and the structures around them. Positive curvature of an edge \(e\) indicates limited types of trading activity between seller and buyer (Fig. 8): for instance, a seller buying more than twice, or a buyer selling more than twice, cannot occur on a positively curved edge (Fig. 8(a-c)). In other words, positive curvature refers to buyer-seller interactions with other traders in an isolated or restricted manner. The seller has few in-edges and the buyer few out-edges, so the flow of wealth across the wallets in the network is very slow and sometimes localized among peers (Fig. 8). On the other hand, strongly negative curvature of an edge, \(R(e)\ll 0\), reflects varied trading activities carried out by the seller and buyer: both can buy and sell multiple times. Therefore, increasingly negative curvature indicates dispersion of wealth across the network. We calculate the fraction of edges \((m^{-})\) with negative Ricci curvature for each daily transaction network (Fig. 9).
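Eq. (5) is straightforward to evaluate on the daily multigraphs built earlier; the following minimal sketch (function names ours) computes the per-edge curvature and the fraction \(m^{-}\) of negatively curved edges.

```python
import networkx as nx

def forman_ricci(g: nx.MultiDiGraph) -> list:
    """Eq. (5): R(e) = 2 - indeg(u) - outdeg(v) for each unweighted directed edge u -> v.
    The edge e itself contributes only to outdeg(u) and indeg(v), so the degrees
    used here already exclude e, as required by Eq. (4)."""
    k_in, k_out = dict(g.in_degree()), dict(g.out_degree())
    return [2 - k_in[u] - k_out[v] for u, v in g.edges()]

def negative_fraction(g: nx.MultiDiGraph) -> float:
    """Fraction m^- of edges with negative Forman-Ricci curvature."""
    curvatures = forman_ricci(g)
    return sum(1 for r in curvatures if r < 0) / len(curvatures)
```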
We observe an increase in this fraction over time; it signifies increasing trading activity in which a seller simultaneously sells and buys tokens, and a buyer likewise both buys and sells tokens (Fig. 9(c)). This shows the evolution of trader behavior: before the COVID-19 pandemic, most sellers only sold tokens and most buyers only bought them, which resulted in a small percentage of edges with negative Ricci curvature, i.e., largely positive Ricci curvature, with wealth localized among the buyers (Fig. 9(c)).

Figure 9: Forman-Ricci curvature of buyers' and sellers' behavior. We calculate the Forman-Ricci curvature \(R(e)\) of each edge \(e\) in a day using Eq. (5) and consider two snapshots of the \(R(e)\) vs. frequency plot from July 2016 to January 2023. (a) The plot for \(12^{th}\) July 2018 shows a small spread of the negative curvature values, with \(n=336542\) and \(E=818739\). (b) The plot for \(12^{th}\) June 2021 shows a large spread of negative Forman-Ricci curvature, with \(n=254903\) and \(E=943758\). We write the total number of edges as \(m_{t}=m_{t}^{+}+m_{t}^{-}\), where \(m_{t}^{+}\) and \(m_{t}^{-}\) are the numbers of edges with positive and negative Ricci curvature on day \(t\), normalized so that \(m_{t}^{+}+m_{t}^{-}=1\). (c) The light blue line represents the fraction of edges with negative Forman-Ricci curvature \((m^{-})\) in the daily transaction graphs; the dark blue line represents its moving average (\(\langle m^{-}\rangle\)). We observe that the fraction keeps increasing after July 2018 and becomes stable during the COVID-19 pandemic. The moving average window size is 70, calculated as in Fig. 6.

However, after the crypto crash and during the COVID-19 pandemic, sellers and buyers both performed buying and selling activities, which led to an increase in the percentage of edges with negative Ricci curvature. Notably, during both events the number of daily transactions remains stable; only the traders' behavior changes.

## 5 Conclusion

In conclusion, the evolution of transaction graphs within the Ethereum blockchain's ERC20 token financial ecosystem, from simple token transfers to complex DeFi protocols [15], reflects the growth, complexity, and innovation occurring in the tokenized economy. Understanding and harnessing the insights from transaction graphs will be pivotal in addressing scalability challenges, fostering regulatory compliance, and unlocking opportunities for decentralized finance and digital asset utilization. Using complex network analysis and tools from differential geometry, we analyzed the dynamic evolution of transaction graphs in the ERC20 token financial ecosystem. We observed the evolution of the traders' trading activity and of the dynamics of ERC20 tokens in the financial ecosystem, focusing on two big events: the 2018 crypto crash and the COVID-19 pandemic. We started the investigation by analyzing the evolution of wallets, transactions, and tokens over the period from November 2015 to January 2023. New tokens were added to the financial ecosystem at a roughly constant rate until the pandemic; after that, fluctuations appeared. Our analysis of the daily transaction graphs revealed that before the crash, the trading activities of the traders led to the localization of wealth among individual traders.
However, after the crash and during the pandemic, the change in trading activity by most traders led to the dispersal, or continuous flow, of wealth over the network. Although we used only the extreme points of the degree distribution here, incorporating other variables (\(N_{\alpha,t}^{out}\) and \(N_{\beta,t}^{in}\)) into the analysis could provide more insight into the system, which requires further investigation. Moreover, we used only four fields from the extracted data; including other data fields could provide greater insight into the financial ecosystem's underlying features. For instance, if we include the 'value' field, the transaction networks become weighted and can provide insights into the flow of wealth in the financial ecosystem during the Black Swan events. Here, we examined the pairwise interactions of the transaction data, which cannot capture the higher-order interactions behind the traders' behavior [44]. We intend to pursue higher-order interactions, which will provide insight into the existence and role of simultaneous many-body interactions in the financial market. As financial ecosystems continue to mature, further research and innovation in transaction graph analysis will be essential to unlock the full potential of ERC20 tokens and drive the adoption of decentralized applications built on the Ethereum blockchain.

## 6 Acknowledgement

MP is thankful to the members of the Complex Systems Lab (IIT Indore) for useful discussions. SJ acknowledges DST grant \(SPF/2021/000136\). PP acknowledges SERB grant \(TAR/2022/000657\).
2309.02504
To be, or not to be: Balmer breaks in high-z galaxies with JWST
Standard models of structure formation allow us to predict the cosmic timescales relevant for the onset of star formation and the assembly history of galaxies at high redshifts ($z > 10$). The strength of the Balmer break represents a well-known diagnostic of the age and star formation history of galaxies, which enables us to compare observations with contemporary simulations - thus shedding light on the predictive power of our current models of star formation in the early universe. Here, we measure the Balmer break strength for 23 spectroscopically confirmed galaxies at redshifts $6 \lesssim z \lesssim 12$ using public JWST NIRSpec data from the cycle 1 GO 1433 and GO 2282 programs (PI Coe), as well as public spectroscopic data from the JWST Deep Extragalactic Survey (JADES). We find that the range of observed Balmer break strengths agrees well with that of current simulations given our measurement uncertainties. No cases of anomalously strong Balmer breaks are detected, and therefore no severe departures from the predictions of contemporary models of star formation. However, there are indications that the number of outliers in the observed distribution, in the direction of both strong and weak Balmer breaks, is higher than that predicted by simulations.
Anton Vikaeus, Erik Zackrisson, Stephen Wilkins, Armin Nabizadeh, Vasily Kokorev, Abdurrouf, Larry D. Bradley, Dan Coe, Pratika Dayal, Massimo Ricotti
2023-09-05T18:00:06Z
http://arxiv.org/abs/2309.02504v2
# To be, or not to be: Balmer breaks in high-z galaxies with JWST

###### Abstract

Standard models of structure formation allow us to predict the cosmic timescales relevant for the onset of star formation and the assembly history of galaxies at high redshifts (\(z>10\)). The strength of the Balmer break represents a well-known diagnostic of the age and star formation history of galaxies, which enables us to compare observations with contemporary simulations - thus shedding light on the predictive power of our current models of star formation in the early universe. Here, we measure the Balmer break strength for 23 spectroscopically confirmed galaxies at redshifts \(6\lesssim z\lesssim 12\) using public _JWST_ NIRSpec data from the cycle 1 GO 1433 and GO 2282 programs (PI Coe), as well as public spectroscopic data from the JWST Deep Extragalactic Survey (JADES). We find that the range of observed Balmer break strengths agrees well with that of current simulations given our measurement uncertainties. No cases of anomalously strong Balmer breaks are detected, and therefore no severe departures from the predictions of contemporary models of star formation. However, there are indications that the number of outliers in the observed distribution, in the direction of both strong and weak Balmer breaks, is higher than that predicted by simulations.

keywords: galaxies: high-redshift - galaxies: star formation - galaxies: formation - techniques: spectroscopic - infrared: general

## 1 Introduction

When estimating the star-forming age of an evolved galaxy, one of the more powerful proxies is the strength of the so-called Balmer break. This apparent discontinuity in the spectra of galaxies appears as a result of the ionization of hydrogen atoms occupying the \(n=2\) excited state, producing a break in the restframe continuum emission around a wavelength of \(\sim 3600\) Å. The strength of this continuum break is primarily governed by stellar physics, which identifies stars of spectral type A as the dominant stellar component responsible for strong Balmer breaks in galaxy spectra. As such, the Balmer break evolves with time due to stellar evolution and the galaxy's star formation history. The break grows strongest at around 0.3-1 Gyr for a simple, single-age stellar population, while leveling out as the galaxy continues to age (Kriek et al., 2006). Furthermore, in galaxies with ongoing star formation, the continuous replenishment of young stars enhances the integrated flux at shorter wavelengths, resulting in a bluer spectrum with a less pronounced Balmer break. Under normal circumstances, one therefore generally requires a mature stellar population with an age of \(\gtrsim 0.3\) Gyr that dominates the integrated flux in order to expect a significant Balmer break (e.g., Steinhardt et al., 2023). The Balmer break is also sensitive to nebular reprocessing of the stellar radiation, which becomes evident in cases of ongoing star formation and young stellar populations. The nebular continuum emission strongly enhances the flux at wavelengths blueward of the break, resulting in a smaller Balmer break ratio. Furthermore, dust attenuation has the opposite effect, such that a significant dust component enhances the Balmer break due to reddening (Wilkins et al., 2023). The Balmer break, in conjunction with detections of strong Balmer emission lines, such as H\(\alpha\) or H\(\beta\), makes it possible to constrain the star formation history and the overall age of the stellar population making up a galaxy.
The utility of the Balmer break as an age indicator is well established at lower redshifts, where an evolved stellar population is likely a more significant component of the integrated galaxy spectra. At the very high redshifts (\(z\gtrsim 10\)) now probed by _The James Webb Space Telescope_ (_JWST_), theory predicts a declining Balmer break strength, since the stellar populations that formed close to the onset of galaxy formation are predominantly young - motivating an observational test of this very assumption. Recent discoveries (Roberts-Borsani et al., 2020; Laporte et al., 2021) have complicated this picture due to the identification of \(z\approx 7\)-9 galaxies that, if based solely on the Balmer break, argue for an onset of star formation very early in the Universe through short (\(\sim 10-100\) Myr) bursts of star formation followed by quiescent phases or very low star formation rates. Hashimoto et al. (2018) provided additional evidence for such a \(z\approx 9\) galaxy (MACS1149-JD1), showing a strong Balmer break that suggests the galaxy likely formed the bulk of its stars around 250 million years after the Big Bang. Whether such bursts and subsequent quiescent phases are typical for the assembly of the first galaxies is currently being investigated by _JWST_, which enables us to study their star formation histories back to the very first 100 Myr of cosmic time. Indeed, new _JWST_ data have recently been used to argue for a much weaker Balmer break in MACS1149-JD1, and thus a less extreme star formation history (Bradac et al., 2023; Stiavelli et al., 2023). As of now, a few observations by _JWST_ consistent with Balmer breaks at high redshift have been presented. Photometrically, Labbe et al. (2023b) provide evidence of a Balmer break in six galaxies with \(7.4\leq z\leq 9.1\). However, at the same time, objects with similar photometric signatures have been linked with active galactic nuclei (Labbe et al., 2023a; Kokorev et al., 2023), revealing the challenge of photometrically identifying clear and unambiguous Balmer breaks. So far, spectroscopic confirmation of a high-redshift galaxy with a distinct Balmer break has only been reported by Looser et al. (2023a) at \(z=7.3\). More recently, Curtis-Lake et al. (2023) spectroscopically confirmed multiple galaxies at extreme redshifts \(z\approx 10-13\), but ruled out the existence of any prominent Balmer breaks in the data presented. Trussler et al. (2023) present photometric data for a large number of galaxies at \(7<z<12\), searching for an excess in the F444W and F356W NIRCam filters of _JWST_ as a means to identify candidate high-z Balmer break galaxies. Similarly, Atek et al. (2023) report three galaxies at \(z\sim 9.5-10.2\) with a strong F444W excess. A key aspect in determining the star-forming age of a galaxy from the Balmer break is the assumptions made regarding the star formation history and chemical enrichment. Recent high-redshift simulations such as FIRE (Feedback In Realistic Environments; Ma et al., 2020), FLARES (First Light And Reionization Epoch Simulations; Vijayan et al., 2020; Lovell et al., 2020), the SPHINX simulations (Rosdahl et al., 2018; Katz et al., 2021), as well as the simulations in Garcia et al. (2023), provide a picture different from the perhaps oversimplified models assuming a constant, or slowly varying, star formation rate.
Feedback processes, chemical enrichment, magnetic fields, supernova explosions, etc. all play significant roles in the way star formation proceeds after onset. Oversimplifications would evidently render our predictions less accurate and could therefore call for some revision of our contemporary models of star formation - advocating a more detailed treatment which, at this point in time, is only amenable to numerical simulations such as those mentioned above. In this work, we present spectroscopically determined Balmer break strengths for a sample of 23 galaxies studied with _JWST_ and explore the agreement of these measurements with Balmer breaks derived for simulated galaxies from the First Light And Reionization Epoch Simulations (FLARES; Vijayan et al., 2020; Lovell et al., 2020; Wilkins et al., 2023) and DELPHI (Dayal et al., 2014, 2022; Mauerhofer & Dayal, 2023) simulation suites. The paper is structured as follows: In Sect. 2, we outline our methodology for measuring the Balmer break strength and introduce the theoretical and observational data used in this study. In Sect. 3 we present our observational findings regarding the Balmer break of high-redshift galaxies, while in Sect. 4 we discuss and summarize our conclusions. A flat \(\Lambda\)CDM cosmological model with \(H_{0}=67.3\,\mathrm{km\,s^{-1}Mpc^{-1}}\), \(\Omega_{\Lambda}=0.685\), \(\Omega_{\mathrm{m}}=0.315\) and \(\Omega_{\mathrm{b}}=0.0487\) (Planck Collaboration et al., 2014) is adopted throughout the paper. ## 2 Methods We define the strength of the Balmer break following the approach explained in Binggeli et al. (2019), where the continuum (in units of \(F_{\nu}\)) is fitted both at wavelengths longward (at 4200 A) and shortward (at 3500 A) of the break. The ratio of the two continuum levels forms an index quantifying the strength of the Balmer break, i.e. \(B=F_{\nu}(4200\,\mathrm{\AA})/F_{\nu}(3500\,\mathrm{\AA})\). Entangled within the Balmer break region is also another feature known as the 4000 A break. This feature has a similar effect on the spectrum and is therefore usually merged with the Balmer break into one single break. However, the mechanisms behind this break are different, with a notably stronger dependence on metallicity. The closely spaced absorption features from ionized metals reduce the flux at bluer wavelengths, therefore strengthening the break feature (Bruzual A., 1983; Kriek et al., 2011). Figure 1: JWST NIRSpec/PRISM spectra and NIRCam photometry (yellow boxes) for two of the high-redshift galaxies (JADES-8115 and WHL0137-3429) from our sample with notable Balmer breaks. The gray shaded area marks the two wavelength ranges used to calculate the Balmer break. Inside the shaded regions, two solid black lines indicate the continuum level fitted and used for the Balmer break calculations. For the purposes of high redshift observations with telescopes like _JWST_, the dim nature of distant and faint objects generally does not provide sufficient resolution to resolve the two break features. Therefore, in this paper, we do not distinguish the two and instead quantify the Balmer/4000 A break through wavelength ranges covering either side of the break region as a whole (Binggeli et al., 2019; Wilkins et al., 2023). Utilizing the above-mentioned definition, we determine the Balmer break from spectroscopic continuum measurements, coupled to spectroscopic redshifts, obtained from _JWST_/NIRSpec. This ensures that we deal with well-established redshift values for the galaxies under consideration, which allows us to constrain the Balmer break while minimizing potential contamination from strong emission lines.
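To make the index concrete, the sketch below evaluates \(B\) from a rest-frame spectrum. It is a minimal illustration only: the window half-width and the median-based continuum estimate are our own assumptions, not the exact continuum-fitting procedure of Binggeli et al. (2019).

```python
import numpy as np

def balmer_break_strength(wave_rest, f_nu,
                          blue_center=3500.0, red_center=4200.0,
                          half_width=50.0):
    """Balmer/4000 A break index B = F_nu(4200 A) / F_nu(3500 A).

    wave_rest : rest-frame wavelength grid in Angstrom
    f_nu      : flux density in F_nu units on the same grid
    The continuum on each side of the break is estimated as the
    median flux inside a window around the nominal wavelength
    (the window size is an illustrative choice)."""
    def window_median(center):
        sel = np.abs(wave_rest - center) < half_width
        if not np.any(sel):
            raise ValueError(f"no pixels within {half_width} A of {center} A")
        return np.median(f_nu[sel])

    return window_median(red_center) / window_median(blue_center)

# toy usage: a flat-F_nu spectrum has B = 1 (no break)
wave = np.linspace(3000.0, 4600.0, 400)
flux = np.ones_like(wave)
print(balmer_break_strength(wave, flux))  # -> 1.0
```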
The necessity for spectroscopic studies of the Balmer break becomes evident due to the prevalent presence of strong emission lines such as [O III]\({}_{\lambda 4959,5007}\) and H\(\beta\), which can complicate the analysis of photometric data and mimic the presence of a strong Balmer break as they enter into the filter depending on redshift (e.g., Stefanon et al., 2023). ### Theoretical predictions of the Balmer break from simulations #### 2.1.1 Flares We present the simulated Balmer break strengths derived from the FLARES simulations within the redshift range of \(z=6\)-10. Wilkins et al. (2023) provide a thorough analysis of the physical mechanisms affecting the Balmer break in high-z galaxies. They showed the various factors contributing to the observed strength of the break, including aspects such as star formation history (SFH), metallicity, dust, escape of Lyman-continuum (LyC) radiation, stellar initial mass function (IMF), and also the employed stellar population synthesis (SPS) models. These simulations demonstrate that a galaxy with a given far ultraviolet luminosity (or total stellar mass) may exhibit Balmer breaks that deviate by \(\sim 20\)-\(30\) percent from the median value. Considering the variations seen in the simulations, we expect that observations should frequently fall within the 2.2-97.8th percentile if our contemporary models of galaxy evolution are accurate. #### 2.1.2 Delphi We also compare our observational results to those from the delphi semi-analytical model (Dayal et al., 2014, 2022; Mauerhofer & Dayal, 2023). This model includes all the key processes of mergers and accretion in assembling the dark matter halo mass and gas mass up to \(z\sim 40\), starting at \(z\sim 4.5\) with a time resolution of 30 Myr and a halo mass resolution of \(10^{8}\,M_{\odot}\). \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline ID & RA & Dec & \(B=\frac{F_{\nu}(4200\,\mathrm{\AA})}{F_{\nu}(3500\,\mathrm{\AA})}\) & \(z_{\mathrm{spec}}\) & \(\mu\) & \(\log_{10}M_{\star}\) & \(\log_{10}L_{\mathrm{FUV}}\) & F200W \\ _v4_ (_v7_) & deg & deg & & & & (\(M_{\odot}\)) & (erg/s/Hz) & nJy \\ \hline **MACS0647** & & & & & & & & \\ [MISSING_PAGE_POST] \end{tabular} \end{table} The available gas mass in any halo can form stars with an "effective efficiency" \(f_{\rm s}^{\rm eff}\), which is the minimum between the efficiency that produces enough Type II Supernova (SNII) energy to eject the remainder of the gas (\(f_{\rm s}^{\rm ej}\)) and an upper maximum (mass- and redshift-independent) threshold (\(f_{\rm s}\)), i.e. \(f_{\rm s}^{\rm eff}=\min[f_{\rm s}^{\rm ej},f_{\rm s}]\). The models include the key processes of production, astration, destruction (of dust into metals), ejection and dust grain growth in the ISM (that leads to a corresponding decrease in the metal mass) to calculate the total dust and metal masses for each galaxy. Crucially, this model contains only two mass- and redshift-independent free parameters to match observations. These are the maximum (instantaneous) star formation efficiency of \(f_{\rm s}=8\%\) and the fraction \(f_{\rm w}\) (\(\sim 7.5\%\)) of the SNII explosion energy that is available to drive an outflow. These parameters have been tuned to simultaneously reproduce the observed stellar mass function and the UV luminosity function at \(z\sim 5-12\).
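Both simulation suites are compared to the data in Sect. 3 via percentile bands. The sketch below shows one way such a band check could be phrased; it assumes the simulated median and percentile curves are available as tabulated arrays versus \(\log_{10}L_{\rm FUV}\) (the FLARES data themselves are not queried here), and the interpolation and the 1\(\sigma\) criterion are illustrative choices.

```python
import numpy as np

def outside_simulated_band(logL_obs, B_obs, B_err, logL_grid, p_lo, p_hi):
    """Flag an observed Balmer break lying outside a simulated
    percentile band (e.g. the 2.2-97.8th percentiles of FLARES).

    logL_grid, p_lo, p_hi : tabulated simulation curves versus
    log10 L_FUV (assumed inputs).  Returns True when even the
    1-sigma error bar cannot bring the measurement inside the band."""
    lo = np.interp(logL_obs, logL_grid, p_lo)
    hi = np.interp(logL_obs, logL_grid, p_hi)
    return (B_obs + B_err < lo) or (B_obs - B_err > hi)

# toy band: a flat 0.9-1.8 corridor in B over the luminosity range
grid = np.linspace(28.0, 30.0, 5)
print(outside_simulated_band(29.0, 0.74, 0.06,
                             grid, 0.9 * np.ones(5), 1.8 * np.ones(5)))
# -> True: a weak-break point like this falls below the toy band
```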
The integrated spectrum for each galaxy is obtained by summing the spectrum from each burst of star formation, accounting for its metallicity, and using a Salpeter IMF between \(0.1-100\,M_{\odot}\) in the starburst99 (Leitherer et al. 1999) stellar population synthesis model. For the delphi model we consider the fiducial case of \(f_{\rm esc}=0\), resulting in a maximal contribution from both continuum nebular emission and nebular emission lines. For nebular emission lines we use the metallicity-dependent results tabulated in Anders & Fritze-v. Alvensleben (2003). ### NIRSpec/NIRCam observations of high-z candidates In order to spectroscopically constrain the Balmer break in high redshift galaxies, we utilized prism data from the NIRSpec instrument on _JWST_, which covers the near-infrared wavelength range (0.6-5.3\(\mu\)m) and allows continuum measurements beyond the rest-wavelength Balmer break for redshifts up to \(z\lesssim 11.5\). The data analyzed here partly come from the JWST cycle-1 GO 1433 program (PI Coe), which observed the cluster MACS J0647.7+7015 (hereafter MACS0647), and GO 2282 (PI Coe), which observed the cluster WHL0137-08 (hereafter WHL0137). These data were retrieved from MAST and processed through the STScI JWST pipeline1 version 1.9.2. Spectral lines and features are fitted using msaexp2 version 0.6.0 to determine the redshift. Photometric data is processed through grizli (Brammer et al. 2022)3. Footnote 1: [https://github.com/spacetelescope/jwst](https://github.com/spacetelescope/jwst) Footnote 2: [https://github.com/gbrammer/msaexp](https://github.com/gbrammer/msaexp) Footnote 3: Repositories at: [https://dawn-cph.github.io/dja](https://dawn-cph.github.io/dja) MACS0647 and WHL0137 are known cluster lenses, which implies that gravitational lensing of the observed flux is to be expected for several of the galaxies. In this dataset, we find two cases of strong gravitational lensing. The galaxy WHL0137-1968 is gravitationally lensed into an arc and estimated to have a magnification of \(\mu\sim 7.9^{+12}_{-6}\) (Bradley et al. 2022) based on the photometric redshift estimate of \(z_{\rm ph}=9\). With a spectroscopic redshift of \(z=8.22\) we find a similar magnification estimate of \(\mu\sim 8.6^{+14}_{-6}\) using the same lensing models as in Bradley et al. (2022). These models include Lenstool (Jullo & Kneib, 2009), WSLAP (Diego et al., 2005, 2007), glafic (Oguri, 2010), and Light-Traces-Mass (Broadhurst et al., 2005; Zitrin et al., 2009, 2015). Figure 2: The Balmer break strength \(B=F_{\nu}(4200\,{\rm\AA})/F_{\nu}(3500\,{\rm\AA})\) as a function of far ultraviolet luminosity (\(L_{\rm FUV}\)) at redshifts \(z=6\) and \(z=7\). The light blue dots with accompanying error bars represent the measured Balmer break strength derived from the observations that have spectroscopic redshifts within \(z=6\pm 0.5\) (left) and \(z=7\pm 0.5\) (right). Red markers indicate galaxies with either missing NIRCam photometry, or photometric data that cannot be trivially reconciled with spectroscopy. The large errors in \(L_{\rm FUV}\) for some of the galaxies are due to gravitational lensing. The black solid line is the median Balmer break strength from the FLARES simulations, while the shaded regions correspond to the 2.2-97.8th (lighter shade) and 15.8-84.2th (darker shade) percentiles of the distribution. Orange dots correspond to simulated Balmer break strengths from the DELPHI models. Figure 3: Same as in Fig. 2, here for \(z=8\). Strong gravitational lensing (\(\mu\sim 7.9^{+12}_{-5}\)) is responsible for the wide error bars in \(L_{\rm FUV}\) for one of the objects (identified as WHL0137-1968).
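For reference, the conversion from an observed continuum flux density to an intrinsic (delensed) \(L_{\rm FUV}\) follows the standard monochromatic relation \(L_{\nu}=4\pi d_{L}^{2}f_{\nu}/[(1+z)\,\mu]\). The sketch below implements this with the cosmology adopted in this paper; the input numbers are illustrative and not taken from Table 1, and this is not the actual reduction pipeline.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

# cosmology adopted in this paper (Planck Collaboration et al. 2014)
cosmo = FlatLambdaCDM(H0=67.3, Om0=0.315)

def delensed_L_FUV(f_nu_obs_nJy, z, mu):
    """Rest-frame monochromatic FUV luminosity (erg/s/Hz) from the
    observed 1500 A continuum flux density, corrected for a lensing
    magnification mu, via L_nu = 4 pi d_L^2 f_nu / [(1 + z) mu]."""
    f_nu = f_nu_obs_nJy * u.nJy
    d_L = cosmo.luminosity_distance(z)
    L_nu = 4.0 * np.pi * d_L**2 * f_nu / ((1.0 + z) * mu)
    return L_nu.to(u.erg / u.s / u.Hz)

# illustrative numbers only (not taken from Table 1)
print(np.log10(delensed_L_FUV(20.0, 8.22, 8.6).value))
```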
The galaxy WHL0137-249 is not located near the cluster core, which places it in a region with very minor magnification; here we use a conservative estimate of \(\mu\sim 1.1\pm 0.1\). The galaxy MACS0647-3754 is also strongly lensed, as indicated in imaging. Here, we estimate a magnification \(\mu\sim 6.3^{+0.5}_{-0.4}\) using the glafic lens model. The remaining galaxies from the MACS0647 field were estimated to have smaller magnifications, found to be in the range \(1.1\lesssim\mu\lesssim 2.5\) with errors in magnification of \(\sim 20\%\). This introduces wider error bars in the calculated restframe far ultraviolet luminosity (\(L_{\rm FUV}\)), but will not be large enough (for the weaker lensing estimates) to impact our conclusions regarding the agreement between observations and the simulated distributions, shown in figures 2-4. Additional galaxies from the JADES program are retrieved from fully reduced public data (Bunker et al., 2023) with spectroscopic redshift estimates. In order to reduce the impact of noise and unwanted features, the spectrum is rebinned, resulting in a smoother continuum from which we can calculate the Balmer break strength. This slightly affects the width (i.e., wavelength range) of the regions we picked for the calculation. However, care has been taken to make sure that parts of the spectrum that feature strong emission lines have been left out. The far ultraviolet luminosity is calculated from the spectra using the restframe continuum at 1500 A after scaling the observed spectra to the photometric measurements. Scaling the spectrum also enables us to check the consistency between photometry and spectroscopy in the region where the Balmer break is measured. Discrepancies between the two would make the inferred Balmer break strength less robust, and a few such cases have been noted, which are further discussed in section 4. The total stellar masses for the objects in MACS0647 and WHL0137 are estimated using piXedfit (Abdurro'uf et al., 2021, 2023) and corrected for gravitational lensing. Note that we include no corresponding plots for the Balmer break strength as a function of total stellar mass in this paper (see Wilkins et al., 2023, for simulations). We furthermore include no total stellar mass estimates for the galaxies taken from the JADES program. This is not an exhaustive list of all spectroscopically confirmed galaxies with possible Balmer breaks, but this data set should serve as a quantitative sample for the purposes of seeing whether present simulations are in line with observations and to rule out frequent departures from model predictions. Presently there are several galaxies in the literature showing significantly strong Balmer breaks. Hashimoto et al. (2018) estimate a Balmer break strength of \(\sim 2.3\pm 0.4\) (recently challenged by Bradac et al., 2023) using only slightly different wavelength ranges to define the break. Wilkins et al. (2023) also present a Balmer break strength estimate from Carnall et al. (2023) which suggests a break strength of \(\sim 2.5\), although in a galaxy at lower redshift (\(z\approx 4.7\)). The galaxy studied by Looser et al. (2023a) is also represented in our sample (JADES-8115), where we find a clear Balmer break with a strength of \(\sim 1.6\) (see fig. 1). We moreover include one galaxy with \(z=11.58\) (see fig. 9) for which the rest-frame 4200 A continuum data point longward of the Balmer break falls at the very edge of the NIRSpec range. The shifting of the Balmer break to these very red wavelengths combined with the low brightness introduces severe noise, which makes the extraction of reliable Balmer break measurements very challenging. No comparison to simulations is presented for this object, since we currently lack simulated data at \(z\gtrsim 11\). Figure 4: Same as in Fig. 2, here for \(z=9\) and \(z=10\). Spectra for the outliers at \(z=9\) can be seen in fig. 6, where the clear outlier is identified as JADES-10058975. Similarly, spectra for the \(z=10\) galaxies can be seen in fig. 7, where the weaker break is identified with JADES-6438. The red markers indicate galaxies with either missing NIRCam photometry, or photometric data that cannot be trivially reconciled with spectroscopy. Figure 5: JWST NIRSpec/PRISM spectra and NIRCam photometry for the outlier at \(z=6.33\), seen in the left panel of fig. 2. This galaxy has a quite small Balmer break strength with low error margin, placing it outside the 2.2-97.8th percentile even if considering the error. The spectrum shows that the break is extracted in a region where a notable fluctuation in the continuum shape is present. This has an impact on the calculated break strength and could therefore possibly explain the low value. ## 3 Results We assemble spectra from 23 galaxies with spectroscopically confirmed redshifts and calculate their Balmer break strength (see table 1). Based on the simulations included in this paper we find that the majority (18 out of 23) of our observed Balmer breaks can be accounted for considering the overall distribution of simulated galaxies, as the calculated values fall within the shaded areas surrounding the median in figures 2-4. Considering the uncertainties in the calculated Balmer break strength, we find that 19 out of 23 galaxies are consistent with the simulations. Some of the galaxies have far ultraviolet luminosities below the resolution limit of the simulations but can be extrapolated to fall within the predictions. However, we find three clear candidates (JADES-18846 and 10058975, MACS0647-3754) which show Balmer breaks that deviate significantly from the simulated data, such that they are consistently lower than those predicted by the simulations, even when considering the 2.2-97.8th percentile variations from the median in FLARES and the measurement uncertainties of the break strength. Since MACS0647-3754 is gravitationally lensed with resulting large errors in \(L_{\rm FUV}\), this outlier can potentially be alleviated, such that it falls into the distribution if sufficiently lensed. Using one lens model (glafic), we estimate that MACS0647-3754 is lensed by \(\mu\sim 6.3\pm 0.5\), which is just slightly too low to place it within the simulated distribution. Figure 6: JWST NIRSpec/PRISM spectra and NIRCam photometry for the two \(z\approx 9\) outliers in the left panel of fig. 4. The JADES-10058975 galaxy lacks photometric data, which raises some concern due to its significantly weak Balmer break strength ratio with a suspiciously small error margin. MACS0647-3568 lies outside of the 2.2-97.8th percentile of the FLARES simulations but has error margins that place it within the distribution and shows an adequate agreement with photometry/spectroscopy in the break region. Figure 7: JWST NIRSpec/PRISM spectra and NIRCam photometry for the \(z=10.38\) (left) and \(z=9.7\) (right) galaxies seen in fig. 4. JADES-10014177 shows a spectrum with significant noise/variation in the Balmer break region where spectra and photometry are hard to reconcile. The discrepancy between observation and simulation is more considerable for the higher redshifts (\(z\gtrsim 8\)), albeit with fewer observed galaxies there with a well-constrained Balmer break. Our analysis shows that the galaxies included in the JADES program reveal an acceptable level of statistical agreement between the observed and simulated datasets. The majority of our JADES observations exhibit consistency with the simulated results for redshifts \(z\gtrsim 6\). However, an exception arises in the case of JADES-10058975 (see left panel of fig. 6 for spectra), showing a weak Balmer break strength of \(0.74\pm 0.06\) at \(z\sim 9\). Considering the relatively minor uncertainty associated with this measurement, it deviates significantly from the corresponding simulations, lying more than \(3\sigma\) outside the 2.2-97.8th percentile band. The galaxies in the MACS0647 and WHL0137 clusters also show good consistency with the simulations. There are no objects in our sample with extreme Balmer break strengths (\(\gtrsim 2\)) that suggest any major tensions with standard models of galaxy formation. ## 4 Discussion and Conclusion In this paper, we evaluated the magnitude of the Balmer break in 23 spectroscopically confirmed galaxies spanning redshifts from 6.1 to 11.6. For this, we utilized _JWST_/NIRSpec data obtained from the GO 1433 and GO 2282 programs together with the publicly available JADES observations. While our analysis reveals a reasonable agreement between the observed strength of the Balmer break and contemporary simulations, given the uncertainties inherent in our measurements, some outliers were identified when comparing to the included simulations (see Sect. 3). These Balmer breaks fall below the median predictions of the FLARES simulations by about 60%, and by slightly less (\(\sim 10\)% less) when comparing to the median of the DELPHI models. Currently, other observations of galaxies at high redshifts suggest a significantly more bursty star formation history than normally expected in standard models of star formation (Sun et al. 2023a; Endsley et al. 2023a; Looser et al. 2023b; Sun et al. 2023b; Endsley et al. 2023b). Such bursty star formation implies a sporadically replenished young stellar component in the galaxy, resulting in a highly varying Balmer break strength which cannot grow particularly strong if bursts recur on timescales shorter than \(\sim 300\) Myr. Overall, this would push the simulated Balmer breaks down and increase the scatter in figures 2-4, thereby somewhat improving the agreement between models and observations. We find no cases of extreme Balmer breaks, such as those previously found in, e.g., Hashimoto et al. (2018). Taking our observations at face value suggests that such extreme Balmer breaks are indeed rare. However, one must acknowledge that the 23 galaxies presented in this paper do not provide a sufficiently large sample to draw statistically relevant conclusions regarding the observed and simulated distributions for all included redshifts.
We therefore risk missing some of the important details of the true overall distribution of Balmer breaks, especially for high-redshift galaxies (\(z\gtrsim 10\)), where the inherent challenge of attaining high-quality spectra of significant numbers of galaxies is evident. We conclude that FLARES (Wilkins et al. 2023) makes predictions that agree fairly well with observations, as \(\sim 82\)% of the galaxies included in this paper agree with the predictions given that we allow reasonable uncertainties and variations from the simulated median. We do however find several accounts of particularly low Balmer break strengths, falling below the simulated median by as much as \(\sim 60\)% in the more extreme cases. Looking at the spectrum of one of these galaxies (JADES-10058975, left panel of fig. 6) reveals a very blue slope and spectral features/emission lines in the break region, resulting in the weak break strength. The underlying reason for this is speculative but could be explained by a significantly young stellar population with strong nebular emission from highly ionizing stars. While we find reasonably good agreement between simulations and our observed galaxy population, we observe several galaxies with Balmer breaks quite far from the median predictions of FLARES. If this is the actual case, one could argue that this indicates there is some missing ingredient in the simulations related to feedback, e.g. early growth of black holes, stochastic star formation histories, etc. Figure 8: JWST NIRSpec/PRISM spectra and NIRCam photometry for the \(z=6.33\) galaxy in table 1. The weak Balmer break strength in comparison with simulations and small errors makes this galaxy an outlier. The spectrum reveals a significant spectral feature exactly where the break strength is measured. Figure 9: JWST NIRSpec/PRISM spectra and NIRCam photometry for the \(z=11.58\) galaxy in table 1. The high redshift of this galaxy makes the extraction of the Balmer break strength challenging due to severe noise at the wavelengths where the break is measured. The predictions of the Balmer break strength from the DELPHI model have a slightly lower median and a smaller spread than those seen in FLARES. The discrepancy between model and observation for the outliers with weaker Balmer breaks is therefore lower here. However, the objects with particularly weak Balmer break strengths are still to be considered as outliers in the distribution. Considering the lower median and narrower distribution of the DELPHI models, we find that the galaxies with high observed Balmer break strengths (see e.g., \(z=7\) in fig. 2) are harder to reconcile with the models. Several galaxies show spectral features and noise that make the measurement of the Balmer break strength challenging. This can be seen clearly in, e.g., JADES-10014177 (left panel, fig. 7). The spectrum shows a lot of variation, which is reflected in the relatively large error in the measured Balmer break strength. This also reveals an example where photometry and spectroscopy are hard to reconcile in order to help constrain the Balmer break. JADES-18846 (fig. 8) also reveals a scenario where a spectral feature complicates the break measurement. The estimated errors are very small due to a relatively clean spectrum in the break region, but the spectral feature pushes the break strength down slightly.
The true Balmer break strength could potentially be closer to \(\sim 1\), which would bring this galaxy right to the very edge of the 97.8th percentile of the simulated FLARES distribution. While one of the aims of this paper was to utilize spectroscopy as a powerful tool to alleviate some of the obstacles related to photometric measurements of the Balmer break strength, we see that even with spectroscopic data our analysis is rarely straightforward. Due to noise and spectral features in the observed wavelength ranges corresponding to the Balmer break for high-redshift galaxies, we find few clear examples of a Balmer break (such as in fig. 1), but often rather measure the slope of the continuum. This, on the other hand, is still revealing of the underlying physical processes of star formation at play and therefore serves an important purpose when measured. Larger observational datasets with more calculated Balmer breaks will improve the robustness of our findings. We conclude, however, that given our current dataset, we can find no significant deviations from predictions based on standard models of structure and star formation. The conspicuous question posed in the title, "To be, or not to be: Balmer breaks in high-z galaxies with _JWST_", therefore cannot be unambiguously answered by this paper alone. While our findings indicate that the simulated predictions agree fairly well with observations, we find no cases similar to the extreme Balmer breaks presented in the literature - suggesting such cases are indeed rare. ## Acknowledgements AV and EZ acknowledge funding from the Swedish National Space Agency. AN and EZ acknowledge funding from Olle Engkvists Stiftelse. EZ also acknowledges grant 2022-03804 from the Swedish Research Council. PD acknowledges support from the Dutch Research Council (NWO) through the award of the VIDI Grant 016.VIDI.189.162 ("ODIN") and the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2303.00348
Compactified extra dimension and entanglement island as clues to quantum gravity
We show that the compactified extra dimension and the emergence of the island can provide clues about quantum gravity because their combination can solve the deepest puzzles of black hole physics. If the time dimension and the extra dimension compactified on a circle are symmetric under \emph{double Wick rotation}, the curvature singularity is removed because spacetime ends as a smooth bubble hidden behind the event horizon. The smooth bubble geometries can also be interpreted as microstates leading to the Bekenstein-Hawking entropy because the smooth bubble geometries live in the same region of mass and charge as the black string. In addition, by applying the quantum extremal surface prescription, we show the emergence of the island at late times of the black string evaporation, where it is located slightly outside the event horizon. Due to the dominant contribution of the island configuration, the entanglement entropy of the radiation no longer grows linearly in time but instead reaches a finite value that is twice the Bekenstein-Hawking entropy at the leading order. This transition shows the information preservation during the black string evaporation. Furthermore, we calculate the Page time, which determines the moment of the transition between the linearly growing and constant behaviors of the entanglement entropy, as well as the scrambling time corresponding to the information recovery time of a signal falling into the black string.
Tran N. Hung, Cao H. Nam
2023-03-01T09:23:40Z
http://arxiv.org/abs/2303.00348v1
# Compactified extra dimension and entanglement island as clues to quantum gravity ###### Abstract We show that the compactified extra dimension and the emergence of the island can provide clues about quantum gravity because their combination can solve the deepest puzzles of black hole physics. If the time dimension and the extra dimension compactified on a circle are symmetric under _double Wick rotation_, the curvature singularity is removed because spacetime ends as a smooth bubble hidden behind the event horizon. The smooth bubble geometries can also be interpreted as microstates leading to the Bekenstein-Hawking entropy because the smooth bubble geometries live in the same region of mass and charge as the black string. In addition, by applying the quantum extremal surface prescription, we show the emergence of the island at late times of the black string evaporation, where it is located slightly outside the event horizon. Due to the dominant contribution of the island configuration, the entanglement entropy of the radiation no longer grows linearly in time but instead reaches a finite value that is twice the Bekenstein-Hawking entropy at the leading order. This transition shows the information preservation during the black string evaporation. Furthermore, we calculate the Page time, which determines the moment of the transition between the linearly growing and constant behaviors of the entanglement entropy, as well as the scrambling time corresponding to the information recovery time of a signal falling into the black string. ## I Introduction General relativity (GR) predicts black hole solutions with a curvature singularity surrounded by an event horizon, and they have been directly confirmed by experimental observations [1; 2; 3]. This singularity is unphysical because the spacetime curvature and the densities become infinite there. Hence, the presence of a singularity in the black hole solutions has been perceived as signaling the breakdown of GR in extreme conditions [4]. In addition, another important question concerns a microscopic description of the black hole entropy: according to statistical mechanics, the black hole entropy would be determined by the number \(\Omega\) of quantum microstates that resemble the black hole, as \(S_{\rm BH}=\log\Omega\); but what are the degrees of freedom accounting for the microstates of the black hole entropy? In particular, the black hole would evaporate due to Hawking radiation, which is black body radiation [5]. This means that if a black hole is formed from the collapse of matter in a quantum state corresponding to zero entropy, the final state of the black hole evaporation would be a thermal state with a large entropy, leading to the information loss paradox [6]. It is widely believed that all of the issues above can be solved in a consistent theory of quantum gravity that describes how gravity behaves in the short-distance regime where the quantum effects cannot be ignored. Unfortunately, conceptual and technical obstacles have so far prevented attempts to develop a consistent theory of quantum gravity, which is thus still lacking. However, it is reasonable to expect that quantum gravity would leave clues in the low-energy regime, which provide a bridge from quantum gravity to general relativity or classical gravity. The compactified extra dimensions may be an important clue in seeking a consistent theory of quantum gravity.
This is because they are the essential ingredients in constructing superstring/M theory, which is regarded as a leading candidate for a quantum theory of gravity. They may also offer one of the most beautiful and attractive ways towards a geometric unification of gravity with the non-gravitational interactions [7; 8] or dark matter [9], as well as provide phenomenological models for the open problems in astrophysics, cosmology, and particle physics [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Black hole solutions have been found in the situation that the geometry of the compactified extra dimensions is an \(n\)-dimensional torus \(T^{n}\), where they possess a horizon with the topology \(S^{2}\times T^{n}\). In particular, for \(n=1\) the black hole solution is called the black string, which is extensively investigated in the literature [31; 32; 33; 34; 35; 36; 37; 38]. Like its counterpart in GR, the black hole solution with the compactified extra dimensions has a curvature singularity (which, in contrast to the pointlike case, is extended and surrounded by an event horizon) and a temperature associated with Bekenstein-Hawking entropy. Recently, Bah and Heidmann showed that the compactified extra dimensions could solve some issues of the black hole, thus providing insights into quantum gravity. By considering the vacuum solutions symmetric under double Wick rotation, they found a bubble behind the horizon where the spacetime ends at the bubble's radius [39]. As a result, the usual unphysical singularity does not appear; in other words, the black hole solution in this case is regular. In addition, it provides microstate geometries which can be identified as some of the degrees of freedom corresponding to the black hole entropy, and which can be coherent enough to be classically described through geometric transitions. Although the investigation of the compactified extra dimensions in Ref. [39] can provide a potential solution for two of the three very robust issues of black hole physics, it cannot describe the time evolution of the black hole evaporation compatible with the unitarity principle of quantum mechanics or the Page curve [40; 41]. This means that the compactified extra dimensions are insufficient to exhibit clues of quantum gravity in the low-energy regime, and we need other clues beyond the extra dimensions. Interestingly, some recent works have shown that the gravitational Euclidean path integral (which is one of the main methods to explore quantum gravity) with new saddle points, which are the replica wormholes, leads to the emergence of the island configuration, which allows for deducing the Page curve [42; 43]. The islands \(I\) are regions that appear in the complement of the radiation region \(R\), assumed to be far away from the black holes. Their boundaries extremize the generalized entropy functional and are thus called extremal surfaces. In the presence of the island configuration, the entanglement entropy of the Hawking radiation is computed as follows [44; 45; 46; 47; 48; 49; 50; 51; 52] \[S(R)=\min\left\{\text{ext}\left[\frac{\mathcal{A}(\partial I)}{4G_{N}}+S_{\rm mat}(R\cup I)\right]\right\}, \tag{1}\] where \(G_{N}\) is the gravitational constant, \(\partial I\) denotes the island boundaries, \(\mathcal{A}(\partial I)\) refers to the total area of \(\partial I\), and \(S_{\rm mat}\) is the von Neumann entropy of the quantum fields on the radiation region and the islands.
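Operationally, Eq. (1) instructs us to extremize the generalized entropy over candidate island boundaries and then keep the smallest extremum. A schematic numerical reading, with placeholder area and matter-entropy functions (both assumptions for illustration, not the model of this paper), might look as follows.

```python
import numpy as np

def island_entropy(a_grid, area_over_4G, s_matter):
    """Schematic reading of Eq. (1): scan a one-parameter family of
    island boundaries a, locate the extrema of the generalized
    entropy, and return the smallest one.  area_over_4G and s_matter
    are placeholder callables for the two terms; the no-island
    answer would be compared against this value separately."""
    s_gen = area_over_4G(a_grid) + s_matter(a_grid)
    ds = np.diff(s_gen)
    ext = np.where(np.sign(ds[:-1]) != np.sign(ds[1:]))[0] + 1
    if ext.size == 0:
        return None  # no quantum extremal surface found on this grid
    return s_gen[ext].min()

# toy model: area term grows with a, matter entropy diverges as a -> 0
a = np.linspace(0.1, 5.0, 500)
print(island_entropy(a, lambda x: x**2, lambda x: 1.0 / x))  # ~ 1.89
```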
The island rule has attracted enormous attention in calculating the entanglement entropy of the Hawking radiation and the corresponding Page curve for various black hole geometries. The entanglement entropy is studied via the island rule in the context of Jackiw-Teitelboim (JT) gravity [53] and different 2D black hole solutions [54; 55; 56]. The extensions of the island proposal to higher-dimensional black holes, using the approximation of 2D conformal field theory (CFT), have been studied for the Schwarzschild black holes [57; 58], the Reissner-Nordstrom (RN) black holes [59; 60], the charged/neutral dilaton black holes [61; 62; 63], the Kaluza-Klein black holes [64], nonextremal asymptotically flat or AdS black holes [67], and the rotating black holes [65; 66]. Many interesting works have indicated the entanglement island in moving mirror models [68], the higher-dimensional black hole in the presence of an end-of-the-world brane defining the time-dependent region of the effective Hawking radiation [69], braneworld geometries [70; 71; 72; 73], flat-space cosmologies [74], de Sitter spacetime [75; 76; 77; 78], and the context of the AdS/BCFT correspondence [79; 80; 81]. Applying the extremal surface technique, it was pointed out that the island configuration also emerges to keep the entanglement entropy constant at late times in gravity theories with a massive graviton [82; 83; 84; 85], the holographic axion gravity [86], gravity including higher derivative terms [87; 88; 89], and deformed JT gravity [90]. In this work, we will apply the quantum extremal surface technique to investigate the entanglement entropy of the Hawking radiation and the corresponding Page curve for the regular black hole solution found in Ref. [39] with a bubble behind the event horizon. The calculation finds that the entanglement island emerges at late times and that its radius rises with the size of the bubble. The emergence of the island configuration prevents the entanglement entropy of the Hawking radiation from growing indefinitely; instead, it settles to a nearly constant value, twice the Bekenstein-Hawking entropy, after the Page time. In this way, the time evolution of the entanglement entropy of the Hawking radiation follows the Page curve. As a result, the evaporation process of the black hole respects the unitarity principle. This clearly implies that the compactified extra dimensions and the entanglement island can together lead to solutions for three important issues of black hole physics (namely, the unphysical curvature singularity, the microstates of the Bekenstein-Hawking entropy, and the unitary time evolution of the black hole evaporation) in some sense. Therefore, they provide essential clues to constructing a consistent theory of quantum gravity. This paper is organized as follows. In Sec. II, we briefly review the regular black string solutions found in Ref. [39]. In Sec. III, we calculate the entanglement entropy of Hawking radiation for the five-dimensional black string in the configurations without and with the island. We also determine the Page and scrambling times, thereby recovering the Page curve of the entanglement entropy. In addition, we investigate the entanglement entropy for higher-dimensional black strings and evaluate the effects of the size of the bubble on the radius of the island. In Sec. IV, we apply the island formula to calculate the entanglement entropy in the case of higher-dimensional black strings with a compactified extra dimension.
In the last section, we summarize our main results. ## II Black string solution without curvature singularity In this section, we briefly present the black string solution without curvature singularity due to the presence of a smooth bubble behind the event horizon [39]. The spacetime ends at the radius of the bubble, and as a result, the region of the usual curvature singularity is naturally removed. We consider the Einstein-Maxwell system in five dimensions, whose action is given as follows \[S=\int d^{5}x\sqrt{-g}\left(\frac{1}{2\kappa_{5}^{2}}R-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right), \tag{2}\] where \(\kappa_{5}\) is the five-dimensional gravitational coupling. We find the spherically symmetric solution of the system with a magnetic charge, which is described by the following ansatz \[ds^{2}=-f_{S}(r)dt^{2}+f_{B}(r)dy^{2}+\frac{dr^{2}}{h(r)}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{3}\] \[F=P\sin\theta d\theta\wedge d\phi. \tag{4}\] Because the fifth dimension is compactified on a circle \(S^{1}\) with the radius \(R_{y}\), the coordinate \(y\) is periodic with the period \(2\pi R_{y}\). For the vacuum solutions, corresponding to the magnetic flux turned off or \(P=0\), one can find two solutions: * The product of the 4D Schwarzschild black hole with \(S^{1}\): \[f_{S}(r)=h(r)=1-\frac{r_{S}}{r},\ \ \ \ \ f_{B}(r)=1.\] (5) This solution has a timelike Killing vector \(\partial_{t}\) whose norm vanishes at the event horizon \(r=r_{S}\). * The smooth geometry solution corresponding to a static bubble of nothing at \(r=r_{B}\): \[f_{B}(r)=h(r)=1-\frac{r_{B}}{r},\ \ \ \ \ f_{S}(r)=1.\] (6) This solution has a spacelike Killing vector \(\partial_{y}\) whose norm vanishes at \(r=r_{B}\). These two vacuum solutions are mapped into each other under the double Wick rotation \((t,y,r_{S},r_{B})\rightarrow(iy,it,r_{B},r_{S})\), and we look for a single solution that is symmetric under this rotation. This can be achieved by turning on the magnetic flux, which leads to \[h(r) = f_{B}(r)f_{S}(r), \tag{7}\] \[P = \pm\frac{1}{\kappa_{5}}\sqrt{\frac{3r_{S}r_{B}}{2}}, \tag{8}\] where \(f_{S}(r)=1-r_{S}/r\) and \(f_{B}(r)=1-r_{B}/r\). There are two coordinate singularities, appearing at \(r=r_{S}\) and \(r=r_{B}\). The solution given by Eqs. (3) and (4), with \(h(r)\) and \(P\) given by Eqs. (7) and (8), respectively, can lead to two types of topology depending on the values of \(r_{S}\) and \(r_{B}\). The solution is either a massive magnetic bubble for \(r_{B}>r_{S}\) or a black string with magnetic charge for \(r_{S}\geq r_{B}\). **The smooth bubble solution (topological star).** In the case of \(r_{B}>r_{S}\), the compactified extra dimension shrinks to zero size at \(r=r_{B}\), which implies the end of the spacetime there. The spacetime geometry near \(r_{B}\) is \[ds^{2}=-\frac{r_{B}-r_{S}}{r_{B}}dt^{2}+r_{B}^{2}\left[d\rho^{2}+\frac{r_{B}-r_{S}}{4r_{B}^{3}}\rho^{2}dy^{2}+d\theta^{2}+\sin^{2}\theta d\phi^{2}\right], \tag{9}\] where \(\rho\equiv 2\left[(r-r_{B})/(r_{B}-r_{S})\right]^{1/2}\to 0\). In general, the local metric corresponding to the \((\rho,y)\) subspace has a conical defect (which describes a localized object within string theory) associated with a topology \(\mathbb{R}^{2}/\mathbb{Z}_{k}\) with \(k\in\mathbb{Z}_{+}\). One can then determine the product of the quantum number \(k\) and the radius of the compactified extra dimension in terms of the parameters of the bubble solution as follows \[k^{2}R_{y}^{2}=\frac{4r_{B}^{3}}{r_{B}-r_{S}}. \tag{10}\]
The mass and charge of the topological star are \[M=\frac{2\pi r_{B}}{\kappa_{4}^{2}}\left(3-8\frac{r_{B}^{2}}{k^{2}R_{y}^{2}}\right),\quad Q_{m}^{2}=\frac{3r_{B}^{2}}{2\kappa_{4}^{2}}\left(1-4\frac{r_{B}^{2}}{k^{2}R_{y}^{2}}\right). \tag{11}\] We have an upper bound on the radius of the bubble, \(2r_{B}\leq kR_{y}\). The charge vanishes at \(r_{B}=kR_{y}/2\), and the solution is then a vacuum bubble of nothing. **The black string.** In the case of \(r_{S}>r_{B}\), there are two coordinate singularities at \(r=r_{S}\) and \(r=r_{B}\). The horizon appears at the first singularity. In order to see the topology of the horizon, we consider the near-horizon geometry as follows \[ds^{2}=-\frac{r_{S}-r_{B}}{4r_{S}^{3}}\rho^{2}dt^{2}+d\rho^{2}+\frac{r_{S}-r_{B}}{r_{S}}dy^{2}+r_{S}^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{12}\] where \(\rho\equiv 2\left[(r-r_{S})/(r_{S}-r_{B})\right]^{1/2}r_{S}\to 0\). From this local geometry, we find that the horizon of the black string has topology \(S^{1}\times S^{2}\), where the radii of the \(S^{1}\) and \(S^{2}\) are \(\sqrt{1-r_{B}/r_{S}}\,R_{y}\) and \(r_{S}\), respectively. The second singularity at \(r=r_{B}\), which is located behind the horizon, defines a bubble \(S^{2}\) which is a timelike surface where the local geometry is given by \[ds^{2}=\frac{r_{S}-r_{B}}{r_{B}}dt^{2}+r_{B}^{2}\left[-d\rho^{2}+\frac{r_{S}-r_{B}}{4r_{B}^{3}}\rho^{2}dy^{2}+d\Omega_{2}^{2}\right], \tag{13}\] where \(\rho\equiv 2\left[(r-r_{B})/(r_{S}-r_{B})\right]^{1/2}\to 0\). With the transformation \(T=-\rho\cosh(\gamma\varphi)\) and \(R=-\rho\sinh(\gamma\varphi)\), where the \(2\pi\)-periodic angle \(\varphi\) is defined as \(\varphi=y/R_{y}\) and the parameter \(\gamma\) is given by \(\gamma^{2}=(r_{S}-r_{B})R_{y}^{2}/(4r_{B}^{3})\), we can write the two-dimensional line element of the \((\rho,y)\) subspace in terms of the new coordinates \((T,R)\) as follows \[-d\rho^{2}+\frac{r_{S}-r_{B}}{4r_{B}^{3}}\rho^{2}dy^{2}=-dT^{2}+dR^{2}. \tag{14}\] This \((T,R)\) subspace is a cone in the two-dimensional Minkowski space \(\mathbb{R}^{1,1}\). The bubble \(S^{2}\) sits at the apex of the cone, where the spacelike Killing vector \(\partial_{y}\) shrinks. The causal structure of the black string is described by the Penrose diagram depicted in Fig. 1. In four-dimensional spacetime, the black string appears as a magnetic black hole with mass and charge. This black hole has a horizon at \(r=r_{S}\), corresponding to the Bekenstein-Hawking entropy and the Hawking temperature given by \[S_{\rm BH} = \frac{8\pi^{2}}{\kappa_{4}^{2}}\sqrt{r_{S}^{3}(r_{S}-r_{B})}, \tag{15}\] \[T_{\rm H} = \frac{1}{4\pi r_{S}}\sqrt{1-\frac{r_{B}}{r_{S}}}. \tag{16}\] By studying the phase space in the plane of the four-dimensional mass and charge, it was pointed out that there is a regime of the mass and charge where the smooth bubble solution, the black string, and the RN black hole coexist [39; 91]. Moreover, the mass and the charge of the smooth bubble solutions are the same as those of the black string and the four-dimensional charged black holes. Hence, it is interesting that the smooth bubble geometries can be realized as microstates that lead to black hole entropy. ## III Entanglement entropy In this section, we evaluate the entanglement entropy of the Hawking radiation emitted from the regular black string discussed above in the s-wave approximation. We consider the contributions of the configurations without and with islands to the entanglement entropy.
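Before proceeding, the horizon quantities (15) and (16) are straightforward to evaluate numerically; the sketch below uses illustrative units with \(\kappa_{4}^{2}=1\) and checks the standard relation \(T_{\rm H}=\kappa/2\pi\), with the surface gravity \(\kappa\) as introduced in Eq. (19) below.

```python
import numpy as np

def black_string_thermo(r_s, r_b, kappa4_sq=1.0):
    """Bekenstein-Hawking entropy (15), Hawking temperature (16) and
    surface gravity (19) of the black string, valid for r_s > r_b.
    Working in units with kappa_4^2 = 1 is an illustrative choice."""
    assert r_s > r_b >= 0.0
    s_bh = 8.0 * np.pi**2 / kappa4_sq * np.sqrt(r_s**3 * (r_s - r_b))
    t_h = np.sqrt(1.0 - r_b / r_s) / (4.0 * np.pi * r_s)
    kappa = 0.5 / r_s * np.sqrt(1.0 - r_b / r_s)
    return s_bh, t_h, kappa

s_bh, t_h, kappa = black_string_thermo(r_s=1.0, r_b=0.5)
print(s_bh, t_h, np.isclose(t_h, kappa / (2.0 * np.pi)))  # last entry: True
```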
In the configuration without any island, we show that the entanglement entropy of the radiation grows without bound. In contrast, in the configuration with a single island, we will show that the entanglement entropy is finite at late times. Consequently, the Page curve describing the evaporation of the black hole is consistent with the unitarity principle. Figure 1: Penrose diagram of the regular black string. The gray regions refer to the ends of spacetime as a smooth bubble. In order to calculate the entanglement entropy, we first need to introduce the tortoise coordinate as follows \[\begin{split} r_{*}(r)&=\int\frac{dr}{\sqrt{f_{B}(r)}f_{S}(r)}\\ &=r\sqrt{1-\frac{r_{B}}{r}}+\left(\frac{r_{B}}{2}+r_{S}\right)\log\left[-1+\frac{2r}{r_{B}}\left(1+\sqrt{1-\frac{r_{B}}{r}}\right)\right]\\ &\quad-\frac{r_{S}^{3/2}}{\sqrt{r_{S}-r_{B}}}\log\left[\frac{2r}{r-r_{S}}\sqrt{1-\frac{r_{B}}{r}}+\frac{r(r_{B}-2r_{S})+r_{B}r_{S}}{(r_{S}-r)\sqrt{r_{S}(r_{S}-r_{B})}}\right].\end{split} \tag{17}\] Then, we define the Kruskal coordinates as \[U\equiv-e^{-\kappa(t-r_{*})},\quad V\equiv e^{\kappa(t+r_{*})}, \tag{18}\] where \(\kappa\) is the surface gravity of the black string, defined by \[\kappa=\frac{1}{2r_{S}}\sqrt{1-\frac{r_{B}}{r_{S}}}. \tag{19}\] With these coordinate transformations, the line element (3) is rewritten in terms of the Kruskal coordinates as \[ds^{2}=-W^{2}(r)dUdV+f_{B}(r)dy^{2}+r^{2}d\Omega^{2}, \tag{20}\] where the conformal factor reads \[W^{2}(r)=\frac{f_{S}(r)}{\kappa^{2}e^{2\kappa r_{*}}}. \tag{21}\] ### Without islands Let us now calculate the entanglement entropy of the radiation in the configuration without islands. The Penrose diagram of the regular black string without islands is shown in Fig. 2. The radiation region consists of two parts, \(R_{-}\) and \(R_{+}\), located in the left and right wedges, respectively. The cutoff surfaces of the regions \(R_{-}\) and \(R_{+}\) are denoted by \(b_{-}\) and \(b_{+}\), respectively. These endpoints \(b_{\pm}\) have the coordinates \((t_{b},b)\) for \(b_{+}\) and \((-t_{b}+i\beta/2,b)\) for \(b_{-}\), where \(\beta\) is the inverse of the Hawking temperature. Assuming that the distance between the two endpoints \(b_{-}\) and \(b_{+}\) is large compared to the size of these boundaries, we can use two-dimensional conformal field theory (CFT) to approximately calculate the entanglement entropy. For the situation where the initial state of the whole system is pure, the entanglement entropy of the radiation region outside \([b_{-},b_{+}]\) is equal to the one within the interval. In this way, the entanglement entropy is calculated as follows [92; 93] \[\begin{split} S_{R}&=\frac{c}{3}\log d(b_{+},b_{-})\\ &=\frac{c}{6}\log\left[W(b_{+})W(b_{-})(U(b_{-})-U(b_{+}))(V(b_{+})-V(b_{-}))\right]\\ &=\frac{c}{6}\log\left[4W^{2}(b)e^{2\kappa r_{*}(b)}\cosh^{2}(\kappa t_{b})\right]\\ &\simeq\frac{c}{6}\log\left[\left(1-\frac{r_{S}}{b}\right)\cosh^{2}(\kappa t_{b})\right],\end{split} \tag{22}\] where \(c\) is the central charge of the two-dimensional CFT and \(d(b_{+},b_{-})\) is the distance between \(b_{+}\) and \(b_{-}\). At early times, we use the approximation \(t_{b}\ll 1/\kappa\) to deduce \[S_{R}\simeq\frac{c}{6}\log\left(1-\frac{r_{S}}{b}\right)+\frac{c}{6}(\kappa t_{b})^{2}. \tag{23}\] In the above formula, the first term is the initial entanglement entropy of the radiation region, which is independent of the radius of the bubble, and the second term shows the quadratic evolution of the entanglement entropy with time.
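Before turning to late times, a quick cross-check of the closed form (17) used above can be performed by comparing its differences against a direct numerical quadrature of \(dr_{*}=dr/(\sqrt{f_{B}}\,f_{S})\); the parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import quad

r_s, r_b = 1.0, 0.5  # illustrative black-string parameters, r_s > r_b

def f_S(r):
    return 1.0 - r_s / r

def f_B(r):
    return 1.0 - r_b / r

def r_star(r):
    """Closed form (17) for the tortoise coordinate."""
    s = np.sqrt(1.0 - r_b / r)
    return (r * s
            + (0.5 * r_b + r_s) * np.log(-1.0 + 2.0 * r / r_b * (1.0 + s))
            - r_s**1.5 / np.sqrt(r_s - r_b) * np.log(
                2.0 * r / (r - r_s) * s
                + (r * (r_b - 2.0 * r_s) + r_b * r_s)
                / ((r_s - r) * np.sqrt(r_s * (r_s - r_b)))))

# r_* is defined up to an additive constant, so compare differences
# against the definite integral of 1/(sqrt(f_B) f_S) between two
# radii outside the horizon
r1, r2 = 2.0, 5.0
numeric, _ = quad(lambda r: 1.0 / (np.sqrt(f_B(r)) * f_S(r)), r1, r2)
print(np.isclose(r_star(r2) - r_star(r1), numeric))  # -> True
```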
For late times, corresponding to the approximation \(t_{b}\gg 1/\kappa\), we find the corresponding entanglement entropy as \[S_{R}\simeq\frac{c}{3}\kappa t_{b}. \tag{24}\] This result exhibits that the entanglement entropy of the Hawking radiation grows linearly in time and thus would grow without bound as \(t_{b}\rightarrow\infty\). Clearly, this result conflicts with the finiteness of the total von Neumann entropy of the black string. However, this conflict will be resolved when taking into account the island configuration evaluated in the next section, which can reproduce the correct Page curve. Figure 2: The Penrose diagram of the eternal non-extremal regular black string without islands. The radiation region is the union of two parts \(R_{\pm}\) whose boundaries are denoted by \(b_{\pm}\). ### With an island When including the island configuration, the generalized entropy is a sum of two contributions: the first contribution is the gravitational part, proportional to the total area of the island boundaries, and the second contribution is the entanglement entropy of the matter. It is given as \[S_{\rm gen}=\frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+S_{\rm mat}(R_{-}\cup R_{+}\cup I), \tag{25}\] where the first term is the Bekenstein-Hawking entropy of the island and the second term is the entanglement entropy of the matter (which lives on the radiation region and the island), calculated by the following formula \[\begin{split} S_{\text{mat}}&=\frac{c}{3}\log\left[d(a_{+},a_{-})d(b_{+},b_{-})\right]+\frac{c}{3}\log\left[\frac{d(a_{+},b_{+})d(a_{-},b_{-})}{d(a_{+},b_{-})d(a_{-},b_{+})}\right]\\ &=\frac{c}{6}\log\left[2^{4}W^{2}(a)W^{2}(b)e^{2\kappa(r_{*}(a)+r_{*}(b))}\cosh^{2}(\kappa t_{a})\cosh^{2}(\kappa t_{b})\right]\\ &\quad+\frac{c}{3}\log\left[\frac{\cosh(\kappa(r_{*}(a)-r_{*}(b)))-\cosh(\kappa(t_{a}-t_{b}))}{\cosh(\kappa(r_{*}(a)-r_{*}(b)))+\cosh(\kappa(t_{a}+t_{b}))}\right].\end{split} \tag{26}\] Figure 3: The Penrose diagram of the eternal non-extremal regular black string with an island. \(I\) refers to the island, which has two boundaries denoted by \(a_{\pm}\). By extremizing the generalized entropy with respect to the temporal and spatial locations of the island boundaries, we find the corresponding minimum value. If it exists, this value is identified as the entanglement entropy of the Hawking radiation. In the following, we study the behavior of the entanglement entropy at early and late times of the evaporation process of the black string. **Early times.** At early times, the entanglement entropy of Hawking radiation is small. Accordingly, we expect the island to lie deep inside the black string.
In the approximation that the cutoff surface is far away from the event horizon, given by \[r_{S}\ll b;\quad t_{a},t_{b}\ll 1/\kappa\ll r_{*}(b)-r_{*}(a), \tag{27}\] we can obtain the generalized entropy as \[\begin{split} S_{\text{gen}}&\simeq\frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+\frac{c}{6}\log\left[|f_{S}(a)|\cosh^{2}(\kappa t_{a})\right]+\cdots\\ &\simeq\frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+\frac{c}{6}\log\left(\frac{r_{S}}{a}-1\right)+\frac{c}{6}(\kappa t_{a})^{2}+\cdots.\end{split} \tag{28}\] The terms that are relevant to \(t_{b}\) and \(r_{*}(b)\) only are ignored, as they do not affect the extremizing condition of the generalized entropy. The partial derivative of the generalized entropy with respect to the position of the island boundary is given as follows \[\begin{split}\frac{\partial S_{\text{gen}}}{\partial a}&=\frac{16\pi^{2}}{\kappa_{4}^{2}}\frac{a(4a-3r_{B})}{2\sqrt{a(a-r_{B})}}-\frac{c}{6}\frac{r_{S}}{a(r_{S}-a)}\\ &\simeq\frac{16\pi^{2}}{\kappa_{4}^{2}}\frac{a(4a-3r_{B})}{2\sqrt{a(a-r_{B})}}-\frac{c}{6}\frac{1}{a},\end{split} \tag{29}\] where we use the approximation \(a\ll r_{S}\) in the second line. From the extremizing condition \(\frac{\partial S_{\text{gen}}}{\partial a}=0\) we can deduce the following relation \[a\sim\sqrt{c\kappa_{4}^{2}}\sim l_{P}, \tag{30}\] where \(l_{P}\) is the Planck length. The expression (30) implies the presence of an island of the Planck scale inside the black string. However, the bubble radius must be much larger than the Planck length, because the bubble geometries that are expected to be identified as the microstates are not quantum but classical descriptions arising from the decoherence of some quantum microstates. In this sense, an island appearing at early times would have to be located in the gray regions (shown in Fig. 3), which correspond to the end of spacetime at the bubble. In addition, the Planck size of the island also conflicts with the calculational approach used in the derivation of the island rule, which requires an upper cutoff length scale much larger than the Planck length. Therefore, these facts indicate that no island actually emerges at early times. Consequently, the entanglement entropy is determined by the geometric configuration without the island. **Late times.** At the late stage of the evaporation process of the black string, more and more radiation passes through the cutoff surface, so the contribution of the radiation grows with time. We should expect the coarse-grained entropy to increase linearly, while the fine-grained entropy must reach a finite value in order to satisfy the unitarity of quantum theory. At late times, we can assume that the radiation region is far from the horizon.
Therefore, we use the following approximation \[1/\kappa\ll r_{*}(b)-r_{*}(a)\ll t_{a},t_{b}, \tag{31}\] which leads to \[\begin{split}\cosh\kappa t_{a,b}\simeq\frac{1}{2}e^{\kappa t_{a,b}},\\ \cosh\kappa(t_{a}+t_{b})\gg\cosh\kappa(r_{*}(b)-r_{*}(a)).\end{split} \tag{32}\] With this approximation, we can calculate the time-dependent component of the generalized entropy as \[\begin{split} S_{\text{time}}&=\frac{c}{3}\log\left[\cosh\kappa t_{a}\cosh\kappa t_{b}\cdot\frac{\cosh(\kappa(r_{*}(a)-r_{*}(b)))-\cosh(\kappa(t_{a}-t_{b}))}{\cosh(\kappa(r_{*}(a)-r_{*}(b)))+\cosh(\kappa(t_{a}+t_{b}))}\right]\\ &\simeq\frac{c}{3}\log\left[\cosh\kappa(r_{*}(a)-r_{*}(b))-\cosh\kappa(t_{a}-t_{b})\right].\end{split} \tag{33}\] We find that extremizing the generalized entropy with respect to \(t_{a}\) leads to \(t_{a}=t_{b}\). Substituting this result into the generalized entropy, we obtain the first-order approximate formula as \[S_{\rm gen} = \frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+\frac{c}{6}\log\left[W^{2}(a)W^{2}(b)\right]+\frac{c\kappa}{3}\left(r_{*}(a)+r_{*}(b)\right)\] \[\simeq \frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+\frac{c}{6}\log\left[W^{2}(a)W^{2}(b)\right]+\frac{2c}{3}\kappa r_{*}(b)+\frac{c}{3}\log\left[1-2e^{-\kappa(r_{*}(b)-r_{*}(a))}\right]+\cdots\] \[\simeq \frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{a^{3}(a-r_{B})}+\frac{c}{6}\log\left[f_{S}(a)f_{S}(b)\right]-\frac{c}{3}\kappa r_{*}(a)+\frac{c}{3}\kappa r_{*}(b)-\frac{2c}{3}e^{-\kappa(r_{*}(b)-r_{*}(a))}+\cdots. \tag{34}\] This expression of \(S_{\rm gen}\) depends only weakly on time, showing that the generalized entropy approaches a finite value at late times. This induces the convergence of the entanglement entropy at late times, instead of linear growth in time. Considering the island located near the event horizon and using the extremizing condition \(\frac{\partial S_{\rm gen}}{\partial a}=0\), we find \[a\simeq r_{S}\left[1+\left(\frac{c\kappa_{4}^{2}}{r_{S}^{2}}\right)^{2}\frac{K^{2}}{576\pi^{4}}\frac{\sqrt{1-r_{B}/r_{S}}}{(4-3r_{B}/r_{S})^{2}}\right], \tag{35}\] where \(K\) is defined as follows \[K\equiv\exp\left\{\kappa\left[\sqrt{r_{S}(r_{S}-r_{B})}+\left(r_{S}+\frac{r_{B}}{2}\right)\log\left[-1+\frac{2r_{S}}{r_{B}}\left(1+\sqrt{1-\frac{r_{B}}{r_{S}}}\right)\right]-r_{*}(b)\right]\right\}. \tag{36}\] Because of the extremely small factor \(c\kappa_{4}^{2}/r_{S}^{2}\), the second term in (35) is very small but positive. This implies that the island is located slightly outside the event horizon of the black string. In addition, the effect of the bubble on the island location comes predominantly from the term \(\sqrt{1-r_{B}/r_{S}}(4-3r_{B}/r_{S})^{-2}\), since the contribution of \(\ln K\) is strongly suppressed by the Planck-scale prefactor. From the behavior of \(\sqrt{1-r_{B}/r_{S}}(4-3r_{B}/r_{S})^{-2}\) as a function of \(r_{B}/r_{S}\), depicted in Fig. 4, we observe that for \(r_{B}/r_{S}<8/9\), increasing the ratio of the bubble radius to the horizon radius shifts the island farther outside the black string. On the contrary, in the regime \(r_{B}/r_{S}>8/9\), the growth of this ratio moves the island closer to the black string.
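The threshold \(r_{B}/r_{S}=8/9\) quoted above can be checked directly: setting the derivative of \(g(x)=\sqrt{1-x}\,(4-3x)^{-2}\) to zero gives \(x=8/9\). A minimal numerical sketch (using only NumPy; the grid resolution is an illustrative choice) confirms this:

```python
import numpy as np

# Bubble-dependence factor in the island location, Eq. (35):
# g(x) = sqrt(1 - x) / (4 - 3x)^2, with x = r_B / r_S.
def g(x):
    return np.sqrt(1.0 - x) / (4.0 - 3.0 * x) ** 2

x = np.linspace(0.0, 0.999, 100_000)
x_max = x[np.argmax(g(x))]
print(f"g(x) is maximized at x = {x_max:.4f}  (analytic value: 8/9 = {8/9:.4f})")
# The maximum lies at x ~ 0.8889, i.e. at r_B/r_S = 8/9 as stated in the text.
```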
Finally, by substituting the location of the island just obtained above into the approximate expression of the generalized entropy (34), we derive the entanglement entropy as follows \[S_{\rm EE} \simeq \frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{r_{S}^{3}(r_{S}-r_{B})}+\frac{c\kappa}{3}\left[\sqrt{b(b-r_{B})}-\sqrt{r_{S}(r_{S}-r_{B})}\right] \tag{37}\] \[+\frac{c\kappa}{3}\left(r_{S}+\frac{r_{B}}{2}\right)\log\frac{2(b+\sqrt{b(b-r_{B})})-r_{B}}{2(r_{S}+\sqrt{r_{S}(r_{S}-r_{B})})-r_{B}}+{\cal O}\left(\frac{c\kappa_{4}^{2}}{r_{S}^{2}}\right).\] The first term in the above expression is twice the Bekenstein-Hawking entropy of the black string, which comes from the contribution of the island configuration. The second and third terms come from the quantum nature of the matter fields and are suppressed by the inverse Planck scale \(\kappa\). Hence, the terms proportional to nonzero powers of \(\kappa\) are very small compared to the first term and thus negligible. As a result, the entanglement entropy is approximately given as \(S_{\rm EE}\simeq 16\pi^{2}\sqrt{r_{S}^{3}(r_{S}-r_{B})}/\kappa_{4}^{2}\), which is twice the Bekenstein-Hawking entropy of the black string and constant in time. Therefore, the configuration with an island resolves the information conservation issue during the evaporation of the black string. ### Page time and scrambling time The above calculations suggest the time evolution of the entanglement entropy shown in Fig. 5. When the configuration without the island dominates at early times of the black string evaporation, the entanglement entropy increases linearly with time. However, with the appearance of the island at late times, the entanglement entropy reaches its maximum at a moment called the Page time and remains constant afterward. This corresponds to the dominance of the configuration with the island, which is formed near the black hole horizon. From Fig. 5, we can approximately determine the Page time as the intersection point between the green line and the purple line, which correspond to the configurations without and with the island, respectively. Accordingly, by using Eqs. (24) and (37) we obtain the following equation \[\frac{c\kappa}{3}t_{\rm Page}\simeq\frac{16\pi^{2}}{\kappa_{4}^{2}}\sqrt{r_{S}^{3}(r_{S}-r_{B})}, \tag{38}\] which determines the Page time as \[t_{\rm Page}\simeq\frac{48\pi^{2}r_{S}^{3}}{c\kappa_{4}^{2}}=\frac{3S_{\rm BH}}{\pi cT_{\rm H}}. \tag{39}\] In this way, the Page time is proportional to the ratio of the black string entropy and the Hawking temperature, which are the thermodynamic quantities of the black string. This result shows a universal behavior of the Page time, which has been found in various black hole geometries [57; 58; 59; 60; 61; 62; 63; 64]. Indeed, we can apply a simple argument given by D. Page [40; 41]: for a total system that consists of a sufficiently small subsystem, the entanglement entropy can be approximately determined by the thermal entropy of that subsystem. Since the black string has a large entropy, it takes more time for a large amount of radiation to pass the cutoff surface before the black hole becomes the smaller subsystem; at late times, following Page's argument, the entanglement entropy is then approximated by the black hole entropy. The situation is similar for a low-temperature black hole, which requires more evaporation time to become a subsystem small enough compared to the total system.
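As a small numerical illustration, one can recover the Page time of Eq. (39) by intersecting the linear no-island growth (24) with the island plateau of (37). The sketch below (Python with NumPy) uses purely illustrative parameter values, and assumes the surface gravity of the \(D=4\) case of Eq. (43), \(\kappa=\sqrt{1-r_{B}/r_{S}}/(2r_{S})\); the two evaluations agree identically, as required by \(\kappa=2\pi T_{\rm H}\):

```python
import numpy as np

# Illustrative (geometric-unit) parameters: r_B < r_S; c, kappa4_sq are stand-ins.
r_S, r_B, c, kappa4_sq = 1.0, 0.5, 1.0, 1e-4

kappa = np.sqrt(1.0 - r_B / r_S) / (2.0 * r_S)       # assumed surface gravity (D = 4)
T_H = kappa / (2.0 * np.pi)                           # Hawking temperature
S_BH = 8.0 * np.pi**2 * np.sqrt(r_S**3 * (r_S - r_B)) / kappa4_sq  # B-H entropy

# Crossing of (c/3) * kappa * t with the plateau 2 * S_BH, cf. Eqs. (24), (37), (38):
t_page_crossing = 6.0 * S_BH / (c * kappa)
t_page_formula = 3.0 * S_BH / (np.pi * c * T_H)       # Eq. (39)
print(t_page_crossing, t_page_formula)                 # identical by construction
```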
Because the island is in the entanglement wedge of the radiation, a signal falling into the black hole can be decoded from the outgoing radiation once it enters the island.

Figure 5: The Page curve for the entanglement entropy of the five-dimensional black string. The solid and dashed green lines represent the time evolution of the entanglement entropy for the configuration without an island. The solid purple line refers to the entanglement entropy at late times for the configuration with an island.

We can calculate the scrambling time, defined as the minimum duration needed to retrieve the information of the signal falling into the black hole according to the Hayden-Preskill protocol [94]. Suppose that one sends a signal at time \(t=0\) from the cutoff surface at \(r=b\) to the island boundary \(r=a\); the minimal time this takes is approximately the scrambling time and is given as follows \[\begin{split} t_{\text{scr}}&=r_{*}(b)-r_{*}(a)\\ &=-a\sqrt{1-\frac{r_{B}}{a}}+b\sqrt{1-\frac{r_{B}}{b}}+\left(\frac{r_{B}}{2}+r_{S}\right)\log\left[-1+\frac{2b}{r_{B}}\left(1+\sqrt{1-\frac{r_{B}}{b}}\right)\right]\\ &\quad-\left(\frac{r_{B}}{2}+r_{S}\right)\log\left[-1+\frac{2a}{r_{B}}\left(1+\sqrt{1-\frac{r_{B}}{a}}\right)\right]\\ &\quad+\frac{r_{S}^{3/2}}{\sqrt{r_{S}-r_{B}}}\log\left[\frac{2b}{b-r_{S}}\sqrt{1-\frac{r_{B}}{b}}+\frac{b(r_{B}-2r_{S})+r_{B}r_{S}}{(r_{S}-b)\sqrt{r_{S}(r_{S}-r_{B})}}\right]\\ &\quad-\frac{r_{S}^{3/2}}{\sqrt{r_{S}-r_{B}}}\log\left[\frac{2a}{a-r_{S}}\sqrt{1-\frac{r_{B}}{a}}+\frac{a(r_{B}-2r_{S})+r_{B}r_{S}}{(r_{S}-a)\sqrt{r_{S}(r_{S}-r_{B})}}\right]\\ &\simeq\frac{1}{\kappa}\log\frac{\sqrt{r_{S}^{3}(r_{S}-r_{B})}}{\kappa_{4}^{2}}\\ &\simeq\frac{1}{2\pi T_{\text{H}}}\log S_{\text{BH}}.\end{split} \tag{40}\] The black string entropy \(S_{\text{BH}}\) can be expressed in terms of the number of microstates \(N\) as \(S_{\text{BH}}=\log N\), i.e., \(N=e^{S_{\text{BH}}}\). For a black string of macroscopic scale, the number of microstates is very large. Since the scrambling time is proportional to the logarithm of the black hole entropy, it is much smaller than the Page time.

## IV The case of higher dimensions

In this section, we calculate the entanglement entropy of the Hawking radiation emitted from the higher-dimensional non-singular black strings, relying on the island formula. By generalizing the five-dimensional regular black string solution to higher dimensions, one can find \[ds_{D+1}^{2}=-f_{S}(r)dt^{2}+f_{B}(r)dy^{2}+\frac{dr^{2}}{f_{S}(r)f_{B}(r)}+r^{2}d\Omega_{D-2}^{2}, \tag{41}\] where \[f_{B}(r)=1-\left(\frac{r_{B}}{r}\right)^{D-3},\quad f_{S}(r)=1-\left(\frac{r_{S}}{r}\right)^{D-3}. \tag{42}\] The temperature and the entropy of the higher-dimensional black strings read \[T_{\rm H} = \frac{\kappa}{2\pi}=\frac{D-3}{4\pi r_{S}}\sqrt{1-\left(\frac{r_{B}}{r_{S}}\right)^{D-3}},\] \[S_{\rm BH} = \frac{4\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\sqrt{r_{S}^{D-1}\left(r_{S}^{D-3}-r_{B}^{D-3}\right)}.
\tag{43}\] First, we need to determine the tortoise coordinate, which is given as follows \[r_{*}(r) =\int\frac{dr}{\left[1-\left(\frac{r_{S}}{r}\right)^{D-3}\right]\sqrt{1-\left(\frac{r_{B}}{r}\right)^{D-3}}}\] \[\simeq\int\frac{dr}{\left[1-\left(\frac{r_{S}}{r}\right)^{D-3}\right]\sqrt{1-\left(\frac{r_{S}}{r}\right)^{D-3}\delta}} \tag{44}\] \[\simeq{}_{2}F_{1}\left(1,\frac{1}{3-D},\frac{D-4}{D-3},\left(\frac{r_{S}}{r}\right)^{D-3}\right)r\] \[\quad-{}_{2}F_{1}\left(1,\frac{D-4}{D-3},1+\frac{D-4}{D-3},\left(\frac{r_{S}}{r}\right)^{D-3}\right)\frac{r}{2(D-4)}\left(\frac{r_{S}}{r}\right)^{D-3}\delta,\] where \(\delta\equiv\left(r_{B}/r_{S}\right)^{D-3}\) and \({}_{2}F_{1}(a,b;c;z)\) is the hypergeometric function. Note that the integral in the first line of Eq. (44) is difficult to evaluate exactly for general \(D>4\). However, in the situation \(r_{B}\ll r_{S}\), corresponding to \(\delta\ll 1\), which means that the bubble lies deep inside the black string, we can find an analytical expression for the tortoise coordinate in terms of hypergeometric functions. For the configuration without the island, the entanglement entropy at late times is easily calculated as \[S_{R}\simeq\frac{c(D-3)}{6r_{S}}\sqrt{1-\left(\frac{r_{B}}{r_{S}}\right)^{D-3}}t_{b}=\frac{c}{3}\kappa t_{b}. \tag{45}\] This expression implies that the entanglement entropy without the contribution of the island also grows linearly in time, in analogy with the five-dimensional case. Consequently, it leads to the information loss problem in the evaporation of the higher-dimensional non-singular black string. Next, we study how the island configuration contributes to the entanglement entropy and makes the black string evaporation consistent with the unitarity of quantum mechanics. At early times, the entanglement entropy of the black string with the radiation is small, and thus the island, if it exists, has to be hidden deep inside the horizon. The generalized entropy can be approximately calculated as \[S_{\rm gen}\simeq\frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\sqrt{a^{D-1}(a^{D-3}-r_{B}^{D-3})}+\frac{c}{6}\log\left[\left(\frac{r_{S}}{a}\right)^{D-3}-1\right]. \tag{46}\] By extremizing the entropy over \(a\), we find the position of the island as \[a^{D-2}\sim c\kappa_{D}^{2}, \tag{47}\] which is of the order of the quantum gravity scale. This means that the island does not emerge at early times. At late times, we expect the appearance of the island as more and more Hawking radiation is emitted from the black string, leading to a transition of the entanglement entropy from the linear growth of the no-island phase to constant behavior. With the late-time approximation given by Eq. (31), we can calculate the generalized entropy as \[S_{\rm gen}\simeq \frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\sqrt{a^{D-1}\left(a^{D-3}-r_{B}^{D-3}\right)} \tag{48}\] \[+\frac{c}{6}\log\left[W^{2}(a)W^{2}(b)\right]+\frac{c}{3}\log\left[1-2e^{-\kappa(r_{*}(b)-r_{*}(a))}\right]\] \[\simeq \frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\sqrt{a^{D-1}\left(a^{D-3}-r_{B}^{D-3}\right)}\] \[+\frac{c}{6}\log\left[f_{S}(a)f_{S}(b)\right]-\frac{c}{3}\kappa r_{*}(a)+\frac{c}{3}\kappa r_{*}(b)-\frac{2c}{3}e^{-\kappa(r_{*}(b)-r_{*}(a))}.\] In order to find the island location slightly outside the black hole, we write the radius of the island as \(a=r_{S}+\epsilon\) with \(\epsilon\ll 1\).
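As an aside, the leading (\(\delta\to 0\)) hypergeometric term of Eq. (44) can be sanity-checked against direct numerical quadrature of the first line; a minimal sketch with SciPy follows (the choices \(D=5\), \(r_{S}=1\), and the radial interval are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# Leading (delta -> 0) tortoise coordinate for a D = 5 black string, Eq. (44).
D, r_S = 5, 1.0
F = lambda r: r * hyp2f1(1.0, 1.0 / (3 - D), (D - 4) / (D - 3), (r_S / r) ** (D - 3))

r1, r2 = 2.0, 10.0                       # illustrative radii outside the horizon
direct, _ = quad(lambda r: 1.0 / (1.0 - (r_S / r) ** (D - 3)), r1, r2)
print(direct, F(r2) - F(r1))             # the two evaluations agree
```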
Then, by extremizing the generalized entropy, we derive an equation that determines the position of the island's boundary at the first-order approximation as \[\frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)}\frac{r_{S}^{D-3}}{c\kappa_{D}^{2}}\left(D-2-\frac{X}{2}\right)-\frac{(D-2)}{6r_{S}}+\frac{(D-3)X}{12r_{S}} \tag{49}\] \[-\frac{2L}{3}\frac{D-3}{2r_{S}}\sqrt{\frac{D-3}{r_{S}\epsilon}}\left[\frac{r_{S}}{(D-3)}+\frac{D-2}{2(D-3)}\epsilon-\frac{(D-4)X}{4(D-3)}\epsilon\right]=0,\] where \(X\) and \(L\) are defined as \[X \equiv \left(\frac{r_{B}}{r_{S}}\right)^{D-3}, \tag{50}\] \[L \equiv \exp\left\{\kappa\left(\frac{r_{S}}{D-3}\left[\gamma+P\left(0,\frac{1}{3-D}\right)\right]-r_{*}(b)\right)\right\}, \tag{51}\] where \(\gamma\simeq 0.577\) is Euler's constant and \(P\left(0,(3-D)^{-1}\right)\equiv\Gamma^{\prime}\left((3-D)^{-1}\right)/\Gamma\left((3-D)^{-1}\right)\) is the polygamma function of order \(0\), with \(\Gamma^{\prime}(z)\) denoting the first derivative of the Gamma function \(\Gamma(z)\). Among the terms independent of \(\epsilon\), we observe that, due to \(r_{S}^{D-2}/c\kappa_{D}^{2}\gg 1\), the second and third terms on the left-hand side of Eq. (49) are much smaller than the first term and hence can be ignored. For the terms involving \(\epsilon\), the final two terms in the bracket can be dropped for the same reason. As a result, we find an approximate equation determining the position of the island's boundary as follows \[\frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)}\frac{r_{S}^{D-3}}{c\kappa_{D}^{2}}\left(D-2-\frac{X}{2}\right)-\frac{2L}{3}\frac{D-3}{2r_{S}}\sqrt{\frac{D-3}{r_{S}\epsilon}}\,\frac{r_{S}}{D-3}=0. \tag{52}\] Solving this equation leads to the location of the island's boundary as \[a=r_{S}+\frac{c^{2}\kappa_{D}^{4}}{r_{S}^{2D-4}}\frac{(D-3)L^{2}\Gamma\left(\frac{D-1}{2}\right)^{2}}{576(D-2)^{2}\pi^{D+1}}\left[1+\frac{X}{D-2}\right]r_{S}. \tag{53}\] Because \(c^{2}\kappa_{D}^{4}/r_{S}^{2D-4}\ll 1\), the island is indeed located slightly outside the event horizon. The dependence of the island position on the radius of the bubble \(r_{B}\) is manifested via the quantity \(X\). The island moves farther from the black hole when the bubble's radius increases. This is compatible with the behavior of the five-dimensional case in the regime \(r_{B}/r_{S}\ll 1\). Substituting the island position into the generalized entropy, we obtain the entanglement entropy of the radiation as follows \[\begin{split} S_{\text{EE}}\simeq&\frac{8\pi^{\frac{D+1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\sqrt{r_{S}^{D-1}\left(r_{S}^{D-3}-r_{B}^{D-3}\right)}\\ &+\frac{c}{6}\log\left[\frac{(D-3)L^{2}\Gamma\left(\frac{D-1}{2}\right)^{2}}{576(D-2)^{2}\pi^{D+1}}\left(1+\frac{X}{D-2}\right)\right]+\mathcal{O}\left(\frac{c^{2}\kappa_{D}^{4}}{r_{S}^{2D-4}}\right).\end{split} \tag{54}\] The first term is twice the Bekenstein-Hawking entropy, which comes from the contribution of the two-sided island area and is the dominant part of the entanglement entropy. The second term is the logarithmic correction coming from the quantum nature of the radiation. The higher-order correction terms are very small due to the power factors of the tiny ratio \(c\kappa_{D}^{2}/r_{S}^{D-2}\). Therefore, at leading order the entanglement entropy at late times is twice the Bekenstein-Hawking entropy, a finite constant due to the appearance of the island.
This result manifests that the entanglement entropy is bounded by the Bekenstein-Hawking entropy instead of growing linearly in time. This means that the information of the higher-dimensional non-singular black string is preserved during the evaporation.

### Page time and scrambling time

Although the entanglement entropy of the Hawking radiation emitted from the higher-dimensional non-singular black string grows linearly in time at the early stage, with the appearance of the island at late times the entanglement entropy reaches its maximum. The moment that determines the transition between these behaviors is given by the Page time as \[t_{\rm Page}\simeq\frac{3S_{\rm BH}}{\pi cT_{\rm H}}, \tag{55}\] which depends on the ratio of the entropy to the temperature of the higher-dimensional non-singular black string. This dependence is universal at leading order, independent of the spacetime dimension. It is interesting that although both \(S_{\rm BH}\) and \(T_{\rm H}\) depend on the bubble radius \(r_{B}\), at leading order the Page time is not affected by the change of the bubble radius. The presence of the island implies that information entering the island is retrievable from the Hawking radiation. In this situation, it is easy to calculate the scrambling time, corresponding to the information recovery time of a signal when it falls into the black string from the cutoff surface at \(t=0\) and reaches the island boundary, as \[\begin{split} t_{\rm scr}&=r_{*}(b)-r_{*}(a)\\ &\simeq{}_{2}F_{1}\left(1,\frac{1}{3-D},\frac{D-4}{D-3},\left(\frac{r_{S}}{b}\right)^{D-3}\right)b-{}_{2}F_{1}\left(1,\frac{1}{3-D},\frac{D-4}{D-3},\left(\frac{r_{S}}{a}\right)^{D-3}\right)a\\ &\simeq\frac{1}{2\pi T_{\rm H}}\log S_{\rm BH}.\end{split} \tag{56}\] The scrambling time is proportional to the ratio of the logarithm of the Bekenstein-Hawking entropy to the Hawking temperature. This dependence is universal at the leading-order approximation. In order to see the effect of the bubble on the scrambling time, let us expand \(t_{\rm scr}\) in terms of \(r_{B}/r_{S}\) as \[\frac{D-3}{r_{S}}t_{\rm scr}\simeq 2\log\left[\frac{4\pi^{\frac{D+1}{2}}r_{S}^{D-2}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\right]+\left\{1-\log\left[\frac{4\pi^{\frac{D+1}{2}}r_{S}^{D-2}}{\Gamma\left(\frac{D-1}{2}\right)\kappa_{D}^{2}}\right]\right\}\left(\frac{r_{B}}{r_{S}}\right)^{D-3}. \tag{57}\] We see that, due to \(r_{S}^{D-2}/\kappa_{D}^{2}\gg 1\), the second term is always negative. This means that in the region \(r_{B}/r_{S}\ll 1\) the presence of the bubble behind the event horizon makes the information recovery of a signal falling into the black string faster.

## V Conclusion

The unphysical curvature singularity, the nature of microstates associated with the Bekenstein-Hawking entropy, and the information loss paradox are three important problems of black hole physics. It is expected that these problems would be solved in a consistent theory of quantum gravity, which must describe the quantum fluctuations of spacetime that become important in the region close to the center of black holes. Currently, there is still no complete ultraviolet theory of quantum gravity, but it is reasonable to expect that traces of quantum gravity in the low-energy regime provide indications toward the resolution of these critical puzzles.
In this work, we point out that the compactified extra dimensions and the entanglement islands may be important clues in seeking a consistent theory of quantum gravity, because they suggest how the deepest puzzles of black hole physics may be solved in quantum gravity. In the five-dimensional Einstein-Maxwell theory with the extra dimension compactified on a circle \(S^{1}\), one can find two vacuum solutions: a four-dimensional Schwarzschild black hole times \(S^{1}\) and a static bubble of nothing (or smooth massless solution). By requiring these vacuum solutions to be symmetric under the double Wick rotation between the usual time dimension and the compactified extra dimension, the non-singular black string and the smooth bubble solution (topological star) have been found by turning on the appropriate magnetic fluxes [39]. The usual curvature singularity is naturally removed due to the presence of a bubble hidden behind the event horizon, which ends spacetime. Additionally, the smooth bubble solution, black string, and RN black hole can coexist in a regime of mass and charge, which implies that the smooth bubble geometries can be realized as microstates leading to the Bekenstein-Hawking entropy of the regular black string. The gravitational Euclidean path integral (one of the main methods to explore quantum gravity) with the replica wormholes as new saddle points leads to the emergence of the island configuration. Taking into account the contribution of the island to the entanglement entropy of the radiation emitted from the regular black string, we show that the entanglement entropy follows the Page curve, consistent with the unitarity of quantum mechanics during the evaporation process of the regular black string. At early times of the evaporation, the entanglement entropy increases linearly with time, which implies the information loss paradox. However, as more and more radiation enters the cutoff surface at late times, the island emerges, located just outside the event horizon. As a result, the entanglement entropy reaches a saturation value that is approximately twice the Bekenstein-Hawking entropy of the regular black string. We calculate the Page time that determines the transition between the linearly growing behavior and the convergent behavior of the entanglement entropy. The leading-order term of the Page time is universal and is not affected by changes in the bubble radius. Because the signal that falls into the black hole is decoded from the outgoing radiation when it enters the island, we compute the scrambling time according to the Hayden-Preskill protocol [94]. We find that the scrambling time at leading order is proportional to the ratio of the logarithm of the Bekenstein-Hawking entropy to the Hawking temperature. Because of this relationship, unlike the Page time, the scrambling time depends on the presence of the bubble and hence is affected by changes in the bubble radius.
2304.05297
Neural Network Approach to Portfolio Optimization with Leverage Constraints: a Case Study on High Inflation Investment
Motivated by the current global high inflation scenario, we aim to discover a dynamic multi-period allocation strategy to optimally outperform a passive benchmark while adhering to a bounded leverage limit. To this end, we formulate an optimal control problem to outperform a benchmark portfolio throughout the investment horizon. Assuming the asset prices follow the jump-diffusion model during high inflation periods, we first establish a closed-form solution for the optimal strategy that outperforms a passive strategy under the cumulative quadratic tracking difference (CD) objective, assuming continuous trading and no bankruptcy. To obtain strategies under the bounded leverage constraint among other realistic constraints, we then propose a novel leverage-feasible neural network (LFNN) to represent control, which converts the original constrained optimization problem into an unconstrained optimization problem that is computationally feasible with standard optimization methods. We establish mathematically that the LFNN approximation can yield a solution that is arbitrarily close to the solution of the original optimal control problem with bounded leverage. We further apply the LFNN approach to a four-asset investment scenario with bootstrap resampled asset returns from the filtered high inflation regime data. The LFNN strategy is shown to consistently outperform the passive benchmark strategy by about 200 bps (median annualized return), with a greater than 90% probability of outperforming the benchmark at the end of the investment horizon.
Chendi Ni, Yuying Li, Peter A. Forsyth
2023-04-11T15:48:19Z
http://arxiv.org/abs/2304.05297v2
# Neural Network Approach to Portfolio Optimization with Leverage Constraints: a Case Study on High Inflation Investment ###### Abstract Motivated by the current global high inflation scenario, we aim to discover a dynamic multi-period allocation strategy to optimally outperform a passive benchmark while adhering to a bounded leverage limit. To this end, we formulate an optimal control problem to outperform a benchmark portfolio throughout the investment horizon. Assuming the asset prices follow the jump-diffusion model during high inflation periods, we first establish a closed-form solution for the optimal strategy that outperforms a passive strategy under the cumulative quadratic tracking difference (CD) objective, assuming continuous trading and no bankruptcy. To obtain strategies under the bounded leverage constraint among other realistic constraints, we then propose a novel leverage-feasible neural network (LFNN) to represent control, which converts the original constrained optimization problem into an unconstrained optimization problem that is computationally feasible with standard optimization methods. We establish mathematically that the LFNN approximation can yield a solution that is arbitrarily close to the solution of the original optimal control problem with bounded leverage. We further apply the LFNN approach to a four-asset investment scenario with bootstrap resampled asset returns from the filtered high inflation regime data. The LFNN strategy is shown to consistently outperform the passive benchmark strategy by about 200 bps (median annualized return), with a greater than 90% probability of outperforming the benchmark at the end of the investment horizon. **Keywords:** cumulative tracking difference, leveraged portfolio, benchmark outperformance, asset allocation, machine learning **JEL codes:** G11, G22 **AMS codes:** 91G, 35Q93, 68T07 ## 1 Introduction Since the global outbreak of COVID-19 in March 2020, there has been a significant increase in worldwide inflation. Specifically, from May 2021 to February 2023, the 12-month change in the CPI index in the U.S. has not dropped below 5% (Bureau of Labor Statistics, 2023). Prior to the pandemic, the U.S. economy experienced nearly four decades of low inflation. The abrupt shift from a long-term low-inflation environment to a high-inflation environment has created substantial uncertainty and volatility in the financial markets. In 2022, the technology-heavy NASDAQ stock index recorded a yearly return of -33.10% (NASDAQ, 2023). Equally concerning is the uncertainty around the duration of this round of high inflation. Some believe that the geopolitical tensions and the COVID-19 pandemic will overturn the trend of globalization and lead to global supply chain restructuring (Javorcik, 2020), which may result in a higher cost of production in the foreseeable future. Moreover, Ball et al. (2022) suggests that the future inflation rate may remain high if the unemployment rate remains low. In this article, we aim to answer the following question: with the goal of outperforming a passive benchmark, how should an active investor optimize the portfolio during high inflation? It is important to note that we do not attempt to make predictions about future inflation conditions. Instead, we approach the problem by formulating a multi-period optimal control problem that considers bounded leverage constraints and specific investment criteria.
This optimal control problem requires the specification of an appropriate objective function, realistic constraints, as well as stochastic models for returns of traded assets during high inflation regimes. Given the associated complexities and challenges, it is crucial to develop an efficient method capable of computing optimal solutions, accommodating flexible data sources, handling high-dimensional cases, and dealing with complex constraints. In this paper, we propose a framework to address these challenges. In Section 2, we assume that the real (inflation-adjusted) asset returns during a high-inflation regime follow stochastic processes and treat allocation decisions as the control of a dynamic system. Specifically, we formulate an optimal control problem to outperform a fixed-mix benchmark portfolio consistently throughout the investment horizon by minimizing a cumulative quadratic tracking difference (CD) objective. There is a large amount of extant literature on closed-form solutions for beating a stochastic benchmark under synthetic market assumptions (Browne, 1999, 2000; Tepla, 2001; Basak et al., 2006; Davis and Lleo, 2008; Lim and Wong, 2010; Oderda, 2015; Alekseev and Sokolov, 2016; Al-Aradi and Jaimungal, 2018). In these articles, the common objective function involves a log-utility function, e.g. the log wealth ratio. Under the log wealth ratio formulation, it is often hard to accommodate a fixed stream of cash injections, which is a common characteristic of open-ended funds. Forsyth et al. (2022) consider a scenario where a fixed amount of cash injections is allowed and provide a closed-form solution under a cumulative quadratic tracking difference (CD) objective, given the assumption that the stock price follows a double exponential jump-diffusion model and the bond price is deterministic. Since the assumption that the bond index price is stochastic and has jumps is more reasonable under a high-inflation scenario, we develop a closed-form solution for the case in which both the stock index and the bond index follow jump-diffusion models. The closed-form solution is derived, unfortunately, under unrealistic assumptions such as continuous rebalancing, infinite leverage, and continued trading when insolvent. A discrete-time multi-period asset allocation problem is generally solved using a dynamic programming (DP) based approach, which converts a multi-step optimization problem into multiple single-step optimization problems. However, van Staden et al. (2023) point out that dynamic programming-based approaches require the evaluation of a high-dimensional performance criterion to obtain the optimal control, which is comparatively low-dimensional. This means that solving the discrete-time problem numerically using dynamic programming-based techniques (for example, numerical solutions to the corresponding PIDE (Wang and Forsyth, 2010), or reinforcement learning (RL) techniques (Dixon et al., 2020; Park et al., 2020; Lucarelli and Borrotti, 2020; Gao et al., 2020)) is inefficient and computationally prone to known issues such as error amplification over recursions (Wang et al., 2020). Acknowledging these limitations, in Section 2.7, we propose to use a single neural network model to approximate the optimal control and solve the original optimal control problem directly via a single standard finite-dimensional optimization.
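To make the idea concrete, the following is a minimal sketch, assuming PyTorch and hypothetical layer sizes, of a single shallow control network that takes the state \((t,W(t),\hat{W}(t))\) as input and returns allocation fractions. A plain softmax output (long-only, fully invested) stands in here for the leverage-feasible output layer developed later in the paper, which is not reproduced in this sketch:

```python
import torch
import torch.nn as nn

class ControlNet(nn.Module):
    """Single shallow network p(t, W, W_hat) -> allocation fractions.

    Time enters as an ordinary input feature, so one network covers every
    rebalancing date; the softmax output is a long-only stand-in for the
    paper's leverage-feasible output layer.
    """
    def __init__(self, n_assets: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden),   # input features: (t, W(t), W_hat(t))
            nn.SiLU(),
            nn.Linear(hidden, n_assets),
        )

    def forward(self, t, w, w_hat):
        x = torch.stack([t, w, w_hat], dim=-1)
        return torch.softmax(self.net(x), dim=-1)  # weights sum to one

# The same network is queried at every rebalancing time:
policy = ControlNet()
p = policy(torch.tensor([0.5]), torch.tensor([1.2]), torch.tensor([1.1]))
print(p, p.sum())  # allocation fractions summing to one
```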
This direct approximation of the control exploits the lower dimensionality of the optimal control and bypasses the problem of solving high-dimensional conditional expectations associated with DP methods. We note that the idea of using a neural network to directly approximate the control process is also used in Han et al. (2016); Buehler et al. (2019); Tsang and Wong (2020); Reppen et al. (2022), in which a stacked neural network approach is proposed that includes individual sub-networks for each rebalancing step. In contrast, we propose a single shallow neural network that includes time as an input feature, and thus avoids the need for multiple sub-networks for each rebalancing step and greatly reduces the computational and modeling complexity. Furthermore, using time as a feature in the neural network approximation function is consistent with the observation that (under assumptions) the optimal control is naturally a continuous function of time, which we discuss in detail in Section 2.4. The idea of using a single neural network to approximate controls has also been explored in previous studies such as Li and Forsyth (2019) and Ni et al. (2022). These studies focus on portfolio optimization problems with long-only constraints. The neural network architecture proposed in these studies transforms the constrained portfolio optimization problems into unconstrained optimization problems, making them computationally easier to solve. However, these existing neural network architectures do not address the bounded leverage constraint, which limits the total long exposure in the portfolio. The limited literature on portfolio optimization with bounded leverage is likely due to the added complexity arising from the combination of long and short positions in the portfolio. A significant contribution of this article is the introduction of a novel leverage-feasible neural network (LFNN) model, which converts the leverage-constrained optimization problem into an unconstrained optimization problem. This model enables the incorporation of the bounded leverage constraint into the portfolio optimization framework. Additionally, in Section 2.8, we provide a mathematical proof that, under reasonable assumptions, the solution of the unconstrained optimization problem obtained using the LFNN model can approximate the optimal control of the original problem arbitrarily well. This mathematical justification validates the effectiveness and validity of the LFNN approach. In Section 3, we present a case study on active portfolio optimization in a high-inflation regime. To identify historical high-inflation periods, we employ a simple filtering method. Subsequently, we use bootstrap resampling to generate training and testing data sets, which consist of price paths for four assets: the equal-weighted and cap-weighted stock indexes, as well as the 30-day and 10-year U.S. treasury indexes. Using the leverage-feasible neural network (LFNN) model and the cumulative quadratic shortfall (CS) objective, we derive a leverage-constrained strategy for portfolio optimization. Our results demonstrate that the LFNN model produces a strategy that consistently outperforms the fixed-mix benchmark. Specifically, the strategy achieves a median (annualized) internal rate of return (IRR) that is more than 2% higher than the benchmark. Moreover, there is a probability of over 90% that the strategy will yield a higher terminal wealth compared to the benchmark.
These findings highlight the efficacy of the LFNN model in optimizing portfolios under high-inflation conditions. By incorporating the bounded leverage constraint and utilizing the CS objective, our approach enables investors to achieve superior performance and mitigate risks in a high-inflation environment. Our contributions are summarized below: 1. To gain intuition about the behavior of the optimal controls, we derive the closed-form solution under a jump-diffusion asset price model and other typical assumptions (such as continuous rebalancing) for a two-asset case. The closed-form solution provides important insights into the properties of the optimal control as well as meaningful interpretations of the neural network models that approximate the controls. 2. We propose to represent the control directly by a neural network representation so that the stochastic optimal control problem can be solved numerically under realistic constraints such as discrete rebalancing and limited leverage. Particularly, we propose the novel leverage-feasible neural network (LFNN) model to convert the original complex leverage-constrained optimization problem into an unconstrained optimization problem that can be solved easily by standard optimization methods. 3. We prove that, with a suitable choice of the hyperparameter of the LFNN model, the solution of the parameterized unconstrained optimization problem can approximate the optimal control arbitrarily well. This provides a mathematical justification for the validity of the LFNN approach. This is further supported by the numerical results that the performance of the LFNN model matches the clipped form of the closed-form solution on simulated data. 4. In the case study on active portfolio optimization in high-inflation, we apply the neural network method to bootstrap resampled asset returns with four underlying assets, including the equal-weighted/cap-weighted stock indexes, and the 30-day/10-year treasury bond indexes. The dynamic strategy from the learned LFNN model outperforms the fixed-mix benchmark strategy consistently throughout the investment horizon, with a 2% higher median (annualized) internal rate of return (IRR), and more than 90% probability of achieving a higher terminal wealth. Furthermore, the learned allocation strategy suggests that the equal-weighted stock index and short-term bonds are preferable investment assets during high-inflation regimes. ## 2 Outperform dynamic benchmark under bounded leverage ### Sovereign wealth funds and benchmark targets Instead of taking a passive approach, some of the largest sovereign wealth funds often adopt an active management philosophy and use passive portfolios as the benchmark to evaluate the efficiency of active management. For example, the Canadian Pension Plan (CPP) uses a base reference portfolio of 85% global equity and 15% Canadian government bonds (CPP Investments, 2022). Another example is the Government Pension Fund Global of Norway (also known as the oil fund) managed by Norges Bank Investment Management (NBIM), which uses a benchmark index consisting of 70% equity index and 30% bond index.1 The benchmark equity index is constructed based on the market capitalization for equities in the countries included in the benchmark. The benchmark index for bonds specifies a defined allocation between government bonds and corporate bonds, with a weight of 70 percent to government bonds and 30 percent to corporate bonds (Norges Bank, 2022). 
Footnote 1: The Ministry of Finance of Norway sets the allocation fraction between the equity index and the bond index. It gradually raised the weight for equities from 60% to 70% from 2015-2018. However, the excess return that these well-known sovereign wealth funds have achieved over their respective passive benchmark portfolios cannot be described as impressive. In the 2022 fiscal year report, CPP claims to have beaten the base reference portfolio by an annualized 80 bps after fees over the past 5 years (CPP Investments, 2022). On the other hand, NBIM reports a mere average of 27 bps of annual excess return over the benchmark over the last decade (see Table 2.1).

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline Year & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 & 2020 & 2021 & Average \\ \hline Excess return (\%) & 0.21 & 0.99 & -0.77 & 0.45 & 0.15 & 0.70 & -0.30 & 0.23 & 0.27 & 0.74 & 0.27 \\ \hline \end{tabular} \end{table} Table 2.1: Norges Bank Investment Management, relative return to the benchmark portfolio.

It is worth noting that these behemoth funds achieve seemingly meager results by hiring thousands of highly paid investment professionals and spending billions of dollars on day-to-day operations. For example, the CPP 2021 annual report (CPP Investments, 2021) lists personnel costs as CAD 938 million, for 1,936 employees, which translates to average costs of about CAD 500,000 per employee-year. The stark contrast between the enormous spending of sovereign wealth funds and the meager outperformance of the funds relative to the passive benchmark portfolios is probably provocative to taxpayers and pensioners who invest their hard-earned money in the funds. Equally concerning is the potential of a long, persistent inflation regime and the funds' ability to consistently beat the benchmark portfolio in such times. After all, both the CPP Investments and NBIM were established in the late 1990s, a decade after the last long inflation period ended in the mid-1980s. These concerns prompt us to ask the following question: in a presumed persistent high-inflation environment, can a fund manager find a simple asset allocation strategy that consistently beats the benchmark passive portfolios by a reasonable margin (preferably without spending billions of dollars in personnel costs)? ### Mathematical formulation In this section, we mathematically formulate the problem of outperforming a benchmark. Let \([t_{0}(=0),T]\) denote the investment horizon, and let \(W(t)\) denote the wealth (value) of the portfolio actively managed by the manager at time \(t\in[t_{0},T]\). We refer to the actively managed portfolio as the "active portfolio". Furthermore, let \(\hat{W}(t)\) denote the wealth of the benchmark portfolio at time \(t\in[t_{0},T]\). To ensure a fair assessment of the relative performance of the two portfolios, we assume both portfolios start with an equal initial wealth amount \(w_{0}>0\), i.e., \(W(t_{0})=\hat{W}(t_{0})=w_{0}>0\). Technically, the admissible sets of underlying assets for the active and passive portfolio need not be identical. However, for simplicity, we assume that both the active portfolio and the benchmark portfolio can allocate wealth to the same set of \(N_{a}\) assets. Let vector \(\mathbf{S}(t)=(S_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) denote the asset prices of the \(N_{a}\) underlying assets at time \(t\in[t_{0},T]\).
In addition, let vectors \(\mathbf{p}(t)=(p_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) and \(\mathbf{\hat{p}}(t)=(\hat{p}_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) denote the fractions of wealth allocated to the \(N_{a}\) underlying assets at time \(t\in[t_{0},T]\), respectively, for the active portfolio and the benchmark portfolio. From a control theory perspective, the allocation vector \(\mathbf{p}\) can be regarded as the control of the system, as it determines how the wealth of the active portfolio evolves over time. We will seek to find the optimal feedback control. In other words, the closed-loop controls (allocation decisions) are assumed to also depend on the value of the state variables (e.g. portfolio wealth). Therefore, we consider the control \(\mathbf{p}\) to be a function of time as well as the relevant state variables. In addition, the benchmark portfolio allocation \(\mathbf{\hat{p}}\) can be regarded as a known function of its state variables and time as well. Mathematically, \(\mathbf{p}(\mathbf{X}(t))=(p_{i}(\mathbf{X}(t)):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) and \(\mathbf{\hat{p}}(\hat{\mathbf{X}}(t))=(\hat{p}_{i}(\hat{\mathbf{X}}(t)):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\), where \(\mathbf{X}(t)\in\mathcal{X}\subseteq\mathbb{R}^{N_{x}}\) and \(\hat{\mathbf{X}}(t)\in\hat{\mathcal{X}}\subseteq\mathbb{R}^{N_{x}}\) are the state variables taken into account by the active portfolio and the benchmark portfolio respectively. Here we include \(t\) in \(\mathbf{X}(t)\) and \(\hat{\mathbf{X}}(t)\) for notational simplicity. In this article, we consider the particular problem of outperforming a passive portfolio, in which \(\mathbf{X}(t)=\big{(}t,W(t),\hat{W}(t)\big{)}^{\top}\). We assume that the active portfolio and the benchmark portfolio follow the same rebalancing schedule denoted by \(\mathcal{T}\subseteq[t_{0},T]\). In the case of discrete rebalancing, \(\mathcal{T}\subset[t_{0},T]\) is a discrete set. In the case of continuous rebalancing, \(\mathcal{T}=[t_{0},T]\), i.e., rebalancing happens continuously throughout the entire investment horizon. Additionally, we assume both portfolios follow the same deterministic sequence of cash injections, defined by the set \(\mathcal{C}=\{c(t),\;t\in\mathcal{T}_{c}\}\), where \(\mathcal{T}_{c}\subseteq[t_{0},T]\) is the schedule of the cash injections. When \(\mathcal{T}_{c}\) is a discrete injection schedule, \(c(t)\) is the amount of cash injection at \(t\). In the case of continuous cash injections, i.e., \(\mathcal{T}_{c}=[t_{0},T]\), \(c(t)\) is the rate of cash injection at \(t\), i.e., the total cash injection amount during \([t,t+dt]\) is \(c(t)dt\), where \(dt\) is an infinitesimal time interval. For simplicity, we assume that \(\mathcal{T}_{c}=\mathcal{T}\), so that the cash injection schedule is the same as the rebalancing schedule. At \(t\in\mathcal{T}\), \(W(t)\) and \(\hat{W}(t)\) always denote the wealth after the cash injection (if a cash injection event occurs at \(t\)). The active and benchmark strategies, respectively, are defined as the sequences of the allocation fractions following the rebalancing schedule. Mathematically, the active and benchmark strategies are defined by sets \[\mathcal{P}=\{\mathbf{p}(\mathbf{X}(t)),\;t\in\mathcal{T}\},\quad\text{and}\quad\hat{\mathcal{P}}=\{\mathbf{\hat{p}}(\hat{\mathbf{X}}(t)),\;t\in\mathcal{T}\}.
\tag{2.1}\] Denote \(\mathcal{A}\) as the set of admissible strategies, which reflects the investment constraints on the controls. We assume that admissibility can vary with state and let \(\{\mathcal{X}_{i}\colon\,i=1,\cdots,k\}\) be a partition of \(\mathcal{X}\) (the state variable space), i.e. \[\left\{\begin{aligned} &\bigcup_{i=1}^{k}\mathcal{X}_{i}=\mathcal{X},\\ &\mathcal{X}_{i}\bigcap\mathcal{X}_{j}=\varnothing,\forall 1\leq i<j\leq k,\end{aligned}\right. \tag{2.2}\] and \(\{\mathcal{Z}_{i}\subseteq\mathbb{R}^{N_{a}}\colon\,i=1,\cdots,k\}\) be the corresponding value sets of feasible controls such that any feasible control \(\mathbf{p}\) satisfies \[\mathbf{p}(\mathbf{x})\in\mathcal{Z}_{i},\forall\mathbf{x}\in\mathcal{X}_{i},\;\forall i\in\{1,\cdots,k\}. \tag{2.3}\] We say that strategy \(\mathcal{P}\) is an admissible strategy, i.e., \(\mathcal{P}\in\mathcal{A}\), if and only if \[\mathcal{P}=\Big{\{}\mathbf{p}(\mathbf{X}(t)),\;t\in\mathcal{T}\;\Big{|}\;\mathbf{p}(\mathbf{X}(t))\in\mathcal{Z}_{i},\;\text{if}\;\mathbf{X}(t)\in\mathcal{X}_{i}\Big{\}} \tag{2.4}\] Consider a discrete rebalancing schedule \(\mathcal{T}=\{t_{j},\ j=0,\cdots,N\}\) with \(N\) rebalancing events, where \(t_{0}<t_{1}<\cdots<t_{N}=T\).2 Then, the wealth evolution of the active portfolio and the benchmark portfolio can be described by the equations Footnote 2: Technically, at \(t=t_{0}\), the manager makes the initial asset allocation, rather than a “rebalancing” of the portfolio. However, despite the different purposes, a rebalancing of the portfolio is simply a new allocation of the portfolio wealth. Therefore, for notational simplicity, we include \(t_{0}\) in the rebalancing schedule. \[\left\{\begin{aligned} W(t_{j+1})&=\Big{(}1+\sum\limits_{i=1}^{N_{a}}p_{i}(\mathbf{X}(t_{j}))\cdot\frac{S_{i}(t_{j+1})-S_{i}(t_{j})}{S_{i}(t_{j})}\Big{)}W(t_{j})+c(t_{j+1}),\ j=0,\cdots,N-1,\\ \hat{W}(t_{j+1})&=\Big{(}1+\sum\limits_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\mathbf{X}}(t_{j}))\cdot\frac{S_{i}(t_{j+1})-S_{i}(t_{j})}{S_{i}(t_{j})}\Big{)}\hat{W}(t_{j})+c(t_{j+1}),\ j=0,\cdots,N-1.\end{aligned}\right. \tag{2.5}\] In the continuous rebalancing case, \(\mathcal{T}=[t_{0},T]\). Let \(dS_{i}(t)\) denote the instantaneous change in price for asset \(i\), \(i\in[1,\cdots,N_{a}]\).3 Then, at \(t\in\mathcal{T}=[t_{0},T]\), the wealth dynamics of the active portfolio and the benchmark portfolio, following their respective strategies \(\mathcal{P}\) and \(\hat{\mathcal{P}}\), can be described by the equations Footnote 3: For illustration purposes, here we assume \(S_{i}(t),i\in[1,\cdots,N_{a}]\) follow standard diffusion processes, i.e., no jumps. We will discuss the case with jumps in detail in Section 2.4. \[\left\{\begin{aligned} dW(t)&=\Big{(}\sum\limits_{i=1}^{N_{a}}p_{i}(\mathbf{X}(t))\cdot\frac{dS_{i}(t)}{S_{i}(t)}\Big{)}W(t)+c(t)dt,\\ d\hat{W}(t)&=\Big{(}\sum\limits_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\mathbf{X}}(t))\cdot\frac{dS_{i}(t)}{S_{i}(t)}\Big{)}\hat{W}(t)+c(t)dt.\end{aligned}\right. \tag{2.6}\] Let sets \(\mathcal{W}_{\mathcal{P}}=\{W(t),t\in\mathcal{T}\}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}=\{\hat{W}(t),t\in\mathcal{T}\}\) denote the wealth trajectories of the active portfolio and the benchmark portfolio following their respective investment strategies \(\mathcal{P}\) and \(\hat{\mathcal{P}}\).
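To illustrate the discrete wealth recursion (2.5), the following is a minimal sketch assuming NumPy and illustrative geometric-Brownian-motion asset dynamics (consistent with footnote 3, i.e., diffusion without jumps); the constant weights, drift, volatility, and injection values are assumptions, not calibrated quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, n_assets, w0, c = 120, 10.0, 2, 100.0, 1.0     # illustrative settings
dt = T / N
mu, sigma = np.array([0.06, 0.02]), np.array([0.15, 0.05])

p_active = np.array([0.8, 0.2])     # stand-in for the learned control p(X(t_j))
p_bench  = np.array([0.7, 0.3])     # fixed-mix benchmark weights

W, W_hat = w0, w0
for _ in range(N):
    z = rng.standard_normal(n_assets)
    # Per-period simple returns (S(t_{j+1}) - S(t_j)) / S(t_j) under GBM:
    ret = np.expm1((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    W     = (1.0 + p_active @ ret) * W     + c      # Eq. (2.5), active portfolio
    W_hat = (1.0 + p_bench  @ ret) * W_hat + c      # Eq. (2.5), benchmark portfolio
print(W, W_hat)
```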
Let \(F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\in\mathbb{R}\) denote an investment metric that measures the performances of the active and benchmark strategies, based on their respective wealth trajectories. In this article, we assume that the asset prices \(\mathbf{S}(t)\in\mathbb{R}^{N_{a}}\) are stochastic. Then, the wealth trajectories \(\mathcal{W}_{\mathcal{P}}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}\) are also stochastic, as is the performance metric \(F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\), which measures the relative performance of the active strategy with respect to the benchmark strategy. Therefore, when investment managers aim to optimize an investment metric, they typically evaluate the expectation of the random metric. Let \(\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}[F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})]\) denote the expectation of the value of the performance metric \(F\), with respect to a given initial wealth \(w_{0}=W(0)=\hat{W}(0)\) at time \(t_{0}=0\), following an admissible investment strategy \(\mathcal{P}\in\mathcal{A}\) and the benchmark investment strategy \(\hat{\mathcal{P}}\). Since the benchmark strategy is often pre-determined and known, we keep the benchmark strategy \(\hat{\mathcal{P}}\) implicit in this notation for simplicity. Subsequently, we use \(\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}[F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})]\), the expectation of a desired performance metric, as the _(investment) objective function_ and solve \[\text{(Optimization problem):}\quad\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\big{[}F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big{]}. \tag{2.7}\] ### Choice of investment objective The first step in designing a proper outperforming investment objective is to clarify the definition of _beating the benchmark_. In the context of measuring the performance of the portfolio against the benchmark, a common metric is the tracking error, which measures the volatility of the difference in returns, i.e., \[\text{Tracking error}=stdev(R-\hat{R}), \tag{2.8}\] where \(R\) denotes the return of the active portfolio, and \(\hat{R}\) denotes the return of the benchmark portfolio. Note that the returns of the active portfolio and the benchmark portfolio are determined from their respective wealth trajectories (\(\mathcal{W}_{\mathcal{P}}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}\)) that are evaluated under the same investment horizon and same market conditions. The tracking error measures the volatility of the difference in returns over the investment horizon. A criticism of the tracking error is that it only measures the variability in the difference in returns, but does not reflect the magnitude of the return difference itself. For example, an active strategy with a constant negative return difference over the investment horizon would yield a better tracking error than an active strategy with a positive but volatile return difference. For this reason, many prefer the tracking difference (Johnson et al., 2013; Hougan, 2015; Charteris and McCullough, 2020; Boyde, 2021), which is defined as the annualized difference between the active portfolio's cumulative return and the benchmark portfolio's cumulative return over a specific period.
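A small sketch, assuming NumPy, per-period simple returns, and an illustrative monthly annualization convention, makes the contrast between the two metrics concrete; as in the criticism above, a constant negative return difference yields a perfect (zero) tracking error while the tracking difference exposes the shortfall:

```python
import numpy as np

def tracking_error(r_active, r_bench, periods_per_year=12):
    """Annualized stdev of the return differences, Eq. (2.8)."""
    return np.std(r_active - r_bench, ddof=1) * np.sqrt(periods_per_year)

def tracking_difference(r_active, r_bench, periods_per_year=12):
    """Annualized difference of cumulative returns over the sample."""
    years = len(r_active) / periods_per_year
    cum_a = np.prod(1.0 + r_active) ** (1.0 / years) - 1.0
    cum_b = np.prod(1.0 + r_bench) ** (1.0 / years) - 1.0
    return cum_a - cum_b

rng = np.random.default_rng(1)
r_b = rng.normal(0.005, 0.03, 120)        # 10 years of monthly benchmark returns
r_a = r_b - 0.001                          # constantly underperforms ...
print(tracking_error(r_a, r_b))            # ... yet tracking error is exactly 0
print(tracking_difference(r_a, r_b))       # negative, exposing the shortfall
```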
Note that both tracking error and tracking difference metrics measure the return difference of the active portfolio over the benchmark portfolio. In other words, these metrics measure how closely the return of the active portfolio tracks the return of the benchmark portfolio. In practice, if an investment manager aims to achieve a certain annualized relative return target, e.g., \(\beta\), then the tracking difference metric may not be appropriate. To address this, van Staden et al. (2022) suggests the investment objective \[\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Big{[}\Big{(}W(T)-e^{\beta T}\hat{W}(T)\Big{)}^{2}\Big{]}, \tag{2.9}\] where \(W(T)\) and \(\hat{W}(T)\) are the respective terminal wealth of the active portfolio and the benchmark portfolio at terminal time \(T\), and \(\beta\) is the annualized relative return target. The optimal control problem (2.9) aims to produce an active strategy that minimizes the quadratic difference between \(W(T)\) and the terminal portfolio value target of \(e^{\beta T}\hat{W}(T)\). In other words, the optimal control policy tries to outperform the benchmark portfolio by a total factor of \(e^{\beta T}\) over the time horizon \([0,T]\), which is equivalent to an annualized relative return of \(\beta\). The quadratic term of the difference incentivizes the terminal wealth of the active portfolio \(W(T)\) to closely track the _elevated target_ \(e^{\beta T}\hat{W}(T)\). It is worth noting that the relative return target \(\beta\) can be intuitively interpreted as the manager's willingness to take more risk. As \(\beta\downarrow 0\), the optimal solution to problem (2.9) is simply to mimic the benchmark strategy. However, as \(\beta\) grows larger, the manager needs to take on more risk (for more return) in order to beat the benchmark portfolio by the relative return target rate. A criticism of the investment objective (2.9) is that it is symmetrical in terms of the outperformance and underperformance of \(W(T)\) relative to the elevated target \(e^{\beta T}\hat{W}(T)\). This is a common issue for volatility-based measures, such as the Sharpe ratio (Ziemba, 2005). In practice, investors may favor outperformance more than underperformance, while still aiming to track the elevated target closely. Acknowledging this, instead of (2.9), Ni et al. (2022) propose the following asymmetrical objective function, \[\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg{[}\Big{(}\min(W(T)-e^{\beta T}\hat{W}(T),0)\Big{)}^{2}+\max\big{(}W(T)-e^{\beta T}\hat{W}(T),0\big{)}\Bigg{]}. \tag{2.10}\] The investment objective (2.10) penalizes the outperformance (of \(W(T)\) relative to the elevated target \(e^{\beta T}\hat{W}(T)\)) linearly but the underperformance quadratically, thus encouraging the optimal policy to favor outperformance more than underperformance when necessary. Note that the use of objective function (2.10) does not permit closed-form solutions, and machine learning techniques are used (Ni et al., 2022) to compute the desired optimal strategy numerically. Another criticism of the investment objectives (2.9) and (2.10) is that both are only concerned with the relative performance at terminal time \(T\). In reality, investment managers are often required to report intermediate portfolio performance internally or externally at regular time intervals.
Instead of only achieving the annualized relative return target when portfolio performance is reviewed at the end of the investment horizon, managers may want to achieve the relative return target consistently throughout the entire investment horizon. In this case, managers may need an investment objective function that controls the deviation of the wealth of the portfolio from the target along a market scenario within the investment horizon. Consequently, van Staden et al. (2022) propose the following cumulative quadratic tracking difference (CD) objectives \[(CD(\beta)):\quad\left\{\begin{aligned} &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg{[}\int_{t_{0}}^{T}\Big{(}W(t)-e^{\beta t}\hat{W}(t)\Big{)}^{2}dt\Bigg{]},\;\text{if $\mathcal{T}=[t_{0},T]$,}\qquad&(2.11)\\ &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg{[}\sum_{t\in\mathcal{T}\cup\{T\}}\Big{(}W(t)-e^{\beta t}\hat{W}(t)\Big{)}^{2}\Bigg{]},\;\text{if $\mathcal{T}\subseteq[t_{0},T]$, $\mathcal{T}$ discrete.}\qquad&(2.12)\end{aligned}\right.\] Here, note that objective (2.11) is for the continuous rebalancing case, and (2.12) for discrete rebalancing. Both (2.11) and (2.12) measure the cumulative deviation of the wealth of the active portfolio relative to the target, along a market scenario within the entire investment horizon. Therefore, they measure the intermediate performance deviations effectively. However, similar to (2.9), (2.11) and (2.12) penalize outperformance and underperformance symmetrically. Therefore, we also consider the following cumulative quadratic shortfall (CS) objectives that only penalize the shortfall (underperformance with respect to the target) \[(CS(\beta)):\quad\left\{\begin{aligned} &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg{[}\int_{t_{0}}^{T}\Big{(}\min\big{(}W(t)-e^{\beta t}\hat{W}(t),0\big{)}\Big{)}^{2}dt+\epsilon W(T)\Bigg{]},\;\text{if $\mathcal{T}=[t_{0},T]$,}\qquad&(2.13)\\ &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg{[}\sum_{t\in\mathcal{T}\cup\{T\}}\Big{(}\min\big{(}W(t)-e^{\beta t}\hat{W}(t),0\big{)}\Big{)}^{2}+\epsilon W(T)\Bigg{]},\;\text{if $\mathcal{T}\subseteq[t_{0},T]$, $\mathcal{T}$ discrete.}\qquad&(2.14)\end{aligned}\right.\] Here (2.13) and (2.14) are the investment objectives for the continuous rebalancing and discrete rebalancing cases respectively. \(\epsilon\) is a small regularization parameter to ensure that problems (2.13) and (2.14) are well-posed. A more detailed comparison of the CD and CS objective functions can be found in Appendix H. ### Closed-form solution for CD problem In this section, we present the closed-form solution to the CD problem (2.11) under several assumptions. The closed-form solution not only provides us with insights for understanding the CD-optimal controls for problem (2.11), but also serves as the baseline for understanding the numerical results derived from the neural network method (discussed in later sections). Specifically, in this section, we consider the case that all asset prices follow jump-diffusion processes and portfolios with cash injections, which are aspects not frequently considered in the benchmark outperformance literature (Browne, 1999, 2000; Tepla, 2001; Basak et al., 2006; Yao et al., 2006; Zhao, 2007; Davis and Lleo, 2008; Lim and Wong, 2010b; Oderda, 2015; Zhang and Gao, 2017; Al-Aradi and Jaimungal, 2018b; Nicolosi et al., 2018; Bo et al., 2021).
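Before turning to the closed-form analysis, a minimal sketch, assuming NumPy and wealth paths sampled on the rebalancing grid, of Monte Carlo estimates of the discrete-time objectives (2.12) and (2.14); the array layout and the default \(\epsilon\) are illustrative choices:

```python
import numpy as np

def cd_loss(W, W_hat, t, beta):
    """Sample average of the discrete CD objective (2.12).

    W, W_hat: arrays of shape (n_paths, n_times); t: rebalancing times.
    """
    target = np.exp(beta * t)[None, :] * W_hat
    return np.mean(np.sum((W - target) ** 2, axis=1))

def cs_loss(W, W_hat, t, beta, eps=1e-6):
    """Sample average of the discrete CS objective (2.14):
    quadratic penalty on the shortfall only, plus eps * W(T)."""
    target = np.exp(beta * t)[None, :] * W_hat
    shortfall = np.minimum(W - target, 0.0)
    return np.mean(np.sum(shortfall ** 2, axis=1) + eps * W[:, -1])
```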
We first summarize the assumptions for obtaining the closed-form solution to the CD problem (2.11).

**Assumption 2.1**.: _(Two assets, no friction, unlimited leverage, trading in insolvency, constant rate of cash injection) The active portfolio and the benchmark portfolio have access to two underlying assets, a stock index and a constant-maturity bond index. Both portfolios are rebalanced continuously, i.e., \(\mathcal{T}=[t_{0},T]\). There are no transaction costs and no leverage limit. Furthermore, we assume that trading continues in the event of insolvency, i.e., when \(W(t)<0\) for some \(t\in[t_{0},T]\). Finally, we assume both portfolios receive constant cash injections at a rate of \(c\), i.e., during any time interval \([t,t+\Delta t]\subseteq[t_{0},T],\;\forall\Delta t>0\), both portfolios receive a cash injection of \(c\Delta t\)._

**Remark 2.1**.: (Remark on Assumption 2.1) For illustration purposes, we assume only two underlying assets. However, the technique for deriving the closed-form solution can be extended to multiple assets. We remark that unlimited leverage is unrealistic, and is only assumed for deriving the closed-form solution. In Appendix C.1, we discuss the technique for handling the leverage constraint in more detail. We also acknowledge that it is not realistic to assume that the manager can continue to trade and borrow when insolvent. However, this assumption is typically required for obtaining closed-form solutions; see Zhou and Li (2000); Li and Ng (2000) for the case of a multi-period mean-variance asset allocation problem. Appendix C.1 also contains more discussion on the impact of insolvency and its handling in the experiments.

**Assumption 2.2**.: _(Fixed-mix benchmark strategy) We assume that the benchmark strategy is a fixed-mix strategy (also known as a constant weight strategy): the benchmark always allocates a constant fraction \(\hat{\varrho}\in\mathbb{R}\) of the portfolio wealth to the stock index, and a constant fraction \(1-\hat{\varrho}\) to the bond index. Let \(\hat{\boldsymbol{\varrho}}=(\hat{\varrho},1-\hat{\varrho})^{\top}\in\mathbb{R}^{2}\) denote the vector of allocation fractions to the stock index and the bond index; the benchmark strategy is then the fixed-mix strategy defined by \(\hat{\mathcal{P}}=\{\hat{\boldsymbol{p}}(\hat{\boldsymbol{X}}(t))=\big(\hat{p}_{1}(\hat{\boldsymbol{X}}(t)),\hat{p}_{2}(\hat{\boldsymbol{X}}(t))\big)^{\top}\equiv\hat{\boldsymbol{\varrho}},\;\forall t\in\mathcal{T}\}\)._

Finally, we assume the stock index price and bond index price follow the jump-diffusion processes described below.

**Assumption 2.3**.: _(Jump-diffusion processes) Let \(S_{1}(t)\) and \(S_{2}(t)\) denote the deflated (inflation-adjusted) prices of the stock index and the bond index at time \(t\in[t_{0},T]\). We assume \(S_{i}(t),\;i\in\{1,2\},\) follow the jump-diffusion processes_ \[\frac{dS_{i}(t)}{S_{i}(t^{-})}=(\mu_{i}-\lambda_{i}\kappa_{i}+r_{i}\cdot\boldsymbol{1}_{S_{i}(t^{-})<0})dt+\sigma_{i}dZ_{i}(t)+d\Big(\sum_{k=1}^{\pi_{i}(t)}(\xi_{i}^{(k)}-1)\Big),\;i=1,2. \tag{2.15}\] _Here \(\mu_{i}\) are the (uncompensated) drift rates, \(\sigma_{i}\) are the diffusive volatilities, and \(Z_{1}(t),Z_{2}(t)\) are correlated Brownian motions with \(\mathbb{E}[dZ_{1}(t)\cdot dZ_{2}(t)]=\rho dt\). \(r_{i}\) are the borrowing premiums that apply when \(S_{i}(t^{-})\) is negative.4\(\pi_{i}(t)\) is a Poisson process with positive intensity parameter \(\lambda_{i}\). \(\{\xi_{i}^{(k)},\;k=1,\cdots,\pi_{i}(t)\}\) are i.i.d.
positive random variables that describe the jump multipliers associated with the assets. If a jump occurs for asset \(i\) at time \(t\in(t_{0},T]\), its underlying price jumps from \(S_{i}(t^{-})\) to \(S_{i}(t)=\xi_{i}\cdot S_{i}(t^{-})\).5\(\kappa_{i}=\mathbb{E}[\xi_{i}-1]\). \(\xi_{i}\) and \(\pi_{i}(t)\) are independent of each other. Moreover, \(\pi_{1}(t)\) and \(\pi_{2}(t)\) are assumed to be mutually independent.6_

Footnote 4: Intuitively, there is a premium for shorting an asset. In the closed-form solution derivation, we assume \(r_{i}=0\).

Footnote 5: For any functional \(\psi(t)\), we use the notation \(\psi(t^{-})\) as shorthand for the left-sided limit \(\psi(t^{-})=\lim_{\Delta t\downarrow 0}\psi(t-\Delta t)\).

Footnote 6: See Forsyth (2020) for a discussion of the empirical evidence for stock-bond jump independence. Also note that the assumption of independent jumps can be relaxed without technical difficulty if needed (Kou, 2002), but doing so significantly increases the complexity of the notation.

**Remark 2.2**.: (Motivation for jump-diffusion model) The assumption that the stock index price follows a jump-diffusion model is common in the financial mathematics literature (Merton, 1976; Kou, 2002). In addition, we follow the practitioner approach and directly model the returns of the constant-maturity bond index as a stochastic process; see, for example, Lin et al. (2015); MacMinn et al. (2014). As in MacMinn et al. (2014), we also assume that the constant-maturity bond index follows a jump-diffusion process. During high-inflation regimes, central banks often raise rates to curb inflation, which causes sudden jumps in bond prices (Lahaye et al., 2011). We believe this is an appropriate assumption for bonds in high-inflation regimes.

Under the jump-diffusion model (2.15), the wealth processes of the active portfolio and the benchmark portfolio are \[\left\{\begin{aligned}&dW(t)=\Big(\sum_{i=1}^{N_{a}}p_{i}(\boldsymbol{X}(t^{-}))\cdot\frac{dS_{i}(t)}{S_{i}(t^{-})}\Big)W(t^{-})+c\,dt,\\&d\hat{W}(t)=\Big(\sum_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\boldsymbol{X}}(t^{-}))\cdot\frac{dS_{i}(t)}{S_{i}(t^{-})}\Big)\hat{W}(t^{-})+c\,dt,\end{aligned}\right. \tag{2.16}\] where \(t\in(t_{0},T]\), \(W(t_{0})=\hat{W}(t_{0})=w_{0}\), and \(\boldsymbol{X}(t^{-})=(t,W(t^{-}),\hat{W}(t^{-}))^{\top}\in\mathbb{R}^{3}\) is the state variable vector.

We now derive the closed-form solution of the CD problem (2.11) under Assumptions 2.1, 2.2 and 2.3. We first present the verification theorem for the Hamilton-Jacobi-Bellman (HJB) partial integro-differential equation (PIDE) satisfied by the value function and the optimal control of the CD problem (2.11).

**Theorem 2.1**.: _(Verification theorem for CD problem (2.11)) For a fixed \(\beta>0\), assume there exist a function \(V(t,w,\hat{w},\hat{\varrho}):[t_{0},T]\times\mathbb{R}^{3}\mapsto\mathbb{R}\) and a function \(\boldsymbol{p}^{\star}(t,w,\hat{w},\hat{\varrho}):[t_{0},T]\times\mathbb{R}^{3}\mapsto\mathbb{R}^{2}\) that satisfy the following two properties for all \((t,w,\hat{w},\hat{\varrho})\in[t_{0},T]\times\mathbb{R}^{3}\): (i) \(V\) and \(\boldsymbol{p}^{\star}\) are sufficiently smooth and solve the HJB PIDE (2.17), and (ii) the function \(\boldsymbol{p}^{\star}(t,w,\hat{w},\hat{\varrho})\) attains the pointwise infimum in (2.17) below_ \[\left\{\begin{aligned}&\frac{\partial V}{\partial t}+(w-e^{\beta t}\hat{w})^{2}+\inf_{\boldsymbol{p}\in\mathbb{R}^{2}}H(\boldsymbol{p};t,w,\hat{w},\hat{\boldsymbol{\varrho}})=0,\\&V(T,w,\hat{w},\hat{\varrho})=0,\end{aligned}\right.
\tag{2.17}\] _where_ \[\begin{aligned}H(\boldsymbol{p};t,w,\hat{w},\hat{\boldsymbol{\varrho}})=&\;\big(w\cdot\boldsymbol{\alpha}^{\top}\boldsymbol{p}+c\big)\cdot\frac{\partial V}{\partial w}+\big(\hat{w}\cdot\boldsymbol{\alpha}^{\top}\hat{\boldsymbol{\varrho}}+c\big)\cdot\frac{\partial V}{\partial\hat{w}}-\Big(\sum_{i}\lambda_{i}\Big)\cdot V(t,w,\hat{w},\hat{\varrho})\\&+\frac{w^{2}}{2}\cdot\big(\boldsymbol{p}^{\top}\boldsymbol{\Sigma}\boldsymbol{p}\big)\cdot\frac{\partial^{2}V}{\partial w^{2}}+\frac{\hat{w}^{2}}{2}\cdot\big(\hat{\boldsymbol{\varrho}}^{\top}\boldsymbol{\Sigma}\hat{\boldsymbol{\varrho}}\big)\cdot\frac{\partial^{2}V}{\partial\hat{w}^{2}}+w\hat{w}\cdot\big(\boldsymbol{p}^{\top}\boldsymbol{\Sigma}\hat{\boldsymbol{\varrho}}\big)\cdot\frac{\partial^{2}V}{\partial w\,\partial\hat{w}}\\&+\sum_{i}\lambda_{i}\int_{0}^{\infty}V\big(t,w+p_{i}w(\xi-1),\hat{w}+\hat{p}_{i}\hat{w}(\xi-1),\hat{\varrho}\big)f_{\xi_{i}}(\xi)\,d\xi.\end{aligned} \tag{2.18}\] _Here \(\boldsymbol{\alpha}=(\mu_{1}-\lambda_{1}\kappa_{1},\mu_{2}-\lambda_{2}\kappa_{2})^{\top}\) is the vector of (compensated) drift rates, \(\boldsymbol{\Sigma}=\begin{bmatrix}\sigma_{1}^{2}&\rho\sigma_{1}\sigma_{2}\\ \rho\sigma_{1}\sigma_{2}&\sigma_{2}^{2}\end{bmatrix}\) is the covariance matrix, and \(f_{\xi_{i}}\) is the density function of \(\xi_{i}\)._

_Then, under Assumptions 2.1, 2.2 and 2.3, \(V\) is the value function and \(\boldsymbol{p}^{\star}\) is the optimal control for the CD problem (2.11)._

Proof.: See Appendix A.1.

Define the auxiliary variables \[\left\{\begin{aligned}&\kappa_{i}^{(2)}=\mathbb{E}\big[(\xi_{i}-1)^{2}\big],\quad(\sigma_{i}^{(2)})^{2}=(\sigma_{i})^{2}+\lambda_{i}\kappa_{i}^{(2)},\;i\in\{1,2\},\\&\vartheta=\sigma_{1}\sigma_{2}\rho-(\sigma_{2}^{(2)})^{2},\quad\gamma=(\sigma_{1}^{(2)})^{2}+(\sigma_{2}^{(2)})^{2}-2\sigma_{1}\sigma_{2}\rho,\\&\phi=\frac{(\mu_{1}-\mu_{2})(\mu_{1}-\mu_{2}+\vartheta)}{\gamma},\quad\eta=\frac{(\mu_{1}-\mu_{2}+\vartheta)^{2}}{\gamma}-(\sigma_{2}^{(2)})^{2};\end{aligned}\right. \tag{2.19}\] then we have the following proposition regarding the optimal control of problem (2.11).

**Proposition 2.1**.: _(CD-optimal control) Suppose Assumptions 2.1, 2.2 and 2.3 hold. Then the optimal fraction of the wealth of the active portfolio to be invested in the stock index for the \(CD(\beta)\) problem (2.11) is given by \(p^{*}(t,w,\hat{w},\hat{\varrho})\in\mathbb{R}\), where_ \[p^{*}(t,w,\hat{w},\hat{\varrho})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2})}{\gamma}h(t;\beta,c)+\frac{(\mu_{1}-\mu_{2}+\vartheta)}{\gamma}\Big(g(t;\beta)\hat{W}(t)-W^{*}(t)\Big)+g(t;\beta)\hat{W}(t)\cdot\hat{\varrho}\Bigg]. \tag{2.20}\] _Here \(W^{*}(t)\) denotes the wealth process of the active portfolio from (2.6) following the control \(\boldsymbol{p}^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho})=\Big(p^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho}),1-p^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho})\Big)^{\top}\), where \(p^{*}\) is the optimal stock allocation described in (2.20), and \(\hat{W}(t)\) is the wealth process of the benchmark portfolio following the fixed-mix strategy described in Assumption 2.2.
Here, \(h\) and \(g\) are deterministic functions of time,_ \[g(t;\beta)=-\frac{D(t;\beta)}{2A(t)},\qquad h(t;\beta,c)=-\frac{B(t;\beta,c)}{2A(t)}, \tag{2.21}\] _where \(A\), \(D\) and \(B\) are deterministic functions defined as_ \[A(t)=\frac{e^{(2\mu_{2}-\eta)(T-t)}-1}{(2\mu_{2}-\eta)},\qquad D(t;\beta)=2e^{\beta T}\Big(\frac{e^{-\beta(T-t)}-e^{(2\mu_{2}-\eta)(T-t)}}{2\mu_{2}-\eta+\beta}\Big), \tag{2.22}\] _and_ \[\begin{aligned}B(t;\beta,c)&=\frac{2c}{2\mu_{2}-\eta}\Big(\frac{e^{(2\mu_{2}-\eta)(T-t)}-e^{(\mu_{2}-\phi)(T-t)}}{\mu_{2}+\phi-\eta}-\frac{e^{(\mu_{2}-\phi)(T-t)}-1}{\mu_{2}-\phi}\Big)\\&\quad+\frac{2ce^{\beta T}}{2\mu_{2}-\eta+\beta}\Big(\frac{e^{(\mu_{2}-\phi)(T-t)}-e^{-\beta(T-t)}}{\mu_{2}-\phi+\beta}-\frac{e^{(2\mu_{2}-\eta)(T-t)}-e^{(\mu_{2}-\phi)(T-t)}}{\mu_{2}+\phi-\eta}\Big).\end{aligned} \tag{2.23}\]

Proof.: See Appendix A.2.

#### 2.4.1 Insights from CD-optimal control

The CD-optimal control (2.20) provides insights into the behaviour of the optimal allocation policy. For ease of exposition, we first establish the following properties of \(g(t;\beta)\) and \(h(t;\beta,c)\).

**Corollary 2.1**.: _(Properties of \(g(t;\beta)\)) The function \(g(t;\beta)\) defined in (2.21) has the following properties for \(t\in[t_{0},T]\) and \(\beta>0\):_

1. _For fixed_ \(t\in[t_{0},T]\)_,_ \(g(t;\beta)\) _is strictly increasing in_ \(\beta\in(0,\infty)\)_._
2. _For fixed_ \(\beta>0\)_,_ \(g(t;\beta)\) _is strictly increasing in_ \(t\in[t_{0},T]\)_._
3. \(g(t;\beta)\) _admits the following bounds:_ \[e^{\beta t}\leq g(t;\beta)\leq e^{\beta T}.\] (2.24)

Proof.: See Appendix A.3.

**Corollary 2.2**.: _(Properties of \(h(t;\beta,c)\)) The function \(h(t;\beta,c)\) defined in (2.21) has the following properties for \(t\in[t_{0},T]\), \(\beta>0\) and \(c\geq 0\):_

1. _For fixed_ \(t\in[t_{0},T]\) _and_ \(c>0\)_,_ \(h(t;\beta,c)\) _is strictly increasing in_ \(\beta\in(0,\infty)\)_._
2. \(h(t;\beta,c)\geq 0\)_,_ \(\forall(t,\beta,c)\in[t_{0},T]\times(0,\infty)\times[0,\infty)\)_._
3. _For fixed_ \(t\in[t_{0},T]\) _and_ \(\beta>0\)_,_ \(h(t;\beta,c)\) _is strictly increasing in_ \(c\in[0,\infty)\)_, with_ \(h(t;\beta,0)\equiv 0\)_. Moreover,_ \(h(t;\beta,c)\) _is proportional to_ \(c\)_._

Proof.: See Appendix A.3.

In order to analyze the closed-form solution, we make the following assumption.

**Assumption 2.4**.: _(Drift rates of the two assets) We assume that the drift rates of the stock index and the bond index, \(\mu_{1}\) and \(\mu_{2}\), satisfy the following properties,_ \[\mu_{1}-\mu_{2}>0,\quad\mu_{1}-\mu_{2}+\vartheta>0, \tag{2.25}\] _where \(\vartheta\) is defined in (2.19)._

**Remark 2.3**.: (Remark on drift rate assumptions) The first inequality \(\mu_{1}-\mu_{2}>0\) indicates that the stock index has a higher drift rate than the bond index, which is a standard assumption.7 The second inequality \(\mu_{1}-\mu_{2}+\vartheta>0\) is also practically reasonable: \(\vartheta\) is a variance-scale term that is usually much smaller than the drift rates. In reality, it is unlikely that \(\mu_{1}-\mu_{2}>0\) but \(\mu_{1}-\mu_{2}+\vartheta\leq 0\).8

Footnote 7: In fact, in this two-asset case, this assumption does not cause loss of generality.

Footnote 8: For reference, based on the jump-diffusion model (2.15) calibrated to historical high-inflation regimes, \(\mu_{1}=0.051,\mu_{2}=-0.014,\vartheta=-0.00024\), and thus both inequalities are satisfied.

Now we proceed to summarize the insights from the CD-optimal control (2.20).
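As a concrete illustration before drawing those insights, the deterministic functions (2.21)-(2.23) and the optimal allocation (2.20) are straightforward to evaluate numerically. In the minimal sketch below, \(\mu_{1}\) and \(\mu_{2}\) follow footnote 8, while the volatilities, correlation, jump intensities, and jump moments are our own illustrative assumptions rather than calibrated values.

```python
import numpy as np

# mu_1, mu_2 follow footnote 8; all remaining parameter values are assumed.
mu1, mu2 = 0.051, -0.014
sigma1, sigma2, rho = 0.15, 0.06, 0.1        # assumed volatilities / correlation
lam1, lam2 = 0.1, 0.1                        # assumed jump intensities lambda_i
k2_1, k2_2 = 0.01, 0.002                     # assumed kappa_i^(2) = E[(xi_i - 1)^2]
T, beta, c = 10.0, 0.02, 10.0

# Auxiliary variables (2.19)
s1sq = sigma1**2 + lam1 * k2_1               # (sigma_1^(2))^2
s2sq = sigma2**2 + lam2 * k2_2               # (sigma_2^(2))^2
vtheta = sigma1 * sigma2 * rho - s2sq
gamma = s1sq + s2sq - 2.0 * sigma1 * sigma2 * rho
phi = (mu1 - mu2) * (mu1 - mu2 + vtheta) / gamma
eta = (mu1 - mu2 + vtheta) ** 2 / gamma - s2sq
a, b = 2.0 * mu2 - eta, mu2 - phi

def A(t):                                    # eq. (2.22)
    return (np.exp(a * (T - t)) - 1.0) / a

def D(t):                                    # eq. (2.22)
    return 2.0 * np.exp(beta * T) * (np.exp(-beta * (T - t)) - np.exp(a * (T - t))) / (a + beta)

def B(t):                                    # eq. (2.23)
    term1 = (np.exp(a * (T - t)) - np.exp(b * (T - t))) / (mu2 + phi - eta) \
            - (np.exp(b * (T - t)) - 1.0) / (mu2 - phi)
    term2 = (np.exp(b * (T - t)) - np.exp(-beta * (T - t))) / (b + beta) \
            - (np.exp(a * (T - t)) - np.exp(b * (T - t))) / (mu2 + phi - eta)
    return 2.0 * c / a * term1 + 2.0 * c * np.exp(beta * T) / (a + beta) * term2

def g(t):                                    # eq. (2.21)
    return -D(t) / (2.0 * A(t))

def h(t):                                    # eq. (2.21)
    return -B(t) / (2.0 * A(t))

def p_star(t, W, W_hat, rho_hat):            # CD-optimal stock fraction, eq. (2.20)
    return ((mu1 - mu2) / gamma * h(t)
            + (mu1 - mu2 + vtheta) / gamma * (g(t) * W_hat - W)
            + g(t) * W_hat * rho_hat) / W

# g respects the bounds (2.24): e^{beta t} <= g(t) <= e^{beta T}.
print(g(0.0), h(0.0), p_star(0.0, 100.0, 100.0, 0.7))
```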
The first obvious observation is that the CD-optimal control is a contrarian strategy. This can be seen from the fact that, fixing time and the wealth of the benchmark portfolio \(\hat{W}(t)\), the allocation to the riskier stock index decreases when the wealth of the active portfolio \(W^{*}(t)\) increases. If we take a deeper look at (2.20), we can see that the CD-optimal control consists of two components: a cash injection component \(p^{*}_{cash}\) and a tracking component \(p^{*}_{track}\). Mathematically, \[p^{*}(t,w,\hat{w},\hat{\varrho})=p^{*}_{cash}(t,w,\hat{w})+p^{*}_{track}(t,w,\hat{w},\hat{\varrho}), \tag{2.26}\] where \[\left\{\begin{aligned}&p^{*}_{cash}(t,w,\hat{w})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2})}{\gamma}h(t;\beta,c)\Bigg],\\&p^{*}_{track}(t,w,\hat{w},\hat{\varrho})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2}+\vartheta)}{\gamma}\Big(g(t;\beta)\hat{W}(t)-W^{*}(t)\Big)+g(t;\beta)\hat{W}(t)\cdot\hat{\varrho}\Bigg].\end{aligned}\right. \tag{2.27}\]

Based on Assumption 2.4 and Corollary 2.2, the cash injection component \(p^{*}_{cash}\) is always non-negative. Furthermore, from Corollary 2.2, we know that the stock allocation from the cash injection component is proportional to the cash injection rate \(c\). In addition, as \(t\uparrow T\), \(h(t;\beta,c)\) increases, and thus the stock allocation from the cash injection component also increases with time. On the other hand, the tracking component \(p^{*}_{track}\) does not depend on the cash injection rate \(c\), but only concerns the tracking performance of the active portfolio. One key finding is that \[\left\{\begin{aligned}p^{*}_{track}(t,w,\hat{w},\hat{\varrho})\geq\hat{\varrho},&\text{ if }W^{*}(t)\leq g(t;\beta)\hat{W}(t),\\ p^{*}_{track}(t,w,\hat{w},\hat{\varrho})<\hat{\varrho},&\text{ if }W^{*}(t)>g(t;\beta)\hat{W}(t).\end{aligned}\right. \tag{2.28}\]

This means that the CD-optimal control uses \(g(t;\beta)\hat{W}(t)\) as the true target for the active portfolio when deciding whether the active portfolio should take more or less risk than the benchmark portfolio. This is a key observation, since the CD objective function (2.11) measures the difference between \(W(t)\) and \(e^{\beta t}\hat{W}(t)\). One might naively think that the optimal strategy would be based on the deviation from \(e^{\beta t}\hat{W}(t)\). In contrast, from Corollary 2.1, we know that the true target \(g(t;\beta)\hat{W}(t)\) used for decision making is greater than \(e^{\beta t}\hat{W}(t)\). The insight from this observation is that if the manager wants to track an elevated target \(e^{\beta t}\hat{W}(t)\), she should aim higher than the target itself.

### Leverage constraints

In practice, large pension funds such as the Canada Pension Plan often have exposures to alternative assets, such as private equity (CPP Investments, 2022). Unfortunately, due to practical limitations, we only have access to long-term historical returns of publicly traded stock indexes and treasury bond indexes. Although controversial, some literature suggests that returns on private equity can be replicated using a leveraged small-cap stock index (Phalippou, 2014; L'Her et al., 2016). Following this line of argument, we allow managers to take leverage to invest in public stock index funds, roughly mimicking pension fund portfolios with some exposure to private equity. Essentially, taking leverage to invest in stocks requires borrowing additional capital, which incurs borrowing costs.
For simplicity, we assume the borrowing activity is represented by shorting some bond assets within the portfolio, and thus the manager is required to pay the cost of shorting these shortable assets. We assume that the cost consists of two parts: the returns of the shorted assets, and an additional borrowing premium (whose rate depends on the specific investment scenario), so that the total borrowing cost reflects the interest rate environment (the return of the shorted bond assets) and is reasonably estimated (via the added borrowing premium).

Following the notation from Section 2.2, we assume that the \(N_{a}\) underlying assets are divided into two groups. The first group of \(N_{l}\) assets are long-only assets, which we index by the set \(\{1,\cdots,N_{l}\}\). The second group of \(N_{a}-N_{l}\) assets are shortable assets that can be shorted to create leverage, indexed by the set \(\{N_{l}+1,\cdots,N_{a}\}\). Recall the notation \(p_{i}(\boldsymbol{X}(t))\) for the allocation fraction of asset \(i\) at time \(t\). For long-only assets, the wealth fraction must be non-negative, hence \[\text{(Long-only constraint):}\quad p_{i}(\boldsymbol{X}(t))\geq 0,\;i\in\{1,\cdots,N_{l}\},\;t\in\mathcal{T}. \tag{2.29}\] Furthermore, the total allocation fraction over all assets should be one. Therefore, the following summation constraint needs to be satisfied \[\text{(Summation constraint):}\quad\sum_{i=1}^{N_{a}}p_{i}(\boldsymbol{X}(t))=1,\;t\in\mathcal{T}. \tag{2.30}\] In practice, due to borrowing costs (from taking leverage) and risk management mandates, the use of leverage is often constrained. For this reason, we cap the maximum leverage by introducing a constant \(p_{max}\) that bounds the total allocation fraction for long-only assets. Therefore, \[\text{(Maximum leverage constraint):}\quad\sum_{i=1}^{N_{l}}p_{i}(\boldsymbol{X}(t))\leq p_{max},\;t\in\mathcal{T}. \tag{2.31}\] Note that no leverage is permitted if \(p_{max}=1\). Finally, we make the following assumption on the scenario of shorting multiple shortable assets.

**Assumption 2.5**.: _(Simultaneous shorting) If one shortable asset has a negative weight, the other shortable assets must have nonpositive weights. Mathematically, this assumption can be expressed as_ \[\text{(Simultaneous shorting constraint):}\left\{\begin{aligned}&p_{i}(\boldsymbol{X}(t))\leq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}p_{i}(\boldsymbol{X}(t))>1,\;t\in\mathcal{T},\\&p_{i}(\boldsymbol{X}(t))\geq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}p_{i}(\boldsymbol{X}(t))\leq 1,\;t\in\mathcal{T}.\end{aligned}\right. \tag{2.32}\]

**Remark 2.4**.: (Remark on Assumption 2.5) This assumption avoids ambiguity between the long-only assets and the shortable assets in scenarios that involve leverage. When leveraging occurs, all shortable assets are treated as one group that provides the liquidity needed to achieve the desired leverage level.

The above constraints consider scenarios with non-negative portfolio wealth.
Before we proceed to the handling of negative portfolio wealth scenarios, we first define the following partition of the state space \(\mathcal{X}\).

**Definition 2.1**.: _(Partition of state space) We define \(\{\mathcal{X}_{1},\mathcal{X}_{2}\}\) to be a partition of the state space \(\mathcal{X}\), such that_ \[\left\{\begin{aligned}&\mathcal{X}_{1}=\big\{x=(t,W,\hat{W})^{\top}\in\mathcal{X}\;\big|\;W\geq 0\big\},\\&\mathcal{X}_{2}=\big\{x=(t,W,\hat{W})^{\top}\in\mathcal{X}\;\big|\;W<0\big\}.\end{aligned}\right. \tag{2.33}\]

Intuitively, we separate the state space \(\mathcal{X}\) into two regions according to the wealth of the active portfolio, one with non-negative wealth and the other with negative wealth. We then present the following assumption concerning the negative wealth (insolvency) scenarios.

**Assumption 2.6**.: _(No trading in insolvency) If the wealth of the active portfolio is negative, then all long-only asset positions should be liquidated, and all the debt (i.e., the negative wealth) is allocated to the least risky shortable asset (in terms of volatility). In particular, without loss of generality, we assume all debt is allocated to asset \(N_{l}+1\). Let \(\mathbf{e}_{i}=(0,\cdots,0,1,0,\cdots,0)^{\top}\in\mathbb{R}^{N_{a}}\) denote the standard basis vector whose \(i\)-th entry is 1 and all other entries are 0. Then, we can formulate this assumption as follows._ \[\text{(No trading in insolvency):}\quad\boldsymbol{p}(\boldsymbol{X}(t))=\mathbf{e}_{N_{l}+1},\quad\text{if}\;\boldsymbol{X}(t)\in\mathcal{X}_{2}. \tag{2.34}\]

**Remark 2.5**.: (Remark on Assumption 2.6) Essentially, when the portfolio wealth is negative, we assume the debt is allocated to a short-term bond asset and accumulates over time.

Summarizing the constraints, we define the two sets \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\):
\[\mathcal{Z}_{1}=\left\{\boldsymbol{z}\in\mathbb{R}^{N_{a}}\;\middle|\;\begin{aligned}&z_{i}\geq 0,\;\forall i\in\{1,\cdots,N_{l}\},\\&\sum_{i=1}^{N_{a}}z_{i}=1,\quad\sum_{i=1}^{N_{l}}z_{i}\leq p_{max},\\&z_{i}\leq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\textstyle\sum_{i=1}^{N_{l}}z_{i}>1,\\&z_{i}\geq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\textstyle\sum_{i=1}^{N_{l}}z_{i}\leq 1\end{aligned}\right\}, \tag{2.35}\]
\[\mathcal{Z}_{2}=\{\mathbf{e}_{N_{l}+1}\}. \tag{2.36}\]
Then, the corresponding space of feasible control vector values \(\mathcal{Z}\) and the admissible strategy set \(\mathcal{A}\) are
\[(\text{Feasible values}):\quad\mathcal{Z}=\mathcal{Z}_{1}\cup\mathcal{Z}_{2}, \tag{2.37}\]
\[(\text{Admissible set}):\quad\mathcal{A}=\left\{\mathcal{P}=\{\boldsymbol{p}(\boldsymbol{X}(t)),\;t\in\mathcal{T}\}\;\middle|\;\begin{aligned}&\boldsymbol{p}(\boldsymbol{X}(t))\in\mathcal{Z}_{1},\;\text{if}\;\boldsymbol{X}(t)\in\mathcal{X}_{1},\\&\boldsymbol{p}(\boldsymbol{X}(t))\in\mathcal{Z}_{2},\;\text{if}\;\boldsymbol{X}(t)\in\mathcal{X}_{2}\end{aligned}\right\}. \tag{2.38}\]
It is not obvious how the conditional constraints in (2.37) and (2.38) can be formulated into a standard constrained optimization problem.
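As a sanity check on these definitions, the short sketch below tests whether a candidate allocation vector lies in \(\mathcal{Z}\) of (2.37); the function name, tolerance handling, and example vectors are our own illustrative choices.

```python
import numpy as np

def is_feasible(p, N_l, p_max, wealth, tol=1e-9):
    """Check membership of allocation vector p in Z of (2.37).

    p: allocation fractions over N_a assets, the first N_l being long-only;
    wealth: active portfolio wealth, with negative wealth triggering Z_2 (2.36).
    """
    p = np.asarray(p, dtype=float)
    e_debt = np.zeros_like(p)
    e_debt[N_l] = 1.0                                   # e_{N_l + 1} (0-indexed)
    if wealth < 0:                                      # insolvency: (2.34)
        return bool(np.allclose(p, e_debt))
    long_sum = p[:N_l].sum()
    if np.any(p[:N_l] < -tol):                          # long-only constraint (2.29)
        return False
    if abs(p.sum() - 1.0) > tol:                        # summation constraint (2.30)
        return False
    if long_sum > p_max + tol:                          # maximum leverage (2.31)
        return False
    if long_sum > 1.0 + tol and np.any(p[N_l:] > tol):  # simultaneous shorting (2.32)
        return False
    if long_sum <= 1.0 + tol and np.any(p[N_l:] < -tol):
        return False
    return True

print(is_feasible([0.9, 0.4, -0.3], N_l=2, p_max=1.3, wealth=100.0))  # True: leveraged
print(is_feasible([0.9, 0.6, -0.5], N_l=2, p_max=1.3, wealth=100.0))  # False: exceeds p_max
```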
To discover optimal strategies for high-inflation regimes, the capability to solve the general investment problem (2.7) for different objectives and under realistic constraints, such as discrete rebalancing and limited leverage (i.e., the leverage constraints discussed in Section 2.5), is critical. Therefore, we need computationally efficient methods to solve these problems numerically, particularly in high-dimensional cases.

Solving a discrete-time multi-period optimal asset allocation problem often relies on dynamic programming (DP). For example, Dixon et al. (2020); Park et al. (2020); Lucarelli and Borrotti (2020); Gao et al. (2020) use Q-learning algorithms to solve the discrete-time multi-period optimal allocation problem. In general, if there are \(N_{a}\) assets to invest in, then the use of Q-learning involves the approximation of an action-value function (the "Q" function), which is a \((2N_{a}+1)\)-dimensional function (van Staden et al., 2023) representing the conditional expectation of the cumulative rewards at an intermediate state.9 Meanwhile, the optimal control is a mapping from the state space to the allocation fractions of the assets. If the state space is relatively low-dimensional,10 then the DP-based approaches are potentially unnecessarily high-dimensional.

Footnote 9: Intuitively, the dimensionality comes from tracking the allocation in the \(N_{a}\) assets for both the active portfolio and the benchmark portfolio when evaluating the changes in wealth of both portfolios over one period in the action-value function.

Footnote 10: For example, the state space of problem (2.11) under the assumption of a fixed-mix benchmark strategy is a vector in \(\mathbb{R}^{3}\).

Instead of using dynamic programming methods, Han et al. (2016); Buehler et al. (2019); Tsang and Wong (2020); Reppen et al. (2022) propose to approximate the optimal control function by neural network functions directly. In particular, they propose a stacked neural network approach that essentially uses a sub-network to approximate the control at every rebalancing step. Therefore, the number of neural networks required grows linearly with the number of rebalancing periods. Note that, in the taxonomy of Powell (2023), this method is termed Policy Function Approximation (PFA). In this article, we follow the lines of Li and Forsyth (2019); Ni et al. (2022) and propose a single neural network to approximate the optimal control function. The direct representation of the control function avoids the high-dimensional approximation required in DP-based methods. In addition, we consider time \(t\) as an input feature (along with the wealth of the active portfolio and the benchmark portfolio), thereby avoiding the need for multiple sub-networks as in the stacked neural network approach.

The numerical solution to the general problem (2.7) requires solving for the feedback control \(\boldsymbol{p}\). We approximate the control function \(\boldsymbol{p}\) by a neural network function \(f(\boldsymbol{X}(t);\boldsymbol{\theta}):\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\), where \(\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\) represents the parameters of the neural network (i.e., weights and biases). In other words, \[\boldsymbol{p}(\boldsymbol{X}(t))\simeq f(\boldsymbol{X}(t);\boldsymbol{\theta})\equiv f(\cdot;\boldsymbol{\theta}). \tag{2.39}\] Then, the optimization problem (2.7) can be converted into the following optimization problem.
\[(\text{Parameterized optimization problem}):\quad\inf_{\boldsymbol{\theta}\in\mathcal{Z}_{\boldsymbol{\theta}}}\mathbb{E}_{f(\cdot;\boldsymbol{\theta})}^{(t_{0},w_{0})}\big[F(\mathcal{W}_{\boldsymbol{\theta}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big]. \tag{2.40}\] Here \(\mathcal{W}_{\boldsymbol{\theta}}\) is the wealth trajectory of the active portfolio following the neural network approximation function parameterized by \(\boldsymbol{\theta}\). \(\mathcal{Z}_{\boldsymbol{\theta}}\subseteq\mathbb{R}^{N_{\boldsymbol{\theta}}}\) is the feasibility domain of the parameter \(\boldsymbol{\theta}\), translated from the constraints of the original problem, e.g., (2.37) and (2.38). Mathematically, \[\mathcal{Z}_{\boldsymbol{\theta}}=\left\{\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\;\middle|\;\begin{aligned}&f(\boldsymbol{X};\boldsymbol{\theta})\in\mathcal{Z}_{1},\;\text{if}\;\boldsymbol{X}\in\mathcal{X}_{1},\\&f(\boldsymbol{X};\boldsymbol{\theta})\in\mathcal{Z}_{2},\;\text{if}\;\boldsymbol{X}\in\mathcal{X}_{2}\end{aligned}\right\}. \tag{2.41}\] Here \(\mathcal{Z}_{1},\mathcal{Z}_{2}\) are defined in (2.35), (2.36), and \(\mathcal{X}_{1},\mathcal{X}_{2}\) are the partitions of the state space \(\mathcal{X}\) defined in Definition 2.1. Note that \(\mathcal{Z}_{\boldsymbol{\theta}}\) depends on the structure of the neural network function \(f(\cdot;\boldsymbol{\theta})\). Intuitively, \(\mathcal{Z}_{\boldsymbol{\theta}}\) is the preimage of \(\mathcal{Z}\): for any \(\boldsymbol{\theta}\in\mathcal{Z}_{\boldsymbol{\theta}}\), \(f(\cdot;\boldsymbol{\theta})\) takes values in \(\mathcal{Z}\). A specific neural network model design may result in \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\), which means (2.40) becomes an unconstrained optimization problem.

For long-only investment problems, the only constraints are the long-only constraint (2.29) and the summation constraint (2.30). Previous work has proposed a neural network architecture with a softmax activation function at the last layer so that the output (the vector of allocation fractions) automatically satisfies the two constraints, and thus \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\) and problem (2.40) becomes an unconstrained optimization problem (see, e.g., Li and Forsyth (2019); Ni et al. (2022)). However, as discussed in Section 2.5, we consider the more complicated case where leverage and shorting are allowed. The problem thus involves more constraints than the long-only case, and we therefore design a new model architecture to convert the constrained optimization problem into an unconstrained problem. We discuss the design of the _leverage-feasible neural network_ (LFNN) model, and how it achieves this goal, in the next section.

It is worth noting that, for the particular CD problem (2.12) and CS problem (2.14), our technique may at a high level be formulated to appear similar to policy gradient methods in the RL literature (Silver et al., 2014). Examples of policy gradient methods applied to financial problems include Coache and Jaimungal (2021), in which the authors develop an actor-critic algorithm for portfolio optimization problems with convex risk measures. However, there are two main differences between our proposed methodology and policy gradient algorithms.
Firstly, we assume that the randomness of the environment (i.e., asset returns) over the entire investment horizon is readily available upfront (e.g., through calibration of parametric models or resampling of historical data), which is a common assumption adopted by practitioners when backtesting investment strategies. In contrast, the RL literature often considers an unknown environment, and the algorithms focus on the agent's exploration of, and learning from, that unknown environment, and thus may be unnecessarily complicated for our use case. Secondly, our proposed methodology is not limited to the cumulative reward framework of RL, and is thus more universal and suitable for problems in which the investment objective cannot easily be expressed as a cumulative reward.

### Leverage-feasible neural network (LFNN)

In this section, we propose the leverage-feasible neural network (LFNN) model, which yields \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\) for the leverage constraints encoded in (2.37)-(2.38), and converts the constrained optimization problem (2.40) into an unconstrained problem. Let the vector \(\boldsymbol{x}=(t,W(t),\hat{W}(t))^{\top}\in\mathcal{X}\) be the feature (input) vector. We first define a standard fully-connected feedforward neural network (FNN) function \(\tilde{f}:\mathcal{X}\mapsto\mathbb{R}^{N_{a}+1}\) as follows: \[\text{(FNN)}:\quad\left\{\begin{aligned}&h_{j}^{(1)}=\text{Sigmoid}\Big(\sum_{i=1}^{N_{x}}x_{i}\theta_{ij}^{(1)}+b_{j}^{(1)}\Big),\;j=1,\cdots,N_{h}^{(1)},\\&h_{j}^{(k)}=\text{Sigmoid}\Big(\sum_{i=1}^{N_{h}^{(k-1)}}h_{i}^{(k-1)}\theta_{ij}^{(k)}+b_{j}^{(k)}\Big),\;j=1,\cdots,N_{h}^{(k)},\;\forall k\in\{2,\cdots,K\},\\&o_{j}=\sum_{i=1}^{N_{h}^{(K)}}h_{i}^{(K)}\theta_{ij}^{(K+1)},\;j=1,\cdots,N_{a}+1,\\&\tilde{f}(\boldsymbol{x};\boldsymbol{\theta}):=(o_{1},\cdots,o_{N_{a}+1})^{\top}.\end{aligned}\right. \tag{2.42}\] Here \(\text{Sigmoid}(\cdot)\) denotes the sigmoid activation function, \(K\) denotes the number of hidden layers, \(h_{j}^{(k)}\) denotes the value of the \(j\)-th node in the \(k\)-th hidden layer, and \(N_{h}^{(k)}\) is the number of nodes in the \(k\)-th hidden layer. Additionally, \(\boldsymbol{\theta}^{(k)}=(\theta_{ij}^{(k)})\in\mathbb{R}^{N_{h}^{(k)}\times N_{h}^{(k-1)}}\) and \(\boldsymbol{b}^{(k)}=(b_{j}^{(k)})\in\mathbb{R}^{N_{h}^{(k)}}\) are the (vectorized) weight matrix and bias vector of the \(k\)-th layer,11 and the parameter vector of the entire neural network is \((\boldsymbol{\theta}^{(1)},\boldsymbol{b}^{(1)},\cdots,\boldsymbol{\theta}^{(K)},\boldsymbol{b}^{(K)},\boldsymbol{\theta}^{(K+1)})^{\top}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\), where \(N_{\boldsymbol{\theta}}=\sum_{k=1}^{K+1}N_{h}^{(k)}\cdot N_{h}^{(k-1)}+\sum_{k=1}^{K}N_{h}^{(k)}\) (with the conventions \(N_{h}^{(0)}=N_{x}\) and \(N_{h}^{(K+1)}=N_{a}+1\)). Building on \(\tilde{f}\), we propose the following _leverage-feasible neural network_ (LFNN) model \(f:\mathcal{X}\mapsto\mathcal{Z}\): \[(\text{LFNN}):\quad f(\boldsymbol{x};\boldsymbol{\theta}):=\psi\Big(\tilde{f}(\boldsymbol{x};\boldsymbol{\theta}),\boldsymbol{x}\Big)\in\mathcal{Z}. \tag{2.43}\] Here, \(\psi(\cdot)\) is the _leverage-feasible activation function_.
For \(\boldsymbol{o}=(o_{1},\cdots,o_{N_{a}+1})^{\top}\in\mathbb{R}^{N_{a}+1}\) and \(\boldsymbol{p}=\psi(\boldsymbol{o},\boldsymbol{x})\), \(\psi(\cdot):(\boldsymbol{o},\boldsymbol{x})\in\mathbb{R}^{N_{a}+1}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined by \[\boldsymbol{p}=\psi(\boldsymbol{o},\boldsymbol{x})=\left\{\begin{aligned}&\left\{\begin{aligned}&l=p_{max}\cdot\text{Sigmoid}(o_{N_{a}+1}),\\&p_{i}=l\cdot\frac{e^{o_{i}}}{\sum_{k=1}^{N_{l}}e^{o_{k}}},\;i\in\{1,\cdots,N_{l}\},\\&p_{i}=(1-l)\cdot\frac{e^{o_{i}}}{\sum_{k=N_{l}+1}^{N_{a}}e^{o_{k}}},\;i\in\{N_{l}+1,\cdots,N_{a}\},\end{aligned}\right.&&\text{if }\boldsymbol{x}\in\mathcal{X}_{1},\\&\;\boldsymbol{p}=\mathbf{e}_{N_{l}+1},&&\text{if }\boldsymbol{x}\in\mathcal{X}_{2}.\end{aligned}\right. \tag{2.44}\] Recall that \(N_{l}\) is the number of long-only assets and \(p_{max}\) is the maximum leverage allowed. We show that the leverage-feasible activation function \(\psi\) has the following property.

**Lemma 2.1**.: _(Decomposition of \(\psi\)) The leverage-feasible activation function \(\psi\) defined in (2.44) admits the decomposition_ \[\psi(\boldsymbol{o},\boldsymbol{x})=\varphi(\zeta(\boldsymbol{o}),\boldsymbol{x}), \tag{2.45}\] _where_ \[\left\{\begin{aligned}&\zeta:\mathbb{R}^{N_{a}+1}\mapsto\tilde{\mathcal{Z}},\quad\zeta(\boldsymbol{o})=\Big(\text{Softmax}\big((o_{1},\cdots,o_{N_{l}})\big),\;\text{Softmax}\big((o_{N_{l}+1},\cdots,o_{N_{a}})\big),\;p_{max}\cdot\text{Sigmoid}(o_{N_{a}+1})\Big)^{\top},\\&\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z},\quad\varphi(\boldsymbol{z},\boldsymbol{x})=\Big(z_{N_{a}+1}\cdot(z_{1},\cdots,z_{N_{l}}),\;(1-z_{N_{a}+1})\cdot(z_{N_{l}+1},\cdots,z_{N_{a}})\Big)^{\top}\cdot\boldsymbol{1}_{\boldsymbol{x}\in\mathcal{X}_{1}}+\mathbf{e}_{N_{l}+1}\cdot\boldsymbol{1}_{\boldsymbol{x}\in\mathcal{X}_{2}},\end{aligned}\right. \tag{2.46}\] _and_ \[\tilde{\mathcal{Z}}=\Bigg\{\boldsymbol{z}\in\mathbb{R}^{N_{a}+1}\;\Bigg|\;\sum_{i=1}^{N_{l}}z_{i}=1,\;\sum_{i=N_{l}+1}^{N_{a}}z_{i}=1,\;z_{N_{a}+1}\leq p_{max},\;z_{i}\geq 0,\;\forall i\Bigg\}. \tag{2.47}\]

Proof.: This is easily verifiable from the definition of \(\psi\) in (2.44).

**Remark 2.6**.: (Remark on Lemma 2.1) The leverage-feasible activation function \(\psi\) corresponds to a two-step decision process described by \(\zeta\) and \(\varphi\). Intuitively, \(\zeta\) first determines the internal allocations within the long-only assets and the shortable assets, as well as the total leverage. Then, \(\varphi\) converts the internal allocations and the total leverage into final allocation fractions, which depend on the wealth of the active portfolio.

With the LFNN model outlined above, the parameterized optimization problem (2.40) becomes an unconstrained optimization problem. Specifically, we present the following theorem regarding the feasibility domain \(\mathcal{Z}_{\boldsymbol{\theta}}\) associated with the LFNN model (2.43).

**Theorem 2.2**.: _(Unconstrained feasibility domain) The feasibility domain \(\mathcal{Z}_{\boldsymbol{\theta}}\) defined in (2.41) associated with the LFNN model (2.43) is \(\mathbb{R}^{N_{\boldsymbol{\theta}}}\)._

Proof.: See Appendix B.1.

Following Theorem 2.2, the constrained optimization problem (2.7) can be transformed into the following unconstrained optimization problem \[(\text{Unconstrained parameterized problem}):\quad\inf_{\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}}\mathbb{E}_{f(\cdot;\boldsymbol{\theta})}^{(t_{0},w_{0})}\big[F(\mathcal{W}_{\boldsymbol{\theta}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big].
\tag{2.48}\]

### Mathematical justification for LFNN approach

By approximating the feasible control with a parameterized LFNN model, we have shown that the original constrained optimization problem is transformed into an unconstrained optimization problem, which is computationally more implementable. However, an important question remains: is the solution to the parameterized unconstrained optimization problem (2.48) capable of yielding the optimal control of the original problem (2.7)? In other words, if \(\boldsymbol{\theta}^{*}\) is the solution to (2.48), can \(f(\cdot;\boldsymbol{\theta}^{*})\) approximate the solution to (2.7) with the desired accuracy? In this section, we prove that, under benign assumptions and appropriate choices of the hyperparameters of the LFNN model (2.43), solving the unconstrained problem (2.48) provides an arbitrarily close approximation to the solution of the original problem (2.7). We start by establishing the following lemma.

**Lemma 2.2**.: _(Structure of feasible control) Any feasible control function \(p:\mathcal{X}\mapsto\mathcal{Z}\), where \(\mathcal{Z}\) is defined in (2.37), admits the decomposition_ \[p(x)=\varphi(\omega(x),x), \tag{2.49}\] _where \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined in (2.46) and \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\)._

Proof.: See Appendix B.2.

Next, we propose the following benign assumptions on the state space and the optimal control.

**Assumption 2.7**.: _(Assumption on state space and optimal control)_

* _The space_ \(\mathcal{X}\) _of state variables is a compact set._
* _Following Lemma_ 2.2_, the optimal control_ \(p^{*}:\mathcal{X}\mapsto\mathcal{Z}\) _has the decomposition_ \(p^{*}(x)=\varphi(\omega^{*}(x),x)\) _for some_ \(\omega^{*}:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\)_. We assume_ \(\omega^{*}\in C(\mathcal{X},\tilde{\mathcal{Z}})\)_, where_ \(C(\mathcal{X},\tilde{\mathcal{Z}})\) _denotes the set of continuous mappings from_ \(\mathcal{X}\) _to_ \(\tilde{\mathcal{Z}}\)_._

**Remark 2.7**.: (Remark on Assumption 2.7) In our particular problem of outperforming a benchmark portfolio, the state variable vector is \(\boldsymbol{X}(t)=(t,W(t),\hat{W}(t))^{\top}\in\mathcal{X}\), where \(t\in[t_{0},T]\). In this case, assumption (i) is equivalent to assuming that the wealth of the active portfolio and the benchmark portfolio is bounded, i.e., \(\mathcal{X}=[t_{0},T]\times[w_{min},w_{max}]\times[\hat{w}_{min},\hat{w}_{max}]\), where \(w_{min},w_{max}\) and \(\hat{w}_{min},\hat{w}_{max}\) are the respective wealth bounds for the two portfolios. Intuitively, assumption (ii) states that the decision process by which the optimal control obtains the internal allocation fractions within the long-only assets and the shortable assets, together with the total leverage, is a continuous function. This is a natural extension of the long-only case, in which it is commonly assumed that the allocation within long-only assets is a continuous function of the state variables.

Finally, we present the following theorem.

**Theorem 2.3**.: _(Approximation of optimal control) Under Assumption 2.7, \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\) such that the corresponding LFNN model \(f(\cdot;\boldsymbol{\theta})\) described in (2.43) satisfies_ \[\sup_{x\in\mathcal{X}}\|f(x;\boldsymbol{\theta})-p^{*}(x)\|<\epsilon. \tag{2.50}\]

Proof.: See Appendix B.2.
Theorem 2.3 shows that, given any arbitrarily small tolerance \(\epsilon>0\), there exists a suitable choice of the hyperparameters of the LFNN model (e.g., the number of hidden layers and nodes), and a parameter vector \(\boldsymbol{\theta}\), such that the corresponding parameterized LFNN function is within this tolerance of the optimal control function.12 In other words, with a large enough LFNN model (in terms of the number of hidden nodes), solving the unconstrained parameterized problem (2.48) approximately solves the original optimization problem (2.7) to any required precision.

**Remark 2.8**.: (Empirical evidence of approximation) In practice, we find that a small neural network structure with a single hidden layer of only 10 hidden nodes achieves excellent approximation performance. In particular, in a numerical experiment with simulated data, we compare the LFNN model with the approximate form of the closed-form solution derived in Section 2.4, and find that the LFNN model mimics the closed-form solution very well. This provides further empirical evidence supporting Theorem 2.3. Additional details can be found in Appendix C.

### Training LFNN

Since the numerical experiments involve the solution and evaluation of the optimal parameters \(\boldsymbol{\theta}^{*}\) of the LFNN model (2.43) in problem (2.48), we briefly review how the parameters are computed in the experiments. In the numerical experiments, the expectation in (2.48) is approximated using a finite set of samples \(\boldsymbol{Y}=\{Y^{(j)}:j=1,\cdots,N_{d}\}\), where \(N_{d}\) is the number of samples, and \(Y^{(j)}\) represents a time series sample of _joint_ asset return observations \(R_{i}(t),\ i\in\{1,\cdots,N_{a}\}\), observed at \(t\in\mathcal{T}\).13 Mathematically, problem (2.48) is approximated by

Footnote 13: Note that the corresponding set of asset prices can be easily inferred from the set of asset returns, or vice versa.

\[\inf_{\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}}\Bigg\{\frac{1}{N_{d}}\sum_{j=1}^{N_{d}}F\left(\mathcal{W}^{(j)}_{\boldsymbol{\theta}},\hat{\mathcal{W}}^{(j)}_{\hat{\mathcal{P}}}\right)\Bigg\}. \tag{2.51}\] Here \(\mathcal{W}^{(j)}_{\boldsymbol{\theta}}\) is the wealth trajectory of the active portfolio following the LFNN parameterized by \(\boldsymbol{\theta}\), and \(\hat{\mathcal{W}}^{(j)}\) is the wealth trajectory of the benchmark portfolio following the benchmark strategy \(\hat{\mathcal{P}}\), both evaluated on \(Y^{(j)}\), the \(j\)-th time series sample.

We use a shallow neural network model, specifically an LFNN model with a single hidden layer of 10 hidden nodes, i.e., \(K=1\) and \(N_{h}^{(1)}=10\). We use the 3-tuple vector \((t,W_{\boldsymbol{\theta}}(t),\hat{W}(t))^{\top}\) as the input (feature) to the LFNN network. At \(t\in[t_{0},T]\), \(W_{\boldsymbol{\theta}}(t)\) is the wealth of the active portfolio of the strategy that follows the LFNN model parameterized by \(\boldsymbol{\theta}\), and \(\hat{W}(t)\) is the wealth of the benchmark portfolio. The optimal parameter \(\boldsymbol{\theta}^{*}\) can then be numerically obtained by solving problem (2.51) using standard optimization algorithms such as ADAM (Kingma and Ba, 2014). This process is commonly referred to as "training" of the neural network model, and \(\boldsymbol{Y}\) is often referred to as the training data set (Goodfellow et al., 2016).
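For illustration, the sketch below shows one way such a model and objective could be set up in PyTorch: the forward pass mirrors the LFNN construction (2.42)-(2.44) (omitting the insolvency branch \(\boldsymbol{x}\in\mathcal{X}_{2}\) for brevity), and the loss mirrors the discrete CS objective (2.14). This is our own illustrative implementation, not the authors' code; all layer sizes and hyperparameters are assumptions, and rolling the wealth recursion (2.16) forward over bootstrap samples is omitted.

```python
import torch
import torch.nn as nn

# Minimal LFNN sketch in the spirit of (2.42)-(2.44); sizes are assumptions.
N_x, N_hidden, N_a, N_l = 3, 10, 4, 3   # features (t, W, W_hat), hidden nodes, assets, long-only
p_max, beta, eps = 1.3, 0.02, 1e-6

class LFNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(N_x, N_hidden)
        self.out = nn.Linear(N_hidden, N_a + 1, bias=False)

    def forward(self, x):
        # x: (batch, N_x); the insolvency branch x in X_2 of (2.44) is omitted.
        o = self.out(torch.sigmoid(self.hidden(x)))
        l = p_max * torch.sigmoid(o[:, -1:])                        # total long-only allocation
        p_long = l * torch.softmax(o[:, :N_l], dim=1)               # long-only fractions
        p_short = (1.0 - l) * torch.softmax(o[:, N_l:N_a], dim=1)   # shortable fractions
        return torch.cat([p_long, p_short], dim=1)                  # rows sum to 1 by construction

def cs_loss(W, W_hat, t_grid):
    """Discrete cumulative quadratic shortfall along sampled paths, cf. (2.14)."""
    target = torch.exp(beta * t_grid) * W_hat          # elevated target e^{beta t} W_hat(t)
    shortfall = torch.clamp(W - target, max=0.0)
    return (shortfall ** 2).sum(dim=1).mean() + eps * W[:, -1].mean()

model = LFNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# In a full training loop, each step would roll the wealth recursion forward
# over a batch of bootstrap-resampled return paths, applying model(x) at every
# rebalancing date, then call cs_loss(...).backward() and optimizer.step().

# Smoke test: allocations are feasible and the loss evaluates.
p = model(torch.rand(8, N_x))
print(p.sum(dim=1))                                     # ~1 for every row
t_grid = torch.linspace(0.0, 10.0, 121)
print(cs_loss(100 * torch.ones(8, 121), 100 * torch.ones(8, 121), t_grid))
```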
Once \(\boldsymbol{\theta}^{*}\) is numerically obtained, the resulting optimal strategy \(f(\cdot;\boldsymbol{\theta}^{*})\) is evaluated on a separate "testing" data set \(\boldsymbol{Y}^{test}\), which contains a different set of samples generated from either the same distribution as the training data or a different process (depending on the experiment's purpose), so that the "out-of-sample" performance of \(f(\cdot;\boldsymbol{\theta}^{*})\) is assessed.

## 3 Numerical experiments

In this section, we present a case study that explores optimal asset allocation during high-inflation periods using the LFNN model through numerical experiments. To conduct our analysis, we need data specifically from high-inflation periods. Such data can be acquired using parametric modeling or non-parametric sample generation methods. It is important to note that our LFNN approach is agnostic to the choice of data modeling method. While there is no universally accepted method for identifying or modeling high-inflation regimes, for the purpose of this demonstration, we employ a simple filtering technique to identify inflation regime data and generate the required samples for training the LFNN.

### Filtering historical inflation regimes

We use the U.S. CPI index and monthly data from the Center for Research in Security Prices (CRSP) over the 1926:1-2022:1 period.14,15 We select high-inflation periods as determined by the CPI index using the following filtering procedure. Using a moving window of \(k\) months, we determine the cumulative CPI index log return (annualized) in this window. If the cumulative annualized CPI index log return is greater than a cutoff, then all the months in the window are flagged as part of a high-inflation regime. Note that some months may appear in more than one moving window. Any months that do not meet this criterion are considered to be in low-inflation regimes. See Algorithm D.1 in Appendix D.1 for the pseudo-code.

Footnote 14: The date convention is that, for example, 1926:1 refers to January 1, 1926.

Since the average annual inflation over the period 1926:1-2022:1 was 2.9%, and Federal Reserve policy-makers have been targeting an inflation rate of 2% over the long run to achieve maximum employment and price stability (The Federal Reserve, 2011), we use a cutoff of 5% as the threshold for high inflation. In addition, we use a moving window size of 5 years (see Appendix D.2 for more discussion). This uncovers two inflation regimes: 1940:8-1951:7 and 1968:9-1985:10, which correspond to well-known market shocks (the Second World War and price controls; the oil price shocks and stagflation of the 1970s). Table 3.1 shows the average annual inflation over the two regimes identified by our filter.

For possible investment assets, we consider the 30-day U.S. T-bill index (CRSP designation "t30ind"), a constant-maturity 10-year U.S. treasury index,16 and the cap-weighted stock index (CapWt) and the equal-weighted stock index (EqWt), also from CRSP.17 All of these indexes are adjusted for inflation using the U.S. CPI index.

Footnote 15: More specifically, results presented here were calculated based on data from Historical Indexes, ©2022 Center for Research in Security Prices (CRSP), The University of Chicago Booth School of Business. Wharton Research Data Services (WRDS) was used in preparing this article. This service and the data available thereon constitute valuable intellectual property and trade secrets of WRDS and/or its third-party suppliers.
Footnote 16: The 10-year treasury index was generated from monthly returns from CRSP back to 1941 (CRSP designation “b10ind”). The data for 1926-1941 are interpolated from annual returns in Homer and Sylla (1996). The 10-year treasury index is constructed by (a) buying a 10-year treasury at the start of each month, (b) collecting interest during the month, and then (c) selling the treasury at the end of the month. We repeat the process at the start of the next month. The gains in the index then reflect both interest and capital gains and losses.

Footnote 17: The capitalization-weighted total returns have the CRSP designation “wretd”, and the equal-weighted total returns have the CRSP designation “ewretd”.

\begin{table}
\begin{tabular}{c c} \hline \hline Time Period & Average Annualized Inflation \\ \hline 1940:8-1951:7 & 0.0564 \\ 1968:9-1985:10 & 0.0661 \\ \hline \hline \end{tabular}
\end{table}
Table 3.1: Inflation regimes determined using a five-year moving window with a cutoff inflation rate of 0.05.

We find that the equal-weighted stock index has a higher average return and higher volatility than the cap-weighted stock index. In addition, we find that the 30-day T-bill index has a similar average return to the 10-year T-bond index, but much lower volatility; see Appendix D.3 for more details. This indicates that the T-bill index is the better choice of defensive asset during high inflation. Subsequently, we consider the equal-weighted stock index, the cap-weighted stock index, and the 30-day T-bill index.

### Bootstrap resampling

Once we have obtained the filtered historical high-inflation data series from Section 3.1, it becomes necessary to generate training and testing data sets from the original time series data. While one common approach is to assume and fit a parametric model to the underlying data, it is important to acknowledge the limitations associated with this choice. Parametric models have several drawbacks, including the difficulty of accurately estimating their parameters (Black, 1993). Even for a simple geometric Brownian motion (GBM) model, accurately estimating the drift rate can be challenging and prone to errors, requiring a long historical period of data coverage (Brigo et al., 2008). More complex models, such as the jump-diffusion model (2.15), introduce additional components to the stochastic model, which necessitates the estimation of extra parameters. Furthermore, parametric models inherently make assumptions about the true stochastic model for asset prices, which can be subject to debate.

Acknowledging the above limitations of parametric market data models, we turn to the alternative nonparametric method of bootstrap resampling as the data-generating process for our numerical experiments. Unlike parametric models, non-parametric methods such as bootstrap resampling do not make assumptions about the parametric form of the asset price dynamics. Intuitively speaking, the bootstrap resampling method randomly chooses data points from the historical time series data and reassembles them into new paths of time series data. The bootstrap was initially proposed as a statistical method for estimating the sampling distribution of statistics (Efron, 1992). We use it as a data-generating procedure, as the philosophy behind bootstrap resampling is consistent with the idea that "history does not repeat, but it rhymes."
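For concreteness, a minimal sketch of (stationary) block bootstrap resampling is given below; the authors' detailed pseudo-code is in Appendix E.1, and the geometric block lengths, circular indexing, and the placeholder return matrix here are our own illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_block_bootstrap(returns, n_months, exp_blocksize=6):
    """Sketch of stationary block bootstrap resampling (Politis and Romano, 1994).

    returns: (T, N_assets) array of joint monthly returns, so that a single set
    of resampled time indices keeps all assets synchronized in time. Block
    lengths are geometric with mean exp_blocksize; indices wrap around.
    """
    T = returns.shape[0]
    idx = []
    while len(idx) < n_months:
        start = rng.integers(T)                       # uniform random block start
        length = rng.geometric(1.0 / exp_blocksize)   # random block length
        idx.extend((start + k) % T for k in range(length))
    return returns[np.array(idx[:n_months])]

# Usage: resampled 10-year (120-month) paths from placeholder data standing in
# for the concatenated 1940:8-1951:7 and 1968:9-1985:10 return series.
hist = rng.normal(0.005, 0.04, size=(338, 4))
paths = np.stack([stationary_block_bootstrap(hist, 120) for _ in range(1000)])
print(paths.shape)  # (1000, 120, 4)
```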
The bootstrap resampling provides an empirical distribution, which is arguably the least prejudiced estimate possible of the underlying distribution of the data-generating process. We also note that bootstrap resampling is widely adopted by practitioners (Alizadeh and Nomikos, 2007; Cogneau and Zakamouline, 2013; Dichtl et al., 2016; Scott and Cavaglia, 2017; Shahzad et al., 2019; Cavaglia et al., 2022; Simonian and Martirosyan, 2022) as well as academics (Anarkulova et al., 2022). Specifically, we choose to use the stationary block bootstrap resampling method (Politis and Romano, 1994). See Appendix E.1 for detailed pseudo-code for bootstrap resampling. Compared to the traditional bootstrap method, the block bootstrap technique preserves the local dependency of data within blocks. Furthermore, the stationary block bootstrap uses random blocksizes, which preserves the stationarity of the original time series data. An important parameter is the expected blocksize, which, informally, is a measure of serial correlation in the return data. A challenge in using block bootstrap resampling is the need to choose a single blocksize for multiple underlying time series so that the bootstrapped data entries for different assets are synchronized in time. Subsequently, we use an expected blocksize of 6 months for all time series data. We have, however, compared numerical experiments using a range of blocksizes, including i.i.d. assumptions (i.e., an expected blocksize equal to one month), and find that the results are relatively insensitive to the blocksize, as discussed in more detail in Appendix E.2.

Typically, the bootstrap technique resamples from data sourced from one contiguous segment of historical periods. However, the moving-window filtering algorithm has identified two non-contiguous historical inflation regimes. To apply the bootstrap method, there are two intuitive possibilities: 1) concatenate the two historical inflation regimes first, then bootstrap from the combined series, or 2) bootstrap within each regime (i.e., using circular block bootstrap resampling within each regime), then combine the resampled data points. We have experimented with both methods and find that the difference is minimal (see Appendix E.3). In this article, we adopt the first method, i.e., we concatenate the historical regimes first, then bootstrap from the combined series. This method is also adopted by Anarkulova et al. (2022), where stock returns from different countries are concatenated and the bootstrap is applied to the combined data.

### A case study on high inflation investment: a 4-asset scenario

#### 3.3.1 Experiment setup

In this section, we conduct a case study on optimal asset allocation during a persistent high-inflation regime. The details of the investment specification are given in Table 3.2. Briefly, the active portfolio and the benchmark portfolio begin with the same initial wealth of 100 at \(t_{0}=0\). Both portfolios are rebalanced monthly. The investment horizon is 10 years, and there is an annual cash injection of 10 for both portfolios, evenly divided over 12 months. We consider an empirical case in which we allow the manager to allocate among four investment assets: the equal-weighted stock index, the cap-weighted stock index, the 30-day U.S. T-bill index, and the 10-year U.S. T-bond index. We assume that the stock indexes and the 10-year T-bond index are long-only assets.
The manager can short the T-bill index to take leverage and invest in the long-only assets (with a maximum total leverage of 1.3). In this experiment, we assume the borrowing premium rate is zero. Essentially, we assume that the manager can borrow short-term funding to take leverage at the same cost as the treasury bill. This may be a reasonable assumption for sovereign wealth funds, as they are state-owned and enjoy a high credit rating. We remark that the borrowing premium does not significantly affect the results.18 The annual outperformance target \(\beta\) is set to 2% (i.e., 200 bps per year).

Footnote 18: See Appendix I for a more detailed discussion.

It is worth noting that we choose the benchmark portfolio to be a fixed-mix portfolio that maintains a 70% weight in the equal-weighted stock index and 30% in the 30-day U.S. T-bill index. We select this fixed-mix portfolio as the benchmark based on our observation that the equal-weighted stock index shows superior performance compared to the cap-weighted stock index during high-inflation environments. Indeed, when analyzing bootstrap resampled data from the historical inflation regimes, we find that the fixed-mix portfolio consisting of 70% in the equal-weighted stock index and 30% in the 30-day U.S. T-bill index partially stochastically dominates the fixed-mix portfolio consisting of 70% in the cap-weighted stock index and 30% in the 30-day U.S. T-bill index. For more detailed information, interested readers can refer to Appendix F.

As discussed in the previous section, we use the stationary bootstrap resampling algorithm (see Appendix E.1) to generate a training data set \(\boldsymbol{Y}\) and a testing data set \(\boldsymbol{Y}^{test}\) (both with 10,000 resampled paths) from the concatenated index samples from the two historical inflation regimes, 1940:8-1951:7 and 1968:9-1985:10, using an expected blocksize of 6 months. The testing data set \(\boldsymbol{Y}^{test}\) is generated using a different random seed from the training data set \(\boldsymbol{Y}\), and thus the probability of seeing the same sample in \(\boldsymbol{Y}\) and \(\boldsymbol{Y}^{test}\) is near zero (see Ni et al. (2022) for a proof). We remark that in this experiment, we train the LFNN model (2.43) on \(\boldsymbol{Y}\) under the discrete-time CS objective (H.1), instead of the CD objective (2.12). As discussed in Section 2.3, the CS objective function only penalizes underperformance relative to the elevated target. Numerical comparisons of the two objective functions suggest that the CS objective indeed yields more favorable investment results than the CD objective (see Appendix H). In this section, unless stated otherwise, all the results presented are testing results.

\begin{table}
\begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Equity market indexes & CRSP cap-weighted/equal-weighted index (real) \\ Bond indexes & CRSP 30-day/10-year U.S. treasury index (real) \\ Index samples for bootstrap & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth/annual cash injection & 100/10 \\ Rebalancing frequency & Monthly \\ Maximum leverage & 1.3 \\ Outperformance target rate \(\beta\) & 2\% \\ \hline \hline \end{tabular}
\end{table}
Table 3.2: Investment scenario.

#### 3.3.2 Experiment results

\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Strategy & Median\([W_{T}]\) & E\([W_{T}]\) & std\([W_{T}]\) & 5th Percentile & Median IRR (annual) \\ \hline Neural network & 364.2 & 403.4 & 211.8 & 136.3 & 0.078 \\ Benchmark & 308.5 & 342.9 & 165.0 & 149.0 & 0.056 \\ \hline \hline \end{tabular}
\end{table}
Table 3.3: Statistics of strategies. Results are based on the evaluation results on the testing data set.
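Given the monthly cash injections in Table 3.2, the IRR statistics in Table 3.3 solve a standard net-present-value equation over the injection cash flows. A minimal sketch follows; the cash-flow timing convention (injections at months 1 through 119) and the bisection solver are our own illustrative choices.

```python
import numpy as np

def annual_irr(w0, monthly_injection, W_T, n_months):
    """Annualized IRR via bisection on the net present value of the cash flows.

    Assumed convention: initial wealth w0 at month 0, injections at months
    1, ..., n_months - 1, and terminal wealth W_T received at month n_months.
    """
    cashflows = np.full(n_months + 1, -monthly_injection)
    cashflows[0] = -w0
    cashflows[-1] = W_T

    def npv(r_m):  # net present value at monthly rate r_m
        return np.dot(cashflows, (1.0 + r_m) ** -np.arange(n_months + 1))

    lo, hi = -0.05, 0.05
    for _ in range(100):            # bisection: npv decreases in r_m here
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (1.0 + 0.5 * (lo + hi)) ** 12 - 1.0

# The median neural-network terminal wealth from Table 3.3 recovers an IRR near 0.078.
print(annual_irr(100.0, 10.0 / 12.0, 364.2, 120))
```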
Index samples for bootstrap & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth/annual cash injection & 100/10 \\ Rebalancing frequency & Monthly \\ Maximum leverage & 1.3 \\ Outperformance target rate \(\beta\) & 2\% \\ \hline \hline \end{tabular} \end{table} Table 3.2: Investment scenario.

Figure 3.1(a) reveals that the neural network strategy (the strategy following the trained LFNN model) consistently outperforms the benchmark strategy in terms of the wealth ratio \(W(t)/\hat{W}(t)\). Over time, both the mean and median wealth ratios demonstrate a smooth and consistent increase. Regarding tail performance (20th percentile), the neural network strategy initially falls behind the benchmark but gradually recovers and ultimately achieves 10% greater wealth at the terminal time. This observation indicates that the neural network strategy effectively manages tail risk.

An additional metric that holds significant interest for managers is the distribution of the terminal wealth ratio \(\frac{W(T)}{\hat{W}(T)}\). This metric examines the relative performance of the strategies at the end of the investment period. Figure 3.1(b) illustrates that there is a greater than 90% chance that the neural network strategy outperforms the benchmark strategy in terms of terminal wealth. This outcome is particularly noteworthy as the objective function (H.1) does not directly target the terminal wealth ratio.

Given the constant cash injections in the portfolios, it is appropriate to employ the internal rate of return (IRR) as a measure of the portfolio's annualized performance. Figure 3.1(c) demonstrates that the neural network strategy has a more than 90% chance of producing a higher IRR. Furthermore, the median IRR of the neural network strategy exceeds that of the benchmark strategy by slightly over 2%, aligning with the chosen target outperformance rate of \(\beta=0.02\). This indicates that the neural network model consistently achieves the desired target performance across most outcomes.

The results from Table 3.3 indicate that the 5th percentile of the terminal wealth for the neural network strategy is lower than that of the benchmark strategy. This suggests that in some scenarios, particularly during persistent bear markets when stocks perform poorly, the neural network strategy may experience lower terminal wealth compared to the benchmark strategy. The neural network strategy takes on more risk by allocating a higher fraction of wealth to the equal-weighted stock index, which is considered a riskier asset, in comparison to the benchmark portfolio. It is important to note, however, that these scenarios occur with low probability. As depicted in Figure 3.1(b), the neural network strategy exhibits a significantly high probability, exceeding 90%, of outperforming the benchmark in terms of terminal wealth. This implies that while there might be instances where the neural network strategy suffers relative to the benchmark, the overall performance is consistently strong, resulting in a high likelihood of achieving superior terminal value.

To gain insight into the strong performance of the neural network strategy, we further examine its allocation profile. We begin by examining the mean allocation fraction for the four assets over time, as depicted in Figure 3.2. The first noteworthy observation from Figure 3.2 is that, on average, the neural network strategy does not allocate wealth to the cap-weighted stock index.
Initially, this might appear surprising; however, it aligns with historical data indicating significantly higher real returns for the equal-weighted stock index during periods of high inflation (refer to Appendix D.3). Given that the objective is to outperform a benchmark heavily invested in the equal-weighted index (70%), it is logical to avoid allocating wealth to a comparatively weaker index in the active portfolio.

Figure 3.1: Percentiles of wealth ratio \(\frac{W(t)}{\hat{W}(t)}\), and CDF of terminal wealth ratio \(\frac{W(T)}{\hat{W}(T)}\) and internal rate of return (IRR). Results are based on the evaluation of the learned neural network model on \(\boldsymbol{Y}^{test}\).

The second observation derived from Figure 3.2 pertains to the evolution of the mean bond allocation fractions. Initially, the neural network strategy shorts the 30-day T-bill index and assumes some leverage while heavily investing in the equal-weighted stock index during the first two years. This indicates a deliberate risk-taking approach early on to establish an advantage over the benchmark strategy. Subsequently, the allocation to the 10-year T-bond decreases, coinciding with the reduction in the allocation to the equal-weighted index. This suggests that the initial allocation to the T-bond was primarily for leveraging purposes, with the 10-year bond being the only defensive asset available. As leverage is no longer used in later years, the neural network strategy favors the T-bill over the 10-year bond.

Overall, despite the gradual decrease in stock allocation over time, the neural network strategy maintains an average allocation of more than 80% to the equal-weighted stock index. This is expected, as outperforming an aggressive benchmark with a 70% allocation to the equal-weighted stock index necessitates assuming higher levels of risk. Despite the higher allocation to riskier assets, the neural network strategy consistently delivers strong results compared to the benchmark strategy, as illustrated in Figure 3.1.

Figure 3.2: Mean allocation fraction over time, evaluated on \(\boldsymbol{Y}^{test}\).

Lastly, it is worth noting that the neural network strategy, trained under high-inflation regimes, exhibits remarkable performance on low-inflation testing datasets. This unexpected outcome highlights the robustness of the strategy. For further discussion on this topic, interested readers can refer to Appendix J.

## 4 Conclusion

In this paper, our primary objective is to propose a framework that generates optimal dynamic allocation strategies under leverage constraints in order to outperform a benchmark during high-inflation regimes. Imposing leverage constraints in multi-period asset allocation is consistent with the practice of large sovereign wealth funds, which often have exposure to alternative assets. Our proposed framework efficiently solves high-dimensional optimal control problems, accommodating diverse objective functions, constraints, and data sources.

We begin by assuming that both asset prices follow jump-diffusion models. Under this assumption, we derive a closed-form solution for a two-asset case using the cumulative tracking difference (CD) objective function. However, to obtain this closed-form solution, we need to make additional unrealistic assumptions, such as continuous rebalancing, unlimited leverage, and continued trading in insolvency. Despite these assumptions, the closed-form solution provides valuable insights into the optimal control behavior.
Notably, to track the elevated target, the optimal control needs to aim higher than the target when making allocation decisions. To overcome the limitations of unrealistic assumptions and derive a more practical solution, we introduce a novel leverage-feasible neural network (LFNN) model. The LFNN model approximates the optimal control directly, eliminating the need for the high-dimensional approximations of conditional expectations required in dynamic programming approaches. Additionally, the LFNN model converts the leverage-constrained optimization problem into an unconstrained optimization problem. Importantly, we justify the validity of the LFNN approach by mathematically proving that the solution to the parameterized unconstrained optimization problem can approximate the solution to the original constrained optimization problem with arbitrary precision.

To illustrate the effectiveness of our proposed approach, we conduct a case study on optimal asset allocation during high-inflation regimes. We apply the LFNN model to bootstrap resampled data from filtered historical high-inflation data. In our numerical experiment, we consider an investment case with four assets in high-inflation regimes. The results consistently demonstrate that the neural network strategy outperforms the benchmark strategy throughout the investment period. Specifically, the neural network strategy achieves a 2% higher median internal rate of return (IRR) compared to the benchmark strategy and yields a higher terminal wealth with more than a 90% probability. The allocation strategy derived from the LFNN model suggests that managers should favor the equal-weighted stock index over the cap-weighted stock index and short-term bonds over long-term bonds during high-inflation periods.

## 5 Acknowledgements

Forsyth's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2017-03760. Li's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2020-04331.

## Appendix A Technical details of closed-form solution

### Proof of Theorem (2.17)

At any state \((t,w,\hat{w})\in[t_{0},T]\times\mathbb{R}^{2}\), define the value function \(V(t,w,\hat{w},\hat{\boldsymbol{\varrho}})\) of the CD problem (2.11) as

\[V(t,w,\hat{w},\hat{\boldsymbol{\varrho}})=\inf_{\boldsymbol{p}}\Big{\{} \mathbb{E}_{\boldsymbol{p}}\Big{[}\int_{t}^{T}\big{(}W(s)-e^{\beta s}\hat{W} (s)\big{)}^{2}ds\Big{|}W(t)=w,\hat{W}(t)=\hat{w}\Big{]}\Big{\}}.\] (A.1)

By the dynamic programming principle, we have

\[V(t,w,\hat{w},\hat{\boldsymbol{\varrho}})=\inf_{\boldsymbol{p}}\Big{\{} \mathbb{E}_{\boldsymbol{p}}\Big{[}\Big{(}V(t+\Delta t,W(t+\Delta t),\hat{W}(t +\Delta t),\hat{\boldsymbol{\varrho}})+\int_{t}^{t+\Delta t}\big{(}W(s)-e^{ \beta s}\hat{W}(s)\big{)}^{2}ds\Big{)}\Big{|}W(t)=w,\hat{W}(t)=\hat{w}\Big{]} \Big{\}}.\] (A.2)

Rearranging equation (A.2), we obtain

\[\inf_{\boldsymbol{p}}\Big{\{}\mathbb{E}_{\boldsymbol{p}}\Big{[}\Big{(}dV(t,w,\hat{w},\hat{\boldsymbol{\varrho}})+\int_{t}^{t+\Delta t}\big{(}W(s)-e^{ \beta s}\hat{W}(s)\big{)}^{2}ds\Big{)}\Big{|}W(t)=w,\hat{W}(t)=\hat{w}\Big{]} \Big{\}}=0.\] (A.3)

Then, applying Ito's lemma with jumps (Cont et al., 2011), substituting the \(dW\) and \(d\hat{W}\) terms with (2.16), and taking limits as \(\Delta t\downarrow 0\), we obtain (2.17). The above results merely serve as an intuitive guide to obtain (2.17). The formal proof of (2.17) proceeds by using a suitably smooth test function; see, for example, Oksendal and Sulem (2007).
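To make the Ito step above concrete, consider, schematically, a generic scalar jump-diffusion (illustrative notation only; the paper's actual wealth dynamics are those of (2.16)):

\[dS(t)=\mu S(t^{-})\,dt+\sigma S(t^{-})\,dZ(t)+S(t^{-})\,d\Bigg{(}\sum_{i=1}^{\pi(t)}(\xi_{i}-1)\Bigg{)},\]

where \(\pi(t)\) is a Poisson process with intensity \(\lambda\). For a sufficiently smooth \(V(t,s)\), Ito's lemma with jumps gives

\[\mathbb{E}\big{[}dV(t,S(t))\big{]}=\Big{(}\frac{\partial V}{\partial t}+\mu s\frac{\partial V}{\partial s}+\frac{1}{2}\sigma^{2}s^{2}\frac{\partial^{2}V}{\partial s^{2}}+\lambda\,\mathbb{E}_{\xi}\big{[}V(t,\xi s)-V(t,s)\big{]}\Big{)}\,dt,\]

evaluated at \(s=S(t^{-})\). Dividing (A.3) by \(\Delta t\) and letting \(\Delta t\downarrow 0\) produces the drift, diffusion, and jump-expectation terms of the PIDE (2.17) in exactly this way.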
### Proof of results for CD-optimal control

In Section 2.4, we emphasized the dependence of \(B\) and \(D\) (defined in (2.23) and (2.22)) on the parameters \(\beta\) and \(c\) in order to interpret the optimal control function. As \(\beta\) and \(c\) are fixed parameters, in this proof we omit the dependence of \(B\) and \(D\) on them for notational simplicity.

The quadratic source term \(\left(w-e^{\beta t}\hat{w}\right)^{2}\) in Theorem (2.17) suggests the following _ansatz_ for the value function \(V\):

\[V(t,w,\hat{w})=A(t)w^{2}+B(t)w+C(t)+\hat{A}(t)\hat{w}^{2}+\hat{B}(t)\hat{w}+D(t )w\hat{w},\] (A.4)

where \(A,B,C,\hat{A},\hat{B},D\) are unknown deterministic functions of time \(t\). If (A.4) is correct, then the pointwise infimum in (2.17) is attained by \(p^{*}\) satisfying the relationship

\[\left(w\cdot\frac{\partial^{2}V}{\partial w^{2}}\right)\cdot p^{*}=-\frac{1}{ \gamma}\Bigg{(}\big{(}\mu_{1}-\mu_{2}\big{)}\cdot\frac{\partial V}{\partial w} +\big{(}\hat{\varrho}\gamma+\theta\big{)}\cdot\hat{w}\cdot\frac{\partial^{2} V}{\partial w\partial\hat{w}}+\theta\cdot w\cdot\frac{\partial^{2}V}{\partial w^{2}} \Bigg{)},\] (A.5)

assuming \(A(t)>0\). Here \(\gamma\) and \(\theta\) are defined in (2.19). (A.4) implies that the relevant partial derivatives of \(V\) are of the form

\[\frac{\partial^{2}V}{\partial w^{2}}=2A(t),\quad\frac{\partial V}{\partial w} =2A(t)w+B(t)+D(t)\hat{w},\quad\frac{\partial^{2}V}{\partial w\partial\hat{w}}=D(t).\] (A.6)

Substituting (A.6) into (A.5), the optimal control \(p^{*}\) obtained is of the form (2.20), where \(h\) and \(g\) are given by (2.21). It then only remains to determine the functions \(A,B,D\). Substituting (A.5) into the PIDE (2.17), we obtain the following ordinary differential equations (ODEs) for \(A,B,D\):

\[\left\{\begin{array}{l}\frac{dA(t)}{dt}=-\Big{(}2\mu_{2}-\eta \Big{)}A(t)-1,\qquad A(T)=0,\\ \frac{dD(t)}{dt}=-\Big{(}2\mu_{2}-\eta\Big{)}D(t)+2e^{\beta t},\qquad D(T)=0,\\ \frac{dB(t)}{dt}=-(\mu_{2}-\phi)B(t)-2cA(t)-cD(t),\qquad B(T)=0.\end{array}\right.\] (A.7)

Solving the ODE system gives the \(A,B,D\) defined in (2.22) and (2.23). We also note that \(A(t)>0\), thus completing the proof.

### Proof of Corollaries (2.1) and (2.2)

van Staden et al. (2022) derive the CD-optimal control under the assumption that the stock price follows the double-exponential jump-diffusion model and the bond is risk-free, with the bond price \(B(t)\) following

\[\frac{dB(t)}{B(t)}=r\,dt.\] (A.8)

Under these assumptions, van Staden et al. (2022) show that the CD-optimal control can be expressed in a form similar to (2.20), with corresponding \(g\) and \(h\) functions. These \(g\) and \(h\) functions satisfy the same properties as in Corollaries (2.1) and (2.2). Although we assume that the bond price follows a jump-diffusion model, the proofs of Corollaries (2.1) and (2.2) follow steps similar to the proofs in van Staden et al. (2022).

## Appendix B Technical details of LFNN model

### Proof of Theorem 2.2

**Theorem 2.2**.: (Unconstrained feasibility domain) The feasibility domain \(\mathcal{Z}_{\mathbf{\theta}}\) defined in (2.41) associated with the LFNN model (2.43) is \(\mathbb{R}^{N_{\mathbf{\theta}}}\).

Proof.: First, it is obvious that \(\mathcal{Z}_{\mathbf{\theta}}\subseteq\mathbb{R}^{N_{\mathbf{\theta}}}\) by the definition of (2.41). Next, we show that \(\mathbb{R}^{N_{\mathbf{\theta}}}\subseteq\mathcal{Z}_{\mathbf{\theta}}\).
To prove this, we need to show that for any \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\),

\[f(x;\mathbf{\theta})=p\in\left\{\begin{aligned} & \mathcal{Z}_{1},\;\text{if}\;x\in\mathcal{X}_{1},\\ &\mathcal{Z}_{2},\;\text{if}\;x\in\mathcal{X}_{2},\end{aligned} \right.\quad\forall x\in\mathcal{X}.\] (B.1)

Here \(f\) is the LFNN function defined in (2.43), \(p=(p_{1},\cdots,p_{N_{a}})^{\top}\in\mathbb{R}^{N_{a}}\) is the output of the LFNN model that represents the wealth allocation to the assets, \(\mathcal{Z}\) is the feasibility domain defined in (2.37), and \(x=\big{(}t,W(t),\hat{W}(t)\big{)}^{\top}\in\mathcal{X}\) is a feature vector.

To prove (B.1), we verify the two scenarios (\(x\in\mathcal{X}_{1}\) and \(x\in\mathcal{X}_{2}\)) separately. When \(x\in\mathcal{X}_{2}\), it is easily verifiable that \(p=\mathbf{e}_{N_{l}+1}\) via the definition of the leverage-feasible activation function (2.44). Next, we verify that when \(x\in\mathcal{X}_{1}\), \(p\in\mathcal{Z}_{1}\). To prove this, we need to show that the constraints (2.29)-(2.32) are satisfied when \(x\in\mathcal{X}_{1}\). By the definition of (2.44), it is obvious that the long-only constraint (2.29) holds for long-only assets. It is also easy to verify that the summation constraint (2.30) is satisfied. This follows from the fact that

\[\sum_{i=1}^{N_{l}}p_{i}=l,\quad\text{and}\quad\sum_{i=N_{l}+1}^{N_{a}}p_{i}=1-l.\] (B.2)

The maximum leverage constraint (2.31) is also satisfied, as

\[\sum_{i=1}^{N_{l}}p_{i}=l=p_{max}\cdot\text{Sigmoid}(-o_{N_{a}+1})\leq p_{max}.\] (B.3)

Finally, the simultaneous shorting constraint (2.32) is satisfied. To see this, we examine the scenario when leverage occurs, i.e., \(\sum_{i=1}^{N_{l}}p_{i}=l>1\). Then, by the definition of (2.44), we know

\[p_{i}=(1-l)\cdot\frac{e^{o_{i}}}{\sum_{k=N_{l}+1}^{N_{a}}e^{o_{k}}}\leq 0,\;\forall i \in\{N_{l}+1,\cdots,N_{a}\}.\] (B.4)

Conversely, the same formula shows that if \(l\leq 1\), then \(p_{i}\geq 0,\forall i\), so shorting occurs only when leverage is used. Therefore, for any \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\), (B.1) is satisfied. This implies \(\mathbb{R}^{N_{\mathbf{\theta}}}\subseteq\mathcal{Z}_{\mathbf{\theta}}\).

### Proof of Lemma 2.2 and Theorem 2.3

**Lemma 2.2**.: (Structure of feasible control) Any feasible control function \(p:\mathcal{X}\mapsto\mathcal{Z}\), where \(\mathcal{Z}\) is defined in (2.38), has the function decomposition

\[p(x)=\varphi(\omega(x),x),\] (B.5)

where \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined in (2.46), i.e.

\[\varphi(z,x)=\Big{(}z_{N_{a}+1}\cdot(z_{1},\cdots,z_{N_{l}}),(1-z_{N_{a}+1}) \cdot(z_{N_{l}+1},\cdots,z_{N_{a}})\Big{)}^{\top}\cdot\mathbf{1}_{x\in \mathcal{X}_{1}}+\mathbf{e}_{N_{l}+1}\cdot\mathbf{1}_{x\in\mathcal{X}_{2}},\] (B.6)

and \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\). Here

\[\tilde{\mathcal{Z}}=\bigg{\{}z\in\mathbb{R}^{N_{a}+1}:\sum_{i=1}^{N_{l}}z_{i}=1, \sum_{i=N_{l}+1}^{N_{a}}z_{i}=1,z_{N_{a}+1}\leq p_{max},z_{i}\geq 0,\forall i \bigg{\}}.\] (B.7)

Proof.: We prove the lemma by existence.
Define \(\omega\) as

\[\omega(x)=\left\{\begin{array}{ll}\phi\big{(}p(x)\big{)},&\text{if }x\in\mathcal{X}_{1},\\ \tilde{z},&\text{if }x\in\mathcal{X}_{2},\end{array}\right.\] (B.8)

where \(\tilde{z}\) is an arbitrary fixed element of \(\tilde{\mathcal{Z}}\) (its choice is immaterial, since \(\varphi(z,x)=\mathbf{e}_{N_{l}+1}\) for any \(z\) when \(x\in\mathcal{X}_{2}\)), and where, for \(\forall z=(z_{1},\cdots,z_{N_{a}})^{\top}\in\mathcal{Z}_{1}\), \(y=\phi(z)\in\mathbb{R}^{N_{a}+1}\) is defined below:

\[\phi(z)\equiv y=\left\{\begin{array}{ll}y_{i}=\frac{z_{i}}{\sum_{j=1}^{N_{l}}z_{j}},\;i\in\{1,\cdots,N_{l}\};\quad y_{i}=\frac{z_{i}}{1-\sum_{j=1}^{N_{l}}z_{j}},\;i\in\{N_{l}+1,\cdots,N_{a}\};\quad y_{N_{a}+1}=\sum_{j=1}^{N_{l}}z_{j},&\text{if }\sum_{i=1}^{N_{l}}z_{i}\in(0,1)\cup(1,p_{max}],\\ y_{i}=z_{i},\;i\in\{1,\cdots,N_{l}\};\quad y_{i}=1/(N_{a}-N_{l}),\;i\in\{N_{l}+1,\cdots,N_{a}\};\quad y_{N_{a}+1}=1,&\text{if }\sum_{i=1}^{N_{l}}z_{i}=1,\\ y_{i}=1/N_{l},\;i\in\{1,\cdots,N_{l}\};\quad y_{i}=z_{i},\;i\in\{N_{l}+1,\cdots,N_{a}\};\quad y_{N_{a}+1}=0,&\text{if }\sum_{i=1}^{N_{l}}z_{i}=0.\end{array}\right.\] (B.9)

(In the last two cases, the block of \(y\) that is multiplied by zero in \(\varphi\) is set to an arbitrary probability vector so that \(y\in\tilde{\mathcal{Z}}\).) It can then be easily verified that \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\), and that \(p(x)=\varphi(\omega(x),x)\).

**Lemma B.1**.: (Approximation of controls with a specific structure) _Assume a control function \(p:\mathcal{X}\mapsto\mathcal{Z}\) has the structure_

\[p(x)=\Phi(\Omega(x),x),\;x\in\mathcal{X},\] (B.10)

_where \(\mathcal{X}\) is compact, \(\Omega\in C(\mathcal{X},\mathcal{Y})\), i.e. \(\Omega\) is a continuous mapping from \(\mathcal{X}\) to \(\mathcal{Y}\), and \(\Phi:\mathcal{Y}\times\mathcal{X}\mapsto\mathcal{Z}\) is Lipschitz continuous on \(\mathcal{Y}\times\mathcal{X}_{i}\), \(\forall i=1,\cdots,n\), where \(\{\mathcal{X}_{i},i=1,\cdots,n\}\) is a partition of \(\mathcal{X}\), i.e._

\[\left\{\begin{array}{ll}\bigcup_{i=1}^{n}\mathcal{X}_{i}=\mathcal{X},\\ \mathcal{X}_{i}\bigcap\mathcal{X}_{j}=\varnothing,\;\forall 1\leq i\neq j\leq n. \end{array}\right.\] (B.11)

_If there exist \(m\in\mathbb{N}\) and \(\Upsilon:\mathbb{R}^{m}\mapsto\mathcal{Y}\) such that_

1. \(\Upsilon\) _has a continuous right inverse on_ \(Im(\Upsilon)\)_,_

2. \(Im(\Upsilon)\) _is dense in_ \(\mathcal{Y}\)_,_

3. \(\partial Im(\Upsilon)\) _is collared,_

_then, \(\forall\epsilon>0\), there exists a choice of \(N_{h}\) and \(\boldsymbol{\theta}\) such that the fully connected feedforward neural network function \(\tilde{f}(\cdot;\boldsymbol{\theta})\) defined in (2.42) satisfies_

\[\sup_{x\in\mathcal{X}}\|\Phi\Big{(}\Upsilon\big{(}\tilde{f}(x;\boldsymbol{ \theta})\big{)},x\Big{)}-p(x)\|<\epsilon.\] (B.12)

Proof.: Let

\[L_{\Phi}=\max_{1\leq i\leq n}L_{i},\] (B.13)

where \(L_{i}\) is the Lipschitz constant for \(\Phi\) on \(\mathcal{Y}\times\mathcal{X}_{i}\).
Since \(\Omega\in C(\mathcal{X},\mathcal{Y})\) and \(\mathcal{X}\) is compact, following Kratsios and Bilokopytov (2020), we know that \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\) such that the corresponding FNN \(\tilde{f}(\cdot;\mathbf{\theta}):\mathcal{X}\mapsto\mathbb{R}^{m}\) defined in (2.42) satisfies

\[\sup_{x\in\mathcal{X}}\|\Upsilon\big{(}\tilde{f}(x;\mathbf{\theta})\big{)}-\Omega(x )\|<\epsilon/L_{\Phi}.\] (B.14)

Then

\[\sup_{x\in\mathcal{X}}\|\Phi\Big{(}\Upsilon\big{(}\tilde{f}(x; \mathbf{\theta})\big{)},x\Big{)}-p(x)\| =\sup_{1\leq i\leq n}\sup_{x\in\mathcal{X}_{i}}\|\Phi\Big{(} \Upsilon\big{(}\tilde{f}(x;\mathbf{\theta})\big{)},x\Big{)}-\Phi\Big{(}\Omega(x),x \Big{)}\|\] (B.15)

\[\leq\sup_{1\leq i\leq n}\sup_{x\in\mathcal{X}_{i}}L_{i}\cdot\Big{(} \|\Upsilon\big{(}\tilde{f}(x;\mathbf{\theta})\big{)}-\Omega(x)\|\Big{)}\] (B.16)

\[<\sup_{1\leq i\leq n}\frac{L_{i}}{L_{\Phi}}\epsilon\] (B.17)

\[\leq\epsilon.\] (B.18)

**Remark B.1**.: (Remark on Lemma B.1) Normally, the universal approximation theorem only applies to the approximation of continuous functions defined on a compact set (Hornik, 1991). Lemma B.1 extends the universal approximation theorem to a broader class of functions that have the structure of (B.10). Furthermore, Lemma B.1 provides guidance on constructing neural network functions that handle stochastic constraints on controls, which are usually difficult to address in stochastic optimal control problems. Consider the following example: the control \(p:\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\) has stochastic constraints such that \(p(\mathbf{x})\in[a(\mathbf{x}),b(\mathbf{x})]\), where \(a,b:\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\) are deterministic functions. This is a common setting in portfolio optimization problems in which allocation fractions to specific assets are subject to thresholds tied to the performance of the portfolio. With Lemma B.1 and a bit of engineering, one can construct a \(\Phi\) so that the corresponding neural network satisfies the constraints naturally, with the guarantee that such a neural network can approximate the control well.

We then proceed to prove Theorem 2.3.

**Theorem 2.3**.: (Approximation of optimal control) Following Assumption 2.7, \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\) such that the corresponding LFNN model \(f(\cdot;\mathbf{\theta})\) described in (2.43) satisfies the following:

\[\sup_{x\in\mathcal{X}}\|f(x;\mathbf{\theta})-p^{*}(x)\|<\epsilon.\] (B.19)

Proof.: From (2.43) and Lemma 2.1, we know that

\[f(x;\mathbf{\theta})=\psi\big{(}\tilde{f}(x;\mathbf{\theta}),x\big{)}=\varphi\Big{(} \zeta\big{(}\tilde{f}(x;\mathbf{\theta})\big{)},x\Big{)},\] (B.20)

where \(\tilde{f}\) is the FNN defined in (2.42) and \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathbb{R}^{N_{a}},\zeta: \mathbb{R}^{N_{a}+1}\mapsto\tilde{\mathcal{Z}}\) are defined in (2.46). It can be easily verified that \(\zeta\) satisfies the following:

1. \(\zeta\) has a continuous right inverse, e.g.

\[\zeta^{-1}:Im(\zeta)\mapsto\mathbb{R}^{N_{a}+1},\quad\zeta^{-1}(z)=\Bigg{(}\log (z_{1}),\cdots,\log(z_{N_{a}}),\sigma^{-1}(z_{N_{a}+1}/p_{max})\Bigg{)}^{\top},\] (B.21)

where \(\sigma^{-1}\) is the inverse of the sigmoid function.

2. \(Im(\zeta)\) is dense in \(\tilde{\mathcal{Z}}\). This is because \(\overline{Im(\zeta)}\), the closure of \(Im(\zeta)\), is \(\tilde{\mathcal{Z}}\).

3.
\(\partial Im(\zeta)\) is collared (Brown, 1962; Connelly, 1971; Baillif, 2022). Furthermore, consider the partition of \(\mathcal{X}\), \(\big{\{}\mathcal{X}_{1},\mathcal{X}_{2}\big{\}}\), which is defined in Definition 2.1. It is easily verifiable that \(\varphi\) is Lipschitz continuous on \(\tilde{\mathcal{Z}}\times\mathcal{X}_{1}\) and \(\tilde{\mathcal{Z}}\times\mathcal{X}_{2}\) respectively. Finally, according to Assumption 2.7, \(p^{*}(x)=\varphi\big{(}\omega^{*}(x),x\big{)}\), where \(\omega^{*}\in C(\mathcal{X},\tilde{\mathcal{Z}})\). Applying Lemma B.1 with \(\mathcal{Y}=\tilde{\mathcal{Z}},\Omega(\cdot)=\omega^{*}(\cdot)\), \(\Upsilon(\cdot)=\zeta(\cdot)\), and \(\Phi(\cdot,\cdot)=\varphi(\cdot,\cdot)\), we know that there exists \(N_{h}\in\mathbb{N}\), and \(\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\) such that the corresponding LFNN model \(f(x;\boldsymbol{\theta})=\varphi\Big{(}\zeta\big{(}\tilde{f}(x;\boldsymbol{ \theta})\big{)},x\Big{)}\) satisfies the following: \[\sup_{x\in\mathcal{X}}\|f(x;\boldsymbol{\theta})-p^{*}(x)\|<\epsilon.\] (B.22) ## Appendix C Comparing LFNN with closed-form solution In this section, we compare the performance of the strategy following the learned shallow LFNN model (which we refer to as the "neural network strategy" from now on) with the closed-form solution (2.20), and provide empirical validation of the LFNN approach. ### Approximate form under realistic assumptions We first note that the closed-form solution \(p^{*}\) defined in (2.20) is obtained under several unrealistic assumptions, namely continuous rebalancing, unlimited leverage, and continuing trading in insolvency.19 In practice, investors have constraints such as discrete rebalancing, limited leverage, and no trading when insolvent. For a meaningful comparison, instead of comparing the neural network strategy with the closed-form solution \(p^{*}\) directly, we compare the neural network strategy with an easily obtainable approximation to the closed-form solution which satisfies realistic constraints. Footnote 19: Note that we consider a two-asset scenario here, thus the scalar \(p^{*}\in\mathbb{R}\) (allocation fraction for the stock index) fully describes the allocation strategy \(\boldsymbol{p}^{*}\), since \(\boldsymbol{p}^{*}=(p^{*},1-p^{*})^{\top}\). In particular, we consider an equally-spaced discrete rebalancing schedule \(\mathcal{T}_{\Delta t}\) defined as \[\mathcal{T}_{\Delta t}=\Big{\{}t_{i}:\;i=0,\cdots,N\Big{\}},\] (C.1) where \(t_{i}=i\Delta t\), and \(\Delta t=T/N\). Then, the _clipped form_\(\bar{p}_{\Delta t}:\mathcal{T}_{\Delta t}\times\mathbb{R}^{3}\mapsto \mathbb{R}\) is defined as \[\text{(Clipped form)}:\quad\bar{p}_{\Delta t}(t_{i},\bar{W}_{\Delta t}(t_{i} ),\hat{W}_{\Delta t}(t_{i}),\hat{\varrho})=\min\Bigg{(}\max\Big{(}p^{*}(t_{i},\bar{W}_{\Delta t}(t_{i}),\hat{W}_{\Delta t}(t_{i}),\hat{\varrho}),p_{min} \Big{)},p_{max}\Bigg{)}.\] (C.2) Here \([p_{min},p_{max}]\), where \(p_{min}=0\) and \(p_{max}\geq 1\), is the allowed range, \(\bar{W}_{\Delta t}(t_{i})\) is the wealth of the active portfolio at \(t_{i}\) following \(\bar{p}_{\Delta t}\) from \(t_{0}\) to \(t_{i}\), \(\hat{W}_{\Delta t}(t_{i})\) is the wealth of the benchmark portfolio at \(t_{i}\) following the fixed-mix strategy described by constant allocation fraction \(\hat{\varrho}\), but only rebalanced discretely according to \(\mathcal{T}_{\Delta t}\). 
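For concreteness, the clipped form (C.2) is trivial to implement once the closed-form control \(p^{*}\) of (2.20) is available; a minimal Python sketch follows (the function name and default bounds are ours):

```python
import numpy as np

def clipped_control(p_star, t_i, w, w_hat, rho_hat, p_min=0.0, p_max=1.3):
    """Clipped form (C.2): evaluate the closed-form control (2.20) at a
    rebalancing date t_i and clip it to the allowed range [p_min, p_max].
    `p_star` is assumed to be a callable implementing (2.20)."""
    # np.clip(x, lo, hi) computes exactly min(max(x, lo), hi)
    return np.clip(p_star(t_i, w, w_hat, rho_hat), p_min, p_max)
```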
Clearly, the allocation strategy from \(\bar{p}_{\Delta t}\) follows the discrete schedule of \(\mathcal{T}_{\Delta t}\), and satisfies the leverage constraint that \(\bar{p}_{\Delta t}\in[p_{min},p_{max}]\). \(\bar{p}_{\Delta t}\) approaches the closed-form solution \(p^{*}\) as \(\Delta t\downarrow 0,p_{min}\downarrow-\infty\) and \(p_{max}\uparrow\infty\). We note that a similar clipping idea is explored in Vigna (2014) in the context of closed-form solutions for multi-period mean-variance asset allocation. However, it should be emphasized that the clipped form \(\bar{p}_{\Delta t}\) with finite \((p_{min},p_{max})\) is a feasible, but in general sub-optimal, control for the leverage-constrained CD problem (2.12).

We then address the assumption that trading continues when insolvent, i.e., when the wealth of the portfolio reaches zero. While this assumption is necessary for the mathematical derivation of the closed-form solution, it is by no means reasonable for practitioners. In the continuous rebalancing case (no jumps), if the control (allocation) is bounded, the wealth of the portfolio can never become negative (Wang and Forsyth, 2012). However, with discrete rebalancing, even with a bounded control, as long as the upper bound \(p_{max}>1\), it is theoretically possible for the portfolio value to become negative. We address this assumption by applying an overlay on strategies: in the case of insolvency, we assume the manager liquidates the long-only positions and allocates the debt (negative wealth) to a shortable (bond) asset (consistent with Assumption 2.6), allowing the outstanding debt to accumulate until the end of the investment horizon. Going forward, when we refer to any strategy (e.g., the neural network strategy or the clipped form), we mean the strategy with this overlay applied. We remark that in practice, this overlay has little effect. In numerical experiments with 10,000 samples of observed wealth trajectories (based on the calibrated jump-diffusion model or bootstrap resampled data paths), we do not observe a single wealth trajectory that ever hits negative wealth, for any strategy (e.g., the neural network strategy, the clipped form, etc.).

In summary, the clipped form satisfies the realistic constraints and is a comparable benchmark for the neural network strategy. In the following section, we will numerically compare the performance of the clipped form, the neural network strategy, and the closed-form solution.

### Comparison: LFNN strategy vs clipped-form solution

To assess and compare the performance of the neural network strategy and the clipped form, we assume the investment scenario described in Table C.1. We assume that the stock index and the bond index prices follow the double exponential jump model (2.15); see, e.g., Kou (2002); Kou and Wang (2004). That is, for the jump variable \(\xi_{i}\), \(y_{i}=\log(\xi_{i})\) follows the double exponential distribution with density function \(g_{i}(y_{i})\) defined as follows:

\[g_{i}(y_{i})=\nu_{i}\iota_{i}e^{-\iota_{i}y_{i}}\mathbf{1}_{y_{i}\geq 0}+(1- \nu_{i})\varsigma_{i}e^{\varsigma_{i}y_{i}}\mathbf{1}_{y_{i}<0},\;i=1,2,\] (C.3)

where \(\nu_{i}\) is the probability of an upward jump, and \(\iota_{i}\) and \(\varsigma_{i}\) are the parameters of the upward and downward jump distributions, respectively.
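As an aside, jump sizes with density (C.3) are easy to simulate, since \(y_{i}\) is an exponential magnitude with a random sign. A minimal NumPy sketch (function name ours), using the calibrated stock-index parameters \(\nu_{1},\iota_{1},\varsigma_{1}\) from Table G.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_jump(nu, iota, varsigma, size=1):
    """Sample y = log(xi) from the double exponential density (C.3):
    with probability nu,     y ~  Exp(iota)     (upward jump);
    with probability 1 - nu, y ~ -Exp(varsigma) (downward jump)."""
    up = rng.random(size) < nu
    return np.where(up,
                    rng.exponential(1.0 / iota, size),
                    -rng.exponential(1.0 / varsigma, size))

# Calibrated stock-index jump parameters from Table G.1: nu1=0.2, iota1=7.13, varsigma1=7.33
y = sample_log_jump(0.2, 7.13, 7.33, size=10_000)
```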
The double exponential jump-diffusion model allows the flexibility of modeling asymmetric upward and downward jumps in asset prices, which seems an appropriate assumption for inflation regimes.20

Footnote 20: We remind the reader that the closed-form solution is derived under the jump-diffusion model.

Using the threshold technique (Mancini, 2009; Cont et al., 2011; Dang and Forsyth, 2016), we calibrate the double exponential jump-diffusion models to the historical high-inflation periods described in Section 3.1. The calibrated parameters can be found in Appendix G. Then, we construct a training data set \(\mathbf{Y}\) and a testing data set \(\mathbf{Y}^{test}\) by sampling the calibrated model, each with 10,000 samples. The neural network strategy follows the LFNN model learned from \(\mathbf{Y}\). We then evaluate the performance of the neural network strategy and the clipped form (C.2) on the testing data set \(\mathbf{Y}^{test}\). Specifically, we compare the value of the CD objective function (2.12) for the neural network strategy and the clipped form on \(\mathbf{Y}^{test}\). This training/testing process is repeated for rebalancing frequencies from monthly to annually, as described in Table C.1.

\begin{table} \begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Assets & CRSP cap-weighted index (real)/30-day T-bill (U.S.) (real) \\ Index Samples & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth/annual cash injection & 100/10 \\ Rebalancing frequency & Monthly, quarterly, semi-annually, annually \\ Maximum leverage & 1.3 \\ Benchmark equity percentage & 0.7 \\ Outperformance target rate \(\beta\) & 1\% (100 bps) \\ \hline \hline \end{tabular} \end{table} Table C.1: Investment scenario.

In Table C.2, we can see that the neural network strategy consistently outperforms the clipped form in terms of the objective function value for all rebalancing frequencies. From Table C.2, we can also see that the objective function values of both the neural network strategy and the clipped form converge at roughly a first-order rate as \(\Delta t\downarrow 0\). Assuming this to be true, we extrapolate the solution to \(\Delta t=0\) using Richardson extrapolation. These extrapolated values are estimates of the exact value of the continuous-time CD objective function (2.11) for the clipped form and the neural network strategy. The extrapolated neural network objective function value remains lower than the (suboptimal) extrapolated clipped-form value but, of course, larger than the value of the unconstrained closed-form solution.

Finally, we compare the neural network allocation strategy with the clipped form strategy. Specifically, in Figure C.1, we consider the case of monthly rebalancing and present scatter plots of the allocation fraction in the stock index with respect to time \(t\) and the ratio between the wealth of the active portfolio \(W(t)\) and the elevated target \(e^{\beta t}\hat{W}(t)\). For simplicity, we call this ratio the "tracking ratio".
We plot the 3-tuple \(\left(\frac{W(t)}{e^{\beta t}\hat{W}(t)},t,p_{1}(W(t),\hat{W}(t),t)\right)\) (obtained from the evaluation of the strategies on samples from \(\mathbf{Y}^{test}\)) by using time \(t\) as the x-axis, the tracking ratio \(\frac{W(t)}{e^{\beta t}\hat{W}(t)}\) as the y-axis, and the values of the corresponding allocation fraction to the cap-weighted index \(p_{1}(W(t),\hat{W}(t),t)\) to color the scattered dots on the plot. A darker shade indicates a higher allocation fraction.

As we can see from Figure C.1, the stock allocation fraction of the neural network strategy behaves similarly to the stock allocation fraction of the clipped form. Both strategies invest more wealth in the stock when the tracking ratio is lower, which is consistent with the insights we obtained in Section 2.4.1. In addition, the transition patterns of the allocation fractions of the two strategies are also highly similar. One can almost draw an imaginary horizontal dividing line around \(\frac{W(t)}{e^{\beta t}\hat{W}(t)}=0.9\) that separates high stock allocation from low stock allocation for both strategies.

\begin{table} \begin{tabular}{l c c c c c} \hline \multicolumn{6}{c}{Closed-form solution objective function value: 418} \\ \hline Strategy & \(\Delta t=1\) & \(\Delta t=1/2\) & \(\Delta t=1/4\) & \(\Delta t=1/12\) & \(\Delta t=0\) \\ \hline Clipped form & 545 & 504 & 479 & 467 & 461 (extrapolated) \\ \hline Neural network & 537 & 498 & 476 & 464 & 458 (extrapolated) \\ \hline \end{tabular} \end{table} Table C.2: CD objective function values. Results shown are evaluated on \(\mathbf{Y}^{test}\); the lower, the better.

We remark that a common criticism of neural networks is their lack of interpretability compared with simpler counterparts such as regression models (Rudin, 2019). In this section, we see that the neural network strategy closely resembles the closed-form solution for the CD objective. The closed-form solution, in turn, complements the neural network model and offers an alternative way of interpreting results obtained from the neural network.

## Appendix D Moving-window inflation filter

### Filtering algorithm

Algorithm D.1 presents the pseudocode for the moving-window filtering algorithm.

```
Data:   CPI[i]; i=1,...,N   /* CPI index */
        Cutoff              /* high-inflation cutoff: annualized */
        Delta_t             /* CPI index time interval */
        K                   /* smoothing window size */
Result: Flag[i]; i=1,...,N  /* = 1 high-inflation month; = 0 otherwise */
/* initialization */
Flag[i] = 0; i=1,...,N;
for( i=1,...,N-K ) {
    if log( CPI[i+K]/CPI[i] )/(K*Delta_t) > Cutoff then
        for( j=0,...,K ) {
            Flag[i+j] = 1;
        }
    end if
}
```
**Algorithm D.1** Pseudocode for the moving-window inflation filter.

### Effect of moving window size

Figure D.1 shows the filtering results for windows of size 12, 60, and 120 months. We can see that the five-year window produces two obvious inflation regimes, 1940:8-1951:7 and 1968:9-1985:10, which correspond to well-known market shocks (i.e., the Second World War and price controls; the oil price shocks and stagflation of the seventies). Increasing the window size to 10 years results in plots that look similar to those for the five-year window, but the number of months in each window increases, and the average inflation rate is lower.
Since our objective is to determine the effect of high-inflation periods on allocation strategies, we choose the five-year window size.

### Asset performance during high inflation

To gain some intuition on the behavior of asset returns during the inflation periods, we assume that each real (CPI-adjusted) index follows geometric Brownian motion (GBM). For example, given an index with value \(S\), then

\[dS = \mu S\ dt+\sigma S\ dZ,\] (D.1)

where \(dZ\) is the increment of a Wiener process. We use maximum likelihood estimation to fit the drift rate \(\mu\) (expected arithmetic return) and volatility \(\sigma\) in each regime, for each index, as shown in Table D.1. We also show a series constructed by converting the indexes in each regime to returns, concatenating the two return series, and converting the concatenated return series back to an index. This concatenated index does not, of course, correspond to an actual historical index, but is a pseudo-index constructed from high-inflation regimes. It amounts to a worst-case sequence of returns, in terms of the duration of historical inflation periods, that could plausibly be expected during a long period of high inflation.

It is striking that in each historical inflation regime (i.e., 1940:8-1951:7 and 1968:9-1985:10) in Table D.1, the drift rate \(\mu\) for the equal-weighted index is much larger than the drift rate for the cap-weighted index. We can observe that the mean geometric return for the cap-weighted index, in the period 1968:9-1985:10, was only about one percent per year. It is also noticeable that bonds performed very poorly in the period 1940:8-1951:7. As well, during the period 1968:9-1985:10, there was essentially no term premium for 10-year treasuries, compared with 30-day T-bills. In addition, the 10-year treasury index had much higher volatility compared to the 30-day T-bill index. Looking at the concatenated series, it appears that 30-day T-bills are arguably the better defensive asset here, since the volatility of this index is quite low (but with a negative (real) drift rate).

\begin{table} \begin{tabular}{l r r r} \hline Index & \(\mu\) & \(\sigma\) & \(\mu-\sigma^{2}/2\) \\ \hline \multicolumn{4}{c}{1940:8-1951:7} \\ \hline CapWt & 0.079 & 0.140 & .069 \\ EqWt & 0.145 & 0.190 & .127 \\ 10 Year Treasury & -0.035 & 0.036 & -.036 \\ 30-day T-bill & -0.050 & 0.029 & -.050 \\ \hline \multicolumn{4}{c}{1968:9-1985:10} \\ \hline CapWt & 0.026 & 0.164 & .013 \\ EqWt & 0.065 & 0.220 & .041 \\ 10 Year Treasury & 0.011 & 0.093 & .007 \\ 30-day T-bill & 0.009 & 0.012 & .009 \\ \hline \multicolumn{4}{c}{Concatenated: 1940:8-1951:7 and 1968:9-1985:10} \\ \hline CapWt & 0.049 & 0.156 & .038 \\ EqWt & 0.098 & 0.209 & .076 \\ 10 Year Treasury & -0.008 & 0.076 & -.011 \\ 30-day T-bill & -0.014 & 0.022 & -.014 \\ \hline \end{tabular} \end{table} Table D.1: GBM parameters for the indexes shown. All indexes are real (deflated). \(\mu\) is the expected annualized arithmetic return. \(\sigma\) is the annualized volatility. (\(\mu-\sigma^{2}/2\)) is the annualized mean geometric return.

Figure D.1: High-inflation regimes, identified using the moving-window method, with the window size shown. The cutoff for _high-inflation_ regimes was 0.05. High-inflation months have a label value of one, and low-inflation months have a label value of zero. CPI data from the historical period 1926:1-2022:1.

## Appendix E Bootstrap resampling

### Stationary block bootstrap algorithm

Algorithm E.1 presents the pseudocode for the stationary block bootstrap.
See Ni et al. (2022) for more discussion.

```
/* initialization */
bootstrap_samples = [ ];
/* loop until the total number of required samples is reached */
while True do
    /* choose a random starting index in [1,...,N]; N is the index of the last historical sample */
    index = UniformRandom( 1, N );
    /* the actual blocksize follows a shifted geometric distribution with expected value exp_block_size */
    blocksize = GeometricRandom( 1/exp_block_size );
    for( i = 0; i < blocksize; i = i + 1 ) {
        /* if the chosen block exceeds the range of the historical data array, do a circular bootstrap */
        if index + i > N then
            bootstrap_samples.append( historical_data[ index + i - N ] );
        else
            bootstrap_samples.append( historical_data[ index + i ] );
        end if
        if bootstrap_samples.len() == number_required then
            return bootstrap_samples;
        end if
    }
end while
```
**Algorithm E.1** Pseudocode for stationary block bootstrap.

### Effect of blocksize

As discussed, we use bootstrap resampling (Politis and Romano, 1994; Politis and White, 2004; Patton et al., 2009; Dichtl et al., 2016; Anarkulova et al., 2022) to analyze the performance of the equal-weighted index compared to the cap-weighted index during periods of high inflation (our concatenated series: 1940:8-1951:7, 1968:9-1985:10). First, we examine the effect of the expected blocksize parameter in the bootstrap resampling algorithm. We use a paired sampling approach, where we simultaneously draw returns from the bond and stock indexes.21 The algorithm in Politis and White (2004) was developed for single-asset time series. It is therefore important to assess the effect of the blocksize on numerical results. In Table E.1, we examine the effect of different blocksizes on the statistics of stationary block bootstrap resampling.

Footnote 21: This preserves correlation effects.

Perhaps a more visual way of analyzing the effect of the expected blocksize is shown in Figure E.1, where we show the cumulative distribution function (CDF) of the final wealth after 10 years, for different blocksizes. We show the CDF since this gives us a visualization of the entire final wealth distribution, not just a few summary statistics. Since the data frequency is at one-month intervals, specifying a geometric mean expected blocksize of one month means that the blocksize is always a constant one month. This effectively means that we are assuming that the data are i.i.d. However, the one-month results are an outlier compared to the other choices of expected blocksize. There is hardly any difference between the CDFs for any choice of expected blocksize in the range of 3-24 months. In this article, we use an expected blocksize of 6 months.

### Bootstrapping from non-contiguous data segments

As discussed in Section 3.2, we have identified two historical inflation regimes: 1940:8-1951:7 and 1968:9-1985:10. As traditional bootstrap methods assume a single contiguous segment of underlying data, the question naturally arises of how to bootstrap appropriately from two non-contiguous data segments. In the main sections of the article, we first concatenate the two data segments, then treat the concatenated data samples as a complete segment and apply bootstrap methods to them.
This method is in line with the work of Anarkulova et al. (2022), in which the authors concatenate stock returns from different countries and bootstrap from the concatenated series.

\begin{table} \begin{tabular}{l c c c c} \hline Expected blocksize (months) & Median\([W_{T}]\) & E\([W_{T}]\) & std\([W_{T}]\) & 5th Percentile \\ \hline 1 & 170.9 & 191.6 & 97.6 & 78.6 \\ 3 & 174.6 & 202.9 & 120.4 & 69.3 \\ 6 & 174.2 & 204.2 & 125.9 & 66.8 \\ 12 & 175.5 & 204.4 & 124.2 & 67.9 \\ 24 & 179.2 & 205.1 & 118.4 & 68.7 \\ \hline \end{tabular} \end{table} Table E.1: Effect of expected blocksize on the statistics of the final wealth \(W(T)\) at \(T=10\) years. Constant weight, scenario in Table F.1. Equity weight: 0.7, rebalanced monthly. Bond index: 30-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth 100. Bootstrap resampling, \(10,000\) resamples.

Figure E.1: Cumulative distribution function (CDF) of final wealth \(W(T)\) at \(T=10\) years: the effect of expected blocksize. Constant weight, scenario in Table F.1. Equity weight: 0.7, rebalanced monthly. Bond index: 30-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth 100. Bootstrap resampling, \(10,000\) resamples.

A second intuitive bootstrap method would be to bootstrap randomly from each of the two segments. Briefly, each bootstrap resample consists of (i) selecting a random segment (with probability proportional to the length of the segment), (ii) selecting a random starting date in the selected segment, (iii) then selecting a block (of random size) of consecutive returns from this start date, (iv) in the event that the end of the data set in a segment is reached, using circular block bootstrap resampling within that segment, and (v) repeating this process until a sample of the total desired length is obtained.

We compare the bootstrapped data from concatenated segments and separate segments by evaluating the performance of the \(70\%/30\%\) equal-weighted index/T-bill fixed-mix portfolio, using the investment scenario described in Table F.1. We can observe from Table E.2 that the strategy performance on bootstrap resampled data using the two methods varies only slightly. This indicates that the two methods do not yield much difference for practical purposes. This is indeed expected: the difference between the two methods only arises when a random block crosses the edge of a segment, which occurs with very low probability. Apart from this low-probability situation, the two bootstrap methods are identical.

## Appendix F Comparing passive strategies in high inflation regimes

In this section, we compare the performance of two fixed-mix strategies. The first strategy, the "EqWt" strategy, maintains a \(70\%\) allocation to the equal-weighted index and a \(30\%\) allocation to the \(30\)-day T-bill index. The second strategy, the "CapWt" strategy, maintains a \(70\%\) allocation to the cap-weighted index and a \(30\%\) allocation to the \(30\)-day T-bill index.
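Before comparing the two strategies, we note that the terminal wealth samples used below can be computed from resampled return paths in a few lines; a minimal sketch (function name ours, monthly rebalancing to constant weights means the monthly portfolio return is the weighted average of the asset returns):

```python
import numpy as np

def fixed_mix_terminal_wealth(equity_ret, bill_ret, equity_weight=0.7, w0=100.0):
    """Terminal wealth of a monthly-rebalanced fixed-mix portfolio.

    equity_ret, bill_ret: arrays of shape (n_paths, n_months) of real monthly
    returns, e.g. generated by the stationary block bootstrap of Appendix E.1.
    """
    port_ret = equity_weight * equity_ret + (1.0 - equity_weight) * bill_ret
    return w0 * np.prod(1.0 + port_ret, axis=1)

# The empirical CDFs of Figure F.1 below are then the sorted terminal wealth
# samples of the EqWt and CapWt portfolios plotted against empirical quantiles.
```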
Figure F.1 compares the CDFs (cumulative distribution functions) of the terminal wealth of the EqWt strategy and the CapWt strategy, based on \(10,000\) block bootstrap resampled data samples (Politis and Romano, 1994; Dichtl et al., 2016; Anarkulova et al., 2022) from the concatenated CRSP time series from 1940:8-1951:7 and 1968:9-1985:10, with an expected blocksize of six months. Both strategies assume an initial wealth of 100 with no further cash injections or withdrawals; the investment horizon is 10 years, with monthly rebalancing to maintain the constant weights in the portfolio (see also Table F.1).

\begin{table} \begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Equity market indexes & CRSP cap-weighted/equal-weighted index (real) \\ Bond index & 30-day T-bill (U.S.) (real) \\ Index Samples & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth & 100 \\ Rebalancing frequency & Monthly \\ \hline \hline \end{tabular} \end{table} Table F.1: Investment scenario.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & Median[\(W_{T}\)] & E[\(W_{T}\)] & std[\(W_{T}\)] & 5th Percentile \\ \hline Bootstrap from concatenated segments & 174.2 & 204.2 & 125.9 & 66.8 \\ Bootstrap from separate segments & 176.9 & 208.0 & 132.4 & 65.4 \\ \hline \hline \end{tabular} \end{table} Table E.2: Effect of bootstrap method (bootstrap from concatenated segments vs. bootstrap from separate segments) on the statistics of the final wealth \(W(T)\) at \(T=10\) years. Constant weight, scenario in Table F.1. Equity weight: \(0.7\), rebalanced monthly. Bond index: \(30\)-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth \(100\). Bootstrap resampling, \(10,000\) resamples.

We first recall the concept of _partial stochastic dominance_. Suppose two investment strategies \(A\) and \(B\) are evaluated on a set of data samples under the same investment scenario. We consider the CDFs of terminal wealth \(W\) associated with both strategies. Specifically, we denote the CDF of strategy A by CDF\({}_{A}(W)\) and that of strategy B by CDF\({}_{B}(W)\). Let \(W_{T}\) be the random wealth at time \(T\) and \(W\) be a possible wealth realization; then we can interpret CDF\({}_{A}(W)\) as

\[\text{CDF}_{A}(W) = Prob(W_{T}\leq W).\] (F.1)

Following Atkinson (1987); van Staden et al. (2021), we define partial first-order stochastic dominance.

**Definition F.1** (Partial first order stochastic dominance).: _Given an investment strategy A which generates a CDF of terminal wealth \(W\) given by CDF\({}_{A}(W)\), and a strategy B with CDF\({}_{B}(W)\), then strategy \(A\) partially stochastically dominates strategy B (to first order) in the interval \((W_{lo},W_{hi})\) if_

\[\text{CDF}_{A}(W) \leq \text{CDF}_{B}(W),\ \forall W\in(W_{lo},W_{hi}),\] (F.2)

_with strict inequality for at least one point in \((W_{lo},W_{hi})\)._

The arguments for relaxing the usual definition of stochastic dominance are given in Atkinson (1987); van Staden et al. (2021). Given some initial wealth \(W_{0}\), if \(W_{hi}\gg W_{0}\), then an investor may not be concerned that strategy \(A\) underperforms strategy \(B\) at these very high wealth values. In this case, the investor is fabulously wealthy. Suppose instead that \(W_{lo}\ll W_{0}\), and assume CDF\({}_{A}(W_{lo})=\text{CDF}_{B}(W_{lo})\). As an extreme example, suppose \(W_{lo}=\)2 cents.
The fact that strategy \(B\) has a higher probability of ending up with one cent, compared with strategy \(A\), is cold comfort, and not particularly interesting. On the other hand, suppose CDF\((W_{lo})\ll 1\). Again, an investor may not be interested in events with exceptionally low probabilities.

Remarkably, the EqWt strategy appears to partially stochastically dominate the CapWt strategy, since the CDF curve of the EqWt strategy lies almost entirely to the right of the CDF curve of the CapWt strategy, except at very low probability values; see Atkinson (1987); van Staden et al. (2021) for a discussion of partial stochastic dominance. Close examination shows that the curves cross at the point \(F_{EqWt}(W_{lo})=F_{CapWt}(W_{lo})\simeq 0.02\), with a slight underperformance of the EqWt strategy compared to the CapWt strategy in this extreme left tail.

Figure F.1: Cumulative distribution function of final real wealth \(W\) at \(T=10\) years. Bootstrap resampling, expected blocksize six months, \(10,000\) resamples (Appendix E.1). Data: concatenated returns, 1940:8-1951:7, 1968:9-1985:10. Scenario described in Table F.1.

The fact that the EqWt strategy partially stochastically dominates the CapWt strategy suggests that the equal-weighted stock index is a better choice for the stock index than the cap-weighted stock index during high-inflation times. We note that, using recent data22, the situation is not as clear (Taljaard and Mare, 2021), since the equal-weighted index appears to underperform. However, Taljaard and Mare (2021) suggest that this is due to the recent market concentration in tech stocks.23 In fact, a plausible explanation for the (historical) outperformance of an equal-weighted index is that it is simply due to the small-cap effect, which was not widely known until about 1981 (Banz, 1981). Plyakha et al. (2021) acknowledge that the equal-weighted index has significant exposure to the size factor. However, Plyakha et al. (2021) argue that the equal-weighted index also has a larger exposure to the value factor. In addition, there is a significant _alpha_ effect due to the contrarian strategy of frequent rebalancing to equal weights. It would appear to be simplistic to dismiss an equal-weight strategy on the grounds that it is simply a small-cap effect that has become less effective.

Footnote 22: Since about 2010. Of course, this is outside a period of sustained high inflation.

Footnote 23: As of February 2023, Apple, Microsoft, Amazon and Alphabet (A and C) in total comprised 17% of the market capitalization of the S&P 500.

## Appendix G Calibrated synthetic model parameters

Table G.1 lists the calibrated parameters of the double exponential jump-diffusion model.

## Appendix H Comparison of CD and CS objectives

In this section, we numerically compare the CS objective function (2.13) with the CD objective function (2.11). As we briefly discussed in Section 2.3, one caveat of the CD objective function is that it penalizes not only underperformance relative to the elevated target but also outperformance over the elevated target. In practice, outperformance of the elevated target is favorable, and managers may not want to penalize the strategy when it happens. Therefore, in such cases, the cumulative quadratic shortfall (CS) objective (2.13) and (2.14) may be more appropriate.
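To make the distinction concrete, a minimal NumPy sketch of the two penalty terms follows (function names ours; the \(\epsilon W_{\mathbf{\theta}}(T)\) regularization term of (H.1) below is omitted):

```python
import numpy as np

def cd_penalty(W, W_hat, t, beta):
    """Cumulative quadratic tracking difference, cf. (2.11): penalizes
    deviations on BOTH sides of the elevated target e^{beta t} * W_hat.
    W, W_hat, t: arrays over the rebalancing dates (last axis)."""
    gap = W - np.exp(beta * t) * W_hat
    return np.sum(gap ** 2, axis=-1)

def cs_penalty(W, W_hat, t, beta):
    """Cumulative quadratic shortfall, cf. (2.13): penalizes only
    underperformance relative to the elevated target."""
    gap = W - np.exp(beta * t) * W_hat
    return np.sum(np.minimum(gap, 0.0) ** 2, axis=-1)
```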
For the remainder of the paper, we focus on the discrete-time CS problem with the LFNN parameterization and the equally-spaced rebalancing schedule \(\mathcal{T}_{\Delta t}\) defined in Appendix C.1, i.e.,

\[(\text{Parameterized}\;CS(\beta)):\quad\inf_{\mathbf{\theta}\in\mathbb{R}^{N_{ \mathbf{\theta}}}}\mathbb{E}_{f(\cdot;\mathbf{\theta})}^{(t_{0},w_{0})}\Bigg{[}\sum_{ t\in\mathcal{T}_{\Delta t}}\Big{(}\min\big{(}W_{\mathbf{\theta}}(t)-e^{\beta t} \hat{W}(t),0\big{)}\Big{)}^{2}+\epsilon W_{\mathbf{\theta}}(T)\Bigg{]}.\] (H.1)

The CS objective function in (H.1) only penalizes underperformance against the elevated target. Here \(\epsilon W_{\mathbf{\theta}}(T)\) is a regularization term. We remark that problem (H.1) without the regularization term can be ill-posed. To see this, consider a case where \(W_{\mathbf{\theta}}(t)\gg e^{\beta t}\hat{W}(t)\) for some \(t\in[t_{0},T]\). In this case, the future cumulative quadratic shortfall (on \([t,T]\)) will almost surely be zero without the regularization term, so the control from then on has no effect on the objective function under that scenario. We choose \(\epsilon\) to be a small positive scalar. As William Bernstein once said, "If you have won the game, stop playing." If one has accumulated as much wealth as Warren Buffett, then it does not matter what assets she invests in. The positive regularization factor \(\epsilon\) forces the strategy to put all wealth into less risky assets when the portfolio has already performed extremely well.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \(\mu_{1}\) & \(\sigma_{1}\) & \(\lambda_{1}\) & \(\nu_{1}\) & \(\iota_{1}\) & \(\varsigma_{1}\) & \(\mu_{2}\) & \(\sigma_{2}\) & \(\lambda_{2}\) & \(\nu_{2}\) & \(\iota_{2}\) & \(\varsigma_{2}\) & \(\rho\) \\ \hline 0.051 & 0.146 & 0.178 & 0.2 & 7.13 & 7.33 & -0.014 & 0.017 & 0.321 & 0 & N/A & 44.48 & 0.14 \\ \hline \hline \end{tabular} \end{table} Table G.1: Estimated annualized parameters for the double exponential jump-diffusion model (C.3), from the CRSP cap-weighted stock index and 30-day U.S. T-bill index deflated by the CPI. Sample period: concatenated 1940:8-1951:7 and 1968:9-1985:10.

We design a numerical experiment to compare the CS objective with the symmetric CD objective in the following problem (H.2), with the same LFNN parameterization and equally-spaced rebalancing schedule \(\mathcal{T}_{\Delta t}\):

\[(\text{Parameterized }CD(\beta)):\quad\inf_{\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{ \theta}}}}\mathbb{E}_{f(\cdot;\mathbf{\theta})}^{(t_{0},w_{0})}\Bigg{[}\sum_{t\in \mathcal{T}_{\Delta t}}\big{(}W_{\mathbf{\theta}}(t)-e^{\beta t}\hat{W}(t)\big{)}^{ 2}\Bigg{]}.\] (H.2)

Specifically, we adopt the investment scenario in Table C.1 with \(\beta=0.02\) and reuse the training and testing data sets simulated from the calibrated double exponential jump-diffusion model. The neural network strategies follow the LFNN models trained on the training data set \(\mathbf{Y}\) under the CS and CD objectives, respectively. We then evaluate both strategies on the same testing data set \(\mathbf{Y}^{test}\). We compare the _wealth ratio_, i.e., the wealth of the managed portfolio divided by the wealth of the benchmark portfolio, over time, for both strategies. The wealth ratio reflects how well the active strategy performs against the benchmark strategy along the investment horizon; a higher wealth ratio is better. Below, we show the percentiles of the wealth ratio for both strategies evaluated on \(\mathbf{Y}^{test}\).
We can see from Figure H.1 that the CS strategy (the neural network strategy trained under the CS objective) yields a more favorable wealth ratio than the CD strategy (the neural network strategy trained under the CD objective). On average, the CS strategy achieves a consistently higher terminal wealth ratio than the CD strategy. Even in the 20th percentile case, the CS strategy lags initially but recovers over time.24 The result indicates that the CS objective might be a wiser choice for managers in practice. In the following numerical experiments with bootstrap resampled data, we use the CS objective (H.1) instead of the CD objective (H.2).25

Footnote 24: The CS strategy starts with a higher allocation to the stock, and thus encounters more volatility early on.

Figure H.1: Percentiles of the wealth ratio of the neural network strategy learned under the cumulative quadratic tracking difference (CD) objective and the neural network strategy learned under the cumulative quadratic shortfall (CS) objective. The results shown are based on evaluations on the testing data set \(\mathbf{Y}^{test}\).

## Appendix I Experiments with non-zero borrowing premium

In Section 3, we conducted the numerical experiments assuming the borrowing premium is zero. This assumption is based on the fact that large sovereign wealth funds are often considered to have almost risk-free credit ratings, due to their state-backed nature. In other words, we assume that sovereign wealth funds can borrow funding at the same rate as risk-free treasury bills. This assumption may be too benign for general public funds. In general, it is unlikely that a non-sovereign wealth fund can borrow at a risk-free rate. However, the actual borrowing cost within large public funds is often unavailable. For this reason, we use the yields of corporate bonds issued by corporations with credit ratings similar to those of these large public funds as an approximation to the borrowing cost. Currently, large public funds such as the Blackstone Group or Apollo Global Management are rated between Aaa and Baa by Moody's. We obtain the nominal corporate bond yields with Moody's Aaa (Moody's, 2023a) and Baa (Moody's, 2023b) ratings and adjust them with CPI returns. During the two high-inflation regimes we have identified, Aaa-rated corporate bonds have an average real yield of 0.7%, while Baa-rated bonds have 1.8%. Taking an average of the two, we use 1.25% as an estimate for the real yield of corporate bonds as well as the borrowing cost of large public funds.26 As discussed in Appendix D.3, the average real return for the T-bill index is -1.4%. This gives us an average borrowing premium rate of 2.65%. In this section, as a stress test, we conduct the same experiment as in Section 3.3, except that we use a fixed borrowing premium of 3% instead of zero. We note that the historical corporate bond yields are based on bonds with a long maturity. Typically, long-term yields are higher than short-term yields, which accounts for the term risk. Therefore, the assumption of a 3% borrowing premium should be a fairly aggressive stress test for the use of leverage.

Footnote 26: Note that the corporate bonds from Moody's yield data have maturities of more than 20 years. Usually, long-term bonds have higher yields than short-term bonds. Thus, using corporate yields likely overestimates the borrowing cost, since we assume the manager is only borrowing short-term funding.

Figure I.1: Percentiles of wealth ratio over the investment horizon, and CDF of terminal wealth ratio.
Annualized borrowing premium is 3%. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the testing data set (low-inflation data).

As we can see from Figure I.1 and Table I.1, the neural network strategy is only marginally affected by the increased borrowing premium rate. Specifically, the terminal wealth statistics for the case with the borrowing premium are all slightly worse. However, the impact is so marginal that the median IRR does not change, and the neural network strategy still maintains more than a 200 bps advantage in terms of median IRR compared to the benchmark. The most noticeable difference is in the allocation fraction, as shown in Figure I.2. With a significantly higher borrowing cost, the neural network strategy does not leverage as much in the first two years, resulting in a less negative allocation to the T-bill and a lower allocation to the equal-weighted stock index. However, as we have seen in Figure I.1 and Table I.1, this only results in minimal impact on the performance of the strategy.

## Appendix J Performance on low-inflation testing data

We exclude the two high-inflation regimes (1940:8-1951:7 and 1968:9-1985:10) from the full historical data of 1926:1-2022:1 and obtain several low-inflation data segments. We concatenate the low-inflation data segments and use the stationary bootstrap (Appendix E.1) to generate a testing data set. We adopt the investment scenario described in Table 3.2 and evaluate the performance of the neural network strategy obtained in Section 3.3 on this low-inflation data set. Note that we continue to use the equal-weighted stock index/30-day T-bill fixed-mix portfolio as the benchmark. This is validated by Figure J.1, which plots the CDF of the terminal wealth of the fixed-mix portfolios using a 70% equal-weighted stock index vs a 70% cap-weighted stock index (both with 30% 30-day U.S. T-bill as the bond component). As we can see from Figure J.1, the fixed-mix portfolio with the equal-weighted stock index clearly has a more right-skewed distribution than the portfolio with the cap-weighted stock index. This seems to suggest that the equal-weighted index is the superior choice to use in the benchmark portfolio, even in low-inflation regimes.

We then present the performance of the neural network strategy learned on high-inflation data on the testing data set bootstrapped from low-inflation historical returns. Surprisingly, as we can see from Figure J.2a, the neural network strategy learned under high-inflation regimes performs quite well in low-inflation environments. Compared to the testing results on the high-inflation data set, there is some performance degradation; for example, the probability of outperforming the benchmark strategy in terminal wealth is now slightly less than 90%. However, the degradation is quite minimal. The neural network strategy still has more than an 85% chance of outperforming the benchmark strategy at the end of the investment horizon.

\begin{table} \begin{tabular}{l c c c c c} \hline Strategy & Median[\(W_{T}\)] & E[\(W_{T}\)] & std[\(W_{T}\)] & 5th Percentile & Median IRR (annual) \\ \hline Neural network & 429.7 & 489.6 & 301.9 & 151.6 & 0.100 \\ Benchmark & 368.3 & 420.8 & 238.2 & 175.7 & 0.079 \\ \hline \end{tabular} \end{table} Table J.1: Statistics of strategies. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the low-inflation testing data set.
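Figure J.1 below compares the empirical CDFs of terminal wealth for the two index choices; the following is a minimal sketch of how such a partial stochastic dominance check can be carried out on bootstrap samples, with lognormal stand-ins in place of the actual resampled data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical terminal-wealth samples for the two fixed-mix portfolios.
W_eqwt = rng.lognormal(mean=5.6, sigma=0.45, size=10_000)
W_capwt = rng.lognormal(mean=5.5, sigma=0.45, size=10_000)

def ecdf(sample, w):
    """Empirical CDF of `sample` evaluated at wealth levels `w`."""
    return np.searchsorted(np.sort(sample), w, side="right") / len(sample)

w_grid = np.linspace(50.0, 1500.0, 500)
F_eq, F_cap = ecdf(W_eqwt, w_grid), ecdf(W_capwt, w_grid)

# EqWt partially dominates CapWt where its CDF lies below CapWt's
# (i.e., its curve is to the right); report where dominance fails.
fail = np.mean(F_eq > F_cap)
print(f"dominance fails on {fail:.1%} of the wealth grid")
```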
Figure J.1: Cumulative distribution functions (CDFs) for cap-weighted and equal-weighted indexes, as a function of final real wealth \(W\) at \(T=10\) years. Initial stake \(W_{0}=100\), no cash injections or withdrawals. Block bootstrap resampling, expected blocksize 6 months. 70% stocks, 30% bonds, rebalanced monthly. Bond index: 30-day U.S. T-bills. Stock index: CRSP capitalization-weighted or CRSP equal-weighted index. Underlying data excludes high-inflation regimes. All indexes are deflated by the CPI. \(10,000\) resamples. Data set 1926:1-2022:1, excluding high-inflation regimes (1940:8-1951:7 and 1968:9-1985:10).

As shown in Table J.1, the median IRR of the neural network strategy is still 2% higher than the median IRR of the benchmark strategy, meeting the investment target. The above results indicate that the neural network strategy is surprisingly robust. Despite being specifically trained under a high-inflation scenario, the strategy performs admirably well in a low-inflation environment.

Figure J.2: Percentiles of wealth ratio over the investment horizon, and CDF of terminal wealth ratio. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the low-inflation testing data set.
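The bootstrap testing data sets used throughout these appendices are produced by stationary block bootstrap resampling (Appendix E.1); the following is a minimal sketch of that resampling step under simplified assumptions (geometric block lengths with an expected blocksize of six months, wrap-around indexing), with hypothetical stand-in data rather than the CRSP series.

```python
import numpy as np

def stationary_bootstrap(returns, n_out, expected_block=6, rng=None):
    """Stationary bootstrap (Politis-Romano): resample monthly `returns`
    into a path of length `n_out` using geometric random block lengths."""
    rng = rng or np.random.default_rng()
    n, p = len(returns), 1.0 / expected_block
    out = np.empty(n_out)
    idx = rng.integers(n)                 # random starting month
    for t in range(n_out):
        out[t] = returns[idx]
        if rng.random() < p:
            idx = rng.integers(n)         # start a new block
        else:
            idx = (idx + 1) % n           # continue the block, wrapping
    return out

# Example: resample 10-year (120-month) paths from stand-in return data.
rng = np.random.default_rng(2)
hist = rng.normal(0.005, 0.04, size=276)  # hypothetical monthly returns
paths = np.stack([stationary_bootstrap(hist, 120, rng=rng) for _ in range(1000)])
print(paths.shape)  # (1000, 120)
```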
2306.15825
Distinguishing C*-algebras by their unitary groups
We obtain partial affirmative answers to the question whether isomorphism of the unitary groups of two C*-algebras, either as topological groups or as discrete groups, implies isomorphism of the C*-algebras as real C*-algebras.
Lionel Fogang Takoutsing, Leonel Robert
2023-06-27T23:17:19Z
http://arxiv.org/abs/2306.15825v1
# Distinguishing C*-algebras by their unitary groups

###### Abstract.

We obtain partial affirmative answers to the question whether isomorphism of the unitary groups of two C*-algebras, either as topological groups or as discrete groups, implies isomorphism of the C*-algebras as real C*-algebras.

## 1. Introduction

We investigate whether two unital C*-algebras with isomorphic unitary groups must be isomorphic as real C*-algebras. The converse is always true, as the real C*-algebra structure of a C*-algebra is sufficient to define its unitary group. This question was first addressed by Dye [4] in the context of von Neumann algebra factors. Al-Rawashdeh, Booth, and Giordano further studied this question in [1], using results from the classification program for simple nuclear C*-algebras to obtain the desired isomorphism at the C*-algebra level. Very recently, Sarkowicz followed a similar strategy in [11], relying on up-to-date classification results. Chand and the second author approached the question in [12] using the Lie group-Lie algebra correspondence and structure theorems for Lie homomorphisms between C*-algebras. However, the result in [12] was confined to traceless C*-algebras. In this paper, we extend this approach to a broader class of C*-algebras.

Let \(A\) be a unital C*-algebra. Denote by \(\mathrm{U}(A)\) the group of unitaries in \(A\) and by \(\mathrm{U}_{0}(A)\) the connected component of 1 in \(\mathrm{U}(A)\). Both groups are regarded as topological groups equipped with the topology induced by the norm on \(A\). In addition to \(\mathrm{U}(A)\) and \(\mathrm{U}_{0}(A)\), we consider the group \(\mathrm{SU}_{0}(A)\) defined as the subgroup of \(\mathrm{U}_{0}(A)\) generated by the set \(\{e^{h}:h\in\mathfrak{su}(A)\}\). Here \(\mathfrak{su}(A):=\overline{[iA_{sa},iA_{sa}]}\) denotes the closed linear span of commutators of skewadjoint elements of \(A\). The topology on \(\mathrm{SU}_{0}(A)\) is such that for a sufficiently small \(\varepsilon>0\) the exponential map \(\mathfrak{su}(A)\ni h\mapsto e^{h}\in\mathrm{SU}_{0}(A)\) is a homeomorphism from \(\mathfrak{su}(A)\cap B_{\varepsilon}(0)\) to an open neighborhood of 1 in \(\mathrm{SU}_{0}(A)\). This topology may in general differ from the topology induced by the norm.

**Theorem 1.1**.: _Let \(A\) and \(B\) be unital prime \(C^{*}\)-algebras without 1-dimensional or 2-dimensional representations. The following are equivalent:_ 1. \(\mathrm{U}(A)\) _and_ \(\mathrm{U}(B)\) _are isomorphic as topological groups._ 2. \(\mathrm{U}_{0}(A)\) _and_ \(\mathrm{U}_{0}(B)\) _are isomorphic as topological groups._ 3. \(\mathrm{SU}_{0}(A)\) _and_ \(\mathrm{SU}_{0}(B)\) _are isomorphic as topological groups._ 4. \(A\) _and_ \(B\) _are isomorphic as real C*-algebras._

_Moreover, if \(\alpha\colon\mathrm{SU}_{0}(A)\to\mathrm{SU}_{0}(B)\) denotes the group isomorphism in (iii), then the isomorphism \(\phi\colon A\to B\) in (iv) can be chosen such that it extends \(\alpha\)._

By an isomorphism of \(A\) and \(B\) as real C*-algebras we understand an \(\mathbb{R}\)-algebra isomorphism preserving the involution (and, automatically, the norm). In the context of Theorem 1.1, where the C*-algebras are prime, such a map must be either linear (over \(\mathbb{C}\)) or conjugate linear (see Lemma 2.7). In the latter case, \(x\mapsto\phi(x)^{*}\) is a C*-algebra isomorphism onto the opposite algebra \(B^{op}\). It follows that condition (iv) can be rephrased as "\(A\) is isomorphic to \(B\) or to its opposite algebra \(B^{op}\), as C*-algebras".
The technique of proof for Theorem 1.1 follows a well-trodden path: We regard \(\mathrm{U}_{0}(A)\) and \(\mathrm{SU}_{0}(A)\) as Banach-Lie groups with Lie algebras \(iA_{sa}\) (the skewadjoint elements of \(A\)) and \(\mathfrak{su}(A)\). From the topological group isomorphisms we derive an isomorphism of the corresponding Lie algebras. Then, using powerful results on the structure of Lie algebra homomorphisms from [1], we derive the existence of a real C*-algebra isomorphism. It is in this second step that the hypotheses of primality and absence of 1- or 2-dimensional representations get used. The inclusion of \(\mathrm{SU}_{0}(A)\) in our analysis proves to be crucial in obtaining affirmative answers to the isomorphism question without assuming a topological isomorphism at the group level.

Let \([A,A]\) denote the linear span of the elements \([x,y]:=xy-yx\), with \(x,y\in A\), i.e., the commutators in \(A\). Let \(\overline{[A,A]}\) denote the norm closure of \([A,A]\). Let us say that the C*-algebra \(A\) has _bounded commutators generation_ (BCG) if there exist \(N\in\mathbb{N}\) and \(C>0\) such that for all \(x\in\overline{[A,A]}\), we have \[x=\sum_{i=1}^{N}[a_{i},b_{i}]\] for some \(a_{i},b_{i}\in A\) such that \(\|a_{i}\|\cdot\|b_{i}\|\leqslant C\|x\|\) for all \(i\). An element \(x\) of a C*-algebra is called _full_ if it generates the C*-algebra as a closed two-sided ideal, and _square-zero_ if \(x^{2}=0\). It is shown in [1] that if \(A\) has BCG and a full square-zero element, then \(\mathrm{SU}_{0}(A)\) has the invariant automatic continuity property (recalled in the next section). Both the existence of a full square-zero element and the BCG property are not uncommon among well-studied classes of C*-algebras. For example, exact C*-algebras that tensorially absorb the Jiang-Su algebra, and more generally, exact C*-algebras whose Cuntz semigroup is almost unperforated and almost divisible, have both properties ([16]). Traceless unital C*-algebras have BCG by [20].

**Theorem 1.2**.: _Let \(A\) and \(B\) be separable unital prime \(C^{*}\)-algebras without 1-dimensional or 2-dimensional representations. Suppose also that \(A\) and \(B\) both have BCG and a full square-zero element. The following are equivalent:_ 1. \(\mathrm{U}(A)\) _and_ \(\mathrm{U}(B)\) _are isomorphic as groups._ 2. \(\mathrm{U}_{0}(A)\) _and_ \(\mathrm{U}_{0}(B)\) _are isomorphic as groups._ 3. \(\mathrm{SU}_{0}(A)\) _and_ \(\mathrm{SU}_{0}(B)\) _are isomorphic as groups._ 4. \(A\) _and_ \(B\) _are isomorphic as real C*-algebras._

To prove Theorem 1.2 we exploit the invariant automatic continuity of \(\operatorname{SU}_{0}(A)\) and invoke Theorem 1.1. The implication (ii) implies (iv), in the case that the C*-algebras are assumed to be traceless, was obtained in [1]. It seems likely that the equivalence of (i)-(iv) in Theorems 1.1 and 1.2 holds for larger classes of C*-algebras than those covered by these theorems. The authors are not aware of an example of two C*-algebras with isomorphic unitary groups that are not isomorphic as real C*-algebras.

_Notation conventions_: Given subsets \(X,Y\) of \(A\), we denote by \(X\cdot Y\) the additive group generated by the products \(xy\) with \(x\in X\) and \(y\in Y\). We denote by \([X,Y]\) the additive group generated by the commutators \([x,y]\), with \(x\in X\) and \(y\in Y\). In all cases where we use this notation, the sets \(X\) and \(Y\) are closed under scalar multiplication by \(\mathbb{R}\), so that \(X\cdot Y\) and \([X,Y]\) are also vector subspaces.
### Acknowledgements

We thank Pawel Sarkowicz for feedback on this note.

## 2. From groups to Lie algebras to C*-algebras

Let \(A\) be a unital Banach algebra over \(\mathbb{R}\). Let \(\operatorname{GL}(A)\) denote the group of invertible elements of \(A\), which we regard as a Banach-Lie group with Lie algebra \(A\) and exponential map \(A\ni a\mapsto e^{a}\in\operatorname{GL}(A)\). We review some facts about _analytic subgroups_ of \(\operatorname{GL}(A)\), in the sense of [10]. We refer the reader to [12, Remark IV.4] for a discussion of various notions of Banach-Lie subgroup of a Banach-Lie group. Let \(\mathfrak{g}\subseteq A\) be a norm-closed real vector subspace of \(A\) such that \([\mathfrak{g},\mathfrak{g}]\subseteq\mathfrak{g}\), henceforth called a closed Lie subalgebra of \(A\). Denote by \(G_{\mathfrak{g}}\) the subgroup of \(\operatorname{GL}(A)\) generated by the set \(\{e^{l}:l\in\mathfrak{g}\}\). By [13, Theorem 5.52 (i)] there is a unique connected group topology on \(G_{\mathfrak{g}}\) such that the exponential map \(\mathfrak{g}\ni h\mapsto e^{h}\in G_{\mathfrak{g}}\) is a homeomorphism from some neighborhood of \(0\) in \(\mathfrak{g}\) to a neighborhood of \(1\) in \(G_{\mathfrak{g}}\). Then \(G_{\mathfrak{g}}\) is a Banach-Lie group whose Lie algebra \(\mathcal{L}(G):=\operatorname{Hom}(\mathbb{R},G_{\mathfrak{g}})\) is isomorphic to \(\mathfrak{g}\) as a completely normable Lie algebra. Upon identifying \(\mathcal{L}(G)\) with \(\mathfrak{g}\), the exponential map takes the form \(\mathfrak{g}\ni l\mapsto e^{l}\in G_{\mathfrak{g}}\). We will make use of the functoriality of the Lie algebra of a Lie group in the context of the Lie subalgebras \(\mathfrak{g}\) and their Banach-Lie groups \(G_{\mathfrak{g}}\):

**Theorem 2.1**.: _Let \(\mathfrak{g}_{1}\subseteq A\) and \(\mathfrak{g}_{2}\subseteq B\) be closed Lie subalgebras of real Banach algebras \(A\) and \(B\). If \(\alpha\colon G_{\mathfrak{g}_{1}}\to G_{\mathfrak{g}_{2}}\) is a continuous group homomorphism, then there exists a unique bounded Lie algebra homomorphism \(\psi\colon\mathfrak{g}_{1}\to\mathfrak{g}_{2}\) such that \(\alpha(e^{h})=e^{\psi(h)}\) for all \(h\in\mathfrak{g}_{1}\)._

Proof.: (Sketch) To each \(h\in\mathfrak{g}_{1}\) we associate the unique \(\psi(h)\in B\) such that \(\alpha(e^{th})=e^{t\psi(h)}\) for all \(t\in\mathbb{R}\). The continuity of \(\alpha\) readily implies that \(\psi(h)\in\mathfrak{g}_{2}\). We use the standard arguments for recovery of addition, scalar multiplication, and the Lie product, to show that \(\psi\) is a Lie homomorphism (see [12, Theorem IV.2]). The continuity of \(\psi\) at \(0\) follows from the fact that \(\psi(h)=\log(\alpha(e^{h}))\) for all \(h\in\mathfrak{g}_{1}\) in a sufficiently small neighborhood of \(0\).

Suppose now that \(A\) is a unital C*-algebra. Let \(A_{sa}\) denote the set of selfadjoint elements of \(A\). Recall that we denote by \([A,A]\) the linear span of the commutators in \(A\) and by \(\overline{[A,A]}\) the norm closure of \([A,A]\). Define \[\mathfrak{su}(A):=\overline{[iA_{sa},iA_{sa}]}=\overline{[A_{sa},A_{sa}]}=\overline{[A,A]}\cap iA_{sa}.\] 1. If we choose the Lie subalgebra \(\mathfrak{g}=iA_{sa}\), then \(G_{\mathfrak{g}}=\mathrm{U}_{0}(A)\), and the topology on \(G_{\mathfrak{g}}\) is simply the norm topology. 2. If we choose the Lie subalgebra \(\mathfrak{g}=\mathfrak{su}(A)\), then \(G_{\mathfrak{g}}=\mathrm{SU}_{0}(A)\), by the very definition of \(\mathrm{SU}_{0}(A)\).
The topology on \(\mathrm{SU}_{0}(A)\) is finer than, and may in general be different from, the norm topology (see [13, Remark 5.4]). We immediately deduce from the previous theorem the following corollary:

**Corollary 2.2**.: _Let \(A\) and \(B\) be unital C*-algebras._ 1. _Let_ \(\alpha\colon\mathrm{U}_{0}(A)\to\mathrm{U}_{0}(B)\) _be a continuous group homomorphism. Then there exists a unique bounded Lie algebra homomorphism_ \(\psi\colon iA_{sa}\to iB_{sa}\) _such that_ \(\alpha(e^{h})=e^{\psi(h)}\) _for all_ \(h\in iA_{sa}\)_._ 2. _Let_ \(\alpha\colon\mathrm{SU}_{0}(A)\to\mathrm{SU}_{0}(B)\) _be a continuous group homomorphism. Then there exists a unique bounded Lie algebra homomorphism_ \(\psi\colon\mathfrak{su}(A)\to\mathfrak{su}(B)\) _such that_ \(\alpha(e^{h})=e^{\psi(h)}\) _for all_ \(h\in\mathfrak{su}(A)\)_._

Next we examine the Lie homomorphism obtained in Corollary 2.2 (ii). Our main tool is [1, Corollary 6.20], which we reproduce here with minor changes of notation and wording. For a subset \(X\) of a ring, we use the notation \(\langle X\rangle\) for the ring generated by \(X\).

**Theorem 2.3** ([1, Corollary 6.20]).: _Let \(S\) be a Lie ideal of the Lie ring \(L\) of skew elements of a ring with involution \(A\). Let \(B\) be a prime ring with involution. Denote by \(C\) the extended centroid of \(B\) and by \(K\) the set of skewadjoint elements of \(B\). Let \(R\) be a noncentral Lie ideal of \(K\). Suppose that \(S\) admits the operator \(\frac{1}{2}\), and suppose that \(\deg(B)\geqslant 21\) and \(\mathrm{char}(B)\neq 2\). If \(\psi\colon S\to R/(R\cap C)\) is an onto Lie homomorphism, then there exists a ring homomorphism \(\phi\colon\langle S\rangle\to\langle R\rangle C+C\) such that \(\psi(x)=q(\phi(x))\) for all \(x\in S\), where \(q\colon B\to B/(B\cap C)\) denotes the quotient map._

_Note_: The ring homomorphism \(\phi\) from the theorem automatically preserves the involution. This follows from the fact that the elements of \(\langle S\rangle\) are sums of finite products of elements of \(S\), and from the calculation \[\phi((s_{1}\cdots s_{n})^{*}) =(-1)^{n}\phi(s_{n}\cdots s_{1})\] \[=(\phi(s_{n}))^{*}\cdots(\phi(s_{1}))^{*}=\Big{(}\phi(s_{1})\cdots\phi(s_{n})\Big{)}^{*}=\Big{(}\phi(s_{1}\cdots s_{n})\Big{)}^{*}.\]

We shall apply Theorem 2.3 in the context of Corollary 2.2 (ii), simplifying some of the hypotheses in the process.

**Lemma 2.4**.: _Let \(A\) be a unital C*-algebra without 1-dimensional or 2-dimensional representations. Then_ \[A=[A_{sa},A_{sa}]^{2}+[A_{sa},A_{sa}]^{3}+[A_{sa},A_{sa}]^{4}.\]

_Note_: Recall that, for sets \(X,Y\subseteq A\), we denote by \(X\cdot Y\) the additive group generated by \(\{xy:x\in X,\,y\in Y\}\).

Proof.: Given \(a,b\in A\), we denote by \(a\circ b\) their Jordan product; \(a\circ b=ab+ba\). Let \(M\) denote the additive group generated by \(\{a\circ b:a,b\in[A_{sa},A_{sa}]\}\). Note that \(M\) is an \(\mathbb{R}\)-vector subspace of \(A_{sa}\) and that it is also the additive group generated by the set \(\{a^{2}:a\in[A_{sa},A_{sa}]\}\). Let us show that \(L:=M+iM\) is a Lie ideal of \(A\), i.e., \([L,A]\subseteq L\). To this end, we write \(L=M+iM\) and \(A=A_{sa}+iA_{sa}\) in \([L,A]\). It is then clear that it is sufficient to show that \([M,iA_{sa}]\subseteq M\). But this inclusion follows from the calculation \([a^{2},b]=a\circ[a,b]\), for if \(a\in[A_{sa},A_{sa}]\) and \(b\in iA_{sa}\), then \(a\circ[a,b]\) is the Jordan product of two elements in \([A_{sa},A_{sa}]=[iA_{sa},iA_{sa}]\).
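For the reader's convenience, the identity \([a^{2},b]=a\circ[a,b]\) invoked above can be checked by direct expansion; the following display is an added verification, not part of the original text.

```latex
% Direct verification of [a^2, b] = a \circ [a, b]:
\[
  [a^{2},b] \;=\; a^{2}b - ba^{2}
            \;=\; a(ab - ba) + (ab - ba)a
            \;=\; a[a,b] + [a,b]a
            \;=\; a \circ [a,b].
\]
```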
Let us show that \(L\) is _fully noncentral_, i.e., that it is not mapped to the center of any nonzero quotient of \(A\). Suppose for the sake of contradiction that \(L\) is mapped to the center of a nonzero quotient \(A/I\), which we may assume to be simple, enlarging \(I\) to a maximal ideal if necessary. Since \([A_{sa},A_{sa}]\) is mapped onto \([(A/I)_{sa},(A/I)_{sa}]\), the set \[\{a\circ b:a,b\in[(A/I)_{sa},(A/I)_{sa}]\}\] is contained in the center of \(A/I\). By assumption, every representation of \(A/I\) has dimension at least \(3\). Hence, by Glimm's lemma, there exists a non-zero homomorphism \(\phi\colon M_{3}(\mathbb{C})\otimes C_{0}(0,1]\to A/I\) ([13, Proposition 3.10]). Setting \(x_{1}=\phi(e_{21}\otimes t)\) and \(x_{2}=\phi(e_{31}\otimes t)\), we get \(x_{1},x_{2}\in A/I\) and nonzero pairwise orthogonal positive elements \(a,b,c\in A/I\) such that \[a=x_{1}^{*}x_{1}=x_{2}^{*}x_{2},\,b=x_{1}x_{1}^{*},\,c=x_{2}x_{2}^{*}.\] Observe that \[a-b=[x_{1}^{*},x_{1}]\in[A/I,A/I]\cap(A/I)_{sa}=i[(A/I)_{sa},(A/I)_{sa}],\] and similarly \(a-c\in i[(A/I)_{sa},(A/I)_{sa}]\). Hence, \(2a^{2}=(a-b)\circ(a-c)\) is a central element of \(A/I\), and by functional calculus \(a\) is also central. Similarly, \(b\) and \(c\) are central. But, since \(A/I\) is simple, its center is \(\mathbb{C}\cdot 1\), and thus cannot contain three pairwise orthogonal nonzero elements. This is the desired contradiction.

By [13, Theorem 3.1 (ii)], a fully non-central Lie ideal in a unital C*-algebra must contain \([A,A]\). Thus, \([A,A]\subseteq L\). Comparing the selfadjoint parts of these two sets we get that \(i[A_{sa},A_{sa}]\subseteq M\). Using now that \([A,A]=i[A_{sa},A_{sa}]+[A_{sa},A_{sa}]\), we get that \[[A,A]\subseteq M+[A_{sa},A_{sa}]\subseteq[A_{sa},A_{sa}]^{2}+[A_{sa},A_{sa}].\] Since \(A\) is unital and has no \(1\)-dimensional representations, \(A=[A,A]^{2}\) by [1, Theorem 4.3] (cf. [13, Theorem 3.2]). Squaring both sides of the inclusion \([A,A]\subseteq[A_{sa},A_{sa}]^{2}+[A_{sa},A_{sa}]\), the lemma readily follows.

We call a map \(\phi\colon A\to B\) between C*-algebras an \(\mathbb{R}^{*}\)-homomorphism if it is an \(\mathbb{R}\)-algebra homomorphism that preserves the involution. Equivalently, it suffices to require that \(\phi\) be additive, multiplicative and involution preserving, as in this case \(\mathbb{R}\)-linearity follows automatically.

**Theorem 2.5**.: _Let \(A\) and \(B\) be unital C*-algebras. Suppose that \(A\) has no 1-dimensional or 2-dimensional representations and \(B\) is prime and infinite dimensional. Then any surjective continuous Lie algebra homomorphism \(\psi\colon\mathfrak{su}(A)\to\mathfrak{su}(B)\) has a unique extension to an \(\mathbb{R}^{*}\)-homomorphism \(\phi\colon A\to B\)._

Proof.: Since, by the previous lemma, \(\mathfrak{su}(A)\) generates \(A\) as a ring, it is clear that \(\phi\) is unique. To prove its existence, we apply Theorem 2.3 with \(S=\mathfrak{su}(A)\), which is a Lie ideal of the Lie ring \(L=iA_{sa}\) of skewadjoint elements of \(A\). It is clear that this choice of \(S\) "admits the operator \(1/2\)", since it is a vector subspace. By the previous lemma, the ring generated by \(\mathfrak{su}(A)\) is equal to \(A\). On the codomain side, we set \(R=\mathfrak{su}(B)\), which is a Lie ideal of \(iB_{sa}\). This Lie ideal is non-central unless \(B\) is commutative, a possibility that we have ruled out by assumption.
(Proof: If \(a,b\in B_{sa}\) are such that \([a,b]\) commutes with \(a\), then \([a,b]\) is quasinilpotent, by the Kleinecke-Shirokov Theorem [10]. Since \([a,b]\) is normal, we get that \([a,b]=0\). Applied to every pair \(a,b\in B_{sa}\), this shows that if \(\mathfrak{su}(B)\) is contained in the center then \(B\) is commutative.)

The extended centroid of a prime C*-algebra is \(\mathbb{C}\cdot 1\), by [1, Corollary 2.4]. Thus, \(C=\mathbb{C}\cdot 1\). In Theorem 2.3, \(\deg(B)\) denotes the algebraicity degree of \(B\) over its extended centroid, i.e., the least \(n\) such that every element satisfies a polynomial equation of degree \(n\) over \(C=\mathbb{C}\cdot 1\). Since we have assumed \(B\) to be infinite dimensional, we have \(\deg(B)=\infty\geqslant 21\) (see [1, Theorem C.2]). Let \(q\colon B\to B/\mathbb{C}\cdot 1\) denote the quotient map dividing by the center of \(B\). Then \(\bar{\psi}=q\circ\psi\) is a Lie homomorphism from \(\mathfrak{su}(A)\) onto \(\mathfrak{su}(B)/(\mathfrak{su}(B)\cap\mathbb{C}\cdot 1)\) (identified with the image of \(\mathfrak{su}(B)\) under \(q\)). By Theorem 2.3, there exists a ring homomorphism \(\phi\colon A\to B\) such that \(q(\phi(x))=\bar{\psi}(x)\) for all \(x\in\mathfrak{su}(A)\). By the remark after Theorem 2.3, \(\phi\) preserves the involution. It follows that \(\phi\) is an \(\mathbb{R}^{*}\)-homomorphism.

Let us show that \(\phi\) extends \(\psi\). Define \(\lambda(x):=\phi(x)-\psi(x)\) for all \(x\in\mathfrak{su}(A)\). Observe that \(\lambda\) is an additive map taking values in \(\mathbb{C}\cdot 1\). For \(a,b\in\mathfrak{su}(A)\) we have that \[\lambda([a,b])=[\phi(a),\phi(b)]-[\psi(a),\psi(b)]=[\lambda(a)+\psi(a),\lambda(b)+\psi(b)]-[\psi(a),\psi(b)]=0.\] Thus, \(\lambda\) vanishes on \([\mathfrak{su}(A),\mathfrak{su}(A)]\), and by continuity, on \(\overline{[\mathfrak{su}(A),\mathfrak{su}(A)]}\). To finalize the proof, it will suffice to show that the Lie algebra \(\mathfrak{su}(A)\) is topologically perfect, i.e., \(\mathfrak{su}(A)=\overline{[\mathfrak{su}(A),\mathfrak{su}(A)]}\), as then \(\lambda=0\), showing that \(\phi\) extends \(\psi\). In any C*-algebra we have that \[\overline{[A,A]}=\overline{[[A,A],[A,A]]}\] ([1, Corollary 2.8]). Intersecting both sides with \(iA_{sa}\) and repeatedly using that \([A,A]=i[A_{sa},A_{sa}]+[A_{sa},A_{sa}]\) we get that \[\mathfrak{su}(A)=\overline{[A,A]}\cap iA_{sa}=\overline{[[A,A],[A,A]]}\cap iA_{sa}\subseteq\overline{[\mathfrak{su}(A),\mathfrak{su}(A)]}.\] Thus, \(\mathfrak{su}(A)\) is topologically perfect.

Combining Theorem 2.5 and Corollary 2.2 we get:

**Corollary 2.6**.: _Let \(A\) and \(B\) be separable and as in Theorem 2.5. Then any surjective homomorphism of topological groups \(\alpha\colon\operatorname{SU}_{0}(A)\to\operatorname{SU}_{0}(B)\) extends uniquely to a surjective \(\mathbb{R}^{*}\)-homomorphism \(\phi\colon A\to B\)._

Proof.: By Corollary 2.2 (ii), there exists a unique bounded Lie homomorphism \(\psi\colon\mathfrak{su}(A)\to\mathfrak{su}(B)\) such that \(\alpha(e^{h})=e^{\psi(h)}\) for all \(h\in\mathfrak{su}(A)\). Let us show that \(\psi\) is surjective. Since \(\alpha\) is a surjective group homomorphism between Polish groups, it is open ([11, Theorem 1.5]). Let \(\varepsilon>0\) be such that if \(z\in\mathfrak{su}(B)\) and \(\|z\|<\varepsilon\), then there exists \(h\in\mathfrak{su}(A)\) such that \(\alpha(e^{h})=e^{z}\) and \(\|h\|<1/\|\psi\|\).
Then \[e^{\psi(h)}=\alpha(e^{h})=e^{z}.\] Since \(\|\psi(h)\|\leqslant 1\), taking the logarithm of both sides we get \(\psi(h)=z\). Thus, \(B_{\varepsilon}(0)\cap\mathfrak{su}(B)\) is contained in the range of \(\psi\), showing that it is surjective. By Theorem 2.5, \(\psi\) has a unique extension to an \(\mathbb{R}^{*}\)-homomorphism \(\phi\colon A\to B\). Then \[\phi(e^{h})=e^{\phi(h)}=e^{\psi(h)}=\alpha(e^{h}),\] for all \(h\in\mathfrak{su}(A)\). Since \(\{e^{h}:h\in\mathfrak{su}(A)\}\) generates \(\operatorname{SU}_{0}(A)\), we get that \(\phi\) extends \(\alpha\).

Let us prove the first theorem stated in the introduction:

Proof of Theorem 1.1.: The restriction of an \(\mathbb{R}^{*}\)-isomorphism \(\phi\colon A\to B\) to \(\operatorname{U}(A)\), \(\operatorname{U}_{0}(A)\), and \(\operatorname{SU}_{0}(A)\) is readily seen to give an isomorphism of topological groups with \(\operatorname{U}(B)\), \(\operatorname{U}_{0}(B)\), and \(\operatorname{SU}_{0}(B)\), respectively. Thus, (iv) implies (i), (ii), and (iii).

(iii) implies (iv): Let \(\alpha\colon\operatorname{SU}_{0}(A)\to\operatorname{SU}_{0}(B)\) be a topological group isomorphism. By Corollary 2.2 (ii), \(\alpha\) gives rise to a continuous Lie algebra isomorphism \(\psi\colon\mathfrak{su}(A)\to\mathfrak{su}(B)\). Suppose first that both \(A\) and \(B\) are infinite dimensional. Then, by Theorem 2.5, there exists a unique \(\mathbb{R}^{*}\)-homomorphism \(\phi\colon A\to B\) that extends \(\psi\). Similarly \(\psi^{-1}\colon\mathfrak{su}(B)\to\mathfrak{su}(A)\) has a unique extension to an \(\mathbb{R}^{*}\)-homomorphism \(\phi^{\prime}\colon B\to A\). Since \(\phi^{\prime}\phi\) is the identity on \(\mathfrak{su}(A)\), and \(\mathfrak{su}(A)\) generates \(A\) as a ring (Lemma 2.4), \(\phi^{\prime}\phi\) is the identity on \(A\). Similarly, \(\phi\phi^{\prime}\) is the identity on \(B\). Hence, \(\phi\) is an \(\mathbb{R}^{*}\)-isomorphism from \(A\) to \(B\). As in the proof of Corollary 2.6, we deduce that \(\phi\) extends \(\alpha\). This proves (iv) and the assertion that the \(\mathbb{R}^{*}\)-isomorphism \(\phi\) extends \(\alpha\).

Suppose now that either \(A\) or \(B\) is finite dimensional. Let's say \(A\) is finite dimensional. By Corollary 2.2, from the topological group isomorphism \(\operatorname{SU}_{0}(A)\cong\operatorname{SU}_{0}(B)\) we deduce that \(\mathfrak{su}(A)\cong\mathfrak{su}(B)\) as Lie algebras. In particular, \(\mathfrak{su}(B)\) is a finite dimensional vector space. Then, by Lemma 2.4, \(B\) is finite dimensional. Since \(A\) and \(B\) are both prime and finite dimensional, we have that \(A\cong M_{m}(\mathbb{C})\) and \(B\cong M_{n}(\mathbb{C})\) for some \(m,n\in\mathbb{N}\). Comparing the vector space dimensions of \(\mathfrak{su}(A)\cong\mathfrak{su}(m)\) and \(\mathfrak{su}(B)\cong\mathfrak{su}(n)\) we deduce that \(m=n\). Thus, \(A\cong B\cong M_{n}(\mathbb{C})\) for some \(n\geqslant 3\). It remains to show that every (continuous) automorphism \(\alpha\colon\operatorname{SU}(n)\to\operatorname{SU}(n)\) has an extension to an \(\mathbb{R}^{*}\)-automorphism \(\phi\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\). Indeed, in this case either \(\alpha\) is inner or \(x\mapsto\overline{\alpha(x)}\) is inner, as follows from the calculation of the group of automorphisms of \(\mathfrak{su}(n)\) ([11, Proposition D.40], [12]).
(i) implies (ii): This is straightforward since a topological group isomorphism from \(\operatorname{U}(A)\) to \(\operatorname{U}(B)\) maps \(\operatorname{U}_{0}(A)\) bijectively onto \(\operatorname{U}_{0}(B)\).

(ii) implies (iv): By Corollary 2.2 (i), an isomorphism of \(\operatorname{U}_{0}(A)\) with \(\operatorname{U}_{0}(B)\) gives rise to a bounded Lie algebra isomorphism \(\psi\colon iA_{sa}\to iB_{sa}\). Then \(\psi\) restricts to an isomorphism from \(\mathfrak{su}(A)\) to \(\mathfrak{su}(B)\). As argued in the proof of (iii) implies (iv), \(\psi|_{\mathfrak{su}(A)}\) extends to an isomorphism of \(A\) and \(B\) as real C*-algebras.

The reformulation of Theorem 1.1 (iv) as \(A\cong B\) or \(A\cong B^{op}\) as C*-algebras follows from the following lemma:

**Lemma 2.7**.: _Let \(A\) be a unital C*-algebra and \(\phi\colon A\to B\) an \(\mathbb{R}^{*}\)-homomorphism onto a C*-algebra \(B\) with center \(\mathbb{C}\cdot 1\) (in particular, a unital prime C*-algebra). Then \(\phi\) is either \(\mathbb{C}\)-linear or conjugate \(\mathbb{C}\)-linear. In the latter case, \(x\mapsto\phi(x)^{*}\) is \(\mathbb{C}\)-linear._

Proof.: Consider \(u=\phi(i\cdot 1)\). This is a central element in \(B\) such that \(u^{2}=-1\). Since the center of \(B\) is \(\mathbb{C}\cdot 1\), either \(u=i1\) or \(u=-i1\). In the first case we deduce that \(\phi\) is \(\mathbb{C}\)-linear, and in the second that it is conjugate \(\mathbb{C}\)-linear.

We now proceed to prove the second theorem stated in the introduction. Let us recall the invariant automatic continuity property, defined in [10]. A topological group \(G\) has the invariant automatic continuity property if every group homomorphism \(\alpha\colon G\to H\), where \(H\) is a separable topological SIN group, is continuous. Here, a SIN group, or "small invariant neighborhoods" group, is a group that has a basis of neighborhoods of the identity which remain invariant under conjugation. By [11, Theorem B], if a unital C*-algebra \(A\) has the bounded commutators generation (BCG) property and a full square-zero element, then \(\operatorname{SU}_{0}(A)\) has the invariant automatic continuity property.

Proof of Theorem 1.2.: We have already remarked on the fact that (iv) implies (i), (ii), (iii).

(iii) implies (iv): Let \(\alpha\colon\operatorname{SU}_{0}(A)\to\operatorname{SU}_{0}(B)\) be a group isomorphism. By [11, Theorem B], \(\operatorname{SU}_{0}(A)\) and \(\operatorname{SU}_{0}(B)\) are SIN groups with the invariant automatic continuity property. Thus, \(\alpha\) is a topological group isomorphism. It follows from Theorem 1.1 that \(A\) and \(B\) are isomorphic as real C*-algebras.

(ii) implies (iii): Let \(\alpha\colon\operatorname{U}_{0}(A)\to\operatorname{U}_{0}(B)\) be a group isomorphism. Then \(\alpha\) restricts to an isomorphism between the commutator subgroups \(\operatorname{DU}_{0}(A)\) and \(\operatorname{DU}_{0}(B)\). The assumption that \(A\) has the BCG property implies that \([A,A]=\overline{[A,A]}\), and this in turn implies that \(\operatorname{SU}_{0}(A)=\operatorname{DU}_{0}(A)\), since \(\{e^{h}:h\in[A_{sa},A_{sa}]\}\) is a generating set for \(\operatorname{DU}_{0}(A)\) ([12, Theorem 6.2]). Similarly, \(\operatorname{SU}_{0}(B)=\operatorname{DU}_{0}(B)\). We thus get that \(\operatorname{SU}_{0}(A)\cong\operatorname{SU}_{0}(B)\).

(i) implies (iii): Let \(\alpha\colon\operatorname{U}(A)\to\operatorname{U}(B)\) be a group isomorphism.
Since \(\operatorname{SU}_{0}(A)\) has the invariant automatic continuity property, the restriction of \(\alpha\) to \(\operatorname{SU}_{0}(A)\), regarded as a group homomorphism from \(\operatorname{SU}_{0}(A)\) to \(\operatorname{U}(B)\), is continuous. Since \(\operatorname{SU}_{0}(A)\) is connected, \(\alpha\) maps \(\operatorname{SU}_{0}(A)\) into \(\operatorname{U}_{0}(B)\). As argued in the preceding paragraph, \(\operatorname{SU}_{0}(A)\) agrees with the commutator subgroup \(\operatorname{DU}_{0}(A)\) under the assumptions of the theorem. Moreover, \(\operatorname{DU}_{0}(A)\) is a perfect group, by [11, Theorem 6.2]. Hence, \(\alpha\) must map \(\operatorname{SU}_{0}(A)\) into \(\operatorname{DU}_{0}(B)=\operatorname{SU}_{0}(B)\). Arguing symmetrically, \(\alpha^{-1}\) maps \(\operatorname{SU}_{0}(B)\) to \(\operatorname{SU}_{0}(A)\). Thus, \(\operatorname{SU}_{0}(A)\cong\operatorname{SU}_{0}(B)\).
2310.19721
Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models
To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from natural to medical image domains serves as a viable strategy to produce reliable segmentation results. However, several existing barriers between domains need to be broken down, including addressing contrast discrepancies, managing anatomical variability, and adapting 2D pretrained models for 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets for colon and pancreas tumor segmentations, respectively. Compared to the state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at https://github.com/MedICL-VU/ProMISe
Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz
2023-10-30T16:49:03Z
http://arxiv.org/abs/2310.19721v3
# Promise: Prompt-Driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models

###### Abstract

To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from natural to medical image domains serves as a viable strategy to produce reliable segmentation results. However, several existing barriers between domains need to be broken down, including addressing contrast discrepancies, managing anatomical variability, and adapting 2D pretrained models for 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets for colon and pancreas tumor segmentations, respectively. Compared to the state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at [https://github.com/MedICL-VU/ProMISe](https://github.com/MedICL-VU/ProMISe)

Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz (Vanderbilt University)

Index terms: Medical image segmentation, lightweight adapter, transfer learning, prompt engineering, pretrained Segment Anything Model (SAM)

## 1 Introduction

Recently, image segmentation foundation models [1, 2] have revolutionized the field of image segmentation, demonstrating wide generalizability and impressive performance by training on massive amounts of data to learn general representations. Prompt engineering further improves the segmentation capability of these models. Given proper prompts as additional inputs, these models can handle various zero-shot tasks across domains and produce reliable segmentations during inference. In contrast to these broad successes, medical image segmentation is often limited by issues such as expensive data acquisition and time-consuming annotation, resulting in a lack of massive public datasets available for training. Thus it is desirable to leverage transfer learning from the natural image domain for robust medical image segmentation [3]. However, directly leveraging pretrained 2D natural image foundation models for 3D medical image segmentation often leads to sub-optimal results [4]. This is primarily because: (1) medical images have their own unique contrast and texture characteristics; (2) anatomical differences among individuals make medical image segmentation challenging; and (3) slice-wise (2D) segmentation with transfer learning discards important depth-related spatial context in 3D medical data. Given these challenges, can we effectively adapt the pretrained models to achieve robust 3D medical segmentations? In this paper, we propose ProMISe, **p**rompt-driven 3D **m**edical **i**mage **s**egmentation using pretrained image foundation models (see Fig. 1). Specifically, ProMISe takes a 3D input image and a single point prompt as inputs, and uses image and prompt encoders to produce the segmentation.
Unlike most promptable models, a shallow convolutional neural network (CNN) is used as a complementary path alongside the pretrained transformer image encoder [1], with adapters employed within the transformer to capture 3D depth context. During training, most weights of the adapted transformer encoder remain static; the other components in the proposed method are designed in a lightweight manner for efficiency and trained from scratch. We use a structural loss and a novel boundary-aware loss for precise boundary delineation. The main novel contributions are:

* We propose a method for 3D medical segmentation that adapts pretrained image foundation models. Plug-and-play lightweight adapters are used to better optimize knowledge transfer across domains and more effectively capture fine-grained features. Our method is compatible with various pretrained image models, easy to implement, and cost-effective to train.
* We present a simple yet efficient boundary-aware loss for ambiguous edges. This ready-to-use loss can be seamlessly integrated into any training process without the need for offline edge map generation from ground truth.
* We validate the performance on two public datasets for challenging tumor segmentations. Our method consistently outperforms state-of-the-art segmentation methods.

**Related works.** Fully fine-tuning image foundation models for a task requires a large amount of computational resources and is not training-efficient. In contrast, partially fine-tuning [5] or introducing and training new shallow layers, such as lightweight adapters [6, 7, 8, 9, 10] and the Low-Rank Adaptation (LoRA) module [11, 12], have demonstrated robust performance as parameter-efficient fine-tuning methods. Recent works use SAM [1] for 3D medical image segmentation in a 2D slice-wise manner, which discards important depth-wise (3D) information and may require additional effort to create prompts [5, 12]. Other models use adapters; this approach has proven effective for adapting a pretrained model from 2D images to 3D (2D+time) videos [6, 9], and it has subsequently been utilized in 3D medical image segmentation [10] with the use of adapters in the pretrained transformer block [7]. Although these models can segment 3D medical images, the image encoder still operates in a slice-wise (2D) manner with an additional branch for depth information. The weights for this branch, replicated from the spatial branch, demand additional computational resources. In contrast, a holistic adaptation of SAM for 3D medical segmentation was proposed in [8], which avoids a depth branch by including an adapter with depth-wise convolution [6]. However, a single adapter in each transformer block may not fully achieve accurate adaptation due to the notable discrepancies between natural and medical images. Moreover, this method struggles to adequately capture details and can lead to sub-optimal results, especially for tumor segmentation. These challenges and the critical importance of precise segmentation in medical applications motivate our proposed model as a more robust solution.

## 2 Methods

Fig. 1 illustrates ProMISe, our proposed framework for 3D medical image segmentation, which employs prompt engineering and a pretrained image foundation model. Specifically, a 3D patch is taken as input and is fed through complementary CNN and transformer encoders. The prompt encoder utilizes the deepest feature from the transformer encoder (blue arrow in Fig. 1) as input together with the point prompt.
Subsequently, all features, including the original input, are used to predict the segmentation mask via a lightweight CNN decoder. During training, the transformer encoder is partially tuned, while the rest are trained from scratch.

**Image encoders.** Our model is designed to effectively capture both global and local information using complementary transformer and CNN encoders, respectively. For the transformer encoder (Fig. 1(b)), the input 3D image patch first passes through an embedding layer to create tokens with their positional information. Specifically, the pre-trained weights from SAM [1] are employed for spatial patch embedding, and we introduce a trainable depth embedding layer for 3D data. The same approach is applied for positional encoding. Furthermore, we adapt the pretrained weights from SAM and fine-tune the normalization layer in every transformer block. Unlike other works that employ a single adapter at the beginning of the transformer block [6, 8], an additional lightweight adapter is used before the output to optimize knowledge transfer across domains and further refine the image features. Notably, the adapter employs depth-wise convolution to handle 3D images. Inspired by the hybrid network design [13], a CNN encoder is used to capture detailed information to complement the transformer. This is particularly desirable for tumor segmentation, as the boundaries are often ambiguous. It is designed as a shallow network for efficiency (Fig. 2(a)).

**Prompt encoder.** We adapt the visual prompt encoder based on [2] (Fig. 3). Unlike the prompt encoder proposed in SAM [1], we incorporate image embeddings from the transformer encoder as an additional input. Point embeddings are derived from the given point prompt and image embedding using visual sampling (e.g., grid sampling) to ensure that their semantic features are aligned with image embeddings. Subsequently, the self-attention layer is applied to the point embeddings and learnable global queries. Afterwards, the image embeddings are applied to these queries via cross-attention. The output of the prompt encoder is fed to the mask decoder. During training, 10 random points from the background are provided for each input patch to increase the generalizability to noisy prompts. In contrast to previous work that utilized 40 points from the target region as prompts [8], we randomly select 10 point prompts during each iteration if the input patch contains foreground. For prompt engineering, our goal is a single click with minimal prior knowledge, but more prompts are supported if desired during inference.

Figure 1: The proposed framework (ProMISe) and details of the transformer encoder are shown as (a) and (b), respectively.

**Mask decoder.** Instead of directly adapting the mask decoder from the foundation model in a 2D manner, we designed a shallow network to efficiently capture features in 3D and trained it from scratch (Fig. 2(b)). The multi-level features from the transformer encoder (Fig. 1(b)) are refined by two successive convolutional blocks. These are followed by a transposed convolution to ensure the features remain the same size. The fused features are processed through another convolutional block and a segmentation head for the final results.

**Boundary-aware loss.** In medical image segmentation, accurately delineating the boundaries of objects is important, especially for irregularly shaped objects such as tumors [14]. Besides popular structural segmentation losses, such as the combined Dice loss and cross-entropy loss (denoted as \(L_{structural}\)), we further propose a boundary loss (\(L_{boundary}\)) to preserve fine details and produce robust segmentations. Moreover, by emphasizing edge accuracy, the model might generalize better to unseen data for tumor segmentation. As shown in Fig. 1, we extract a smooth boundary map rather than a binary boundary for a more robust representation, and because learning from a binary boundary is a challenging task. Specifically, we use an average-pooling operation \(P_{ave}\) with kernel size 5 as the boundary generator. Given a binary mask \(M\), the smooth boundary is derived: \(B(M)=|M-P_{ave}(M)|\). The total objective function is: \[L(S,G)=\lambda_{1}L_{structural}(S,G)+\lambda_{2}L_{boundary}(B(S),B(G))\] where \(S\) and \(G\) represent segmentation and ground truth. \(L_{structural}=L_{Dice}+L_{CE}\) is used to capture the structural information and \(L_{boundary}=L_{MSE}\) recovers the detailed contours. Unlike other methods [15] that require complicated offline computation of edge or distance maps to avoid iterative generation, our proposed ready-to-use boundary loss is computationally efficient, can be easily adapted to any segmentation task, and is independent of any augmentation.
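To make the loss definition above concrete, here is a minimal PyTorch-style sketch mirroring \(B(M)=|M-P_{ave}(M)|\) and \(L=\lambda_{1}L_{structural}+\lambda_{2}L_{boundary}\); it is an illustrative reconstruction under simplified assumptions (binary 3D masks, soft Dice), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def smooth_boundary(mask, kernel=5):
    """B(M) = |M - P_ave(M)|: soft boundary map via 3D average pooling."""
    pooled = F.avg_pool3d(mask, kernel_size=kernel, stride=1, padding=kernel // 2)
    return (mask - pooled).abs()

def boundary_aware_loss(pred, target, lam1=1.0, lam2=10.0):
    """L = lam1 * (Dice + CE) + lam2 * MSE(B(S), B(G)) for binary 3D masks.

    pred:   (B, 1, D, H, W) probabilities; target: same shape, in {0, 1}.
    """
    eps = 1e-6
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    ce = F.binary_cross_entropy(pred, target)
    l_struct = dice + ce
    l_bound = F.mse_loss(smooth_boundary(pred), smooth_boundary(target))
    return lam1 * l_struct + lam2 * l_bound

# Example with random tensors (sigmoid keeps probabilities strictly in (0, 1)).
pred = torch.sigmoid(torch.randn(1, 1, 32, 32, 32))
target = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
print(boundary_aware_loss(pred, target))
```

The 1:10 default weighting mirrors the ratio \(\lambda_{1}:\lambda_{2}=1:10\) reported in the implementation details below.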
## 3 Experiments

### 3.1 Experimental settings

**Datasets.** We evaluated our proposed method on two public datasets from the Medical Segmentation Decathlon ([http://medicaldecathlon.com/](http://medicaldecathlon.com/)) for challenging pancreas and colon tumor segmentation tasks, where ambiguous edges are present. These consist of 281 (\(0.61\times 0.61\times 0.7\) to \(0.98\times 0.98\times 7.5mm^{3}\)) and 126 (\(0.54\times 0.54\times 1.25\) to \(0.98\times 0.98\times 7.5mm^{3}\)) 3D CT volumes, respectively. Following the setup from the prior study [8], we use the same data split for each task, with a training/validation/testing split of 0.7/0.1/0.2, and only use tumor labels to focus on binary segmentation.

**Preprocessing.** We resample to \(1mm\) isotropic resolution, clip intensities to the \(0.5\) and \(99.5\) foreground percentiles, and Z-score normalize based on all foreground voxels. Four data augmentations were used: random flip, rotation, zoom, and intensity shift. During training, an input patch of size \(128\times 128\times 128\) was randomly selected such that its center voxel is equally likely to be foreground or background. Subsequently, each dimension was upsampled to 512.

**Implementation details.** We utilized the pretrained ViT-B from SAM [1] as the transformer encoder, and set \(\lambda_{1}:\lambda_{2}=1:10\) during training. The batch size was 1, and the initial learning rate was 0.0004, decreased by \(2\times 10^{-6}\) every epoch. The AdamW optimizer was used with a maximum of 200 epochs. We used PyTorch, MONAI and an NVIDIA A6000 GPU for our experiments. The Dice score and normalized surface Dice (NSD) are used for evaluation. Compared state-of-the-art methods include: a CNN (nnU-Net [16]), a CNN with large kernels (3D UX-Net [17]), a Swin encoder with a CNN decoder (Swin-UNETR [19]), a pure transformer (nnFormer [18]), and an adaptation method with adapters (3DSAM-adapter [8]). We retrained them using their official code, and pretrained weights were also employed when publicly available.

### 3.2 Results

**Quantitative results.** Tab. 1 presents a detailed comparison of results for colon and pancreas tumor segmentation.
Notably, while CNN-based networks segment these tumors more effectively than transformers, prompt-driven methods outperform the others when provided with only a single point in the entire volume. Our proposed method consistently outperforms all competing methods in terms of both Dice and boundary (NSD) metrics.

**Ablation study.** We also investigated ablated variants of the proposed ProMISe (Tab. 2). The use of two adapters and the boundary-aware loss mostly improved the results. Interestingly, switching from trilinear upsampling to up-convolution improved the performance for the colon, but showed a decline for the pancreas. This implies that trilinear upsampling may be more appropriate for pancreas tumors, which are typically round in shape. Using concatenation (-C) in the CNN encoder offers better Dice scores than residual connections (-R), though the latter improves surface quality more. While the performance of ProMISe improves with 10 prompts, the improvement over a single prompt is limited. Furthermore, it is challenging to identify the tumor area due to ambiguous boundaries, making the use of a single click preferable in practice, as it requires less expert knowledge.

Figure 2: The details of (a) CNN encoder, and (b) mask decoder.

Figure 3: The details of the proposed prompt encoder.

**Qualitative results.** Fig. 4 shows qualitative visualizations from the top-performing promptable methods. ProMISe yields results that closely align with the ground truth. 3DSAM-adapter [8] fails to detect certain regions that ProMISe captures, even without the boundary-aware loss. This indicates the improved generalizability of the model through our proposed modifications. Moreover, the use of the boundary-aware loss yields robust segmentations, alleviating issues of under-segmentation for colon tumors and over-segmentation for pancreas tumors, respectively. Notably, the boundary-aware loss improves segmentation not just for the irregularly shaped colon tumors but also for the pancreas tumors, which typically have a more regular, rounded shape. However, slightly under-segmented areas remain in pancreas segmentation.

## 4 Conclusion

In this paper, we propose a promptable network, named ProMISe, designed for robust 3D tumor segmentation using pretrained weights from image foundation models. We evaluated our model on two public datasets, where it consistently outperforms state-of-the-art methods across all tasks. Moreover, the critical roles of the two adapters and the boundary-aware loss are demonstrated. Future work will aim to improve the efficiency through knowledge distillation.

**Acknowledgments.** This work was supported, in part, by NIH U01-NS106845, NSF grant 2220401.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{Colon} & \multicolumn{2}{c}{Pancreas} \\ \hline Method & Dice & NSD & Dice & NSD \\ \hline baseline [8] & 57.32 & 73.65 & 54.41 & 77.88 \\ + two adapters & 61.61 & 73.88 & 56.08 & 77.89 \\ + up-Conv & 62.92 & 77.62 & 55.37 & 77.38 \\ \hline ProMISe-R & 63.67 & 79.96 & 55.15 & 79.02 \\ ProMISe-R-B & 64.75 & 79.77 & 56.57 & 79.46 \\ \hline ProMISe-C & 64.76 & 77.59 & 56.35 & 78.01 \\ ProMISe-C (**proposed**) & 66.81 & 81.24 & 57.46 & 79.76 \\ \hline baseline [8] (10 prompts) & 63.09 & 79.97 & 55.94 & 79.18 \\ ProMISe-C-B (10 prompts) & 67.28 & 81.63 & 58.05 & 80.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results of ablation study with single point prompt unless noted.
R and C represent residual and concatenate fusions, and B indicates the boundary loss. + shows the cumulative variants. Best viewed by individual sections.

Figure 4: Qualitative results. BL denotes boundary-aware loss. The major differences are highlighted by orange arrows.

\begin{table} \begin{tabular}{c l|c c c c c c} \hline \hline Dataset & Metric & nnU-Net [16] & 3D UX-Net [17] & nnFormer [18] & Swin-UNETR [19] & 3DSAM-adapter [8] & ProMISe \\ \hline Colon & Dice & 45.60 & 23.07 & 21.36 & 37.23 & 57.32 & **66.81\({}^{*}\)** \\ & NSD & 53.01 & 32.84 & 32.05 & 51.16 & 73.65 & **81.24\({}^{*}\)** \\ \hline \multirow{2}{*}{Pancreas} & Dice & 39.12 & 37.57 & 35.98 & 37.98 & 54.41 & **57.46\({}^{*}\)** \\ & NSD & 57.66 & 55.25 & 53.45 & 56.42 & 77.88 & **79.76\({}^{*}\)** \\ \hline \hline \end{tabular} \end{table} Table 1: Dice and normalized surface Dice (NSD) for colon and pancreas tumors. Bold indicates best performance. Significant improvements (2-tailed paired t-test, \(p<0.05\)) are denoted via \({}^{*}\). The promptable models use 1 point prompt per 3D volume.

## 5 Compliance with Ethical Standards

This research study was conducted retrospectively using human subject data made available in open access by MSD. Ethical approval was not required as confirmed by the license attached with the open access data.
2303.06739
Extreme values of Dirichlet polynomials with multiplicative coefficients
We study extreme values of Dirichlet polynomials with multiplicative coefficients, namely \[D_N(t) : = D_{f,\, N}(t)= \frac{1}{\sqrt{N}} \sum_{n\leqslant N} f(n) n^{it}, \] where $f$ is a completely multiplicative function with $|f(n)|=1$ for all $n\in\mathbb{N}$. We use Soundararajan's resonance method to produce large values of $\left|D_N(t)\right|$ uniformly for all such $f$. In particular, we improve a recent result of Benatar and Nishry, where they establish weaker lower bounds and only for almost all such $f$.
Max Wenqiang Xu, Daodao Yang
2023-03-12T19:44:28Z
http://arxiv.org/abs/2303.06739v1
# Extreme values of Dirichlet polynomials with multiplicative coefficients

###### Abstract.

We study extreme values of Dirichlet polynomials with multiplicative coefficients, namely \[D_{N}(t):=D_{f,\,N}(t)=\frac{1}{\sqrt{N}}\sum_{n\leqslant N}f(n)n^{it},\] where \(f\) is a completely multiplicative function with \(|f(n)|=1\) for all \(n\in\mathbb{N}\). We use Soundararajan's resonance method to produce large values of \(|D_{N}(t)|\) uniformly for all such \(f\). In particular, we improve a recent result of Benatar and Nishry, where they establish weaker lower bounds and only for almost all such \(f\).

## 1. Introduction

The resonance method has been very successful in producing extreme values of arithmetic functions. Its earliest applications can be traced back at least to Voronin's work in [17]. A more general and powerful version of the resonance method was introduced by Soundararajan in [15], which was successful in finding extreme values of zeta and \(L\)-functions. There have been many further developments and applications of the method; we refer readers to [1, 2, 3, 5, 6, 7, 8, 9, 16, 18, 19, 20] and references therein. In this paper, we study the extreme values of Dirichlet polynomials with multiplicative coefficients. Let \(\mathscr{F}\) be the set of completely multiplicative functions \(f\) with \(|f(n)|=1\) for all \(n\in\mathbb{N}\). Let \[D_{N}(t):=D_{f,\,N}(t)=\frac{1}{\sqrt{N}}\sum_{n\leqslant N}f(n)n^{it}.\] **Theorem 1.1**.: _Let \(D_{f,\,N}(t)\) be defined as above. Let \(\delta,\gamma\in(0,1)\) be fixed. Let \(T=N^{C(N)}\) where \(C(N)\) satisfies \(2/\delta\leqslant C(N)\leqslant(\log N)^{\gamma}\). Then for sufficiently large \(N\), we have_ \[\sup_{|t|\leqslant T}|D_{f,\,N}(t)|\geqslant\exp\left(\sqrt{(1-\delta)\frac{\log T}{\log\log T}}\right)\,,\] _uniformly for all \(f\in\mathscr{F}\)._ A Steinhaus random multiplicative function \(\mathbb{X}(n)\) is a completely multiplicative function for which the values \(\mathbb{X}(p)\) at the primes \(p\) are independent random variables distributed uniformly on the complex unit circle. Note that \(\mathscr{F}\) is precisely the family of all possible realizations of Steinhaus random multiplicative functions, so Theorem 1.1 can be restated as follows: for every realization \(\mathbb{X}\), and all large \(N\), \[\sup_{|t|\leqslant N^{C(N)}}\left|\frac{1}{\sqrt{N}}\sum_{n\leqslant N}\mathbb{X}(n)n^{it}\right|\geqslant\exp\left(\sqrt{(1-\delta)\frac{\log T}{\log\log T}}\right)\geqslant\exp\left(\sqrt{\left(\frac{1-\delta}{1+\gamma}\right)\frac{C(N)\log N}{\log\log N}}\right).\] This improves the lower bound obtained in a recent interesting paper [4, Theorem 1.1], where they have \((\log\log N)^{4}\) instead of our \(\log\log N\) in the denominator. Our result is also stronger in the sense that the bound holds _uniformly for all_ \(f\), rather than just for _almost all_ \(f\) as in [4, Theorem 1.1]. We remark that the multiplicativity of \(\mathbb{X}(n)\) is crucial here. Without it, the extreme values are significantly smaller (see [4, Section 3.3] for more discussion). For recent results on extreme values of sums of random multiplicative functions, we refer readers to [12, 13, 14, 10].

## 2. Proof of Theorem 1.1

Our method builds on the framework initiated by Soundararajan in [15]. We first set up the resonance method. Let \[R(t):=\sum_{n\leqslant X}r_{f}(n)n^{it}\quad\text{where}\quad X=T^{1-\frac{2\delta}{3}},\] where \(r_{f}(n)\) is a function defined on the positive integers.
As in [15], we define \(\Phi:\,\mathbb{R}\to\mathbb{R}\) to be a smooth function, compactly supported in \([\frac{1}{2},1]\), with \(0\leqslant\Phi(y)\leqslant 1\) for all \(y\), and \(\Phi(y)=1\) for \(5/8\leqslant y\leqslant 7/8\). Let \[M_{1}(R,T):=\int_{-\infty}^{+\infty}|R(t)|^{2}\Phi(\frac{t}{T})dt,\] and \[M_{2}(R,T):=\int_{-\infty}^{+\infty}|R(t)|^{2}\Phi(\frac{t}{T})|D_{N}(t)|^{2}dt\,.\] Then \[\sup_{|t|\leqslant T}|D_{N}(t)|\geqslant\sqrt{\frac{M_{2}(R,T)}{M_{1}(R,T)}}. \tag{2.1}\] The quantity \(M_{1}(R,T)\) has a nice expression. **Lemma 2.1** ([15]).: _We have_ \[M_{1}(R,T)=T\hat{\Phi}(0)(1+O(T^{-1}))\sum_{n\leqslant X}|r_{f}(n)|^{2}. \tag{2.2}\] Proof.: The result is in [15, Equation (2), page 471]. The key point is that partial integration gives that for any positive integer \(\nu\), \[\hat{\Phi}(y)\ll_{\nu}|y|^{-\nu} \tag{2.3}\] which leads to the expression (2.2). Now we focus on estimating \(M_{2}(R,T)\). \[M_{2}(R,T)=\frac{T}{N}\sum_{m,n\leqslant N}\sum_{a,b\leqslant X}f(n)\overline{f(m)}r_{f}(a)\overline{r_{f}(b)}\hat{\Phi}(T\log(\frac{ma}{nb})). \tag{2.4}\] We split \(M_{2}(R,T)\) into two parts according to whether \(ma=nb\) or \(ma\neq nb\): \[M_{2}(R,T)=\frac{T}{N}\hat{\Phi}(0)\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma=nb\end{subarray}}f(n)\overline{f(m)}r_{f}(a)\overline{r_{f}(b)}+\frac{T}{N}\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma\neq nb\end{subarray}}f(n)\overline{f(m)}r_{f}(a)\overline{r_{f}(b)}\hat{\Phi}(T\log(\frac{ma}{nb})). \tag{2.5}\] We first consider the off-diagonal terms, i.e., \(ma\neq nb\). By our assumption, \(T=N^{C(N)}\geqslant N^{\frac{2}{\delta}}\), so that \(N\leqslant T^{\frac{\delta}{2}}\). In this case, we have \[T\left|\log\frac{ma}{nb}\right|\geqslant T\frac{1}{NX}=\frac{T^{\frac{2\delta}{3}}}{N}\geqslant T^{\frac{\delta}{6}}.\] Apply (2.3) to get that \[\left|\hat{\Phi}(T\log(\frac{ma}{nb}))\right|\ll_{\delta}T^{-10}\,.\] This leads to \[\begin{split}&\Big{|}\frac{T}{N}\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma\neq nb\end{subarray}}f(n)\overline{f(m)}r_{f}(a)\overline{r_{f}(b)}\hat{\Phi}(T\log(\frac{ma}{nb}))\Big{|}\\ &\ll_{\delta}\frac{T}{N}\sum_{m,n\leqslant N}1\Big{(}\sum_{a\leqslant X}|r_{f}(a)|\Big{)}^{2}T^{-10}\\ &\ll_{\delta}T^{-9}NX\sum_{n\leqslant X}|r_{f}(n)|^{2}\\ &\ll_{\delta}T^{-7}\sum_{n\leqslant X}|r_{f}(n)|^{2}\,,\end{split}\] where in the second step we use the Cauchy-Schwarz inequality. This shows that the off-diagonal terms are negligible. So in the next steps, we just need to consider the diagonal terms in (2.5), i.e., the sum over the terms with \(ma=nb\). We use the following trick to get rid of the dependence on \(f\). Set \[r_{f}(n)=\overline{f(n)}r(n),\] where \(r\) is a non-negative multiplicative function to be chosen later, independent of \(f\). Noticing that \(|f(n)|=1\), the sum over the diagonal terms is the same as \[\frac{T}{N}\hat{\Phi}(0)\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma=nb\end{subarray}}r(a)r(b). \tag{2.6}\] By the estimates on the off-diagonal terms and (2.2), we have \[\frac{M_{2}(R,T)}{M_{1}(R,T)}=\frac{1}{N}\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma=nb\end{subarray}}r(a)r(b)\Big{/}\sum_{n\leqslant X}r(n)^{2}+O_{\delta}\left(\frac{1}{T}\right)\,. \tag{2.7}\] Now our theorem would immediately follow from Hough's work [11] on large character sums.
In particular, his work would imply \[\frac{1}{N}\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma=nb\end{subarray}}r(a)r(b)\Big{/}\sum_{n\leqslant X}r(n)^{2}\geqslant\exp\left((2+o(1))\sqrt{\frac{\log X}{\log\log X}}\right). \tag{2.8}\] For completeness, we present Hough's argument. Let \(g=(a,b)\), \(h=(m,n)\), and \(a=a^{\prime}g,b=b^{\prime}g\); then \((a^{\prime},b^{\prime})=1\) and \(m=hb^{\prime},n=ha^{\prime}\). With this parametrization, we have \[\begin{split}\sum_{\begin{subarray}{c}m,n\leqslant N\\ a,b\leqslant X\\ ma=nb\end{subarray}}r(a)r(b)&\geqslant\sum_{\begin{subarray}{c}a^{\prime},b^{\prime}\leqslant X\\ (a^{\prime},b^{\prime})=1\end{subarray}}r(a^{\prime})r(b^{\prime})\sum_{\begin{subarray}{c}g\leqslant\frac{X}{\max(a^{\prime},b^{\prime})}\\ (g,a^{\prime}b^{\prime})=1\end{subarray}}r^{2}(g)\sum_{h\leqslant\frac{N}{\max(a^{\prime},b^{\prime})}}1\\ &\geqslant(1+o(1))\,N\sum_{\begin{subarray}{c}a^{\prime},b^{\prime}\leqslant\min\{X,N\}\\ (a^{\prime},b^{\prime})=1\end{subarray}}\frac{r(a^{\prime})r(b^{\prime})a^{\prime}b^{\prime}}{\max(a^{\prime},b^{\prime})^{3}}\sum_{\begin{subarray}{c}g\leqslant\frac{X}{\max(a^{\prime},b^{\prime})}\\ (g,a^{\prime}b^{\prime})=1\end{subarray}}r^{2}(g).\end{split} \tag{2.9}\] We replace the sum over \(g\) using multiplicativity and Rankin's trick: for any \(\alpha>0\), \[\begin{split}\sum_{\begin{subarray}{c}g\leqslant\frac{X}{\max(a^{\prime},b^{\prime})}\\ (g,a^{\prime}b^{\prime})=1\end{subarray}}r^{2}(g)&=\sum_{(g,a^{\prime}b^{\prime})=1}r^{2}(g)-\sum_{\begin{subarray}{c}g>\frac{X}{\max(a^{\prime},b^{\prime})}\\ (g,a^{\prime}b^{\prime})=1\end{subarray}}r^{2}(g)\\ &=\prod_{p\nmid a^{\prime}b^{\prime}}(1+r(p)^{2})+O\left(\left(\frac{X}{\max(a^{\prime},b^{\prime})}\right)^{-\alpha}\prod_{p\nmid a^{\prime}b^{\prime}}(1+r(p)^{2}p^{\alpha})\right).\end{split}\] We also have, by the multiplicativity of \(r(n)\), \[\sum_{n\leqslant X}r(n)^{2}\leqslant\prod_{p}(1+r(p)^{2}).\] Combining the above two estimates and (2.9), we have that the main term in (2.8) is at least \[M:=\sum_{\begin{subarray}{c}a^{\prime},b^{\prime}\leqslant\min(X,N)\\ (a^{\prime},b^{\prime})=1\end{subarray}}\frac{r(a^{\prime})r(b^{\prime})a^{\prime}b^{\prime}}{\max(a^{\prime},b^{\prime})^{3}}\Big{/}\prod_{p|a^{\prime}b^{\prime}}(1+r(p)^{2}), \tag{2.10}\] and the error term is (let \(z:=\min(X,N)\)) at most \[E:=\prod_{p}(1+r(p)^{2})^{-1}X^{-\alpha}\sum_{\begin{subarray}{c}a^{\prime},b^{\prime}\leqslant z\\ (a^{\prime},b^{\prime})=1\end{subarray}}\frac{r(a^{\prime})r(b^{\prime})(a^{\prime}b^{\prime})^{1+\alpha}}{(a^{\prime}b^{\prime})^{\frac{3}{2}}}\sum_{(g,a^{\prime}b^{\prime})=1}r(g)^{2}g^{\alpha}. \tag{2.11}\] Next, we make a choice of the resonator \(r(n)\). Set \(\lambda=\sqrt{\log X\log\log X}\). We choose \(r(n)\) supported on square-free integers and let \[r(p)=\begin{cases}\frac{\lambda}{\sqrt{p}\log p}\,,&\lambda^{2}\leqslant p\leqslant\exp((\log\lambda)^{2})\\ 0\,,&\text{otherwise.}\end{cases}\] We define a multiplicative function \(t(n)\) supported on squarefree integers by setting \(t(p)=\frac{r(p)}{1+r(p)^{2}}\). We have the following estimates borrowed from [11].
**Lemma 2.2** ([11], Lemma 4.5).: _Uniformly in \(z\geqslant 1\),_ \[\sum_{\begin{subarray}{c}m_{1},m_{2}\leqslant z\\ (m_{1},m_{2})=1\end{subarray}}\frac{t(m_{1})t(m_{2})m_{1}m_{2}}{\max(m_{1},m_{2})^{3}}\geqslant\frac{1}{\log z}\Big{(}\sum_{m\leqslant z}\frac{t(m)}{\sqrt{m}}\Big{)}^{2}.\] _Assume that \(z>\exp(3\lambda\log\log\lambda)\) with \(\lambda=\sqrt{\log X\log\log X}.\) As \(X\to+\infty\), we have_ \[\sum_{\begin{subarray}{c}m_{1},m_{2}\leqslant z\\ (m_{1},m_{2})=1\end{subarray}}\frac{t(m_{1})t(m_{2})m_{1}m_{2}}{\max(m_{1},m_{2})^{3}}\geqslant\exp\Big{(}(1+o(1))\frac{\lambda}{\log\lambda}\Big{)}.\] **Lemma 2.3** ([11], Lemma 4.3).: _Assume that \(z>\exp(3\lambda\log\log\lambda)\) with \(\lambda=\sqrt{\log X\log\log X}\) and \(\alpha=(\log\lambda)^{-3}\). Then_ \[X^{-\alpha}\sum_{\begin{subarray}{c}m_{1},m_{2}\leqslant z\\ (m_{1},m_{2})=1\end{subarray}}\frac{r(m_{1})r(m_{2})}{(m_{1}m_{2})^{1/2-\alpha}}\sum_{(d,m_{1}m_{2})=1}r(d)^{2}d^{\alpha}\Big{/}\left(\sum_{m\leqslant X}\frac{t(m)}{\sqrt{m}}\right)^{2}\sum_{d}r(d)^{2}\leqslant\exp\Big{(}-(1+o(1))\frac{32\log X}{(\log\log X)^{4}}\Big{)}.\] By our assumption that \(C(N)\leqslant(\log N)^{\gamma}\) with \(\gamma<1\), and \(X=T^{1-\frac{2\delta}{3}}=N^{C(N)(1-\frac{2\delta}{3})}\), we have \(\log N>3\lambda\log\log\lambda\) for all large \(N\). And clearly, \(\log X>3\lambda\log\log\lambda\) for all large \(X\). Applying Lemma 2.2, and noting that \(\lambda/\log\lambda=(2+o(1))\sqrt{\log X/\log\log X}\), we get that \[\sum_{\begin{subarray}{c}a^{\prime},b^{\prime}\leqslant\min(X,N)\\ (a^{\prime},b^{\prime})=1\end{subarray}}\frac{r(a^{\prime})r(b^{\prime})a^{\prime}b^{\prime}}{\max(a^{\prime},b^{\prime})^{3}}\Big{/}\prod_{p|a^{\prime}b^{\prime}}(1+r(p)^{2})\geqslant\exp\left((2+o(1))\sqrt{\frac{\log X}{\log\log X}}\right).\] By Lemma 2.3, we find that the ratio \(E/M\) tends to zero as \(X\to+\infty\), where the quantities \(M\), \(E\) are defined in (2.10) and (2.11). Thus we complete the proof of (2.8). Combining (2.1), (2.7), (2.8) and recalling that \(X=T^{1-\frac{2\delta}{3}}\), we are done. **Acknowledgement.** The authors would like to thank Kannan Soundararajan for interesting discussions. Yang thanks the math department at Stanford University for its hospitality during his visit, when the project started. Xu is supported by the Cuthbert C. Hurd Graduate Fellowship in the Mathematical Sciences, Stanford. Yang is supported by the Austrian Science Fund (FWF), project W1230.
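As a purely illustrative numerical companion (not part of the proof), the sketch below evaluates \(|D_{f,\,N}(t)|\) for one realization of a Steinhaus random multiplicative function on a coarse grid of \(t\). The parameters `N`, `T`, and the grid size are small and arbitrary; the grid maximum only lower-bounds the supremum, and the asymptotic bound of Theorem 1.1 should not be expected to be tight at this scale.

```python
import numpy as np
from sympy import factorint, primerange

rng = np.random.default_rng(0)
N = 2000

# Steinhaus values X(p): independent, uniform on the unit circle.
X = {p: np.exp(2j * np.pi * rng.random()) for p in primerange(2, N + 1)}

# Extend completely multiplicatively: f(m) = prod_p X(p)^{v_p(m)}.
f = np.array([np.prod([X[p] ** e for p, e in factorint(m).items()])
              for m in range(1, N + 1)])

n = np.arange(1, N + 1)
T = float(N) ** 3                       # plays the role of T = N^{C(N)}
ts = np.linspace(0.0, T, 400_000)       # coarse grid; the true sup is larger
best = 0.0
for chunk in np.array_split(ts, 400):   # chunk to limit memory use
    # D_N(t) = N^{-1/2} sum_{m<=N} f(m) e^{i t log m}
    vals = np.abs(f @ np.exp(1j * np.outer(np.log(n), chunk))) / np.sqrt(N)
    best = max(best, vals.max())

delta = 2.0 / 3.0                       # so that C(N) = 3 >= 2/delta
bound = np.exp(np.sqrt((1 - delta) * np.log(T) / np.log(np.log(T))))
print(f"grid max |D_N(t)| = {best:.2f}; exp(sqrt((1-d)logT/loglogT)) = {bound:.2f}")
```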
2302.06410
Models to support forest inventory and small area estimation using sparsely sampled LiDAR: A case study involving G-LiHT LiDAR in Tanana, Alaska
A two-stage hierarchical Bayesian model is developed and implemented to estimate forest biomass density and total given sparsely sampled LiDAR and georeferenced forest inventory plot measurements. The model is motivated by the United States Department of Agriculture (USDA) Forest Service Forest Inventory and Analysis (FIA) objective to provide biomass estimates for the remote Tanana Inventory Unit (TIU) in interior Alaska. The proposed model yields stratum-level biomass estimates for arbitrarily sized areas. Model-based estimates are compared with the TIU FIA design-based post-stratified estimates. Model-based small area estimates (SAEs) for two experimental forests within the TIU are compared with each forest's design-based estimates generated using a dense network of independent inventory plots. Model parameter estimates and biomass predictions are informed using FIA plot measurements, LiDAR data that are spatially aligned with a subset of the FIA plots, and complete coverage remotely detected data used to define landuse/landcover stratum and percent forest canopy cover. Results support a model-based approach to estimating forest variables when inventory data are sparse or resources limit collection of enough data to achieve desired accuracy and precision using design-based methods.
Andrew O. Finley, Hans-Erik Andersen, Chad Babcock, Bruce D. Cook, Douglas C. Morton, Sudipto Banerjee
2023-02-13T14:46:28Z
http://arxiv.org/abs/2302.06410v5
Models to support forest inventory and small area estimation using sparsely sampled LiDAR: A case study involving G-LiHT LiDAR in Tanana, Alaska ###### Abstract A two-stage hierarchical Bayesian model is proposed to estimate forest biomass density and total given sparsely sampled LiDAR and georeferenced forest inventory plot measurements. The model is motivated by the United States Department of Agriculture (USDA) Forest Service Forest Inventory and Analysis (FIA) objective to provide biomass estimates for the remote Tanana Inventory Unit (TIU) in interior Alaska. The proposed model yields stratum-level biomass estimates for arbitrarily sized areas of interest. Model-based estimates are compared with the TIU FIA design-based post-stratified estimates. Model-based small area estimates (SAEs) for two experimental forests within the TIU are compared with each forest's design-based estimates generated using a dense network of independent inventory plots. Model parameter estimates and biomass predictions are informed using FIA plot measurements, LiDAR data that is spatially aligned with a subset of the FIA plots, and wall-to-wall remotely sensed data used to define landuse/landcover stratum and percent forest canopy cover. Results support a model-based approach to estimating forest variables when inventory data are sparse or resources limit collection of enough data to achieve desired accuracy and precision using design-based methods. Keywords: forest inventory, Bayesian, stratification, geostatistical, Gaussian process ## 1 Introduction Large scale forest monitoring programs have traditionally used design-based inference that uses probability sampling and associated estimators to deliver forest variable estimates. For example, the United States Department of Agriculture (USDA) Forest Service Forest Inventory and Analysis (FIA) program conducts the US national forest inventory (NFI), collecting data describing the condition of forest ecosystems on a large network of permanent inventory plots distributed across all lands in the nation (Smith, 2002). These data offer a unique and powerful resource for determining the extent, magnitude, and causes of long-term changes in forest health, timber resources, and forest landowner characteristics across large regions in the US (Wurtzebach et al., 2020). The FIA program uses design-based post-stratified estimators to improve precision of point and change estimates (Westfall et al., 2011; Bechtold and Patterson, 2005). Depending on the desired level of estimate precision, such approaches often require costly measurements over a relatively dense network of inventory plots; hence, from a cost efficiency standpoint, there is interest in methods that can deliver comparable inference using fewer inventory plots. At the same time, like other NFIs (Breidenbach and Astrup, 2012; Kohl et al., 2006), FIA has experienced increased demand for estimates within smaller spatial, temporal, and biophysical extents than design-based inference can reasonably deliver (e.g., annual or stand-level estimates). Developing estimation methods that support inference on small areas--referred to as small area estimation (SAE) methods--using FIA data is an active area of research, with considerable progress made in the last several years (Hou et al., 2021; Coulston et al., 2021; Schroeder et al., 2014; Lister et al., 2020). 
SAE methods are numerous and diverse, though most seek to improve inference on small areas by making use of statistical models and auxiliary information that is correlated with outcome variables (Rao and Molina, 2015). In recent years, FIA and similar NFIs have explored the efficacy of model-assisted and model-based modes of inference to reduce cost and improve precision for both large and small area estimates. These approaches often leverage rich information content of satellite and airborne remote sensing to augment information gleaned from the inventory plot network. Model-assisted approaches employing wall-to-wall auxiliary (e.g., remote sensing) information to improve the precision of inventory estimates within the design-based inferential paradigm have been presented (Breidt and Opsomer, 2017; McConville et al., 2017; Strunk et al., 2019; Magnussen et al., 2018; Ekstrom and Nilsson, 2021), while model-based techniques (Stahl et al., 2011a; McRoberts, 2010; Finley et al., 2011; Saarela et al., 2016; Babcock et al., 2018; May et al., 2023a) have been developed that can provide estimates in cases where useful models are available to relate remote sensing metrics to inventory measurements. Model-based SAE methods offer a valuable alternative to the design-based post-stratified estimators implemented by FIA. Model-based SAE methods seek to borrow information from outside the small area of interest (e.g., from neighboring areal-units or point-referenced observations) and auxiliary data (e.g., remote sensing data) to improve precision of estimated quantities. SAE methods can generally be classified into two groups: unit-level and area-level models. Unit-level models are constructed at the level of population units, where a population unit is defined as the minimal unit that can be sampled from a population. With respect to FIA's survey design, field plot centers represent population units. Unit-level models typically relate outcome variable measurements on sampled population units to auxiliary data that is available for all population units (e.g., wall-to-wall remote sensing data). Prediction for small areas of interest is achieved by aggregating unit-level predictions within the given areal extent (Rao and Molina, 2015). In contrast, area-level models are constructed across areal units where relationships are built between area-specific outcome variable direct estimates (e.g., generated using design-based estimators) and auxiliary data (Rao and Molina, 2015). Hence, area-level models effectively "adjust" direct estimates given auxiliary information. The methods developed in this paper are motivated by FIA's objective to improve biomass estimates in interior Alaska. FIA plot measurements in interior Alaska's Tanana Inventory Unit (TIU) were carried out over four years starting in 2014. Implementing FIA's standard sampling intensity is cost-prohibitive in this remote region, due to challenging logistics and high transportation costs--lack of roads requires that virtually every plot is accessed via helicopter. For this reason, FIA has implemented a modified sampling design in interior Alaska using a reduced sampling intensity for field plots (1 plot per 12 140 ha), supplemented with high-resolution airborne imagery acquired by NASA Goddard's Lidar, Hyperspectral, and Thermal (G-LiHT) Airborne Imager (Cook et al., 2013) in a sampling mode (Cahoon and Baer, 2022). Given the large geographic expanse of interior Alaska (approx. 
46 million ha of forestland), a strip sampling mode is considered to be the most economical and spatially balanced acquisition strategy for the airborne data. The use of high-resolution airborne LiDAR strip sampling to support forest inventories has been the focus of several recent studies in Europe and North America where a variety of estimation approaches have been developed and evaluated, including model-assisted (Andersen et al., 2009; Gregoire et al., 2011; Strunk et al., 2019), model-based (Saarela et al., 2018; Stahl et al., 2011), and hybrid estimation (Stahl et al., 2016). A comprehensive account of model-based inference for survey data and its richness over design-based estimators is offered by Little (2004), which includes specific pitfalls of the Horvitz-Thompson estimator. Recent explorations into Bayesian survey sampling from spatially correlated frameworks include Chan-Golston et al. (2020) and Chan-Golston et al. (2022), who show the benefits of modeling finite populations as realizations of a spatial process, with the latter also accommodating spatial associations in the sampling indicators (also see Finley et al., 2011). In this paper, we apply a model-based approach to estimate forest biomass within the TIU. The proposed model yields stratum specific biomass estimates for arbitrarily sized areas of interest. We generate biomass estimates for the entire TIU and two small areas of interest. Model-based estimates are compared with the TIU FIA design-based post-stratified estimates. Model-based SAEs for two experimental forests within the TIU are compared with each forest's design-based estimates generated using a dense network of independent inventory plots (i.e., independent, meaning the data were not used to inform the model-based estimates). Model parameter estimates and biomass predictions are informed using FIA plot measurements, LiDAR data that are spatially aligned with a subset of the FIA plots, and wall-to-wall remotely sensed data used to define four landuse/landcover strata and percent forest canopy cover. Model data input and subsequent biomass predictions are point-referenced (i.e., indexed by spatial locations); hence, from a SAE standpoint, we pursue a unit-level approach. The remainder of the paper is as follows. Section 2 provides a description of the data and exploratory analysis. The proposed model is developed in Section 3, and submodels, model selection criteria, and design-based estimators are described in Section 4. Model selection results and biomass estimates for the TIU and two small areas are presented in Section 5. Results and possible next steps are discussed in Section 6.

## 2 Data

In this section, we describe the TIU data used to inform the proposed biomass models presented in Section 3. The 13.533 million ha TIU is shown in Figure 1a along with data locations and extents.

### FIA plots

The standard FIA plot comprises four 1/60-th ha (7.3 m radius) fixed-area, circular subplots spaced 36.6 m apart (see Cahoon and Baer, 2022, for details on FIA plot design and measurements). Subplot coordinates were obtained using GLONASS-enabled Trimble GeoXH mapping-grade GNSS receivers that provide \(<2\) m geolocation error (McGaughey et al., 2017; Andersen et al., 2009, 2022). Following FIA procedures, individual tree dry biomass was estimated, summarized to the plot-level, and expressed on a per ha basis (see the FIA DRYBIOT variable in Burrill et al., 2021). This plot-level summarized and expanded DRYBIOT variable is \(y\) (Mg/ha) in subsequent model development.
As noted previously, TIU FIA plot measurements were collected over four years (2014, 2016-2018). Over this period, 1 091 FIA plots were sampled. The subset of 880 plots that spatially align with G-LiHT data (described in Section 2.2) are depicted in Figure 1a. To protect plot integrity and ownership information, FIA plot locations are proprietary. While actual locations were used for modeling, figures presented here depict spatially perturbed (i.e., fuzzed) locations.

### G-LiHT LiDAR

To augment the sparsely sampled FIA plots, linear swaths of high-resolution airborne remote sensing measurements, placed approximately 9.2 km apart and spatially aligned with most of the FIA plots, were acquired in 2014 using G-LiHT mounted on a fixed-wing aircraft platform (Cook et al., 2013). G-LiHT data specifications are provided in Table 1.

Figure 1: (a) Tanana Inventory Unit with data collection locations (FIA locations are perturbed). (b) Zoom-in that shows all three location types (FIA locations are perturbed). The dense grid of prediction locations exists across the entire TIU. (c) Strata used in subsequent modeling. (d) Percent forest cover.

A 1-by-1 m canopy height model (CHM) was computed as the difference between the G-LiHT derived canopy surface model (CSM) and digital terrain model (DTM). The average canopy height (m) was computed from the CHM over the 880 spatially coinciding FIA plot footprints and a series of 61 029 "LiDAR plots," each with area equal to the FIA plot and spaced 200 m apart along the linear swaths. The average CHM height variable is \(x_{CH}\) in subsequent analyses. G-LiHT LiDAR plot locations are shown in Figures 1a and 1b. We could have tessellated the G-LiHT swaths into tens of millions of FIA-plot-sized LiDAR plots; however, results from Finley et al. (2019), Shirota et al. (2022), and Peruzzi et al. (2021) show there is strong spatial dependence among G-LiHT canopy height metrics in the TIU and hence little additional information gain at the expense of massive model fitting computational cost.

### Stratification and forest canopy cover

A complete coverage (i.e., covering the entire TIU) 30-by-30 m resolution stratification variable was formed using forest and non-forest National Land Cover Database (NLCD) classes (Homer et al., 2015) with strata "Deciduous" (Class 41), "Conifer" (Class 42), "Mixed" (Class 43), and "Other" (all non-forest classes) (Figure 1c). Using a model-assisted approach to analyze the TIU data described above, Andersen et al. (2023) found this NLCD stratifying variable explained a substantial portion of variability in biomass. Hence, we too use this stratification variable to facilitate comparison with Andersen et al. (2023). In addition to the NLCD stratification variable, a complete coverage 30-by-30 m resolution percent tree cover variable was formed using Hansen et al. (2013) fractional tree cover, updated to account for forest cover loss prior to 2014 (Figure 1d). This tree cover variable is \(v_{TC}\) in subsequent analyses.

### Exploratory data analysis

In Section 3, we define a hierarchical Bayesian regression model used to predict biomass (\(y\)) using NLCD strata and G-LiHT \(x_{CH}\).
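As a concrete reference for how the \(x_{CH}\) predictor described in Section 2.2 can be derived from gridded CSM/DTM rasters, here is a minimal numpy sketch. The array names, pixel-coordinate plot centers, and the single-circle footprint (an actual FIA plot comprises four subplots) are simplifying assumptions, not the processing chain used for the TIU data.

```python
import numpy as np

def mean_canopy_height(csm, dtm, centers, radius_m=7.3, res_m=1.0):
    """x_CH per plot: mean of the canopy height model (CSM - DTM)
    over a circular footprint with the FIA subplot radius."""
    chm = csm - dtm                                # 1-by-1 m CHM
    rows, cols = np.indices(chm.shape)
    r_pix = radius_m / res_m
    x_ch = []
    for r0, c0 in centers:                         # centers in pixel coords
        inside = (rows - r0) ** 2 + (cols - c0) ** 2 <= r_pix ** 2
        x_ch.append(chm[inside].mean())
    return np.array(x_ch)
```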
As with all modeling efforts, we begin with exploratory data analysis (EDA) to build intuition about the data and relationships of interest.

\begin{table} \begin{tabular}{l l|l l} \hline Specification & & Specification & \\ \hline Instrument & Riegl VQ-480 & Footprint size & 10 cm \\ Laser wavelength & 1550 nm & Half-scanning angle & 15 degrees \\ Flying height & 335 m (AGL) & Average pulse density & 3 pulses/m\({}^{2}\) \\ Beam divergence & 0.3 mrad & Swath width & 400 m \\ \hline \end{tabular} \end{table} Table 1: G-LiHT LiDAR specifications for the TIU.

Figure 2a shows that the distribution of FIA plot biomass density measurements differs by stratum. Conifer forests have lower median biomass than the Deciduous or Mixed forest types, but also several plots with very high biomass (i.e., a right skewed distribution). Among the forested strata, Deciduous has the largest median biomass. As the stratum name suggests, the biomass distribution within Mixed appears to be a mix of the Conifer and Deciduous distributions. As described in Section 2.3, the Other stratum is a catch-all for non-forest classes. Across the TIU, non-forest landcover is dominated by barren, herbaceous, and wetlands, hence the forest biomass distribution in Other is concentrated just above zero. The substantial right skew in Other is due to forested FIA plots that fall within non-forest NLCD classes. Figure 2b shows that the distribution of mean forest canopy height, summarized using \(x_{CH}\), also differs by stratum. As observed in numerous studies, forest biomass and forest canopy structure metrics, like \(x_{CH}\), are positively correlated; therefore it is not surprising the patterns seen in Figure 2a are reflected in Figure 2b. Next we explore the relationship between biomass and mean forest canopy height by stratum using the simple linear regression \[y=\beta_{0}+\beta_{CH}x_{CH}+\epsilon, \tag{1}\] where \(\beta_{0}\) and \(\beta_{CH}\) are the intercept and slope coefficient, respectively, and \(\epsilon\stackrel{{\text{iid}}}{{\sim}}N(0,\tau^{2})\).

Figure 2: (a) distribution of biomass and (b) G-LiHT mean canopy height \(x_{CH}\) by NLCD strata. Horizontal boxplot lines indicate the distribution's quantiles and points identify extreme measurements.

This model was fit to stratum specific data. These data and the resulting regression lines are shown in Figure 3. Sample size, \(n\), and parameter estimates are given in Table 2. Results suggest a strong linear relationship between biomass and \(x_{CH}\) within each stratum. Among the forested strata, the regression slope coefficient for Conifer is larger than those for Deciduous and Mixed. The slope coefficient for Other is substantially smaller than the forested strata coefficients. These differences suggest subsequent model development could benefit from stratum-varying coefficients. The last row in Table 2 shows parameter estimates for the pooled model (i.e., not broken out by stratum). Given the stratum specific differences in \(\tau^{2}\) estimates and the magnitude of the pooled model's \(\tau^{2}\) estimate compared with that of the Other stratum's model (i.e., \(\sim\)251 vs. \(\sim\)85), we might expect subsequent model development to benefit from stratum specific residual variance parameters. Regression model diagnostic analyses identified a few potentially high leverage observations and moderate residual heteroskedasticity.
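The stratum specific fits of model (1) can be approximated with per-stratum ordinary least squares; the following is a minimal sketch assuming numpy arrays `y`, `x_ch`, and integer `strata` labels (the Bayesian posterior means and credible intervals reported in Table 2 will differ somewhat from these point estimates).

```python
import numpy as np

def stratum_ols(y, x_ch, strata):
    """Per-stratum fit of y = b0 + b_CH * x_CH + eps; returns the
    coefficients and the residual variance estimate of tau^2."""
    fits = {}
    for s in np.unique(strata):
        m = strata == s
        X = np.column_stack([np.ones(m.sum()), x_ch[m]])
        beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        resid = y[m] - X @ beta
        tau2 = resid @ resid / (m.sum() - 2)   # unbiased residual variance
        fits[s] = {"b0": beta[0], "b_CH": beta[1], "tau2": tau2}
    return fits
```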
Additionally, semivariogram analysis suggested negligible spatial dependence among model residuals (i.e., spatial dependence among biomass measurements was, for the most part, captured using \(x_{CH}\)). Previous TIU modeling efforts conducted using FIA subplots, which allow for a better estimate of the semivariogram nugget due to closer spatial proximity, and denser plot networks (see, e.g., Babcock et al., 2018; Taylor-Rodriguez et al., 2019; Shirota et al., 2022), showed that even after accounting for LiDAR derived forest canopy variables, residual spatial dependence in biomass was present, and model fit and predictive performance were improved via the addition of spatial random effects.

Figure 3: Biomass and G-LiHT derived mean canopy height \(x_{CH}\) observed at FIA plot locations. Model (1) regression lines correspond to parameter estimates given in Table 2.

## 3 Models

In this section, we propose a model to predict biomass density at any location(s) within the TIU. The model accommodates stratum-varying regression intercept, slope, and variance parameters, as well as residual spatial dependence. Biomass prediction for an area (e.g., the entire TIU or a small area within it) is approximated by summarizing an appropriately dense grid of unit-level predictions within the area. For example, biomass density and total estimates for the TIU will be based on posterior predictive distributions estimated at each location within a 250-by-250 (m) grid that extends across the TIU--a portion of which is illustrated in Figure 1b. In our setting, a key issue with this approach is that biomass is to be conditioned on G-LiHT derived canopy structure metrics, e.g., \(x_{CH}\), and these metrics are observed only along the flight swaths. This misalignment is remedied using a two-stage approach. The first stage is a biomass process model that is conditioned on canopy structure metrics, and the second stage comprises a separate process model for each canopy structure metric. When cast within a hierarchical Bayesian framework, this two-stage approach allows uncertainty in canopy structure metric predictions to be propagated through to biomass predictions. Let us consider a first stage biomass model at a generic location \(\ell\) within the TIU. We model biomass \(y(\ell)\) using a set of \(p\) canopy structure variables \(\mathbf{x}(\ell)\) measured using G-LiHT. Based on the exploratory analysis, we allow the relationship between these canopy structure variables and biomass to vary by stratum via \(p\) stratum-varying coefficients \(\tilde{\mathbf{\beta}}_{j}\) for \(j\) in \(1,2,\ldots,q\), where \(q\) is the number of strata.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline & & \multicolumn{3}{c}{\(\beta_{0}\)} & \multicolumn{3}{c}{\(\beta_{CH}\)} & \multicolumn{3}{c}{\(\tau^{2}\)} \\ Stratum & \(n\) & L. 95\% & Mean & U. 95\% & L. 95\% & Mean & U. 95\% & L. 95\% & Mean & U. 95\% \\ \hline Conifer & 265 & -5.206 & -1.616 & 1.986 & 12.059 & 13.008 & 13.923 & 288.871 & 341.878 & 403.191 \\ Deciduous & 29 & -15.817 & -4.501 & 6.070 & 7.973 & 9.141 & 10.333 & 283.412 & 413.825 & 609.904 \\ Mixed & 56 & -12.645 & -2.328 & 7.459 & 7.619 & 8.952 & 10.435 & 141.354 & 226.322 & 396.705 \\ Other & 530 & -2.765 & -1.906 & -0.948 & 7.161 & 7.596 & 8.022 & 76.062 & 85.039 & 96.779 \\ \hline Pooled & 880 & -2.464 & -1.166 & 0.132 & 9.518 & 9.866 & 10.212 & 227.651 & 251.259 & 275.621 \\ \hline \end{tabular} \end{table} Table 2: Sample size \(n\) and parameter estimates (lower 95\%, posterior mean, upper 95\%) for stratum specific model (1) fit using biomass and G-LiHT derived mean canopy height \(x_{CH}\) observed at FIA plot locations. Data and regression lines are shown in Figure 3.
Additionally, an underlying latent spatial process \(w(\ell)\) accounts for local changes in biomass attributed to unobserved, but smoothly varying, environmental influences. The biomass \(y(\ell)\) for a location \(\ell\) situated inside the \(j\)-th stratum is modeled using the discrete stratum-varying regression coefficients and continuous spatial process as \[y(\ell)=\beta_{0}+\tilde{\beta}_{0,j}+\mathbf{x}_{j}(\ell)^{\top}(\mathbf{\beta}+\tilde{\mathbf{\beta}}_{j})+w(\ell)+\epsilon_{j}(\ell), \tag{2}\] where \(\beta_{0}\) is the intercept, \(\tilde{\beta}_{0,j}\) is a stratum effect, \(\mathbf{x}_{j}(\ell)\) is the \(p\times 1\) vector of canopy structure variables with associated global regression coefficients \(\mathbf{\beta}\) and stratum effects \(\tilde{\mathbf{\beta}}_{j}\), \(w(\ell)\) is a spatial random effect that adds local adjustment with spatial dependence, and the residual is modeled as \(\epsilon_{j}(\ell)\stackrel{{\text{iid}}}{{\sim}}N(0,\tau_{j}^{2})\). Stratum effects are modeled as \(\tilde{\beta}_{0,j}\stackrel{{\text{iid}}}{{\sim}}N(0,\sigma_{0}^{2})\) and \(\tilde{\mathbf{\beta}}_{j}=(\tilde{\beta}_{1,j},\tilde{\beta}_{2,j},\ldots,\tilde{\beta}_{p,j})^{\top}\) with \(\tilde{\beta}_{k,j}\stackrel{{\text{iid}}}{{\sim}}N(0,\sigma_{k}^{2})\) for \(k\) in \(1,2,\ldots,p\). The spatial random effect is modeled as a Nearest Neighbor Gaussian Process (NNGP) \(w(\ell)\sim NNGP(0,\sigma_{w}^{2}\rho(\cdot,\cdot;\phi_{w}))\), where \(\sigma_{w}^{2}\) is the variance, \(\rho\) is a spatial correlation function defined for pairs of locations within the domain, and \(\phi_{w}\) is a parameter that governs the correlation between locations based on their spatial separation (see, e.g., Datta et al., 2016; Banerjee, 2017, and, in particular, Section 3.2 of the second reference for details on specifying NNGPs).
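Before unpacking the NNGP specification further, a minimal sketch of simulating from the data-generating process in (2) may help fix ideas. It uses a dense Cholesky factorization in place of the NNGP (feasible only for small \(n\)); all numeric values are illustrative (the noise variances loosely echo Table 2), not estimated quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 300, 4                              # locations and strata
loc = rng.uniform(0, 100, size=(n, 2))     # coordinates in km
stratum = rng.integers(0, q, size=n)
x_ch = rng.gamma(2.0, 3.0, size=n)         # stand-in canopy heights (m)

beta0, beta = 0.0, 9.0                     # global intercept and slope
b0 = rng.normal(0.0, 2.0, q)               # stratum intercept effects
b1 = rng.normal(0.0, 1.5, q)               # stratum slope effects
tau2 = np.array([342.0, 414.0, 226.0, 85.0])  # stratum noise variances

# w ~ GP(0, sigma_w^2 exp(-phi d)); effective range = -log(0.05)/phi.
sigma2_w, phi = 25.0, 3.0 / 38.0
d = np.linalg.norm(loc[:, None, :] - loc[None, :, :], axis=-1)
K = sigma2_w * np.exp(-phi * d) + 1e-8 * np.eye(n)
w = np.linalg.cholesky(K) @ rng.standard_normal(n)

y = (beta0 + b0[stratum] + x_ch * (beta + b1[stratum]) + w
     + rng.normal(0.0, np.sqrt(tau2[stratum])))
```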
Briefly, the NNGP specification implies the spatial random effect vector over \(n\) locations \(\mathbf{w}=(w(\ell_{1}),w(\ell_{2}),\ldots,w(\ell_{n}))^{\top}\) has probability distribution \(N(\mathbf{0},\mathbf{K}_{w})\), where \(\mathbf{K}_{w}\) is an \(n\times n\) NNGP covariance matrix derived from the covariance function \(K_{w}(\ell,\ell^{\prime};\mathbf{\theta}_{w})=\sigma_{w}^{2}\rho(\ell,\ell^{\prime};\phi_{w})=\sigma_{w}^{2}\exp(-\phi_{w}||\ell-\ell^{\prime}||)\), where \(\rho\) is taken to be the exponential spatial correlation function, \(\mathbf{\theta}_{w}=\{\sigma_{w}^{2},\phi_{w}\}\), and \(||\ell-\ell^{\prime}||\) is the Euclidean distance between possibly different locations \(\ell\) and \(\ell^{\prime}\). To allow prediction of \(y(\ell)\) at locations where G-LiHT is not observed, each canopy structure variable is modeled similarly to biomass and includes both stratum-varying regression coefficients and a continuous spatial process. Specifically, the second stage model for the \(k\)-th canopy structure variable at location \(\ell\) situated inside the \(j\)-th stratum is \[x_{k}(\ell)=\alpha_{k,0}+\tilde{\alpha}_{k,0,j}+\mathbf{v}_{k,j}(\ell)^{\top}(\mathbf{\alpha}_{k}+\tilde{\mathbf{\alpha}}_{k,j})+u_{k}(\ell)+\eta_{k}(\ell), \tag{3}\] where \(\alpha_{k,0}\) is the intercept, \(\tilde{\alpha}_{k,0,j}\) is a stratum effect, \(\mathbf{v}_{k,j}(\ell)\) is an \(r\times 1\) vector of predictor variables with associated global regression coefficients \(\mathbf{\alpha}_{k}\) and stratum effects \(\tilde{\mathbf{\alpha}}_{k,j}\), \(u_{k}(\ell)\) is a spatial random effect, and the residual is modeled as \(\eta_{k}(\ell)\stackrel{{\text{iid}}}{{\sim}}N(0,\gamma_{k,j}^{2})\). Stratum effects are modeled as \(\tilde{\alpha}_{k,0,j}\stackrel{{\text{iid}}}{{\sim}}N(0,\nu_{k,0}^{2})\) and \(\tilde{\mathbf{\alpha}}_{k,j}=(\tilde{\alpha}_{k,1,j},\tilde{\alpha}_{k,2,j},\ldots,\tilde{\alpha}_{k,r,j})^{\top}\) with \(\tilde{\alpha}_{k,i,j}\stackrel{{\text{iid}}}{{\sim}}N(0,\nu_{k,i}^{2})\) for \(i\) in \(1,2,\ldots,r\). The spatial random effect is modeled as \(u_{k}(\ell)\sim NNGP(0,\nu_{k,u}^{2}\rho(\cdot,\cdot;\phi_{k,u}))\), where \(\nu_{k,u}^{2}\) is the variance, \(\rho\) is again taken as an exponential spatial correlation function, and \(\phi_{k,u}\) is the spatial decay parameter. We again collect the spatial process parameters into a vector \(\mathbf{\theta}_{k,u}=\{\nu_{k,u}^{2},\phi_{k,u}\}\). For the set of \(n\) locations \(\mathcal{L}=(\ell_{1},\ell_{2},\ldots,\ell_{n})\) where both biomass and G-LiHT canopy structure data are observed, we define the \(n\times 1\) vector \(\mathbf{y}=(y(\ell_{1}),y(\ell_{2}),\ldots,y(\ell_{n}))^{\top}\), the \(n\times 1\) vector of ones \(\mathbf{1}\), the \(n\times q\) matrix \(\mathds{1}\) with the \((i,j)\)-th element equaling \(1\) if \(\ell_{i}\) falls within the \(j\)-th stratum and zero otherwise, the \(n\times p\) matrix \(\mathbf{X}\) with row \(i\) equal to \(\mathbf{x}(\ell_{i})^{\top}\), the \(n\times pq\) matrix \(\tilde{\mathbf{X}}\) with row \(i\) and columns \((j-1)p+1\) through \((j-1)p+p\) equal to \(\mathbf{x}(\ell_{i})^{\top}\) when location \(\ell_{i}\) falls within the \(j\)-th stratum and zero otherwise, \(\mathbf{w}=(w(\ell_{1}),w(\ell_{2}),\ldots,w(\ell_{n}))^{\top}\), and \(\mathbf{\epsilon}=(\epsilon(\ell_{1}),\epsilon(\ell_{2}),\ldots,\epsilon(\ell_{n}))^{\top}\).
We then write model (2) as \[\mathbf{y}=\beta_{0}\mathbf{1}+\mathds{1}\tilde{\mathbf{\beta}}_{0}+\mathbf{X}\mathbf{\beta}+\tilde{\mathbf{X}}\tilde{\mathbf{\beta}}+\mathbf{w}+\mathbf{\epsilon}, \tag{4}\] where \(\beta_{0}\) and \(\mathbf{\beta}\) are as defined earlier, \(\tilde{\mathbf{\beta}}_{0}=(\tilde{\beta}_{0,1},\tilde{\beta}_{0,2},\ldots,\tilde{\beta}_{0,q})^{\top}\), and \(\tilde{\mathbf{\beta}}=(\tilde{\mathbf{\beta}}_{1}^{\top},\tilde{\mathbf{\beta}}_{2}^{\top},\ldots,\tilde{\mathbf{\beta}}_{q}^{\top})^{\top}\). Let \(\mathbf{\Omega}_{y}=\{\beta_{0},\tilde{\mathbf{\beta}}_{0},\mathbf{\beta},\tilde{\mathbf{\beta}},\{\sigma_{i}^{2}\}_{i=0}^{p},\mathbf{w},\mathbf{\theta}_{w},\{\tau_{j}^{2}\}_{j=1}^{q}\}\) denote the collection of parameters in the above model, including those specifying the prior distributions and the spatial process as described below (2), to complete the hierarchical model. Similarly, for the \(n_{s}\) locations \(\mathcal{L}_{s}=(\ell_{1},\ell_{2},\ldots,\ell_{n_{s}})\) where G-LiHT is observed, we define the \(n_{s}\times 1\) vector for the \(k\)-th canopy structure variable \(\mathbf{x}_{k}=(x_{k}(\ell_{1}),x_{k}(\ell_{2}),\ldots,x_{k}(\ell_{n_{s}}))^{\top}\), the \(n_{s}\times 1\) vector of ones \(\mathbf{1}_{k}\), the \(n_{s}\times q\) matrix \(\mathds{1}_{k}\) with row \(i\) and column \(j\) equal \(1\) when \(\ell_{i}\) falls within the \(j\)-th stratum and zero otherwise, the \(n_{s}\times r\) matrix \(\mathbf{V}\) with row \(i\) equal to \(\mathbf{v}(\ell_{i})^{\top}\), the \(n_{s}\times rq\) matrix \(\tilde{\mathbf{V}}\) with row \(i\) and columns \((j-1)r+1\) through \((j-1)r+r\) equal to \(\mathbf{v}(\ell_{i})^{\top}\) when location \(\ell_{i}\) falls within the \(j\)-th stratum and zero otherwise, \(\mathbf{u}_{k}=(u_{k}(\ell_{1}),u_{k}(\ell_{2}),\ldots,u_{k}(\ell_{n_{s}}))^{\top}\), and \(\mathbf{\eta}_{k}=(\eta_{k}(\ell_{1}),\eta_{k}(\ell_{2}),\ldots,\eta_{k}(\ell_{n_{s}}))^{\top}\). We then write model (3) as \[\mathbf{x}_{k}=\alpha_{k,0}\mathbf{1}_{k}+\mathds{1}_{k}\tilde{\mathbf{\alpha}}_{k,0}+\mathbf{V}\mathbf{\alpha}_{k}+\tilde{\mathbf{V}}\tilde{\mathbf{\alpha}}_{k}+\mathbf{u}_{k}+\mathbf{\eta}_{k}, \tag{5}\] where \(\alpha_{k,0}\) and \(\mathbf{\alpha}_{k}\) are as defined earlier, \(\tilde{\mathbf{\alpha}}_{k,0}=(\tilde{\alpha}_{k,0,1},\tilde{\alpha}_{k,0,2},\ldots,\tilde{\alpha}_{k,0,q})^{\top}\), and \(\tilde{\mathbf{\alpha}}_{k}=(\tilde{\mathbf{\alpha}}_{k,1}^{\top},\tilde{\mathbf{\alpha}}_{k,2}^{\top},\ldots,\tilde{\mathbf{\alpha}}_{k,q}^{\top})^{\top}\). The spatial random effect vector \(\mathbf{u}_{k}\sim NNGP(\mathbf{0},\mathbf{K}_{k,u})\), where \(\mathbf{K}_{k,u}\) is an \(n_{s}\times n_{s}\) NNGP covariance matrix defined analogously to \(\mathbf{K}_{w}\). Model parameters are collected in the vector \(\mathbf{\Omega}_{x_{k}}=(\alpha_{k,0},\tilde{\mathbf{\alpha}}_{k,0},\mathbf{\alpha}_{k},\tilde{\mathbf{\alpha}}_{k},\{\nu_{k,i}^{2}\}_{i=0}^{r},\mathbf{u}_{k},\mathbf{\theta}_{k,u},\{\gamma_{k,j}^{2}\}_{j=1}^{q})\), which includes the parameters specifying the prior distributions and spatial process as described below (3). To complete the Bayesian specification of these models, a prior distribution is assigned to each parameter.
For models (2) and (3) we assign: flat priors to \(\beta_{0}\), \(\alpha_{k,0}\), and elements in \(\mathbf{\beta}\) and \(\mathbf{\alpha}_{k}\); implicit Normal distributions for elements in \(\tilde{\mathbf{\beta}}_{0}\), \(\tilde{\mathbf{\beta}}\), \(\tilde{\mathbf{\alpha}}_{k,0}\), and \(\tilde{\mathbf{\alpha}}_{k}\); NNGP distributions for \(\mathbf{w}\) and \(\mathbf{u}_{k}\); weakly informative inverse-Gamma distributions with hyperparameter shape equal to 2 and scale value guided by EDA results for all variance parameters; and Uniform distributions with support from 1 to 500 km for the decay parameters \(\phi_{w}\) and \(\phi_{u}\). Following the Bayesian mode of inference, we employ a Markov chain Monte Carlo (MCMC) algorithm to generate samples from model (2) parameters' joint posterior distribution, \[[\boldsymbol{\Omega}_{y}\,|\,\boldsymbol{y},\mathds{1},\boldsymbol{X}]\propto[\boldsymbol{\Omega}_{y}]\times[\boldsymbol{y}\,|\,\boldsymbol{\Omega}_{y},\mathds{1},\boldsymbol{X}], \tag{6}\] where \([\boldsymbol{\Omega}_{y}\,|\,\boldsymbol{y},\mathds{1},\boldsymbol{X}]\) represents the joint posterior distribution of \(\boldsymbol{\Omega}_{y}\) conditioned on the data, \([\boldsymbol{\Omega}_{y}]\) is the joint prior distribution, and \([\boldsymbol{y}\,|\,\boldsymbol{\Omega}_{y},\mathds{1},\boldsymbol{X}]\) represents the likelihood. Parameter inference is obtained by sampling from (6) using MCMC. After diagnosing convergence, we collect \(L\) samples from (6), which are denoted as \((\boldsymbol{\Omega}_{y}^{(1)},\boldsymbol{\Omega}_{y}^{(2)},\ldots,\boldsymbol{\Omega}_{y}^{(L)})\). Similarly, we obtain estimates for (3) by sampling from the joint posterior distribution, \[[\boldsymbol{\Omega}_{x_{k}}\,|\,\boldsymbol{x}_{k},\mathds{1}_{k},\boldsymbol{V}]\propto[\boldsymbol{\Omega}_{x_{k}}]\times[\boldsymbol{x}_{k}\,|\,\boldsymbol{\Omega}_{x_{k}},\mathds{1}_{k},\boldsymbol{V}], \tag{7}\] with \(L\) post-convergence MCMC samples collected as \((\boldsymbol{\Omega}_{x_{k}}^{(1)},\boldsymbol{\Omega}_{x_{k}}^{(2)},\ldots,\boldsymbol{\Omega}_{x_{k}}^{(L)})\).

### Posterior predictive inference and areal estimates

As described in Section 1, our inferential objective is to estimate biomass density (Mg/ha) and total (Mg) for any user-defined areal unit within the TIU. This inference uses model (2) to estimate the biomass density posterior predictive distribution at each of the \(n^{*}\)=3 382 473 prediction locations, \(\mathcal{L}_{0}=(\ell_{0,1},\ell_{0,2},\ldots,\ell_{0,n^{*}})\), laid in a dense grid over the TIU. These predictions are then used to estimate the desired area summaries of biomass density and total. However, because G-LiHT canopy structure variables are not observed at prediction locations, we must condition biomass predictions from model (2) on predictions of canopy structure variables from model (3). Ideally, this is done in a way that propagates the uncertainty in canopy structure variable predictions through to biomass predictions. For inference at unobserved locations, we extend model (3) from observed locations to any arbitrary location \(\ell_{0}\).
If \(\boldsymbol{x}_{k}^{*}=(x_{k}^{*}(\ell_{0,1}),x_{k}^{*}(\ell_{0,2}),\ldots,x_{k}^{*}(\ell_{0,n^{*}}))^{\top}\) is the vector of unknown measurements and \(\boldsymbol{u}_{k}^{*}\) is the analogously defined vector of spatial random effects corresponding to the \(k\)-th canopy structure variable, the NNGP extends model (3) such that if \(\ell_{0,m}\) falls within the \(j\)-th stratum then \[x_{k}^{*}(\ell_{0,m})=\alpha_{k,0}+\tilde{\alpha}_{k,0,j}+\boldsymbol{v}_{k,j}(\ell_{0,m})^{\top}(\boldsymbol{\alpha}_{k}+\tilde{\boldsymbol{\alpha}}_{k,j})+u_{k}(\ell_{0,m})+\eta_{k}(\ell_{0,m}),\quad m=1,2,\ldots,n^{*}\;, \tag{8}\] with \(\eta_{k}(\ell_{0,m})\overset{ind}{\sim}N(0,\gamma_{k,j}^{2})\) and an NNGP predictive model for \(u_{k}(\ell_{0,m})\). We briefly explain this below and refer the reader to further details in Banerjee (2017), Section 3.2 and, in particular, their Equation (19). It will be convenient, in what follows, to denote by \(\mathbf{K}(\mathcal{A},\mathcal{B})\), for finite sets of locations \(\mathcal{A}\) and \(\mathcal{B}\), the matrix whose \((i,j)\)-th element is evaluated from the covariance function \(K(\ell_{i},\ell_{j})\), where \(\ell_{i}\) and \(\ell_{j}\) are the \(i\)-th entry in \(\mathcal{A}\) and the \(j\)-th entry in \(\mathcal{B}\), respectively. We build a sequence of neighbor sets \(N(\ell_{0,m})\) for each \(m=1,2,\ldots,n^{*}\) that consist of a fixed number of nearest neighbors of \(\ell_{0,m}\) from the "past," where "past" refers to the \(n_{s}\) locations already accounted for in \(\mathcal{L}_{s}\) and the set of \(\ell_{0,i}\)'s for \(i<m\). The distribution of \(u_{k}(\ell_{0,m})\) is specified as \[u_{k}(\ell_{0,m})=\sum_{\ell^{\prime}\in N(\ell_{0,m})}a(\ell_{0,m},\ell^{\prime})u_{k}(\ell^{\prime})+\omega_{k}(\ell_{0,m}),\quad m=1,2,\ldots,n^{*}\;, \tag{9}\] where solving \(\mathbf{K}(N(\ell_{0,m}),N(\ell_{0,m}))\mathbf{a}(\ell_{0,m})=\mathbf{K}(N(\ell_{0,m}),\ell_{0,m})\) for the vector \(\mathbf{a}(\ell_{0,m})\) renders the values of \(a(\ell_{0,m},\ell^{\prime})\) in (9), while \(\omega_{k}(\ell_{0,m})\sim N(0,\delta_{0,m}^{2})\) with \[\delta_{0,m}^{2}=K(\ell_{0,m},\ell_{0,m})-\mathbf{K}(\ell_{0,m},N(\ell_{0,m}))\mathbf{K}(N(\ell_{0,m}),N(\ell_{0,m}))^{-1}\mathbf{K}(N(\ell_{0,m}),\ell_{0,m})\;.\] Predictive inference is achieved by sampling from \([x_{k}(\ell_{0,m})\,|\,\mathbf{x}_{k}]\), where \(\mathbf{x}_{k}\) is the collection of observations for the \(k\)-th canopy structure variable. Since the joint posterior distribution of the hierarchical model extended over \(\mathcal{L}_{0}\) is \([\mathbf{\Omega}_{x_{k}},\mathbf{x}_{k}^{*},\mathbf{u}_{k}^{*}\,|\,\mathbf{x}_{k}]\propto[\mathbf{\Omega}_{x_{k}}\,|\,\mathbf{x}_{k}]\times[\mathbf{u}_{k}^{*}\,|\,\mathbf{\Omega}_{x_{k}}]\times[\mathbf{x}_{k}^{*}\,|\,\mathbf{u}_{k}^{*},\mathbf{\Omega}_{x_{k}}]\), we can use the posterior samples \(\mathbf{\Omega}_{x_{k}}^{(l)}\sim[\mathbf{\Omega}_{x_{k}}\,|\,\mathbf{x}_{k}]\) for each \(l=1,2,\ldots,L\) and each \(\ell_{0,m}\in\mathcal{L}_{0}\) to draw \(u_{k}^{(l)}(\ell_{0,m})\sim[u_{k}(\ell_{0,m})\,|\,\mathbf{\Omega}_{x_{k}}^{(l)}]\) and \(x_{k}^{(l)}(\ell_{0,m})\sim[x_{k}(\ell_{0,m})\,|\,u_{k}^{(l)}(\ell_{0,m}),\mathbf{\Omega}_{x_{k}}^{(l)}]\). The resulting \(x_{k}^{(l)}(\ell_{0,m})\) are the desired posterior predictive samples. Predictive inference for \(y(\ell)\) over \(\mathcal{L}_{0}\) proceeds analogously.
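A minimal sketch of one draw from the NNGP predictive in (9), using the exponential covariance, is given below; the variable names and the fixed neighbor count `m=15` are assumptions for illustration, not the settings used in our implementation.

```python
import numpy as np

def nngp_draw_u(l0, locs, u, sigma2, phi, m=15, rng=None):
    """One draw of u(l0) given realized u at `locs`: condition on the
    m nearest neighbors N(l0), per equation (9)."""
    rng = rng or np.random.default_rng()
    d0 = np.linalg.norm(locs - l0, axis=1)
    nbr = np.argsort(d0)[:m]                       # neighbor set N(l0)
    dn = np.linalg.norm(locs[nbr, None, :] - locs[None, nbr, :], axis=-1)
    K_nn = sigma2 * np.exp(-phi * dn)              # K(N(l0), N(l0))
    k_n0 = sigma2 * np.exp(-phi * d0[nbr])         # K(N(l0), l0)
    a = np.linalg.solve(K_nn, k_n0)                # weights a(l0, .)
    delta2 = sigma2 - k_n0 @ a                     # conditional variance
    return a @ u[nbr] + np.sqrt(max(delta2, 0.0)) * rng.standard_normal()
```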
We draw \(y^{(l)}(\ell_{0,m})\) from \([y(\ell_{0,m})\,|\,\mathbf{\Omega}_{y}^{(l)},\mathbf{x}_{k}^{(l)}(\ell_{0,m})]\) by generating a value from (2) for each sampled \(\mathbf{\Omega}_{y}^{(l)}\) and \(\mathbf{x}_{k}^{(l)}(\ell_{0,m})\), where the latter is generated from its posterior predictive distribution as above. We model biomass over an area \(D\) as a density, i.e., on a per unit area basis (Mg/ha), which we denote by \(y(D)\). The posterior distribution for \(y(D)\) is evaluated by calculating \(y^{(l)}(D)\approx\sum_{m=1}^{n^{*}}y^{(l)}(\ell_{0,m})/n^{*}\) for each sampled \(y^{(l)}(\ell_{0,m})\). We execute this over a dense grid on \(D\). Similarly, posterior samples of total biomass are obtained from \(y_{Tot.}^{(l)}(D)=|D|\sum_{m=1}^{n^{*}}y^{(l)}(\ell_{0,m})/n^{*}\), where \(|D|\) is the area in ha. Samples from the posterior distribution of \(y(D)\) and \(y_{Tot.}^{(l)}(D)\) are then summarized to describe any quantity of interest, e.g., in Section 5, we present posterior distribution means and 95% credible intervals.

### Implementation

Methods developed in the preceding sections were programmed in C++ and used openBLAS (Zhang, 2016) and the Linear Algebra Package (LAPACK; www.netlib.org/lapack) for efficient matrix computations. openBLAS is an implementation of the Basic Linear Algebra Subprograms (BLAS; www.netlib.org/blas) capable of exploiting multiple processors. Computing details about efficiently updating NNGP spatial random effects are provided in Finley et al. (2019). MCMC sampler code for (2) and (3) is provided in the supplemental material along with simulated data used for testing the code and estimation procedures. The computer used for subsequent analyses was running a Linux operating system with an AMD Ryzen Threadripper 3990X 64-core processor (128 threads) and 264 GB of RAM.

## 4 Submodels, model selection, and design-based estimates

Connecting Sections 2 and 3, for model (2) our analysis uses the NLCD strata to form \(\mathds{1}\) and the single G-LiHT canopy structure variable \(x_{CH}\) to form \(\mathbf{X}\) and \(\tilde{\mathbf{X}}\) (i.e., \(q=4\) and \(p=1\)). Model (3) for \(x_{CH}\) is similarly informed using the NLCD strata and a single variable, \(v_{TC}\), to form \(\mathbf{V}\) and \(\tilde{\mathbf{V}}\) (i.e., \(r=1\)). We consider the full model (2) and four submodels. All submodels follow model (2) notation and indexing unless noted otherwise. Submodel 1: \[y(\ell)=\beta_{0}+x_{CH}(\ell)\beta_{CH}+\epsilon(\ell),\,\text{ where }\epsilon(\ell)\stackrel{{\text{iid}}}{{\sim}}N(0,\tau^{2}),\] Submodel 2: \[y(\ell)=\beta_{0}+x_{CH}(\ell)\beta_{CH}+\epsilon_{j}(\ell),\] Submodel 3: \[y(\ell)=\beta_{0}+\tilde{\beta}_{0,j}+x_{CH}(\ell)(\beta_{CH}+\tilde{\beta}_{CH,j})+\epsilon(\ell),\,\text{where }\epsilon(\ell)\stackrel{{\text{iid}}}{{\sim}}N(0,\tau^{2}),\] Submodel 4: \[y(\ell)=\beta_{0}+\tilde{\beta}_{0,j}+x_{CH}(\ell)(\beta_{CH}+\tilde{\beta}_{CH,j})+\epsilon_{j}(\ell),\] Full model: \[y(\ell)=\beta_{0}+\tilde{\beta}_{0,j}+x_{CH}(\ell)(\beta_{CH}+\tilde{\beta}_{CH,j})+w(\ell)+\epsilon_{j}(\ell).\] To assess the contribution of G-LiHT derived canopy structure information, we also consider the full model without \(x_{CH}\), i.e., \(y(\ell)=\beta_{0}+\tilde{\beta}_{0,j}+w(\ell)+\epsilon_{j}(\ell)\). We consider several criteria for selecting the "best" model. The deviance information criterion (DIC) (Spiegelhalter et al., 2002) and widely applicable information criterion (WAIC) (Watanabe, 2010) model fit criteria were computed for each candidate model.
DIC equals \(-2(\mathrm{L}-\mathrm{p}_{D})\) where L is a goodness of fit measure and \(\mathrm{p}_{D}\) is a model penalty term viewed as the effective number of parameters. Two WAIC criteria were computed based on the log pointwise predictive density (LPPD) with \(\mathrm{WAIC}_{1}=-2(\mathrm{LPPD}-\mathrm{p}_{1})\) and \(\mathrm{WAIC}_{2}=-2(\mathrm{LPPD}-\mathrm{p}_{2})\), where penalty terms \(\mathrm{p}_{1}\) and \(\mathrm{p}_{2}\) are defined in Gelman et al. (2014) just prior to, and in, their Equation (11). Models with lower DIC, \(\mathrm{WAIC}_{1}\), and \(\mathrm{WAIC}_{2}\) values have better fit to the observed data and should yield better out-of-sample prediction; see Gelman et al. (2014), Vehtari et al. (2017), or Green et al. (2020) for more details. Lastly, we compute the root mean squared error (RMSE) between the observed and model fitted values. In Section 5, we present the full model estimates of biomass density and total for the TIU. We also present the design-based post-stratified estimates generated using the complete sample of 1 091 TIU plots. In addition to the entire TIU, we consider two illustrative small areas of interest (SAIs)--the Caribou-Poker Creeks Research Watershed (CPC) and the Bonanza Creek Experimental Forest (BCEF). CPC and BCEF are located on Alaska state land and run as long-term ecological research sites by the University of Alaska, Fairbanks. The locations and extents of these SAIs are shown in Figure 1a. As illustrated in Figures 6a and 7a, CPC and BCEF have a continuous forest inventory (CFI) plot network where each plot follows the FIA layout and measurement protocol. These inventory data were used to generate design-based post-stratified estimates that we compare with our model-based estimates. Importantly, the TIU data inform the SAI model-based estimates, not the CPC and BCEF CFI data. In this way, the design-based CPC and BCEF estimates provide an independent assessment using data separate from those used to inform the model-based estimates. Design- and model-based approaches follow different theories of inference. Both are well developed and compared in the statistical and forestry literature (see, e.g., Sarndal et al., 1978, 2003; Gregoire, 1998; McRoberts, 2010). The design-based approach assumes a fixed finite population that could, in principle, be observed without error through a census of all population units. Randomness is incorporated via the selection of population units into a sample according to a sampling design. A sampling design assigns a probability of selection to each sample. This is often effective when the variability and dependence across the population units can be adequately captured by the sampling design. If, however, the units of the population exhibit associations or dependencies that are too complex to be accounted for by a sampling design, then a model-based approach to inference is preferable. The model-based approach assumes that the population is a realization from a data-generating stochastic process. Randomness is incorporated through distributional assumptions on this process via a posited model. These fundamental differences between design- and model-based inference yield different population parameter estimates and interpretation, particularly for uncertainty summaries of these estimates.
In our setting, for example, a design-based 95% confidence interval for the TIU's biomass total is interpreted as "if a large number of independent and equally sized samples were collected according to the sampling design, 95% of the intervals computed using these samples would include the population total." In contrast, a Bayesian model-based 95% credible interval is interpreted as "there is a 95% chance or probability the interval includes the population total, given the posited model and observed data." Due to these very different interpretations, estimates from these modes of inference should not be compared. Rather, we use the design-based estimates as an informal coherence check on the model-based estimates, with a focus primarily on similarities and differences in point estimates.

The candidate models were fit using \(n\)=880 spatially coinciding FIA and G-LiHT locations (Figure 1a). Model (3) parameter estimates and subsequent \(x_{CH}\) predictions were informed using the \(n_{s}\)=61 029 G-LiHT only locations (Figure 1a). Estimates for TIU biomass density and total were based on samples from posterior predictive distributions at \(n^{*}\)=2 165 220 locations (see, e.g., Figure 1b). Given the CPC's and BCEF's small spatial extents and the desire to provide more detailed prediction maps, estimates for the SAIs were based on samples from posterior predictive distributions at locations on a 50-by-50 (m) grid. Posterior inference for all model parameters and predictions was based on 1 000 post-burn and thinned MCMC samples from each of three chains.

## 5 Results

### Model selection

Candidate model fit criteria scores are given in Table 3. As suggested by the EDA in Section 2.4, Submodel 2's improved fit over Submodels 1 and 3 supports stratum-specific residual variances (i.e., \(\tau_{j}^{2}\) parameters). Submodel 4, which includes stratum-varying coefficients and residual variances, provides the best fit among the submodels. The full model, with its spatially varying intercept, provides marginally better fit than Submodel 4. Given the strong relationships between biomass and \(x_{CH}\) seen in the EDA, it is not surprising to see substantially degraded fit when this canopy structure variable is not included in the full model (i.e., comparing full model fit criteria with and without \(x_{CH}\)). Given that the full model (2) with \(x_{CH}\) provides the best fit, it is used for all subsequent inference.

### Tanana Inventory Unit estimates

Estimates for the full model's stratum-varying regression coefficients and residual variance parameters are given in Table 4. The remainder of this model's parameter estimates are given in Table S1. As expected, the coefficients' signs and magnitudes generally follow the EDA stratum-specific regression coefficient estimates in Table 2. Compared with the EDA models, the full model's information pooling, via stratum and spatial random effects, yields smaller and more precise residual variance estimates (i.e., \(\tau_{j}^{2}\) parameters). Spatial process parameter estimates in Table S1 show the spatial random effects capture a relatively small amount of residual variation (i.e., the ratios of \(\sigma_{w}^{2}\) to the \(\tau_{j}^{2}\)s are small). Spatial decay parameter estimates suggest a mean effective spatial range of 37.92 km with lower and upper 95% credible interval bounds of 25.39 and 115.22 km, respectively (we define the "effective spatial range" as the distance at which the spatial correlation drops below 0.05, which equals \(-\log(0.05)/\phi_{w}\)).
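Because the effective spatial range is a deterministic transform of the decay parameter, its posterior summaries follow directly from the \(\phi_{w}\) draws. A minimal sketch, assuming the MCMC samples are available as an array (names are ours):

```python
import numpy as np

# Hypothetical input: phi_w_samples holds post-burn, thinned MCMC draws of
# the spatial decay parameter (1/km) of the exponential correlation function.
def effective_range(phi_w_samples, cutoff=0.05):
    # Distance at which exp(-phi_w * d) drops below the cutoff correlation.
    d = -np.log(cutoff) / np.asarray(phi_w_samples)
    lo, hi = np.percentile(d, [2.5, 97.5])
    return d.mean(), (lo, hi)   # posterior mean and 95% credible interval (km)
```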
Parameter estimates for \(x_{CH}\)'s model (3) are given in Tables S2 and S3. Here, estimates of the stratum-varying regression coefficients suggest that both the strata and \(v_{TC}\) explain a substantial portion of the variability in \(x_{CH}\) (e.g., the 95% credible interval for each slope coefficient does not include zero). The model's spatial random effect explains a portion of the residual variation (i.e., the ratios of \(\nu_{CH,u}^{2}\) to the \(\gamma_{j}^{2}\)s are larger); however, the effective spatial range for the spatial process is short, extending only about 5 km. This is not too surprising given that \(v_{TC}\) explains a substantial portion of \(x_{CH}\)'s variability.

The predictive model for \(x_{CH}\) (8) yields a posterior predictive distribution at the TIU's \(n^{*}\) prediction locations. The posterior predictive distribution mean and standard deviation for \(x_{CH}\) at each prediction location are given in Figures 4a and 4b, respectively. Given the short effective spatial range of approximately 5 km, the G-LiHT flight lines where \(x_{CH}\) was measured appear as stripes of increased accuracy and precision in the maps; predictions are improved along and adjacent to flight lines. The two-stage modeling approach developed in Section 3 is designed to use \(x_{CH}\)'s posterior predictive information to inform biomass (\(y\)) predictions. This information, propagated across model components, is seen as increased accuracy and precision along flight lines in the biomass posterior predictive distribution summary maps, Figures 5a and 5b, respectively.

Turning now to areal biomass estimates, Table 5 gives the stratum area, sample size, and design-based biomass density and total estimates. Corresponding model-based estimates derived from the \(n^{*}\) posterior predictive distributions are given in Table 6. The estimators yield similar biomass density point estimates. The Mixed stratum shows the largest disparity, with a design-based estimate of 52.834 (Mg/ha) and a model-based estimate of 40.288 (Mg/ha). The model-based point estimate for the TIU biomass total is 9 831.88 (1 000 Mg) more than the design-based point estimate (i.e., \(261\,441.680-251\,609.800=9\,831.88\)). The Other stratum has the largest land area in the TIU. Hence, the seemingly small difference in biomass density between the design- and model-based estimates (i.e., 4.624 and 7.525, respectively) results in a large difference in biomass total (i.e., 39 748.36 and 64 689.31, respectively).

Figure 4: (a) and (b) mean and standard deviation, respectively, of the posterior predictive distribution for the G-LiHT mean canopy height variable estimated using (3) and associated predictive model.

The difference between the design- and model-based estimates for the Other stratum comprises about 62% of the difference between the estimators' TIU biomass total estimates, i.e., 62% of the 9 831.88 (1 000 Mg) is due to differing Other density estimates (the contributions of the remaining strata are Conifer 18%, Deciduous 7%, and Mixed 12%).

### Small area estimates

Next we consider the CPC and BCEF analysis results. As noted at the end of Section 4, posterior inference comes directly from the TIU data and model; however, the SAIs use a denser prediction location grid for posterior predictive inference and hence areal estimates (i.e., one prediction every 1/4 ha for the SAIs vs. 6.25 ha for the TIU).
The denser grid yields more detailed maps and, given the fairly large size of the SAIs, has a negligible effect on the subsequent biomass density and total estimates.

We begin with the CPC, which is the northernmost SAI shown in Figure 1a. Figure 6a shows the CPC boundary, locations with G-LiHT derived \(x_{CH}\), the single FIA plot, and the CFI plot network. The strata are shown in Figure 6b. The posterior predictive distribution mean and standard deviation for \(x_{CH}\) at each prediction location are given in Figures 6c and 6d, respectively. The increased precision seen in the broader TIU \(x_{CH}\) standard deviation map (Figure 4b) is less pronounced in Figure 6d but still visible where the G-LiHT only observations coincide with the Other stratum (the large ratio of _signal_ variance \(\nu_{CH,u}^{2}\) to _noise_ variance \(\gamma_{j}^{2}\), where \(j\) equals the Other stratum index, allows the random effect influence to be more apparent). Variability in \(x_{CH}\) within a given stratum is due primarily to variability in tree cover \(v_{TC}\) values. Biomass posterior predictive distribution mean and standard deviation maps are given in Figures 6e and 6f, respectively. These maps make apparent the stratum-varying biomass density and variability, and the propagated \(x_{CH}\) uncertainty.

Table 7 gives the CPC's stratum areas, CFI sample size, and design-based post-stratified biomass density and total estimates. Comparing the CPC and TIU design-based estimates (i.e., Tables 5 and 7), we see the CPC's stratum-specific biomass density differs substantially from that of the broader TIU. For example, the CPC's Conifer and Mixed strata have about half the density of the TIU, i.e., 15.261 vs. 36.183 (Mg/ha) and 28.434 vs. 52.834 (Mg/ha), respectively. In contrast, the CPC's Deciduous and Other strata have greater density than the TIU, i.e., 73.195 vs. 66.259 (Mg/ha) and 12.263 vs. 4.625 (Mg/ha). CPC model-based estimates derived from the \(n^{*}\) posterior predictive distributions are given in Table 8. Relative to the CPC's design-based biomass density point estimates, the model-based estimates look more similar to the broader TIU.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline\hline
 & \multicolumn{4}{c}{Biomass (Mg/ha)} & \multicolumn{4}{c}{Biomass (1000 Mg)} \\
\cline{2-9}
Stratum & (Lower 95\%) & Mean & (Upper 95\%) & SD & (Lower 95\%) & Mean & (Upper 95\%) & SD \\
\hline
Conifer & (32.312) & 34.188 & (36.502) & 1.040 & (118238.780) & 125102.840 & (133568.720) & 3806.029 \\
Deciduous & (58.907) & 62.939 & (67.118) & 2.179 & (52476.700) & 56069.070 & (59791.490) & 1941.281 \\
Mixed & (35.632) & 40.288 & (44.415) & 2.163 & (13779.930) & 15580.460 & (17176.670) & 836.452 \\
Other & (6.629) & 7.525 & (8.571) & 0.529 & (56981.520) & 64689.310 & (73676.780) & 4544.735 \\
\hline
TIU & (18.312) & 19.319 & (20.239) & 0.513 & (247810.560) & 261441.680 & (273898.360) & 6945.950 \\
\hline\hline
\end{tabular}
\end{table}
Table 6: Tanana Inventory Unit model (2) biomass density and total estimates.
This is not surprising because the model draws information from the entire TIU dataset to inform the CPC biomass estimates. Despite the differences between the design- and model-based stratum-specific density and total estimates seen when comparing Tables 7 and 8, the CPC-wide densities and totals are quite similar, i.e., 32.284 vs. 37.695 (Mg/ha) and 342.341 vs. 399.725 (1 000 Mg), respectively.

\begin{table}
\begin{tabular}{c r r r r r r}
\hline\hline
 & & & \multicolumn{2}{c}{Biomass (Mg/ha)} & \multicolumn{2}{c}{Biomass (1000 Mg)} \\
\cline{4-7}
Stratum & Area (ha) & \(n\) & Mean & SE & Total & SE \\
\hline
Conifer & 3793.290 & 13 & 15.261 & 5.378 & 57.888 & 20.400 \\
Deciduous & 2922.567 & 10 & 73.195 & 6.398 & 213.918 & 18.698 \\
Mixed & 1413.225 & 6 & 28.434 & 7.290 & 40.183 & 10.303 \\
Other & 2475.102 & 6 & 12.263 & 5.669 & 30.352 & 14.032 \\
\hline
CPC & 10604.180 & 35 & 32.284 & 3.083 & 342.341 & 32.693 \\
\hline\hline
\end{tabular}
\end{table}
Table 7: Caribou-Poker Creek Research Watershed strata area, number of inventory plots \(n\), and design-based post-stratified biomass density and total estimates.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline\hline
 & \multicolumn{4}{c}{Biomass (Mg/ha)} & \multicolumn{4}{c}{Biomass (1000 Mg)} \\
\cline{2-9}
Stratum & (Lower 95\%) & Mean & (Upper 95\%) & SD & (Lower 95\%) & Mean & (Upper 95\%) & SD \\
\hline
Conifer & (22.810) & 36.338 & (49.723) & 7.125 & (86.524) & 137.839 & (188.614) & 27.026 \\
Deciduous & (52.516) & 64.087 & (75.373) & 5.983 & (153.481) & 187.300 & (220.281) & 17.486 \\
Mixed & (24.555) & 36.604 & (48.815) & 6.240 & (34.702) & 51.730 & (68.986) & 8.818 \\
Other & (1.907) & 9.247 & (16.340) & 3.645 & (4.721) & 22.886 & (40.444) & 9.021 \\
\hline
CPC & (27.620) & 37.695 & (47.904) & 5.350 & (292.884) & 399.725 & (507.978) & 56.731 \\
\hline\hline
\end{tabular}
\end{table}
Table 8: Caribou-Poker Creek Research Watershed model (2) biomass density and total estimates.

Next we consider the BCEF analysis results. The BCEF is the southernmost SAI shown in Figure 1a. Figure 7a shows the BCEF boundary, locations with G-LiHT derived \(x_{CH}\), the two FIA plots, and the CFI plot network. The strata are shown in Figure 7b. The posterior predictive distribution mean and standard deviation for \(x_{CH}\) at each prediction location are given in Figures 7c and 7d, respectively. Like the CPC, the increased precision in \(x_{CH}\) prediction (seen in Figure 7d) is most apparent in the Other stratum (due to the larger signal-to-noise ratio, i.e., \(\nu_{CH,u}^{2}/\gamma_{j}^{2}\), where \(j\) equals the Other stratum index). Biomass posterior predictive distribution mean and standard deviation maps are given in Figures 7e and 7f, respectively. Table 9 gives the BCEF's stratum areas, CFI sample size, and design-based post-stratified biomass density and total estimates. The BCEF and TIU design-based biomass density estimates (i.e., Tables 5 and 9) are similar, with the exception of the Other stratum; the Other point estimate for the BCEF is 18.070 vs. the TIU's 4.624 (Mg/ha). The BCEF's model-based estimates derived from the \(n^{*}\) posterior predictive distributions are given in Table 10. The BCEF's model-based biomass density point estimates for the Conifer and Deciduous strata are a bit higher than the design-based estimates. Despite differences between the design- and model-based stratum-specific density and total estimates seen when comparing Tables 9 and 10, the BCEF-wide densities and totals are quite similar, i.e., 41.397 vs.
49.391 (Mg/ha) and 867.503 vs. 1 198.903 (1 000 Mg), respectively.

\begin{table}
\begin{tabular}{c r r r r r r}
\hline\hline
 & & & \multicolumn{2}{c}{Biomass (Mg/ha)} & \multicolumn{2}{c}{Biomass (1000 Mg)} \\
\cline{4-7}
Stratum & Area (ha) & \(n\) & Mean & SE & Total & SE \\
\hline
Conifer & 7701.783 & 21 & 39.139 & 6.872 & 301.439 & 52.926 \\
Deciduous & 5615.655 & 30 & 60.951 & 7.394 & 342.278 & 41.520 \\
Mixed & 2124.105 & 8 & 58.444 & 17.532 & 124.140 & 37.239 \\
Other & 5514.323 & 17 & 18.070 & 6.677 & 99.645 & 36.820 \\
\hline
BCEF & 20955.870 & 76 & 41.397 & 4.068 & 867.503 & 85.250 \\
\hline\hline
\end{tabular}
\end{table}
Table 9: Bonanza Creek Experimental Forest strata area, number of inventory plots \(n\), and design-based post-stratified biomass density and total estimates.

Figure 6: Caribou-Poker Creek Research Watershed (CPC) data and analysis results. (a) locations of G-LiHT, FIA, and CPC's continuous forest inventory (CFI) data. (b) strata used for design- and model-based biomass estimates. (c) and (d) mean and standard deviation, respectively, of the posterior predictive distribution for the G-LiHT mean canopy height variable estimated using (3) and associated predictive model (8). (e) and (f) mean and standard deviation, respectively, of the posterior predictive distribution for biomass estimated using (2) and associated predictive model.

Figure 7: Bonanza Creek Experimental Forest (BCEF) data and analysis results. (a) locations of G-LiHT, FIA, and BCEF's continuous forest inventory (CFI) data. (b) strata used for design- and model-based biomass estimates. (c) and (d) mean and standard deviation, respectively, of the posterior predictive distribution for the G-LiHT mean canopy height variable estimated using (3) and associated predictive model (8). (e) and (f) mean and standard deviation, respectively, of the posterior predictive distribution for biomass estimated using (2) and associated predictive model.

## 6 Discussion and future work

As presented in the results, the proposed model yields stratum-specific biomass estimates for arbitrarily sized areas of interest, e.g., the large-area TIU and the small-area CPC and BCEF. From an SAE standpoint, the unit-level model we propose also provides maps for the areas of interest at a user-defined spatial resolution. Under the Bayesian paradigm used here, these maps (and the resulting areal estimates) can summarize any posterior predictive distribution characteristic, e.g., mean, median, standard deviation, or credible intervals. Further, in the same way we condition biomass prediction on \(x_{CH}\), access to biomass posterior predictive samples facilitates uncertainty propagation through other functions or models that take biomass as input (e.g., economic or ecological models).

Comparison of the candidate model fit metrics suggested substantive information gain from using G-LiHT derived mean canopy height, stratum-varying effects, a space-varying intercept, and stratum-specific residual variance parameters. These model features were identified using EDA. One might also consider allowing for stratum-varying spatial random effects (see, e.g., Chan-Golston et al., 2020); however, given the paucity of spatial structure after accounting for mean canopy height, it is unlikely that stratum-varying spatial processes are warranted.

As discussed in Section 4, fundamental differences between the design- and model-based approaches mean we should not compare their resulting population parameter estimates.
This is particularly the case for uncertainty estimates, whose interpretations are entirely incompatible. With the understanding that the true population parameter is unknown (and perhaps unknowable), we do informally compare population point estimates. The design-based post-stratified point estimate, using 1 091 FIA plots, puts the TIU's total biomass at 251 609.8 (1 000 Mg). The model-based point estimate, using 880 FIA plots and auxiliary information, puts the TIU's total biomass at a slightly higher 261 441.68 (1 000 Mg). Much of the difference in these estimates is due to the model placing more biomass in the Other stratum. Looking at the observed distribution of biomass in Other (Figure 2a) and the canopy cover map (Figure 1d), future work might consider a refined set of strata that partitions Other into non-vegetation (e.g., water, barren) and vegetation. Or, following Finley et al. (2011) and May et al. (2023b), one might introduce an additional hierarchical level to differentiate between locations with and without biomass.

Like other studies, we demonstrate that the model-based approach is particularly useful for SAE. Here, the paucity of FIA plots in the CPC and BCEF precludes design-based estimation. In comparison, the proposed model draws on the broader TIU dataset to deliver high-resolution maps and areal estimates by stratum for the CPC and BCEF. A key contribution here is demonstrating how a two-stage hierarchical Bayesian model allows information to be shared between multiple model components and how sparse data are used to inform parameter estimates within and across components, and ultimately prediction. This model uses information where available to improve prediction accuracy and precision. For example, improved biomass prediction near G-LiHT measurements is seen in Figures 5a and 5b as stripes of higher fidelity and increased precision, respectively. Away from G-LiHT flight lines, biomass prediction retreats to the stratum mean and its variance is dominated by that of the non-spatial residual process.

Because model-based inference is not confined to probability sampling, future work could explore opportunities for adaptive sampling designs given objective functions that balance data acquisition cost at each model level (e.g., LiDAR and plot measurements) with inferential objectives (e.g., maximizing prediction accuracy and precision); see, e.g., Xia et al. (2006) and Mateu and Muller (2012). Given a posited model like those presented here, such adaptive sampling designs could facilitate cost-efficient data collection efforts in remote regions while meeting inferential objectives.

## Acknowledgments

Funding was provided by: NASA Carbon Monitoring System (CMS) grants Hayes (CMS 2020) and Cook (CMS 2018); National Science Foundation (NSF) DMS-1916395; a joint venture agreement with the USDA Forest Service Forest Inventory and Analysis; USDA Forest Service, Region 9, Forest Health Protection, Northern Research Station; and Michigan State University AgBioResearch.
\begin{table}
\begin{tabular}{l c c c c c c c c c}
\hline\hline
 & \multicolumn{3}{c}{\(\alpha_{CH,0}+\tilde{\alpha}_{CH,0,j}\)} & \multicolumn{3}{c}{\(\alpha_{CH,TC}+\tilde{\alpha}_{CH,TC,j}\)} & \multicolumn{3}{c}{\(\gamma_{j}^{2}\)} \\
\cline{2-10}
Stratum & (L. 95\%) & Mean & (U. 95\%) & (L. 95\%) & Mean & (U. 95\%) & (L. 95\%) & Mean & (U. 95\%) \\
\hline
Conifer & (2.431) & 2.502 & (2.587) & (2.416) & 2.484 & (2.559) & (2.078) & 2.147 & (2.221) \\
Deciduous & & 6.035 & (6.179) & (5.903) & 6.025 & & (11.500) & 12.081 & (12.710) \\
Mixed & (3.689) & 3.852 & (4.013) & (3.687) & 3.825 & (3.978) & (6.792) & 7.307 & (7.864) \\
Other & & 1.617 & (1.695) & (1.533) & 1.597 & (1.668) & (0.315) & 0.340 & (0.362) \\
\hline\hline
\end{tabular}
\end{table}
Table S2: Mean canopy height \(x_{CH}\) model (3) parameter estimates, with \(j\) indexing stratum. Associated process parameter estimates are given in Table S3.
2308.13720
Entropic Timescales of Dynamic Heterogeneity in Supercooled Liquid
Non-Gaussian displacement distributions are universal predictors of dynamic heterogeneity in slowly varying environments. Here, we explore heterogeneous dynamics in a supercooled liquid using molecular dynamics simulations and show the efficiency of the information-theoretic measure in quantifying dynamic heterogeneity over the widely used moment-based quantifications of non-Gaussianity. Our analysis shows that the heterogeneity quantified by the negentropy is significantly different from the one obtained using the conventional approach that considers deviation from Gaussianity up to lower-order moments. Further, we extract the timescales of dynamic heterogeneity using the two methods and show that the differential changes diverge as the system experiences strong intermittency near the glass transition.
Vinay Vaibhav, Suman Dutta
2023-08-26T00:57:00Z
http://arxiv.org/abs/2308.13720v1
# Entropic Timescales of Dynamic Heterogeneity in Supercooled Liquid

###### Abstract

Non-Gaussian displacement distributions are universal predictors of dynamic heterogeneity in slowly varying environments. Here, we explore heterogeneous dynamics in a supercooled liquid using molecular dynamics simulations and show the efficiency of the information-theoretic measure in quantifying dynamic heterogeneity over the widely used moment-based quantifications of non-Gaussianity. Our analysis shows that the heterogeneity quantified by the negentropy is significantly different from the one obtained using the conventional approach that considers deviation from Gaussianity up to lower-order moments. Further, we extract the timescales of dynamic heterogeneity using the two methods and show that the differential changes diverge as the system experiences strong intermittency near the glass transition.

_Introduction.--_ The Fickian theory of diffusion has been unquestionably successful for more than a century in analyzing particle-level dynamics in soft condensed matter in its different forms and shapes, unless the system is intermittent or has widely separated timescales [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. In metastable systems, the fundamental route to diffusion becomes difficult in the presence of complex energy landscapes, specifically when small thermal fluctuations cannot supply the energy cost of achieving diffusion by overcoming the energy barriers [15; 24]. For instance, molecular displacements deviate from their usual Gaussian form [3; 5] in liquids approaching the glass transition [25; 26], showing slow, heterogeneous density relaxation [27]. Such dramatic slowing down, observed generically in a host of systems without any reproducible thermodynamic transition, has remained a surprise even after decades of research [2; 16; 18]. Dynamic heterogeneity is the observed complex dynamics of particles in such temporally fluctuating environments in the presence of spatial degrees of heterogeneity, where locally fast and slow relaxation processes coexist simultaneously [15; 28; 29; 30; 21].

Dynamic heterogeneity in supercooled liquids has been affirmed by persistent non-Gaussian tails in the displacement distributions, even when the mean-squared displacement increases linearly with time [31; 32; 33]. This class of non-Gaussian diffusion has been explained as an effective dynamics in the presence of a diffusion spectrum, where the dynamics is strongly influenced by the presence of _cages_ and diffusion is only restored upon _cage-breaking_, making it distinct from the Fickian class of liquids [31; 32]. The persistence of dynamic heterogeneity is thus debated, given the presence of multiple relaxation timescales and the various ways of determining them. It has also remained inconclusive whether the onset of diffusion occurs at all in finite time when a liquid approaches its glass transition [34; 35; 14; 33].

Extracting the fundamental timescale of dynamic heterogeneity directly from the displacement distributions, or the _self-van Hove_ function, is more advantageous than obtaining it from other quantities that are related to its moments or derivatives. Using the displacement distributions, it is possible to identify timescales of heterogeneity by finding the maximal non-Gaussianity using conventional measures that rely on moment-based relationships [36].
However, one hindrance of this method is that such moment ratios are primarily limited to lower-order moments, which raises a natural question: _how optimal are the moment-based predictions of dynamic heterogeneity, or its detection by the conventional techniques?_ This calls for newer directions in the precise identification of non-Gaussianity by informative approaches with data-driven resources, just as information-theory-based optimizations simplify challenging multi-scale and inverse problems in bio-informatics [37; 38], and the prediction of dynamics and structures using machine intelligence is unveiling newer avenues in physics [39; 40; 41].

Here, we explore dynamic heterogeneity in a supercooled liquid above the glass transition using molecular dynamics simulations in three dimensions. We examine the spatio-temporal dynamics in terms of the evolving probability distribution functions of the displacements at the single-particle level, which are strongly heterogeneous and non-Gaussian until the system diffuses at sufficiently long times. We quantify temporal intermittency in terms of the non-Gaussianity extracted from the molecular displacements using the conventional moment-based descriptions and a _relative entropy_ based _non-Gaussian information_ that considers the statistical distance between the time-dependent displacement distribution and its equal-time nearest Gaussian constructed from the original probability distribution functions. We extract and compare the identified timescales of optimal heterogeneity obtained from the two methods and show that they surprisingly differ in estimating microscopic heterogeneity, in particular when the situation is strongly intermittent at low temperatures. Further, we show that this difference diverges when approaching the glass transition, while the timescales are similar at relatively high temperatures within the supercooled regime. We correlate the two quantities and interpret the deviation.

_Model.--_ We simulate a well-studied model glass-forming system, popularly known as the Kob-Andersen 80:20 (A:B) binary mixture [42], in three dimensions at different temperatures \(T\) within the supercooled regime. The particles (each with unit mass) interact via the Lennard-Jones (LJ) pair potential with a cutoff at a distance \(R_{c}\). Here, we use \(N=1000\) particles at a constant number density (\(\rho=1.20\)) in a cubic box of length \(L=9.41\), with \(R_{c}=2.5\). All measurements are reported in LJ reduced units (see Ref. [43] for model-related information). Independent molecular dynamics trajectories are generated using LAMMPS [44] under periodic boundary conditions with integration time step \(\Delta t=0.004\) at each of the different \(T\in[0.45,0.70]\), above the mode-coupling temperature \(T_{MCT}\approx 0.435\) of the mixture [42].

_Results.--_ We track the particle trajectories and compute time-dependent probability distribution functions of particle displacements (also known as self-van Hove functions) projected onto one dimension \(x\) over a time interval \(t\), \(G_{S}(x,t)=\frac{1}{N}\sum_{i=1}^{N}\delta(x-(x_{i}(t)-x_{i}(0)))\) [45], averaged over the displacement distributions along all spatial dimensions obtained from different independent trajectories. We show the time evolution of \(G_{S}(x,t)\) for different \(t\) in Fig. 1 for two different values of \(T\). At finite temperatures, \(G_{S}(x,t)\) spreads with increasing \(t\) due to diffusion, as seen here.
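For reference, a minimal sketch of how \(G_{S}(x,t)\) can be estimated from unwrapped trajectory frames (function and variable names are ours, not from the paper):

```python
import numpy as np

# Hypothetical inputs: pos0 and post are (N, 3) arrays of unwrapped particle
# positions at times 0 and t from a LAMMPS trajectory.
def van_hove_self(pos0, post, bins=200, xmax=5.0):
    # 1D displacements pooled over the x, y, z directions (in practice one
    # would also pool over time origins and independent trajectories).
    dx = (post - pos0).ravel()
    hist, edges = np.histogram(dx, bins=bins, range=(-xmax, xmax), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist   # x grid and the normalized G_S(x, t)
```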
For \(T=0.70\), it develops a non-Gaussian tail at intermediate times that reverts to a Gaussian at long times, suggesting that the dynamics is intermittent and heterogeneous even at relatively high temperatures within the supercooled regime [Fig. 1(a)]. For \(T=0.45\), the heterogeneity is not only stronger, but its lifetime is also prolonged, as the non-Gaussian tail (\(G_{S}(x,t)\sim\exp(-x/\lambda(t))\) with \(\lambda(t)\sim\sqrt{t}\)) persists for a very long time [Fig. 1(b)]. Such persistent exponential tails have been reported earlier in both simulations [10; 25] and experiments [3; 15; 32].

In order to assess the degree of temporal heterogeneity in the microscopic dynamics, we show the temporal evolution of \(G_{S}(x,t)\) for various \(t\) at two different \(T\) in Figs. 1(c)-1(n) and compare it with the reconstructed _equal-time nearest Gaussian distributions_ (see Ref. [46]), \(G_{S}^{G}(x,t)\), which have the same first two moments as \(G_{S}(x,t)\). This ensures that \(G_{S}^{G}(x,t)\to G_{S}(x,t)\) for a purely Gaussian distribution and \(G_{S}^{G}(x,t)\neq G_{S}(x,t)\) when it is strictly non-Gaussian. We observe \(G_{S}(x,t)\approx\delta(x)\) for \(t=0\), but the calculation of \(G_{S}^{G}(x,t)\) becomes more meaningful when \(G_{S}(x,t)\) has finite support at finite \(t\). For \(T=0.70\), we observe that for very small \(t\) the difference between \(G_{S}(x,t)\) and \(G_{S}^{G}(x,t)\) is not very significant [Fig. 1(c)]. The difference is enhanced with increasing \(t\) [Figs. 1(d), 1(e)], where \(G_{S}(x,t)\) is prominently non-Gaussian, while for larger \(t\), \(G_{S}(x,t)\) again reverts to a Gaussian form with \(G_{S}^{G}(x,t)\approx G_{S}(x,t)\) [Figs. 1(f)-1(h)], suggesting the onset of Fickian diffusion, which is characterized by a Gaussian displacement distribution and a linear mean squared displacement [32].

Figure 1: [(a)-(b)] Temporal evolution of the non-Gaussian displacement distributions, \(G_{S}(x,t)\) vs. \(x\), for two different \(T\), (a) \(T=0.70\) and (b) \(T=0.45\), for different \(t\): \(t_{1}=928.98\), \(t_{2}=9527.62\), \(t_{3}=39913.85\), \(t_{4}=68300.79\), \(t_{5}=139796.16\), with \(x\in[X,Y,Z]\). The long-time behavior of \(G_{S}(x,t)\) at large \(x\) is Gaussian for \(T=0.70\) and exponential for \(T=0.45\). [(c)-(n)] Temporal evolution of the displacement distributions, \(G_{S}(x,t)\) (open symbols), and the respective reconstructed equal-time nearest Gaussians, \(G_{S}^{G}(x,t)\) (black solid line), vs. \(x\) for \(T=0.70\) [(c)-(h)] and \(T=0.45\) [(i)-(n)] for \(t=0.60\) [(c),(i)], 21.62 [(d),(j)], 129.59 [(e),(k)], 27899.00 [(f),(l)], 47740.94 [(g),(m)] and 167210.14 [(h),(n)]. \(G_{S}^{G}(x,t)\) are the optimal Gaussians with the same first two moments as \(G_{S}(x,t)\).

The situation is surprisingly different for \(T=0.45\), which we show in Figs. 1(i)-1(n) and compare with the respective \(G_{S}^{G}(x,t)\), as before. For small \(t\), the distributions are highly spiked, centered at \(x=0\). Within the smaller intervals, \(G_{S}(x,t)\approx G_{S}^{G}(x,t)\) [Fig. 1(i)], and the small deviations from delta functions are mostly contributed by the lower-order moments. For larger \(t\), as \(G_{S}(x,t)\) broadens in \(x\), more deviation from \(G_{S}^{G}(x,t)\) can be seen in the form of an exponential tail that grows, with a significantly large proportion of displacements appearing beyond the support of the equal-time nearest Gaussians [Figs. 1(j), 1(k)], suggesting that the heterogeneity is maximally contributed by the higher-order moments.
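The equal-time nearest Gaussian used above can be constructed directly from the estimated distribution by matching its first two moments; a minimal sketch continuing the previous snippet:

```python
import numpy as np

# Hypothetical inputs: x and G are the bin centers and values of G_S(x, t)
# returned by van_hove_self above (assumed uniformly binned and normalized).
def nearest_gaussian(x, G):
    dx = x[1] - x[0]
    mu = np.sum(x * G) * dx                      # first moment
    var = np.sum((x - mu) ** 2 * G) * dx         # second central moment
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
```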
With increasing \(t\) [Figs. 1(l)-1(n)], the difference between \(G_{S}(x,t)\) and \(G_{S}^{G}(x,t)\) decreases, yet it remains non-Gaussian within our observation time window.

Now we attempt to quantify the dynamic heterogeneity in terms of the non-Gaussianity of \(G_{S}(x,t)\) using conventional approaches that use lower-order moment-based relationships. One very simple quantification of non-Gaussianity that is widely used in the literature is based on the deviation of the fourth moment from the square of the second moment, defined as \(\alpha_{2}(t)=\frac{\langle x^{4}(t)\rangle}{3\langle x^{2}(t)\rangle^{2}}-1\) with \(\langle x^{n}(t)\rangle=\int dx\ x^{n}G_{S}(x,t)\), as in Refs. [1; 3; 42; 47]. We show the dependence of \(\alpha_{2}\) on \(t\) for different \(T\) in Fig. 2(a). For \(T=0.70\), \(\alpha_{2}\) grows with increasing \(t\) until a peak is reached, after which \(\alpha_{2}\) decreases for larger \(t\). For lower temperatures, the peak in \(\alpha_{2}\) shifts to larger \(t\) and the height of the peak grows. In all these cases, \(\alpha_{2}\) decreases monotonically after the peak. The timescale corresponding to the peak is generally identified as the characteristic timescale of dynamic heterogeneity. However, such quantification is fundamentally limited to the fourth-order moment of \(G_{S}(x,t)\). Figs. 1(c)-1(n) suggest that the higher-order moments may be associated with larger degrees of heterogeneity, for which we explore another quantification of non-Gaussianity that captures contributions from all moments of \(G_{S}(x,t)\) at a given time.

The non-Gaussianity of \(G_{S}(x,t)\) is now analysed using the _non-Gaussian information_ [46] that we developed following Ref. [48], where it was originally proposed as _negentropy_. It uses the statistical distance between \(G_{S}(x,t)\) and \(G_{S}^{G}(x,t)\) to quantify \(\Delta s_{ng}(t)=A\,D_{KL}(G_{S}(x,t)||G_{S}^{G}(x,t))\), where \(A\) is a constant. We further define \(\Delta S_{ng}(t)(=\Delta s_{ng}(t)/A)=-\int dx\ G_{S}(x,t)\log_{e}\frac{G_{S}^{G}(x,t)}{G_{S}(x,t)}=S_{ng}^{G}-S_{ng}\) when \(G_{S}(x,t)\) has finite support. Here, \(S_{ng}(t)=-\int dx\ G_{S}(x,t)\log_{e}G_{S}(x,t)\) and \(S_{ng}^{G}(t)=-\int dx\ G_{S}^{G}(x,t)\log_{e}G_{S}^{G}(x,t)\), and \(D_{KL}(P||Q)\) is the _Kullback-Leibler (KL)_ divergence [49] between two probability distribution functions \(P(x)\) and \(Q(x)\).

We show the time dependence of \(\Delta S_{ng}\) for different \(T\) in Fig. 2(b). \(\Delta S_{ng}\) shows a non-monotonic dependence on \(t\), similar to \(\alpha_{2}\): in all cases, \(\Delta S_{ng}\) grows up to a peak and then decreases monotonically. Surprisingly, for every \(T\), we observe that \(\Delta S_{ng}\) still grows to higher values when \(\alpha_{2}\) has already reached its maximum. This suggests that the dynamic heterogeneity in \(G_{S}(x,t)\) may not be completely captured by \(\alpha_{2}\), and that the peak in \(\alpha_{2}\) may not be a true representation of the timescale of the underlying heterogeneity, as it underestimates the contribution of the higher-order moments in \(G_{S}(x,t)\); the information in \(\alpha_{2}\) is limited to the fourth moment of the displacement distribution.

We compare the behavioral differences between \(\alpha_{2}\) and \(\Delta S_{ng}\) in Fig. 3. In Fig. 3(a), we statistically correlate \(\alpha_{2}\) and \(\Delta S_{ng}\) for the different cases of \(T\). In all these cases, they form loop-like shapes, and the area enclosed by the curve is larger for smaller \(T\).
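Both quantifications defined above can be evaluated numerically from the same histogrammed \(G_{S}(x,t)\); a minimal sketch continuing the earlier snippets (quadrature over the bins, with the KL integral restricted to the common support):

```python
import numpy as np

# Hypothetical inputs: x, G as above; Gg is the matched Gaussian from
# nearest_gaussian.
def alpha2(x, G):
    dx = x[1] - x[0]
    m2 = np.sum(x**2 * G) * dx
    m4 = np.sum(x**4 * G) * dx
    return m4 / (3.0 * m2**2) - 1.0

def delta_S_ng(x, G, Gg):
    dx = x[1] - x[0]
    mask = (G > 0) & (Gg > 0)   # restrict to bins with finite support
    return np.sum(G[mask] * np.log(G[mask] / Gg[mask])) * dx
```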
We observe that the behavioral difference between \(\alpha_{2}\) and \(\Delta S_{ng}\) grows when both have higher values, representing maximal heterogeneity. The degree of this deviation increases significantly with decreasing \(T\), strongly suggesting that the dynamic heterogeneity is underestimated by \(\alpha_{2}\) when \(G_{S}(x,t)\) is maximally non-Gaussian. This is consistent with our earlier investigations [46], where we obtained similar loops in the case of a model supercooled liquid based on the continuous time random walk (CTRW), suggesting that the underlying dynamic heterogeneity picture is qualitatively similar and that the behavior of non-Gaussianity-based quantifications remains universal under such slowly varying conditions.

Figure 2: Quantification of dynamic heterogeneity in terms of non-Gaussianity of \(G_{S}(x,t)\) via (a) \(\alpha_{2}\) and (b) \(\Delta S_{ng}\) vs. \(t\) for different \(T\) as marked.

We further extract the timescales corresponding to the peaks of \(\alpha_{2}\), \(\tau_{A}\), and of \(\Delta S_{ng}\), \(\tau_{S}\) [see Fig. 2], and show their dependence on the inverse of \(T\) in the inset of Fig. 3(b). Both \(\tau_{A}\) and \(\tau_{S}\) increase sharply for larger values of \(1/T\). In all these cases, we observe that \(\tau_{S}\) has higher values than \(\tau_{A}\). We finally correlate \(\log_{e}\tau_{A}\) and \(\log_{e}\tau_{S}\) in the main panel of Fig. 3(b). We fit the data and obtain \(\log_{e}\tau_{S}\approx\beta_{0}+\beta\log_{e}\tau_{A}\) with \(\beta_{0}\approx-0.379,\beta\approx 1.286\). This suggests that \(\tau_{S}\) diverges from \(\tau_{A}\) as a power law (\(\tau_{S}\sim\tau_{A}^{\beta}\)), and that this difference grows with decreasing \(T\).

Further, we extract the height of the peak in \(\Delta S_{ng}\), \(\Delta S_{ng}^{P}(T)(=\max_{t\in[0,\infty]}\Delta S_{ng}(t;T))\), and correlate it with the corresponding entropic timescale, \(\tau_{S}\). In the inset of Fig. 3(c), we show that \(\log_{e}\tau_{S}\sim\zeta_{0}+\zeta\log_{e}\Delta S_{ng}^{P}\), where \(\zeta_{0}\approx 9.956,\zeta\approx 2.164\). We also observe that \(\Delta S_{ng}^{P}\) increases with decreasing \(T\), sharply near \(T=0.45\) [Fig. 3(c)], where \(\tau_{S}\) grows sharply, as shown in the inset of Fig. 3(b). We fit the data as \(\Delta S_{ng}^{P}\sim\nu_{0}-(T-T^{*})^{\gamma}\) with \(T^{*}\approx 0.443,\nu_{0}\approx 0.928,\gamma\approx 0.064\). This suggests that the _non-Gaussian information_ rises steeply as \(T\to T^{*}\), accompanying the sharp increase of \(\tau_{S}\) when approaching the state of dynamic arrest at the glass transition. This also indicates that the degree of heterogeneity at a given \(T\) is strongly connected with \(\Delta S_{ng}^{P}\). Thus, the system explores a strongly intermittent environment and dynamically heterogeneous states with larger non-Gaussian information, where \(\Delta S_{ng}^{P}\) represents the absolute entropic distance of the dynamic arrest from the nearest diffusive route. Hence, the term \(T\Delta S_{ng}^{P}\) can be considered an energy scale associated with the heterogeneity that competes with diffusion in overcoming the dynamic heterogeneity at a given \(T\). We therefore correlate the entropic timescale \(\tau_{S}\) with \(1/(T\Delta S_{ng}^{P})\) and observe that \(\tau_{S}\) grows, showing a sharp rise with decreasing \(1/(T\Delta S_{ng}^{P})\) [Fig. 3(d)].
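The quoted power-law relation between the two timescales can be reproduced with an ordinary least-squares fit in log-log space; a minimal sketch, assuming arrays of the extracted peak times per temperature (names are ours):

```python
import numpy as np

# Hypothetical inputs: tau_A and tau_S hold the peak times of alpha_2(t)
# and Delta S_ng(t), one entry per simulated temperature.
def power_law_fit(tau_A, tau_S):
    # Fit log tau_S = beta0 + beta * log tau_A, i.e., tau_S ~ tau_A**beta.
    beta, beta0 = np.polyfit(np.log(tau_A), np.log(tau_S), 1)
    return beta0, beta
```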
Our data follow \(\log_{e}\tau_{S}\sim\eta_{0}+\eta(T\Delta S_{ng}^{P})^{-\psi}\) with \(\eta_{0}\approx-8.993,\eta\approx 25.691,\psi\approx-0.205\). Using the form of \(\Delta S_{ng}^{P}\), one can obtain \(\log_{e}\tau_{S}\sim T^{-\psi}[1-(T-T^{*})^{\gamma}]^{-\psi}\), where the divergence of \(\tau_{S}\) for \(T\to T^{*}\) is primarily governed by the term \([1-(T-T^{*})^{\gamma}]^{-\psi}\), which qualitatively explains the nature of the growth of \(\tau_{S}\) with decreasing \(T\) seen in the inset of Fig. 3(b). The rapid rise of \(\tau_{S}\) is due to the presence of strong intermittency, which makes achieving diffusion, and thereby overcoming the dynamic heterogeneity, increasingly difficult during exploration of the rough energy landscape at low temperatures.

_Discussion.--_ These results align with our theoretical predictions of the entropic timescales of dynamic heterogeneity in a model supercooled liquid [46], where the development of the intermittent non-Gaussian tail was modeled using the _Montroll-Weiss_ CTRW framework, considering complex hopping of particles within the _mobile_ and _immobile_ regions and jumps from one region to another [50]. The persistence of such vibrations and jumps within these regions controls the overall nature of the intermittency and its lifetime [25; 26]. Such intermittent non-Gaussian tails were inferred as a convolution process of cooperative diffusion, known as _Brownian yet non-Gaussian diffusion_ [32; 51]. Supporting literature [52; 53; 54; 21; 22] shows that the diffusion spectrum obtained upon deconvolution of the non-Gaussian displacement distribution validates the physical picture of dynamic heterogeneity, which considers the simultaneous presence of _slow_ and _fast_ regions within the system [30]. Such heterogeneity exhibits anomalous spatiotemporal fluctuations that carry information at all orders [56]. Therefore, to estimate the dynamic heterogeneity effectively, consideration of all-order moments is advantageous. Gaussianity is restored when Fickian diffusion sets in and the dynamics follow the central limit theorem, marking the onset of diffusion. However, extraction of this timescale is difficult due to the scale-free decay of \(\alpha_{2}(t)\) [35], specifically when the ambient temperature is close to the glass transition and the situation becomes intrinsically non-equilibrium. Our analysis also affirms that both \(\tau_{A}\) and \(\tau_{S}\) diverge, with the latter diverging faster than the former as a power law, which can be further tested using other CTRW models [25; 26] or mode-coupling theory [57]. The sharp increase of \(\Delta S_{ng}^{P}\) with decreasing \(T\) is due to diverging dynamic heterogeneity, which also leads to a sharp increase in the entropic timescale as the system approaches the state of dynamic arrest. Whether the non-Gaussian information has any connection with the configurational entropy [58] or the non-equilibrium free energy [59] needs further investigation.

_Conclusions.--_ We quantify the timescales of dynamic heterogeneity in supercooled liquids using the conventional _non-Gaussian parameter_ and the _non-Gaussian information_. We show that the entropic timescales are significantly different from, and diverge from, those obtained using the non-Gaussian parameter. This difference arises because the moment-based definitions are limited to the fourth order, while the information-theoretic quantification takes all-order moments into account.
Although several other moment-based definitions are available [47; 60], it is always challenging to estimate timescales using them, as the computation of higher-order moments can be increasingly noisy or unreliable without high-quality data. On the other hand, our framework is easy to implement and extracts the heterogeneity optimally. This makes the information-theoretic framework scientifically robust for quantifying non-Gaussianity in practical situations, and, in a more general context, in out-of-equilibrium systems for identifying or predicting novel crossovers or transitions where small fluctuations lead to catastrophic changes, or for differentiating phases or states of matter [56; 61].

_Acknowledgements.--_ V.V. gratefully acknowledges funding from the European Union through Horizon Europe ERC Grant number: 101043968 "Multimech". S.D. acknowledges support of the Department of Atomic Energy, Government of India, under project no. RTI4001. We acknowledge HPC facilities at IMSc, ICTS-TIFR and supporting grants, ICTS/eiosm2018/08, ICTS/ISPCM2023/02. We thank D. Bagchi and S. Sikdar for sharing data. We thank S. Bose, P. Chaudhuri, C. Dasgupta, J. Horbach, S. Karmakar, S. K. Nandi, M. S. Shell and A. Zaccone for insightful discussions.

_Research Contribution.--_ VV: Performed simulations and formal analysis, Validation, Visualizations. SD: Conceptualization, Methodology, Validation, Research Coordination, Data Interpretation, Writing.
2310.09532
Toward Open Repository of Performance Portability of Applications, Benchmarks and Models
The adoption of heterogeneous computing systems based on diverse architectures to achieve exascale computing power has worsened the performance portability problem of scientific applications that were designed to run on these platforms. To cope with the challenges posed by supercomputing, new performance portability frameworks have been developed alongside advanced methods and metrics to evaluate the performance portability of heterogeneous applications. However, many studies have shown that the new methods and metrics do not produce coherent results that yield the clear conclusions required for designing the hardware and software architectures of tomorrow's supercomputing systems. We outline a proposal to establish an open repository of performance portability of applications, benchmarks and models which will be standardized, objective, and based on strict operating and reporting guidelines. Such guidelines will ensure a fair, comparable and meaningful measure of performance portability, while the requirement for a detailed disclosure of the obtained results and the configuration settings will ensure the reproducibility of the reported results.
Ami Marowka
2023-10-14T08:39:33Z
http://arxiv.org/abs/2310.09532v1
# Toward Open Repository of Performance Portability of Applications, Benchmarks and Models

###### Abstract

The adoption of heterogeneous computing systems based on diverse architectures to achieve exascale computing power has worsened the performance portability problem of scientific applications that were designed to run on these platforms. To cope with the challenges posed by supercomputing, new performance portability frameworks have been developed alongside advanced methods and metrics to evaluate the performance portability of heterogeneous applications. However, many studies have shown that the new methods and metrics do not produce coherent results that yield the clear conclusions required for designing the hardware and software architectures of tomorrow's supercomputing systems. We outline a proposal to establish an open repository of performance portability of applications, benchmarks and models which will be standardized, objective, and based on strict operating and reporting guidelines. Such guidelines will ensure a fair, comparable and meaningful measure of performance portability, while the requirement for a detailed disclosure of the obtained results and the configuration settings will ensure the reproducibility of the reported results.

Performance Portability, Performance Efficiency, Metrics, SPEC

## I Introduction

Emerging performance portability frameworks such as Kokkos [2], RAJA [1] and SYCL [3], alongside mature heterogeneous high-level programming models such as OpenMP [5], OpenACC [4] and MPI [6], are the main software development infrastructures that will be available to software engineers building scientific applications in the era of exascale computing. The interplay between the never-ending demand for high-performance applications, on the one hand, and the demand for portability and productivity of those applications, on the other hand, becomes more complex as hardware architectures become more heterogeneous.

The performance portability frameworks developed in recent years have shown impressive progress in everything related to functional portability, with the appearance of high-level cross-platform programming models based on the backend-compiler approach, such as Kokkos, and a single-source C++ standard for heterogeneous computing, such as SYCL. Despite all this impressive progress, performance portability still poses challenging technological issues to software and hardware architects. Dealing with these issues requires, first and foremost, an agreed definition of the term performance portability and agreed metrics for measuring and evaluating the degree of performance portability of heterogeneous applications, benchmarks, and higher-level heterogeneous programming models. Furthermore, in order to measure performance portability in a way that makes it possible to compare different implementations of the same application in a meaningful and objective manner, clear and agreed-upon guidelines and rules are needed for how the measurements should be performed and reported so that they can be reproduced. In addition, the results should be available and accessible to the High Performance Computing (HPC) community in an open repository.

Of all the necessary requirements for having a methodological framework for measuring and comparing performance portability, it seems that there is broad consensus regarding the definition of the term performance portability [7]:

**Definition: performance portability** _A measurement of an application's performance efficiency for a given problem that can be executed correctly on all platforms in a given set._

The definition explicitly states that performance efficiency is the ultimate measure of performance portability. Therefore, several approaches were proposed to measure performance efficiency, alongside several metrics to calculate performance portability [8, 11, 12, 7, 13]. If we add to these facts that there is no agreed framework of rules and guidelines on how to measure and calculate performance efficiency and performance portability, then it is not surprising that it is not possible to draw informed insights from the dozens of studies that have been done in recent years, and it would not be an overstatement to claim that the current situation is a complete mess that needs to be reorganized.

This paper is intended to delineate a way to organize future studies of performance portability under a uniform framework of rules and guidelines for measuring, calculating and reporting the performance portability of applications, benchmarks and performance portability frameworks. We demonstrate our approach using the Standard Performance Evaluation Corporation (SPEC) benchmarks [14] as a way to solve the disorganization that exists in this important research area. However, other similar frameworks can be appropriate alternative infrastructures for the ideas presented in this paper. The main contribution of this paper lies in the novel idea of how to integrate future studies of performance portability into an existing and dynamic framework that has proven itself over three decades and that, as we will see later, already provides the basic definitions. Furthermore, we would like to emphasize that in this paper we are only sketching the proposed framework, and the examples we use to demonstrate the calculation of performance portability are based only on the measurements that appear within the current SPEC repository. We would like to remind the reader that SPEC was designed to be a performance benchmarking framework for HPC platforms, not a performance portability benchmarking framework.
Of all the necessary requirements for having a methodological framework for measuring and comparing performance portability, it seems that regarding the definition of the term performance portability there is a broad consensus [7]: **Definition: performance portability** _A measurement of an application's performance efficiency for a given problem that can be executed correctly on all platforms in a given set._ The definition explicitly states that performance efficiency is the ultimate measure of performance portability. Therefore, several approaches were proposed to measure performance efficiency alongside several metrics to calculate performance portability [8, 11, 12, 7, 13]. And if we add to these facts that there is no agreed framework of rules and guidelines on how to measure and calculate performance efficiency and performance portability, then it is not surprising that it is not possible to draw informed insights from the dozens of studies that have been done in recent years, and it would not be an overstatement to claim that the current situation is a complete mess that can be reorganized. This paper is intended to delineate a way to organize future studies of performance portability under an uniform framework of rules and guidelines for measuring, calculating and reporting performance portability of applications, benchmarks and performance portability frameworks. We demonstrate our approach using the Standard Performance Evaluation Corporation (SPEC) benchmarks [14] as a way to solve the disorganization that exists in this important research area. However, other similar frameworks can be appropriate alternative infrastructures for the ideas presented in this paper. The main contribution of this paper lies in the novel idea of how to integrate the future studies of performance portability in an existing and dynamic framework that has proven itself over three decades and, as we will see later, it already has the basic definitions. Furthermore, we would like to emphasize that in this paper we are only sketching the proposed framework and the examples we use to demonstrate the calculation of the performance portability are based only on the measurements that appear within the current SPEC repository. We would like to remind the reader that SPEC was designed to be a performance benchmarking framework for HPC platforms and not performance portability benchmarking framework. With that goal, we make the following contributions: * We present the main problems which cause the inconsistent measurement, calculation, and reporting of performance portability results in the studies that have been carried out in recent years and which yield inconsistencies. * We introduce new types of performance efficiencies in addition to the existing ones in order to enable analysis of performance portability of applications and models from different perspectives. * We demonstrate the calculation of performance portability of applications and benchmarks based on currently published SPEC performance measurements. The rest of the paper is structured as follows. Section 2 presents the motivation to establish an orderly framework for examining the performance portability of applications. Section 3 presents related studies. Section 4 presents the SPEC benchmarks framework. Section 5 presents our suggestion to integrate in SPEC the evaluation of the performance portability. 
Section 6 demonstrates the calculation of performance portability for applications that currently appear in SPEC, and Section 7 presents conclusions.

## II Motivation

In this section, we present the main current performance portability issues that call for organizing this research field in order to enable informed conclusions to be drawn from future studies. Furthermore, due to these issues there is fundamental motivation to maintain a rigid framework of rules and regulated measurement mechanisms for future studies of performance portability, whose results will be stored in an open repository accessible to the HPC community.

The report of the first Department of Energy (DOE) Performance, Portability and Productivity annual meeting in 2016 showed clearly that there was no consensus on a workable definition of the term performance portability [15]. This situation led researchers to propose the definition presented in the introduction, which has been widely accepted in the HPC community. This meeting motivated Pennycook et al. to propose the \(\mathbf{\Phi}\) metric to calculate performance portability based on the harmonic mean [7]. But this metric proved to be problematic, as was articulated in many studies [8, 13, 16, 17, 18]. The main claims against the \(\mathbf{\Phi}\) metric were that it is unintuitive, unfamiliar, loses information, is difficult to use, and yields unrealistic performance portability scores. Therefore, the \(\mathbf{\overline{\Phi}}\) metric, based on the arithmetic mean, was proposed; it solved the above problems and yielded much more realistic results without losing information [8]. The designers of the \(\mathbf{\Phi}\) metric accepted some of the claims but left the rest of the problems unaddressed [11]. Currently, there are studies that still use the \(\mathbf{\Phi}\) metric, but not according to its original definition, in order to avoid the aforementioned problems [19]. For example, under the \(\mathbf{\Phi}\) metric, if one platform does not support an application, the performance portability of the application is zero. This, however, just does not make sense, because there is always a platform out there that does not support a given application. Therefore, what actually happens is that the metric is used in such a way that only those platforms which support the application are taken into account [18, 19]. Otherwise, the performance portability scores will be zero and thus meaningless, as has happened in many studies [20, 21, 22]. The current situation is that there are two metrics, including one that is still controversial, which is undesirable.

Another issue is related to the performance efficiency approaches that are currently in use: the application efficiency and architectural efficiency approaches. The widespread claim is that it is not clear which approach to use, since each one produces different results [16]. Although the two approaches complement each other, the situation is still far from clear for many researchers. The best indication of this is that, to the best of our knowledge, there has not yet been even a single study that has used both approaches for a given application-platform pair and then performed an appropriate analysis of the results. In Section 5 we present different types of each of the approaches that can be included in the SPEC framework, though not necessarily all of them will be mandatory.
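To make the two metrics concrete, here is a minimal sketch of both as described above: the harmonic-mean \(\mathbf{\Phi}\) of Pennycook et al. (which, under the strict original definition, scores zero when any platform in the set is unsupported) and the arithmetic-mean \(\mathbf{\overline{\Phi}}\), here computed over the supported platforms only. Efficiencies are assumed to be fractions in \([0,1]\); the function names are ours.

```python
import statistics

def phi_harmonic(efficiencies):
    # Pennycook et al.'s metric: |H| / sum(1/e_i); an unsupported platform
    # (efficiency None or 0) forces a score of 0 under the original definition.
    if any(e in (None, 0) for e in efficiencies):
        return 0.0
    return statistics.harmonic_mean(efficiencies)

def phi_arithmetic(efficiencies):
    # Arithmetic-mean variant, computed over the supported platforms only.
    supported = [e for e in efficiencies if e not in (None, 0)]
    return statistics.mean(supported) if supported else 0.0

# Example with very uneven efficiencies (the SC kernel row of Table I in
# Section III): the harmonic mean tracks the low value; the arithmetic
# mean does not.
print(phi_harmonic([0.0997, 0.5088, 0.9290]))    # ~0.229
print(phi_arithmetic([0.0997, 0.5088, 0.9290]))  # ~0.512
```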
Undoubtedly, a combination of types from both approaches provides more insight into the performance portability of a given application.

## III Related Work

This section presents a few related studies that have criticized the \(\mathbf{\Phi}\) metric, as well as those that have proposed solutions for improving it. Furthermore, this section elaborates on the issues presented in the previous section by presenting the misunderstandings of different researchers regarding how to calculate the performance portability of applications.

Dreuning et al. convincingly presented some of the dilemmas that the \(\mathbf{\Phi}\) metric poses to developers and the ambiguity of the results obtained [16]. They demonstrated the usability and usefulness of the \(\mathbf{\Phi}\) metric by implementing five OpenACC applications on a set of three platforms (one CPU and two GPUs). The first question they asked themselves was: which measure to use, bandwidth or operational throughput? The solution they found was to use the Roofline model [23] to calculate the ratio of the application and hardware operational intensity values to determine whether the application was compute- or memory-bound, and accordingly whether to use bandwidth or operational throughput. The second question was: which performance efficiency to use, application or architectural efficiency? From analyzing the results of their experiments, they concluded that, to assess whether the performance of a given application can be improved further, architectural efficiency alone is not sufficient, and a diagnosis of what the application efficiency provides is also required. They also noted that the harmonic mean tracks the low values of the CPUs even though the values of the GPUs are significantly higher. We showed that this observation is typical of the \(\mathbf{\Phi}\) metric, but not of \(\overline{\mathbf{\Phi}}\), which is why we recommend always presenting the scores for CPUs and GPUs separately [8].

Siklosi et al. examined the performance of stencil applications on hybrid CPU-GPU systems [9]. They found that using the \(\mathbf{\Phi}\) metric to calculate the performance portability of applications is not intuitive. In their opinion, the reason is that if architectural efficiency is used, then the \(\mathbf{\Phi}\) metric tends to track the low values, and therefore the improvement of a hybrid system is not reflected in the calculated \(\mathbf{\Phi}\) score. However, when using application efficiency, a hand-tuned baseline implementation is required, which to the best of their knowledge does not exist.

Daniel and Panetta showed that the \(\mathbf{\Phi}\) metric is easily affected by the problem size [17]. To address this susceptibility, they proposed an alternative metric called _Performance Portability Divergence_ (\(P_{D}\)), defined as the arithmetic mean of RMS divergences across a set of platforms \(H\): \[P_{D}=\frac{\sum_{i\in H}\Delta_{RMS}}{|H|}\] where the divergence RMS, \(\Delta_{RMS}\), is the root mean square of the performance distances over a set of input sizes, and the _performance distance_ is the relative error in the performance measure between two applications solving the same problem with the same platform and input size. The performance measure used by Daniel and Panetta is the application efficiency. The \(P_{D}\) metric is different from the \(\mathbf{\Phi}\) and \(\overline{\mathbf{\Phi}}\) metrics. It does not capture the performance and portability of an application across platforms.
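Reading the definition above operationally, \(P_{D}\) averages, over platforms, the RMS of the per-input-size performance distances. A minimal sketch under that reading (the names are ours, and the per-size distances are assumed to be precomputed):

```python
import numpy as np

# Hypothetical input: distances maps each platform in H to an array of
# performance distances, one per input size (relative errors in the
# performance measure against the reference run).
def performance_portability_divergence(distances):
    rms = [np.sqrt(np.mean(np.asarray(d) ** 2)) for d in distances.values()]
    return float(np.mean(rms))  # arithmetic mean of per-platform RMS values
```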
Sedova et al. proposed a performance portability metric denoted by the symbol \(PP_{MD}\), where \(MD\) stands for Molecular Dynamics [10]. It measures the contributions of non-portable components to an application's performance: \(PP_{MD}\) is the harmonic mean of the speedups of the application's components that are low-level, optimized, and non-portable. Sedova et al. do not explain why they chose the harmonic mean. Unlike the \(\mathbf{\Phi}\) and \(\overline{\mathbf{\Phi}}\) metrics, the \(PP_{MD}\) metric is calculated for a particular architecture rather than a set of architectures. It purports to evaluate performance portability, but in practice it measures the price in performance that must be paid to make the application portable.

Bertoni et al. studied how several OpenCL implementations of the Rodinia benchmarks performed across three platforms and used the \(\mathbf{\Phi}\) metric to estimate the performance portability of the tested implementations [18]. They claimed that the \(\mathbf{\Phi}\) metric was insufficient for this purpose because it scored different implementations equally even though their performance efficiencies were very different. Therefore, they proposed measuring the standard deviation of the performance efficiencies to add another perspective on the distribution of the efficiencies across platforms. It is argued here that using the \(\overline{\mathbf{\Phi}}\) metric improves the diagnosis. Table I shows the performance efficiencies of the various implementations on the platforms used and the scores of the \(\mathbf{\Phi}\) and \(\overline{\mathbf{\Phi}}\) metrics side by side, along with their standard deviations. Clearly, the scores of the \(\overline{\mathbf{\Phi}}\) metric differentiate better which of the implementations have better performance portability, and they also more reliably reflect the performance efficiencies from which the \(\overline{\mathbf{\Phi}}\) values are derived. Pay particular attention to how the scores of the SC and HS kernels change significantly. Bertoni et al. chose to calculate the performance efficiencies in relation to the Roofline peak performance; they describe in detail the methodology used to construct the Roofline graphs, thus demonstrating how complex and exhausting the process is. The sketch after Table I recomputes its summary columns.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
 & \multicolumn{3}{c|}{Platforms} & \multicolumn{4}{c|}{} \\
\hline
Kernel & SKK & Gen9 & V100 & \(\mathbf{\Phi}\) & S.D.(HM) & \(\overline{\mathbf{\Phi}}\) & S.D.(AM) \\
\hline
LUD & 35.89\% & 48.71\% & 49.80\% & 43.81\% & 8.39 & 44.80\% & 6.31 \\
\hline
BP-AM & - & 81.73\% & 91.60\% & 86.44\% & 7.00 & 86.67\% & 4.96 \\
\hline
SC & 9.97\% & 50.88\% & 92.90\% & 22.94\% & 25.93 & 51.25\% & 33.85 \\
\hline
KNN & 78.15\% & 40.32\% & 35.50\% & 45.61\% & 16.82 & 51.32\% & 19.07 \\
\hline
HS & 16.47\% & 96.03\% & 72.40\% & 35.33\% & 35.10 & 61.63\% & 33.36 \\
\hline
\end{tabular}
\end{table}
TABLE I: Comparison between the performance portability scores obtained by the \(\mathbf{\Phi}\) and \(\overline{\mathbf{\Phi}}\) metrics in the study from [18].
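As a sanity check on Table I, the following sketch recomputes the two means and the standard deviation of the arithmetic case from the transcribed per-platform efficiencies; last-digit differences with the table are rounding artifacts, and the S.D.(HM) column of [18] is not recomputed here since its exact definition is not reproduced above.

```python
# Recompute Table I's summary columns from the per-platform efficiencies (%):
# Phi is the harmonic mean, Phi-bar the arithmetic mean, and S.D.(AM) the
# population standard deviation around the arithmetic mean.
from statistics import harmonic_mean, mean, pstdev

kernels = {  # efficiencies transcribed from Table I
    "LUD": [35.89, 48.71, 49.80],
    "SC":  [9.97, 50.88, 92.90],
    "KNN": [78.15, 40.32, 35.50],
    "HS":  [16.47, 96.03, 72.40],
}

for name, effs in kernels.items():
    print(f"{name}: Phi = {harmonic_mean(effs):.2f}%, "
          f"Phi-bar = {mean(effs):.2f}%, S.D.(AM) = {pstdev(effs):.2f}")
# e.g. SC: Phi = 22.95%, Phi-bar = 51.25%, S.D.(AM) = 33.86 -- note how far
# the harmonic mean sits from the arithmetic mean for skewed efficiencies.
```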
## IV SPEC Benchmarks

In this section we present the SPEC benchmark suites that are relevant to the topic of the present paper. We focus on describing the main set of run-rules to which an implementer needs to adhere when using these benchmarks for measuring the performance of a given computing system. These rules and guidelines, or similar ones, can also be adopted for evaluating performance portability. In the next section we present our suggestion for extending the SPEC infrastructure to assess the performance portability of applications and heterogeneous programming models.

SPEC is a three-decade-old consortium formed to develop standardized and realistic benchmark suites for rating and comparing the performance of contemporary computing platforms, ranging from a single processor to large-scale supercomputers with thousands of cores. Three benchmark suites are relevant to the topic of this paper: SPEC ACCEL, SPEC OMP2012, and SPEChpc 2021, each of which is described below.

The **SPEC ACCEL benchmark** suite was designed to test the performance of computationally intensive parallel applications using three programming models: OpenCL (19 programs), OpenACC (15 programs), and OpenMP 4 target offloading (15 programs). The **SPEC OMP2012 benchmark** suite provides 14 scientific and engineering application codes based on the OpenMP 3.1 standard for measuring the performance of shared-memory parallel machines. The applications were designed with portability to a variety of CPU architectures and operating systems in mind. **SPEChpc 2021** provides large-scale scientific applications using the pure MPI standard or hybrid MPI+X, where X can be OpenMP or OpenACC. It contains four suites with different workload sizes (tiny, small, medium, and large) for evaluating large-scale systems at different scales, ranging from a single node to hundreds of nodes.

SPEC's methodology is to provide the vendors of computing systems with a simple tool for measuring the performance of their products that is standardized, objective, and based on strict operating and reporting guidelines. The requirement that the benchmark be run and reported according to a set of rules makes the results comparable, meaningful, and reproducible. Each benchmark suite is available in source code that has already been ported to various platforms. The source code needs only to be compiled for the target system and then tuned to obtain the best results possible. Each benchmark suite comprises a wide range of representative scientific programs, ranging from basic kernels and mini-apps to large weather-modeling applications.

SPEC allows performance tuning at compilation time and at runtime. Performance tuning can be done by using optimal settings of the compiler options or by selecting the number of ranks and threads per rank to obtain the best performance. According to SPEC, two levels of optimization and compilation are allowed:

**Base metrics**. This level enforces strict rules of unaggressive compilation, such as using the same flags in the same order for all programs of a given language in a benchmark suite. It demands a common set of optimizations and environment settings for all the programs in a suite, but it allows reordering of arithmetic and floating-point operands.
Moreover, at the base level the same compiler must be used for all programs of a given language within a benchmark suite, along with the same libraries and the same compiler and linker options.

**Peak metrics**. This level is **optional** and allows more flexibility in choosing different compiler options for better performance tuning. At the peak level, different compilers may be used for the programs of a given language within the benchmark suite, and the flags or options that affect the compilation may differ for each benchmark in the suite.

In principle, SPEC policy does not allow any modification of the source codes except under specific and restricted circumstances. The SPEC rules are intended to ensure a fair and objective measure of the performance of HPC platforms. For example, SPEC ACCEL allows source code modifications for the peak-level runs of OpenACC and OpenMP benchmarks. Changes to the compiler directives and source code are permitted for portable optimizations to achieve improved scalability. Changes to the algorithm are, however, not permitted. Vendor-specific extensions are allowed if they are portable. Examples of allowed source code modifications and optimizations are loop reordering, reshaping arrays, and memory distribution. On the other hand, language extensions and added calls to vendor-specific functions are not allowed. Furthermore, SPEC allows dynamic runtime optimization techniques under the control of hardware and software. Such optimizations include improving the instruction cache performance by rearranging the code, value prediction, and reallocation of functional units among hardware threads.

A fundamental principle of SPEC's methodology is the requirement of a detailed disclosure of the obtained results and of the configuration settings needed for reproducing benchmark results. Usually, a report of the benchmark results consists of three runs and the median of these runs. It must describe the performance methods that were used and the source-code modifications, if any, as well as give a general description of each modification applied. Finally, it is important to note that SPEC encourages using the benchmark suites in academic and research institutions, and therefore they are available free of charge for research purposes.

## V Extending SPEC Repository

In this section we present the basic concepts and features for upgrading the SPEC infrastructure to rate and compare the performance portability of applications, benchmarks, and models from different perspectives and over different application-architecture pair spaces within the SPEC repository. Before we discuss and specify how performance portability measures can be integrated within the SPEC framework, we have to decide which performance portability metric to apply. Thereafter, we have to decide which performance efficiency approaches we want to use and which performance efficiency types will be required in order to present the performance portability from different points of view. Finally, we have to recommend which of them will be optional and which ones will be mandatory.

### _Performance portability_

The search for a better performance portability metric is ongoing, and it is one of the challenging research areas of the current generation of high-performance heterogeneous computing. The most promising metric proposed to date is the \(\overline{\mathbf{\Phi}}\) metric [8].
The \(\overline{\mathbf{\Phi}}\) metric is defined as the arithmetic mean of an application's performance efficiencies observed across a set of platforms from the same architecture class. Formally, for a given supported set of platforms \(S\subseteq H\) from the same architecture class, the performance portability of a case-study application \(a\) solving problem \(p\) is: \[\overline{\mathbf{\Phi}}(a,p,S,H)=\begin{cases}\frac{\sum_{i\in S}e_{i}(a,p)}{|S|}&\text{if }|S|>0\\ 0&\text{otherwise}\end{cases} \tag{1}\] where \(S:=\{i\in H\,|\,e_{i}(a,p)>0\}\) and \(e_{i}(a,p)\) is the performance efficiency of case-study application \(a\) solving problem \(p\) on platform \(i\). A comprehensive research study based on dozens of practical studies showed that the \(\overline{\mathbf{\Phi}}\) metric has the key properties of a good performance portability metric [8, 12]. These studies show that the \(\overline{\mathbf{\Phi}}\) metric is objective, comparable, consistent, lossless, easy to use, intuitive, and familiar to users. We recommend adopting the \(\overline{\mathbf{\Phi}}\) metric for calculating the performance portability scores of applications tested within the SPEC framework.

We would like to bring to the reader's attention a special added value, concerning the set of platforms \(H\) in the definition, that is obtained by incorporating the performance portability assessment within the SPEC framework. From the dozens of studies conducted on the subject of performance portability in recent years, it appears that the average number of platforms on which a study was based was four, while the maximum number was 14. Needless to say, the larger the number of platforms, the more accurate the assessment of performance portability. Consolidating the performance portability assessment within the SPEC framework will increase the number of platforms in \(H\), because over time it will include all the platforms that support a given application. Furthermore, the performance portability scores of any given application on any given platform will be evaluated under the same rules and guidelines, and it will be possible to follow the changes in performance portability over time.
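A direct transcription of Eq. (1) — a minimal sketch, with illustrative platform names — makes the role of the supported subset \(S\) explicit:

```python
# Minimal transcription of Eq. (1). Efficiencies e_i(a, p) are fractions in
# [0, 1]; a value of 0.0 encodes a platform that does not support the app.

def phi_bar(efficiencies):
    """Arithmetic mean of the efficiencies over the supported subset S of H."""
    S = [e for e in efficiencies.values() if e > 0]  # S := {i in H | e_i > 0}
    return sum(S) / len(S) if S else 0.0             # 0 when |S| = 0

# Illustrative usage; the platform names and values are made up.
H = {"Xeon-A": 0.82, "EPYC-B": 0.83, "V100": 0.95, "GPU-X": 0.0}
print(f"Phi-bar = {phi_bar(H):.2%}")  # 86.67%, averaged over three platforms
```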
### _Performance efficiency_

Recall the definition of performance portability: _a measurement of an application's performance efficiency for a given problem that can be executed correctly on all platforms in a given set._ It follows from the definition that it is based on measuring the performance efficiency of a given application on a specific platform:

**Definition: Performance Efficiency** _A measurement of an application's achieved performance as a fraction of the baseline performance,_ where performance is usually measured by runtime or throughput. The baseline performance can be the theoretical or practical peak performance, such as the theoretical peak throughput of a specific GPU or its Roofline peak throughput [23].

Two performance efficiency approaches have been proposed to date in the scientific literature: application efficiency and architectural efficiency. These two approaches present two different perspectives on the relative performance of a given application running on a particular platform, and they yield different scores, since each examines the performance of a given application in relation to a different reference performance. Application efficiency is measured in relation to the performance of the fastest known implementation on the platform, while architectural efficiency is measured in relation to the theoretical or practical performance that can possibly be achieved on the given platform. Formally:

**Definition: application efficiency** _The achieved performance, on a given platform, normalized relative to the best-known performance of an application's implementation on the same platform._

**Definition: architectural efficiency** _The application's achieved throughput on a given platform normalized relative to the peak throughput of the given platform._

### _Application efficiency approach_

SPEC's base metrics and peak metrics are actually the respective equivalents of the achieved performance and peak performance that define the performance efficiency ratio. Hence, we can define the SPEC efficiency as follows:

**Definition: SPEC efficiency** _The ratio of SPEC's base metrics to SPEC's peak metrics._

Therefore, the first step that needs to be taken in order to extend SPEC for performance portability is to modify the run-rules and the reporting of the results so that the measurement of the peak metrics is no longer optional but mandatory, at least for the purpose of calculating performance portability.

Application efficiency is a very popular measure because it is simple and easy to use [17, 19, 22]. All that is required is to measure the runtime of the application on the given platform and then calculate its fraction relative to the runtime of the fastest known portable application on the same platform. The problem is that we can never be sure that we have the fastest implementation at hand; it can happen that immediately after we publish our research, a faster implementation is found, which makes our findings outdated. Furthermore, from the studies conducted in recent years that used this measure, it appears that researchers always chose as the baseline performance the best performance among the three or four implementations studied in their own research, rather than among those known in the literature [17, 19]. If we add the observation that different studies used different compilers, compiler options, and input sizes, and that the source codes are not always available, it is clear that this situation leads to non-uniformity and incoherence of the results and to difficulties in reproducing them.

Such situations cannot occur when we restrict ourselves to a rule-based and supervised framework like SPEC. If an implementation with better performance enters the repository, the performance portability calculations of the relevant applications will be automatically updated. Such an automatic update is possible if dynamic web pages are used, such as those of a spreadsheet that automatically recomputes a given function when one of its variables changes value. Such a solution provides a common performance reference in the repository, at any point in time, for all applications and benchmark suites. In this way, the database of performance portability reports will remain uniform and consistent, while allowing an objective comparison between applications with the possibility of reproducing the various results. A minimal sketch of this update mechanism follows.
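The following sketch mimics the update mechanism just described; all application names, platforms, and runtimes are hypothetical. Reporting a faster implementation re-baselines the application efficiencies of everything already recorded for that platform.

```python
# Sketch of the repository behaviour described above: application
# efficiencies are recomputed automatically whenever a faster baseline
# implementation is reported for a platform (all data illustrative).

class EfficiencyRegistry:
    def __init__(self):
        self.best_runtime = {}   # platform -> fastest known runtime (s)
        self.reports = []        # (app, platform, runtime) tuples

    def report(self, app, platform, runtime):
        self.reports.append((app, platform, runtime))
        best = self.best_runtime.get(platform)
        if best is None or runtime < best:
            self.best_runtime[platform] = runtime  # re-baselines the platform

    def application_efficiency(self, app, platform):
        runtimes = [r for a, p, r in self.reports if a == app and p == platform]
        return self.best_runtime[platform] / min(runtimes)

reg = EfficiencyRegistry()
reg.report("miniApp-OpenMP", "V100", 120.0)
print(reg.application_efficiency("miniApp-OpenMP", "V100"))  # 1.0
reg.report("miniApp-CUDA", "V100", 90.0)   # a faster baseline arrives
print(reg.application_efficiency("miniApp-OpenMP", "V100"))  # 0.75
```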
A restricted definition of application efficiency was first introduced in [12] and was used to calculate the performance portability of portable programming models. The definition was formulated after a survey based on hundreds of case studies, which showed that most researchers use this measure in practice. This measure reflects how far the performance of a given portable application is from the practical peak performance possible, or in other words, the cost in performance that a portable application sacrifices in order to be portable. Therefore, in order to integrate this measure into the SPEC framework, the best performance of a low-level, unportable, and optimized implementation that appears in the SPEC repository needs to be selected as the baseline performance. In addition, it is required that if a faster unportable and optimized implementation appears in the future in SPEC, the performance portability scores of all relevant applications in SPEC's repository be updated automatically.

We define three types of performance efficiency according to the reference application whose performance is used as the baseline. In each of the efficiency types, the reference application has a different level of abstraction, so its performance is directly derived from its ability to utilize the hardware resources of the platform effectively. The following application efficiency types are described in increasing order of the peak achievable performance of the reference application.

**Definition: application efficiency-Type 0 (SPEC efficiency)** _The achieved SPEC base metrics of a given portable application-platform pair normalized relative to the SPEC peak metrics of the same application-platform pair._

All of SPEC's run-rules and guidelines apply to measuring this type of performance efficiency. It yields high values, since the optimization level of SPEC's peak metrics is usually restricted to choosing different compiler options for better performance tuning or to making changes to the compiler directives. In the next section we demonstrate, using the SPEC efficiency, how to calculate the performance portability of applications and benchmarks from data taken from the current SPEC repository.

**Definition: application efficiency-Type 1** _The achieved performance of a given portable application-platform pair, normalized relative to the best-known performance of any portable application on the same platform in the SPEC repository._

Here the baseline performance is the performance of the implementation of the application, using any other performance portability framework, that achieved the best performance on the same platform within the SPEC repository. For example, the performance of an OpenACC implementation on an NVIDIA V100 GPU, normalized relative to the performance of a Kokkos implementation that outperforms the OpenACC implementation on the NVIDIA V100. This type of application efficiency expands the space of the application's implementations from which the best baseline performance can be chosen: it includes all the implementations of the application in any performance portability framework on the same platform within the SPEC repository.

**Definition: application efficiency-Type 2** _The achieved performance of a given portable application-platform pair, normalized relative to the best-known performance of any unportable application on the same platform in the SPEC repository._

This type of application efficiency expands the space of the application's implementations even further.
Here the baseline performance can be the performance of any implementation of the application on the same platform, not necessarily a portable one. For example, the performance of an OpenACC implementation on an NVIDIA V100 normalized against a CUDA implementation that outperforms the OpenACC implementation on the NVIDIA V100.

### _Architectural efficiency approach_

Architectural efficiency measures the extent to which the application utilizes the resources of the platform on which it runs, in relation to two reference performance levels: the theoretical peak performance, an unattainable upper bound, and the practical peak performance, a level that can be achieved by optimizing the use of all platform resources. We therefore distinguish between two types of performance efficiency measures, according to the peak performance reference used: theoretical peak throughput or practical peak throughput.

**Definition: architectural efficiency-Type 0** _The achieved throughput of a given portable application-platform pair, normalized relative to the theoretical peak throughput of the given platform._

Architectural efficiency of this type is relatively simple to measure. All that needs to be done is to measure the throughput of the application, in GFLOP/s or GB/s, using a profiling tool and then calculate its fraction relative to the theoretical performance published by the vendor. Practitioners do not like this measure because it yields a theoretical score; they prefer a more practical measure, such as one based on the Roofline model.

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
 & \multicolumn{6}{c}{Performance Efficiency Approach} \\
\hline
 & \multicolumn{3}{c|}{Application Efficiency} & \multicolumn{3}{c}{Architectural Efficiency} \\
\hline
 & \multicolumn{3}{c|}{Relative Baseline Application} & \multicolumn{3}{c}{Relative Baseline Application} \\
\hline
Type & Application & Platform & Performance & Application & Platform & Performance \\
\hline\hline
0 & same & same & peak metrics & same & same & peak theoretical \\
\hline
1 & any portable & same & best-known & same & same & peak Roofline \\
\hline
2 & any unportable & same & best-known & - & - & - \\
\hline
\end{tabular}
\end{table}
TABLE II: Summary of the different types of the performance efficiency approaches.

\begin{table}
\begin{tabular}{c|c|c}
\hline
Benchmark & Language & Application domain \\
\hline
350.md & Fortran & Molecular Dynamics \\
\hline
351.bwaves & Fortran & Fluid Dynamics \\
\hline
352.nab & C & Molecular Modeling \\
\hline
357.bt331 & Fortran & Fluid Dynamics \\
\hline
358.botsalgn & C & Protein Alignment \\
\hline
359.botsspar & C & Sparse LU \\
\hline
360.ilbdc & Fortran & Lattice Boltzmann \\
\hline
362.fma3d & Fortran & Mechanical Simulation \\
\hline
363.swim & Fortran & Weather Prediction \\
\hline
367.imagick & C & Image Processing \\
\hline
370.mgrid331 & Fortran & Fluid Dynamics \\
\hline
371.applu331 & Fortran & Fluid Dynamics \\
\hline
372.smithwa & C & Pattern Matching \\
\hline
376.kdtree & C++ & Sorting and Searching \\
\hline
\end{tabular}
\end{table}
TABLE III: The list of SPEC OMP2012 applications.

\begin{table}
\begin{tabular}{c|c|c}
\hline
Platform No. & Platform & Configuration \\
\hline
1 & Intel Xeon E5-2670 & 16 cores, 2 chips, 8 cores/chip \\
\hline
2 & Intel Xeon E5-2697 v2 & 24 cores, 2 chips, 12 cores/chip \\
\hline
3 & Intel Xeon E7-8890 v3 & 72 cores, 4 chips, 18 cores/chip \\
\hline
4 & Intel Xeon E7-8890 v3 & 288 cores, 16 chips, 18 cores/chip \\
\hline
5 & Intel Xeon Phi 7210 & 64 cores, 1 chip, 64 cores/chip \\
\hline
6 & Intel Xeon Gold 6154 & 576 cores, 32 chips, 18 cores/chip \\
\hline
7 & Intel Xeon Platinum 8260L & 48 cores, 2 chips, 24 cores/chip \\
\hline
8 & Intel Xeon Platinum 9242 & 96 cores, 2 chips, 48 cores/chip \\
\hline
9 & AMD EPYC 9654 & 192 cores, 2 chips, 96 cores/chip \\
\hline
10 & SPARC T7-4 & 128 cores, 4 chips, 32 cores/chip \\
\hline
\end{tabular}
\end{table}
TABLE IV: The list of platforms used for the case study and their configuration.
**Definition: architectural efficiency-Type 1** _The achieved throughput of a given portable application-platform pair, normalized relative to the peak Roofline throughput of the given platform._

The Roofline model is a visualization tool that shows the type of peak throughput that might be expected for an application with a given arithmetic intensity. The Roofline graph consists of a line whose slope is associated with the peak memory bandwidth (GB/s), followed by a flat part associated with the peak floating-point throughput (GFLOP/s). Unfortunately, estimating the platform features needed for a Roofline analysis is a time-consuming and challenging task [18]. Moreover, due to the lack of standardization of the profiling tools and of the progressively optimized micro-benchmarks used for generating Roofline graphs, multiple graphs with different properties tend to be created for the same platform [8]. This problem can be solved by very rigorous rules and guidelines that dictate how Roofline graphs should be created. These rules will determine in detail which profiling tools and which progressively optimized micro-benchmarks to use for generating Roofline graphs, and which platform features are needed for a Roofline analysis. There are tools on the market that can greatly facilitate the process of creating a Roofline graph, for example Intel VTune [24] or the Empirical Roofline Tool (ERT) [25]. At the end of the process, the SPEC committee members will approve which Roofline graph to use for measuring the Roofline efficiency of all SPEC applications.

**Bottom line**. The performance efficiency types presented in this section illuminate different and complementary perspectives of an application's performance portability. At the same time, multiple types can sometimes be confusing rather than helpful. It is certainly possible to choose fewer performance efficiency types or to decide that some of them will be mandatory and others optional. We leave this decision to the SPEC committee as part of the drafting of the final document. Table II presents a concise summary and comparison of the different types of the performance efficiency approaches. In the next section we show and demonstrate how to calculate the performance portability of applications and benchmark suites using the SPEC efficiency and the \(\overline{\mathbf{\Phi}}\) metric. A sketch of the two architectural-efficiency types follows.
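The following sketch summarizes the two architectural-efficiency types in executable form; the peak numbers are illustrative placeholders rather than vendor data, and a real SPEC workflow would take them from approved Roofline measurements.

```python
# Sketch of the two architectural-efficiency types described above, using
# the Roofline model: the attainable throughput is the minimum of the peak
# compute rate and (arithmetic intensity x peak bandwidth). All numbers are
# illustrative placeholders.

def attainable_gflops(arith_intensity, peak_gflops, peak_gbs):
    """Roofline ceiling for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, arith_intensity * peak_gbs)

achieved_gflops = 1200.0              # measured with a profiling tool
peak_gflops, peak_gbs = 7000.0, 900.0  # hypothetical platform peaks
ai = 2.0                               # FLOPs per byte moved

# Type 0: relative to the theoretical peak of the platform.
type0 = achieved_gflops / peak_gflops
# Type 1: relative to the Roofline ceiling (memory-bound here: 2.0 * 900 = 1800).
type1 = achieved_gflops / attainable_gflops(ai, peak_gflops, peak_gbs)
print(f"Type 0 = {type0:.1%}, Type 1 = {type1:.1%}")  # 17.1%, 66.7%
```

The gap between the two scores is precisely the point of the Roofline-based type: a memory-bound kernel can look poor against the theoretical flop peak while being close to what the hardware can actually deliver.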
## VI Examples based on SPEC Repository

In this section we present examples based on the performance of applications of the SPEC OMP2012 benchmark that appear within the current SPEC repository. We calculate the performance portability scores of three applications and of the whole benchmark itself. We used the SPEC efficiency and the \(\overline{\mathbf{\Phi}}\) metric for calculating the performance portability scores. Since the current SPEC repository is oriented toward performance rather than performance portability, we were forced to present fewer examples than we would like. For example, the performance reporting for most of the platforms in the current SPEC repository does not include the _SPEC peak metrics_, since the reporting of this performance score is optional. Therefore, we were unable to present examples of additional programming models, such as OpenACC, on state-of-the-art platforms. Furthermore, the performance portability scores presented in this paper were calculated only for the SPEC efficiency, since the current SPEC repository lacks the data needed to calculate the performance portability based on all the performance efficiency approaches and their types. However, the purpose of the examples is primarily to demonstrate the ideas presented in this paper.

Table III shows the 14 applications of the SPEC OMP2012 benchmark suite, written using OpenMP 3.1, with a short description of the domain of each application. Table IV shows the 10 platforms that were used for our examples and their configurations. It can be observed that all the platforms are SMP machines with between 16 and 576 cores. Tables V, VI, and VII show the SPEC performance efficiencies measured for the molecular dynamics, protein alignment, and weather prediction applications, respectively. The performance portability scores obtained are 85.5%, 96.9%, and 93.7%, respectively, which are high scores but quite expected, because the reference performance of the SPEC efficiency is not obtained through aggressive optimizations. Table VIII shows the performance portability score, 91.4%, of the whole SPEC OMP2012 suite on the given platforms. The sketch following the tables recomputes the Table V column.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{6}{c}{350.md Molecular Dynamics} \\
\hline
Platform No. & \multicolumn{2}{c|}{Base} & \multicolumn{2}{c|}{Peak} & Efficiency \\
\hline
 & threads & seconds & threads & seconds & \% \\
\hline
1 & 32 & 975 & 32 & 803 & 82 \\
\hline
2 & 48 & 585 & 48 & 483 & 83 \\
\hline
3 & 144 & 197 & 144 & 161 & 82 \\
\hline
4 & 576 & 59.5 & 576 & 38.6 & 65 \\
\hline
5 & 256 & 537 & 256 & 434 & 81 \\
\hline
6 & 513 & 5.6 & 576 & 5.33 & 95 \\
\hline
7 & 96 & 33.4 & 96 & 33.3 & 99 \\
\hline
8 & 192 & 16.9 & 192 & 16.8 & 99 \\
\hline
9 & 384 & 31.3 & 192 & 30.5 & 97 \\
\hline
10 & 256 & 153 & 768 & 111 & 72 \\
\hline
\multicolumn{6}{c}{\(\overline{\mathbf{\Phi}}\) = 85.5\%} \\
\hline
\end{tabular}
\end{table}
TABLE V: Performance Portability of the Molecular Dynamics application.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{6}{c}{358.botsalgn Protein Alignment} \\
\hline
Platform No. & \multicolumn{2}{c|}{Base} & \multicolumn{2}{c|}{Peak} & Efficiency \\
\hline
 & threads & seconds & threads & seconds & \% \\
\hline
1 & 32 & 1276 & 32 & 1235 & 97 \\
\hline
2 & 48 & 808 & 48 & 779 & 96 \\
\hline
3 & 144 & 287 & 144 & 280 & 98 \\
\hline
4 & 576 & 74.6 & 576 & 74.5 & 99 \\
\hline
5 & 256 & 1133 & 256 & 1136 & 100 \\
\hline
6 & 513 & 29.5 & 576 & 26.7 & 90 \\
\hline
7 & 96 & 304 & 96 & 286 & 94 \\
\hline
8 & 192 & 141 & 192 & 136 & 96 \\
\hline
9 & 384 & 49.8 & 384 & 49.8 & 100 \\
\hline
10 & 256 & 166 & 256 & 165 & 99 \\
\hline
\multicolumn{6}{c}{\(\overline{\mathbf{\Phi}}\) = 96.9\%} \\
\hline
\end{tabular}
\end{table}
TABLE VI: Performance Portability of the Protein Alignment application.
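As a check on Table V, the following sketch recomputes the SPEC efficiencies from the base and peak runtimes and averages them; small discrepancies with the table are due to rounding of the printed percentages.

```python
# Recompute Table V (350.md): the SPEC efficiency of a platform is the ratio
# of the peak-metrics runtime to the base-metrics runtime, and Phi-bar is
# the arithmetic mean of the per-platform efficiencies.
base = [975, 585, 197, 59.5, 537, 5.6, 33.4, 16.9, 31.3, 153]   # seconds
peak = [803, 483, 161, 38.6, 434, 5.33, 33.3, 16.8, 30.5, 111]  # seconds

effs = [100 * p / b for b, p in zip(base, peak)]
for i, e in enumerate(effs, start=1):
    print(f"platform {i}: {e:.1f}%")  # matches the Efficiency column up to rounding
print(f"Phi-bar = {sum(effs) / len(effs):.1f}%")  # ~85.7%; Table V reports 85.5%
```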
## VII Conclusions

The extensive collection of independent studies conducted in recent years to examine the performance portability of applications is not based on common rules and guidelines. As a result, it is very difficult to compare the findings of the various studies in order to reach informed conclusions and insights that would allow software and hardware architects to improve the performance portability and productivity of scientific applications in the future, in light of the constant acceleration of technological innovation and of the design of heterogeneous systems.

In this paper we have presented a proposal for building an appropriate repository for performance portability within the existing SPEC framework. Such a repository will be standardized, objective, and based on strict operating and reporting guidelines. These guidelines will ensure a fair, comparable, and meaningful measure of performance portability, while the requirement for a detailed disclosure of the obtained results and the configuration settings will ensure the reproducibility of the reported results. We also demonstrated how to calculate the performance portability of applications and of an entire benchmark suite that are currently available in the SPEC repository. In our future work, we plan to develop a series of benchmarks in order to present an effective comparison of the different performance efficiency approaches for calculating the performance portability of applications, benchmarks, and models, based on the definitions presented in this paper.
2308.10229
Coloring Torus Knots by Conjugation Quandles
In the first part of this paper, we present general results concerning the colorability of torus knots using conjugation quandles over any abstract group. Subsequently, we offer a numerical characterization for the colorability of torus knots using conjugation quandles over some particular groups, such as the matrix groups $GL(2,q)$ and $SL(2,q)$, the dihedral group, and the symmetric group.
Filippo Spaggiari
2023-08-20T11:09:49Z
http://arxiv.org/abs/2308.10229v1
# Coloring Torus Knots by Conjugation Quandles

###### Abstract

In the first part of this paper, we present general results concerning the colorability of torus knots using conjugation quandles over any abstract group. Subsequently, we offer a numerical characterization for the colorability of torus knots using conjugation quandles over some particular groups, such as the matrix groups \(\mathsf{GL}(2,q)\) and \(\mathsf{SL}(2,q)\), the dihedral group, and the symmetric group.

###### Contents

* 1 Introduction
* 2 Fundamentals
* 3 Conjugation quandle coloring of torus knots
  * 3.1 First characterization
  * 3.2 Properties of conjugation quandle coloring
  * 3.3 The main characterization
* 4 Coloring with particular groups
  * 4.1 General linear group
  * 4.2 Special linear group
  * 4.3 Dihedral group
  * 4.4 Symmetric group
* 5 Conclusions

## 1 Introduction

Coloring invariants serve as valuable computational tools in various contexts (see [11, 12], [13], and [14]). In addition, Kuperberg's NP certificate of knottedness can be effectively interpreted using coloring techniques (see [15]). A significant class of examples related to coloring is provided by conjugation quandles. Notably, a nontrivial coloring achieved with a quandle \(Q\) implies the existence of a nontrivial coloring with \(Q/\lambda\), where \(\lambda\) represents the Cayley kernel. This quotient quandle can be embedded into \(\mathsf{Conj}(\mathsf{Aut}(Q))\). Matrix groups hold particular interest due to Kuperberg's certificate involving coloring by \(\mathsf{Conj}(\mathsf{GL}(2,q))\). His proof also suggests that the problem of coloring by \(\mathsf{Conj}(\mathsf{GL}(2,q))\) is challenging in general, prompting us to begin with a simpler class: the torus knots. By investigating coloring invariants in this specific context, we can gain valuable insights and potentially extend our understanding to more complex classes of knots.

Coloring of torus knots has garnered considerable attention and has been explored from various perspectives, as evidenced by the works [10, 11], [12], [13], and [1]. Notably, the concept of coloring by Alexander quandles has been thoroughly examined in the work of Asami and Kuga ([1]). Building upon these contributions, this research paper delves into the analysis of coloring torus knots using conjugation quandles.

The organization of this work is structured as follows. In Section 2, we lay the groundwork by introducing fundamental notions related to torus knots and quandles, thereby providing a necessary foundation for subsequent discussions. In Section 3, we present general results pertaining to the conjugation quandle colorings of torus knots over abstract groups. This includes our main theorem (see Theorem 3.19), which characterizes the colorability of torus knots, as well as exploring various properties of colorings. Subsequently, in Section 4, we apply the results obtained in Section 3 to establish characterization theorems for specific groups, such as the matrix groups \(\mathsf{GL}(2,q)\) and \(\mathsf{SL}(2,q)\), the dihedral group, and the symmetric group. The selection of these particular groups is twofold in purpose: firstly, they provide a manageable context for formulating elementary statements and characterization theorems, although the proofs might be notably technical. Secondly, these groups have been under investigation in other contexts and papers, as evidenced by relevant references in the bibliography.
Section 5 concludes this work by discussing and presenting some questions that remain open about using conjugation quandles to understand the colorability of torus knots.

## 2 Fundamentals

We begin by introducing and reviewing the mathematical tools necessary for developing the upcoming theory. Firstly, we provide the definition of a quandle, which is the algebraic structure playing a fundamental role in this paper. Subsequently, we proceed to the geometric counterpart, where we introduce and construct torus knots, along with their representations and colorings. Finally, we conclude by presenting some well-known knot-theoretical results that serve as the basis for making additional assumptions on the parameters. For a comprehensive understanding of Knot and Quandle Theory, we recommend consulting the books by Murasugi [10] and Elhamdadi [1].

**Definition 2.1**.: A **(right) quandle** is a binar \((Q,\rhd)\) satisfying the following axioms:

1. \(\forall x,y,z\in Q:\ (x\rhd y)\rhd z=(x\rhd z)\rhd(y\rhd z)\).
2. \(\forall x,y\in Q\ \exists!\,z\in Q:\ z\rhd x=y\).
3. \(\forall x\in Q:\ x\rhd x=x\).

Out of a given group, we can construct an important class of quandles.

**Definition 2.2**.: Let \(G\) be a group. The **(right) conjugation quandle over** \(G\) is the quandle obtained by taking \(Q=G\) as the underlying set and \(x\rhd y=yxy^{-1}\) as the binary operation. We denote it by \(\mathsf{Conj}(G)\).

The quandle operation exhibits nice features in conjugation quandles. In the following lemma, we introduce several properties that will be used consistently throughout the entire paper, without mentioning them explicitly.

**Lemma 2.3**.: _Let \(G\) be a group. In \(\mathsf{Conj}(G)\), for every \(x,y,x_{i},y_{i}\in G\) and \(k\in\mathds{N}\) we have_

1. \((x_{1}x_{2})\rhd y=(x_{1}\rhd y)(x_{2}\rhd y)\)_._
2. \(x^{k}\rhd y=(x\rhd y)^{k}\)_._
3. \(x\rhd(y_{1}y_{2})=(x\rhd y_{2})\rhd y_{1}\)_._
4. \(x\rhd y^{k}=(\ldots((x\rhd y)\rhd y)\rhd\ldots)\rhd y\) _(_\(k\) _times)._

Proof.: All of them are straightforward computations.

We now proceed to introduce some fundamental notions of Knot Theory. All the knots in this paper are intended to be oriented.

**Definition 2.4**.: Let \(K\) be a regular diagram of an oriented knot, or simply, a knot. We denote by \(\mathsf{Arcs}(K)\) the set of connected strands of (the diagram) \(K\). By a **(positive) crossing** in \(K\) we mean a triple \((x,y,z)\in\mathsf{Arcs}(K)^{3}\) such that \(x\) _passes under_ \(y\) _producing_ \(z\) (see Figure 1). In this case, we write \(x\rhd y=z\) and we call this a **crossing relation** of \(K\).

**Definition 2.5**.: Let \(K\) be a knot and \((Q,\rhd)\) a quandle. A \(Q\)**-coloring** of \(K\) is a mapping \(c\colon\mathsf{Arcs}(K)\to Q\) such that for every crossing \((x,y,z)\) of \(K\) the equation \(c(x)\rhd c(y)=c(z)\) holds in \(Q\). If \(c\) is a constant function, we say that it is the **trivial coloring**; every other coloring is called **non-trivial**. A knot \(K\) is said to be \(Q\)**-colorable** if there exists a non-trivial coloring of \(K\).

**Remark 2.6**.: It is evident that every knot can be trivially colored; that is, a trivial coloring always exists. Therefore, our focus lies solely on the existence of non-trivial colorings. Consequently, we have chosen to define \(Q\)-colorability in the manner described earlier, excluding trivial colorings.

**Definition 2.7**.: Let \(m,n\) be two positive integers and consider the braid with \(n\) strands and \(m\) twists as in Figure 2.
The \(n\) leftmost strands are called **initial arcs**, the \(n\) rightmost strands are called **terminal arcs**, and the \(m\) diagonal strands are called **bridges**. The closure of such a braid, identifying each initial arc with the corresponding terminal arc, is called the \((m,n)\)**-torus knot (link)**, and denoted by \(\mathsf{K}(m,n)\) (see Figure 3).

Figure 1: Crossing relation.
Figure 2: Braid diagram for \(\mathsf{K}(m,n)\).
Figure 3: Knot diagram for \(\mathsf{K}(5,4)\).

**Remark 2.8**.: Every torus knot (torus link) has the remarkable property of being embeddable on the surface of the trivial torus, without any points of self-intersection. Conversely, any knot lying on the surface of the trivial torus can be shown to be equivalent to \(\mathsf{K}(m,n)\) for some integers \(m\) and \(n\). This explains the terminology used in Knot Theory. The depiction of this knot (link) on the trivial torus is shown in Figure 3. However, the diagram in Figure 3 contains excessive information. Therefore, like many other authors, we prefer to use a more concise and schematic braid representation, as shown in Figure 2. This simplified representation is known as the _braid diagram_ (or _standard diagram_), and it disregards the specific identification presented in the knot definition. Certainly, when one is working with a torus knot, referring either to an initial arc or to the corresponding terminal arc naturally imparts the same information. When coloring is involved, we find the braid diagram particularly useful: it allows us to label the initial and terminal arcs with their respective colors easily. This labeling simplifies the coloring process and enhances our understanding of the knot's properties.

There are two significant, well-known results that govern the overall behavior of a torus knot. Specifically, we have precise knowledge of how to set the parameters to obtain a pair of equivalent links or to make the torus link collapse into a knot (proofs and details can be found in Murasugi's book [12]).

**Theorem 2.9** (Classification of torus links).: _Let \(m,n,r,s\geq 2\). Then \(\mathsf{K}(m,n)\) is equivalent to \(\mathsf{K}(r,s)\) if and only if \(\{m,n\}=\{r,s\}\)._

**Theorem 2.10** (Torus knots and links).: _Let \(m,n\geq 1\). Then \(\mathsf{K}(m,n)\) is a knot if and only if \(\mathsf{gcd}(m,n)=1\)._

## 3 Conjugation quandle coloring of torus knots

### First characterization

The first result is a characterization of coloring a torus knot using a conjugation quandle: coloring a knot with \(\mathsf{Conj}(G)\) is equivalent to finding a tuple of elements in \(G\) that satisfy certain group term equations, or alternatively, certain quandle term equations.

**Definition 3.1**.: Let \(m,n\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\), let \(G\) be a group, and let \(x_{0},\ldots,x_{n-1}\in G\). We say that the tuple \((x_{0},\ldots,x_{n-1})\) **extends to a coloring** of \(\mathsf{K}(m,n)\) if there exists a unique \(\mathsf{Conj}(G)\)-coloring of \(\mathsf{K}(m,n)\) of which \(x_{0},\ldots,x_{n-1}\) are the colors of the initial arcs, respectively.

**Remark 3.2**.: Given a coloring, we may always extract the tuple of the colors of the initial arcs. That tuple naturally extends to the given coloring. Moreover, observe that the constant tuple extends to the trivial coloring.

**Theorem 3.3**.: _Let \(m,n\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\), let \(G\) be a group, and let \(x_{0},\ldots,x_{n-1}\in G\) be not all equal. The following conditions are equivalent._
1. _The tuple_ \((x_{0},\ldots,x_{n-1})\) _extends to a (non-trivial) coloring for_ \(\mathsf{K}(m,n)\)_._
2. \(|\{x_{0+i\pmod{n}}x_{1+i\pmod{n}}\cdots x_{m-1+i\pmod{n}}\colon i=0,\ldots,n-1\}|=1\)_._
3. _For_ \(u=\prod_{j=0}^{m-1}x_{n-m+j\pmod{n}}\)_, we have_ \[x_{i}\rhd u=x_{i-m\pmod{n}}\qquad\forall\,i=0,\ldots,n-1.\]

_Moreover, in this case, the element \(u\) of (iii) is the common value in the set of (ii)._

Proof.: We prove the implications separately. All the indices of the symbols \(x\) are assumed to be computed modulo \(n\), and all the indices of the symbols \(y\) are assumed to be computed modulo \(m\).

\((i)\implies(iii)\): Denote by \(y_{0},\ldots,y_{m-1}\) the colors of the bridges, as in Figure 4. Observe that \(x_{0}=y_{0}\). By the definition of coloring, we have the following relations \[(((y_{j}\rhd y_{j+1})\rhd y_{j+2})\rhd\ldots)\rhd y_{m-1}=x_{j+n-m}\qquad\forall\,j=0,\ldots,m-1. \tag{1}\] Moreover, by the geometry of the torus knot, because of the \(m\) twists, we also have \[(((x_{i}\rhd y_{0})\rhd y_{1})\rhd\ldots)\rhd y_{m-1}=x_{i-m}\qquad\forall\,i=0,\ldots,n-1. \tag{2}\] Now, if we expand equations (2), using the quandle axioms and equations (1), we obtain \[x_{i-m} =(((x_{i}\rhd y_{0})\rhd y_{1})\rhd\ldots)\rhd y_{m-1}\] \[=((((x_{i}\rhd y_{1})\rhd y_{2})\rhd\ldots)\rhd y_{m-1})\rhd((((y_{0}\rhd y_{1})\rhd y_{2})\rhd\ldots)\rhd y_{m-1})\] \[=((((x_{i}\rhd y_{1})\rhd y_{2})\rhd\ldots)\rhd y_{m-1})\rhd x_{n-m}\] \[=(((((x_{i}\rhd y_{2})\rhd y_{3})\rhd\ldots)\rhd y_{m-1})\rhd((((y_{1}\rhd y_{2})\rhd y_{3})\rhd\ldots)\rhd y_{m-1}))\rhd x_{n-m}\] \[=(((((x_{i}\rhd y_{2})\rhd y_{3})\rhd\ldots)\rhd y_{m-1})\rhd x_{n-m+1})\rhd x_{n-m}\] \[=\ldots\] \[=(((((x_{i}\rhd x_{n-1})\rhd x_{n-2})\rhd\ldots)\rhd x_{n-m+2})\rhd x_{n-m+1})\rhd x_{n-m}\] \[=(x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1})\,x_{i}\,(x_{n-1}^{-1}x_{n-2}^{-1}\ldots x_{n-m+1}^{-1}x_{n-m}^{-1})\] \[=x_{i}\rhd u.\]

Figure 4: Colors in the proof of Theorem 3.3.

\((iii)\implies(i)\): Associate the elements \(x_{0},\ldots,x_{n-1}\) to the initial arcs, and compute the colors of the bridges \(y_{0},\ldots,y_{m-1}\) recursively as follows: \[y_{0}=x_{0},\qquad y_{j}=(((x_{j}\rhd y_{0})\rhd y_{1})\rhd\ldots)\rhd y_{j-1}\qquad\forall\,j=1,\ldots,m-1.\] This allows computing the color of all the other elements in \(\mathsf{Arcs}(\mathsf{K}(m,n))\), and the conditions \(x_{i}\rhd u=x_{i-m}\) guarantee that the colors \(x_{i}\) are well defined on the diagram of \(\mathsf{K}(m,n)\).

\((ii)\implies(iii)\): Note that (ii) can be seen as a chain of equations, whose terms are products of colors with consecutive indices, shifted by the same constant and reduced modulo \(n\). Fix \(i\in\{0,\ldots,n-1\}\). Then, because of (ii), we have \[x_{i}\rhd u=ux_{i}u^{-1} =(x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1})x_{i}(x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1})^{-1}\] \[=(x_{n-m+i}x_{n-m+1+i}\ldots x_{n-2+i}x_{n-1+i})x_{i}(x_{n-m+i+1}x_{n-m+1+i+1}\ldots x_{n-2+i+1}x_{n-1+i+1})^{-1}\] \[=(x_{i-m}x_{i-m+1}\ldots x_{i-2}x_{i-1})x_{i}(x_{i}^{-1}x_{i-1}^{-1}\ldots x_{i-m+2}^{-1}x_{i-m+1}^{-1})=x_{i-m}.\]

\((iii)\implies(ii)\): Expand the equations as follows \[x_{i}\rhd u=x_{i-m}\iff(x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1})x_{i}=x_{i-m}(x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1}). \tag{3}\]
Set \(i=n\equiv 0\) in (3) and cancel out the term \(x_{n-m}\) on the left, obtaining \[x_{n-m+1}x_{n-m+2}\ldots x_{n-2}x_{n-1}x_{0}=x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1}, \tag{4}\] which is one of the equations in (ii). Substitute (4) in (3), set \(i=n-m+1\), and cancel out the first term again to obtain another of the equations. Proceeding this way, since \(\gcd(m,n)=1\), we obtain all the equations in (ii).

**Definition 3.4**.: Let \(m,n\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\), let \(G\) be a group, and let \(x_{0},\ldots,x_{n-1}\in G\). Let \((x_{0},\ldots,x_{n-1})\) extend to a coloring of \(\mathsf{K}(m,n)\). We refer to the element \(u=x_{n-m}x_{n-m+1}\ldots x_{n-2}x_{n-1}\in G\) (as in Theorem 3.3) as the **harlequin** of \((x_{0},\ldots,x_{n-1})\).

### Properties of conjugation quandle coloring

It is natural to inquire whether a given knot coloring can be used to create a coloring for another knot. We observe that the answer to this question is frequently affirmative, and it involves certain divisibility conditions on the parameters of the torus knot. Throughout this subsection, \(G\) denotes any fixed group.

**Proposition 3.5**.: _Let \(m,n,t\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\) and \(\mathsf{gcd}(tm,n)=1\). If \(\mathsf{K}(m,n)\) is \(\mathsf{Conj}(G)\)-colorable, then \(\mathsf{K}(tm,n)\) is also \(\mathsf{Conj}(G)\)-colorable._

Proof.: The diagram of \(\mathsf{K}(tm,n)\) can be obtained by gluing \(t\) copies of the braid diagram of \(\mathsf{K}(m,n)\), so we may use the given non-trivial coloring of \(\mathsf{K}(m,n)\) to obtain a non-trivial coloring of \(\mathsf{K}(tm,n)\).

**Remark 3.6**.: In the previous proposition, the condition \(\mathsf{gcd}(tm,n)=1\) is required only for \(\mathsf{K}(tm,n)\) not to be a link (see Theorem 2.10), and it is not directly required in the proof. While many results presented here could be extended to links, we adhere to the convention of exclusively focusing on knots throughout this paper. This approach also involves assuming the greatest common divisor condition on the parameters.

**Proposition 3.7**.: _Let \(m,n,t\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\) and \(\mathsf{gcd}(m,tn)=1\). If \(\mathsf{K}(m,n)\) is \(\mathsf{Conj}(G)\)-colorable, then \(\mathsf{K}(m,tn)\) is also \(\mathsf{Conj}(G)\)-colorable._

Proof.: From Theorem 2.9 and Proposition 3.5, we can infer that \[\mathsf{K}(m,n)\text{ is colorable}\implies\mathsf{K}(n,m)\text{ is colorable}\implies\mathsf{K}(tn,m)\text{ is colorable}\implies\mathsf{K}(m,tn)\text{ is colorable}.\qed\]

**Proposition 3.8**.: _Let \(m,n,t\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\) and \(\mathsf{gcd}(m,tn)=1\). Let \((y_{0},\ldots,y_{tn-1})\) extend to a coloring of \(\mathsf{K}(m,tn)\), and define_ \[x_{i}=\prod_{j=0}^{t-1}y_{it+j},\qquad\text{for }i=0,\ldots,n-1.\] _Then \((x_{0},\ldots,x_{n-1})\) extends to a (possibly trivial) coloring of \(\mathsf{K}(m,n)\)._

Proof.: Let \(v\) be the harlequin of \((y_{0},\ldots,y_{tn-1})\) in \(\mathsf{K}(m,tn)\), and define \(u=v^{t}\). We want to prove that \((x_{0},\ldots,x_{n-1})\) extends to a coloring of \(\mathsf{K}(m,n)\) with harlequin \(u\). Using the fact that \(y_{j}\triangleright v=y_{j-m\pmod{tn}}\), we have \[x_{i}\triangleright u=\left(\prod_{j=0}^{t-1}y_{it+j}\right)\triangleright v^{t}=\prod_{j=0}^{t-1}\left(y_{it+j}\triangleright v^{t}\right)=\prod_{j=0}^{t-1}y_{it+j-tm\pmod{tn}}=\prod_{j=0}^{t-1}y_{(i-m)t+j\pmod{tn}}=x_{i-m\pmod{n}},\] therefore Theorem 3.3 applies.
**Remark 3.9**.: The construction in the proof of Proposition 3.8 may produce a trivial coloring. However, this is not considered a drawback; in fact, we will utilize this feature in various proofs by contradiction in the subsequent discussions. We show this behavior in the following example.

**Example 3.10**.: \(\mathsf{K}(2,3)\) is \(\mathsf{Conj}(\mathsf{S}_{3})\)-colorable. In fact, \((x_{0},x_{1},x_{2})\) extends to a coloring of \(\mathsf{K}(2,3)\), where \(x_{0}=(2\ 3)\), \(x_{1}=(1\ 2)\) and \(x_{2}=(1\ 3)\). By Proposition 3.7, define \[y_{0}=y_{3}=y_{6}=y_{9}=y_{12}=x_{0}=(2\ 3)\] \[y_{1}=y_{4}=y_{7}=y_{10}=y_{13}=x_{1}=(1\ 2)\] \[y_{2}=y_{5}=y_{8}=y_{11}=y_{14}=x_{2}=(1\ 3)\] and observe that \((y_{0},\ldots,y_{14})\) extends to a coloring of \(\mathsf{K}(2,15)\). However, if we apply Proposition 3.8 to the previous (non-trivial) coloring of \(\mathsf{K}(2,15)\), we obtain a tuple which extends to a trivial coloring of \(\mathsf{K}(2,5)\); indeed: \[z_{0}=y_{0}y_{1}y_{2}=(1\ 2)\] \[z_{1}=y_{3}y_{4}y_{5}=(1\ 2)\] \[z_{2}=y_{6}y_{7}y_{8}=(1\ 2)\] \[z_{3}=y_{9}y_{10}y_{11}=(1\ 2)\] \[z_{4}=y_{12}y_{13}y_{14}=(1\ 2).\] A direct computation with \(\mathsf{GAP}\) shows, in fact, that \(\mathsf{K}(2,5)\) is not \(\mathsf{Conj}(\mathsf{S}_{3})\)-colorable; the sketch at the end of this subsection reproduces this check.

**Lemma 3.11**.: _Let \(m,n\in\mathds{N}\) be such that \(\mathsf{gcd}(m,n)=1\), and let \((x_{0},\ldots,x_{n-1})\) extend to a coloring of \(\mathsf{K}(m,n)\). If \(x_{i}=x_{j}\) for some colors \(x_{i},x_{j}\) with \(\mathsf{gcd}(j-i,n)=1\), then \((x_{0},\ldots,x_{n-1})\) extends to the trivial coloring. In particular, we have \(x_{0}=x_{1}=\cdots=x_{n-1}\)._

Proof.: Since \(\mathsf{gcd}(m,n)=\mathsf{gcd}(j-i,n)=1\), both \(m\) and \(j-i\) are invertible modulo \(n\). Let \(u\) be the harlequin of \((x_{0},\ldots,x_{n-1})\) and consider \(k=m^{-1}(j-i)\). Then, for every \(t\in\mathbb{Z}\), we have \[x_{i}=x_{j}\implies x_{i}\rhd u^{tk}=x_{j}\rhd u^{tk}\implies x_{i-t(j-i)}=x_{j-t(j-i)}.\] In particular, \(x_{i}=x_{i-t(j-i)}\) for all \(t\in\mathbb{Z}\). Let \(h\in\{0,\ldots,n-1\}\) and define \(\bar{t}=(j-i)^{-1}(i-h)\). Then \(x_{h}=x_{i-\bar{t}(j-i)}=x_{i}\). Now, the choice of \(h\) was arbitrary; therefore all the colors in the tuple are equal, hence the coloring is trivial.

**Corollary 3.12**.: _Let \(m\in\mathds{N}\) and \(p\) be a prime such that \(p\nmid m\). Let \((x_{0},\ldots,x_{p-1})\) extend to a coloring of \(\mathsf{K}(m,p)\). Then, either it extends to the trivial coloring, or the colors in the tuple \((x_{0},\ldots,x_{p-1})\) are all distinct._

Proof.: Let \((x_{0},\ldots,x_{p-1})\) extend to a coloring of \(\mathsf{K}(m,p)\), and assume that \(x_{i}=x_{j}\) for some indices \(0\leq i<j\leq p-1\). Since \(p\) is prime, the condition \(\mathsf{gcd}(j-i,p)=1\) trivially holds, so Lemma 3.11 applies.
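The \(\mathsf{GAP}\) computation mentioned in Example 3.10 is easy to reproduce; the following brute-force sketch over \(\mathsf{Conj}(\mathsf{S}_{3})\) tests condition (ii) of Theorem 3.3 on every tuple (since all tuples are tried, the choice of composition convention does not affect the outcome).

```python
# Brute-force Conj(S3)-colorability of K(m, n) via Theorem 3.3(ii): a
# non-constant tuple (x_0, ..., x_{n-1}) extends to a coloring iff all the
# cyclic products of m consecutive colors coincide.
from functools import reduce
from itertools import permutations, product

S3 = list(permutations(range(3)))  # the 6 permutations of {0, 1, 2} as tuples

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return tuple(f[g[x]] for x in range(3))

def colorable(m, n):
    for xs in product(S3, repeat=n):
        if len(set(xs)) == 1:
            continue  # skip constant tuples: they give the trivial coloring
        words = {reduce(compose, [xs[(i + k) % n] for k in range(m)])
                 for i in range(n)}
        if len(words) == 1:  # condition (ii) of Theorem 3.3
            return True
    return False

print(colorable(2, 3))  # True: e.g. the three transpositions of Example 3.10
print(colorable(2, 5))  # False, matching the GAP computation cited above
```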
### The main characterization

The importance of the divisors of the parameters becomes immediately apparent in light of the propositions presented in the preceding subsection. In this investigation, we shall first conclude our analysis of the relationship between these divisors and coloring. Our inquiry reveals that the colorability of a knot can be attributed to (at the very least) one of the prime divisors of the parameters, as demonstrated in Theorem 3.17. Following this, we proceed to establish a specific characterization (see Theorem 3.19) concerning the coloring of \(\mathsf{K}(m,n)\), which relies only on a single element of the group, in contrast to the dependence on \(n\) elements as in Theorem 3.3. This simplifies the process of verifying the colorability of a specific torus knot, owing to the involvement of fewer elements and the group-theoretical nature of the provided equivalent condition. Throughout this subsection, \(G\) denotes any fixed group. We start with the following arithmetic lemma.

**Lemma 3.13**.: _Let \(a,b\in\mathbb{Z}\) be such that \(\mathsf{gcd}(a,b)=1\). Then \(\mathsf{gcd}(a-b,ab)=1\)._

Proof.: By contradiction, let \(p\) be a prime such that \(p\mid a-b\) and \(p\mid ab\). Then, without loss of generality, \(p\mid b\), which, together with the condition \(p\mid a-b\), implies that \(p\mid a\), against the assumption \(\mathsf{gcd}(a,b)=1\).

**Proposition 3.14**.: _Let \(m,p,q\in\mathds{N}\) be such that \(\mathsf{gcd}(m,pq)=\mathsf{gcd}(p,q)=1\). Then, \(\mathsf{K}(m,pq)\) is \(\mathsf{Conj}(G)\)-colorable if and only if either \(\mathsf{K}(m,p)\) or \(\mathsf{K}(m,q)\) is \(\mathsf{Conj}(G)\)-colorable._

Proof.: One implication follows directly from Proposition 3.7. Conversely, let \((x_{0},\ldots,x_{pq-1})\) extend to a non-trivial coloring of \(\mathsf{K}(m,pq)\). By contradiction, assume that both \(\mathsf{K}(m,p)\) and \(\mathsf{K}(m,q)\) are not non-trivially colorable, and define \[y_{i}=\prod_{k=0}^{q-1}x_{iq+k},\qquad\text{for }i=0,\ldots,p-1,\] \[z_{j}=\prod_{k=0}^{p-1}x_{jp+k},\qquad\text{for }j=0,\ldots,q-1.\] By the assumption and Proposition 3.8, the tuples \((y_{0},\ldots,y_{p-1})\) and \((z_{0},\ldots,z_{q-1})\) extend to trivial colorings of \(\mathsf{K}(m,p)\) and \(\mathsf{K}(m,q)\), respectively; in particular this implies that \(y_{0}=y_{1}=\cdots=y_{p-1}\) and \(z_{0}=z_{1}=\cdots=z_{q-1}\), or equivalently \[x_{0}x_{1}\ldots x_{q-1}=x_{q}x_{q+1}\ldots x_{2q-1}=\cdots=x_{(p-1)q}x_{(p-1)q+1}\ldots x_{pq-1}, \tag{5}\] \[x_{0}x_{1}\ldots x_{p-1}=x_{p}x_{p+1}\ldots x_{2p-1}=\cdots=x_{(q-1)p}x_{(q-1)p+1}\ldots x_{pq-1}. \tag{6}\] Proceed by case analysis, distinguishing among the possible values of \(m\).

**Case \(m=2\):** From Theorem 3.3, we have \[x_{0}x_{1}=x_{1}x_{2}=\cdots=x_{pq-1}x_{0}. \tag{7}\] Thus, for every \(i,j=0,\ldots,p-1\), we have \[y_{i}=y_{j}\implies\prod_{k=0}^{q-1}x_{iq+k}=\prod_{k=0}^{q-1}x_{jq+k}\implies\begin{cases}x_{iq}\prod_{k=1}^{q-1}x_{iq+k}=x_{jq}\prod_{k=1}^{q-1}x_{jq+k}\\ \prod_{k=0}^{q-2}x_{iq+k}\,x_{(i+1)q-1}=\prod_{k=0}^{q-2}x_{jq+k}\,x_{(j+1)q-1}\end{cases}\implies\begin{cases}x_{iq}=x_{jq}\\ x_{(i+1)q-1}=x_{(j+1)q-1}\end{cases}\] where we have canceled the products out because they consist of an even number of factors with consecutive indices, exploiting the equations of Theorem 3.3(ii), namely equations (7). Now, deleting the first and the last term of each of the equations (5) and iterating this procedure, we obtain \[x_{iq+k}=x_{jq+k}\qquad\forall\,i,j=0,\ldots,p-1,\ \forall\,k=0,\ldots,q-1. \tag{8}\] Proceeding in the same way, using \((z_{0},\ldots,z_{q-1})\), we get \[x_{sp+h}=x_{tp+h}\qquad\forall\,s,t=0,\ldots,q-1,\ \forall\,h=0,\ldots,p-1. \tag{9}\] We may rewrite equations (8) and (9) in the more compact and equivalent form \[x_{iq+k}=x_{k}\qquad\forall\,i=0,\ldots,p-1,\ \forall\,k=0,\ldots,q-1, \tag{10}\] \[x_{jp+h}=x_{h}\qquad\forall\,j=0,\ldots,q-1,\ \forall\,h=0,\ldots,p-1. \tag{11}\] Now, consider any \(c\in\{0,\ldots,pq-1\}\). By the conditions on \(m,p\) and \(q\), we may assume that \(2<p<q\). By division with remainder we have \(c=aq+t\) and \(t=bp+r\) for some unique \(a,b,t,r\in\mathbb{Z}\) with \(0\leq t<q\) and \(0\leq r\leq p-1<q-1\); hence \(c=aq+bp+r\).
Consider the Diophantine equation \(jp+iq=r\), which has a solution \((i,j)\in\mathbb{Z}^{2}\) because \(1=\gcd(p,q)\mid r\). Then we have \(c=(b+j)p+(a+i)q\), and, computing indices modulo \(pq\) together with equations (10) and (11), we have \[x_{c}=x_{(b+j)p+(a+i)q}=x_{(b+j)p+(a+i)q+0}=x_{(a+i)q+0}=x_{0}.\] Now, the choice of \(c\) was arbitrary, therefore all the colors \(x_{0},\ldots,x_{pq-1}\) are equal, which is a contradiction. **Case \(m>2\):**: Since \(\gcd(q,m)=1\), consider \(t=q^{-1}\pmod{m}\). By the equations (5), multiplying \(t\) elements with consecutive indices (possibly with repetitions), we have \[y_{0}y_{1}\ldots y_{t-1}=y_{1}y_{2}\ldots y_{t}\implies(x_{0}x_{1}\ldots x_{q-1})y_{1}\ldots y_{t-1}=(x_{q}x_{q+1}\ldots x_{2q-1})y_{2}\ldots y_{t}\implies x_{0}=x_{q},\] where the last implication holds because the products are made of \(tq-1\) factors with consecutive indices, and \(tq-1\) is divisible by \(m\), thus Theorem 3.3(ii) holds. Proceeding in the same way, using \((z_{0},\ldots,z_{q-1})\), we obtain that \(x_{0}=x_{p}\). Thus, we have \(x_{p}=x_{q}\), and \(\gcd(p,q)=1\) by assumption. Because of Lemma 3.13 we also get \(\gcd(p-q,pq)=1\), which implies that all the colors are equal because of Lemma 3.11, and this is a contradiction. **Proposition 3.15**.: _Let \(m,p,c\in\mathbb{N}\) be such that \(p\) is prime and \(p\nmid m\). Then, \(\mathsf{K}(m,p^{c})\) is \(\mathsf{Conj}(G)\)-colorable if and only if \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(G)\)-colorable._ Proof.: One implication follows directly from Proposition 3.7. Conversely, let \((x_{0},\ldots,x_{p^{c}-1})\) extend to a non-trivial coloring of \(\mathsf{K}(m,p^{c})\). There are three possible cases: * \(x_{i}=x_{j}\) if and only if \(i\equiv j\pmod{p}\). * There are \(i,j\in\{0,\ldots,p^{c}-1\}\) such that \(x_{i}=x_{j}\) and \(i\not\equiv j\pmod{p}\). * All the \(x_{i}\)'s are different colors. Observe that case (a) provides a non-trivial coloring for \(\mathsf{K}(m,p)\), because the equations in Theorem 3.3 hold already. Case (b) is impossible because it would contradict Lemma 3.11, being \(\gcd(j-i,p^{c})=1\). Assume case (c). Let \(u\) be the harlequin of \((x_{0},\ldots,x_{p^{c}-1})\), and define \[y_{i}=\prod_{k=0}^{p^{c-1}-1}x_{ip^{c-1}+k},\qquad\text{for $i=0,\ldots,p-1$.}\] From Proposition 3.8 we know that it extends to a coloring of \(\mathsf{K}(m,p)\) with harlequin \(v=u^{p^{c-1}}\). Proceed by case analysis, distinguishing among the possible values of \(m\). **Case \(m=2\):**: If the colors \(y_{0},\ldots,y_{p-1}\) are all different, the proof is completed. Assume that two of them are equal, say \(y_{i}=y_{j}\) for some distinct \(i,j\in\{0,\ldots,p-1\}\). Then \[y_{i}=y_{j} \implies\prod_{k=0}^{p^{c-1}-1}x_{ip^{c-1}+k}=\prod_{k=0}^{p^{c-1}-1}x_{jp^{c-1}+k}\implies x_{ip^{c-1}}\prod_{k=1}^{p^{c-1}-1}x_{ip^{c-1}+k}=x_{jp^{c-1}}\prod_{k=1}^{p^{c-1}-1}x_{jp^{c-1}+k}\implies x_{ip^{c-1}}=x_{jp^{c-1}}\] where we have canceled the products because each is made of an even number of factors with consecutive indices, exploiting the equations of Theorem 3.3(ii). This leads to a contradiction because we are assuming that all the colors are different, hence this sub-case is indeed impossible. **Case \(m>2\):**: From Corollary 3.12 we know that either the elements of the tuple \((y_{0},\ldots,y_{p-1})\) are all distinct, or \((y_{0},\ldots,y_{p-1})\) extends to the trivial coloring. In the first case, the proof is completed. Assume that \(y_{0}=y_{1}=\cdots=y_{p-1}\).
Since \(\mathsf{gcd}(m,p)=1\), consider \(t=p^{-1}\pmod{m}\). Multiplying \(t\) elements with consecutive indices (possibly with repetitions), we have \[y_{0}y_{1}\ldots y_{t-1}=y_{1}y_{2}\ldots y_{t}\implies(x_{0}x_{1}\ldots x_{p-1})y_{1}\ldots y_{t-1}=(x_{p}x_{p+1}\ldots x_{2p-1})y_{2}\ldots y_{t}\implies x_{0}=x_{p},\] where the last implication holds because the products are made of \(tp-1\) factors, and \(tp-1\) is divisible by \(m\), thus Theorem 3.3(ii) holds. This leads to a contradiction because we are assuming that all the colors of \((x_{0},\ldots,x_{p^{c}-1})\) are different, hence also this sub-case is impossible. **Proposition 3.16**.: _Let \(m,n\in\mathbb{N}\) be such that \(\mathsf{gcd}(m,n)=1\). Then, \(\mathsf{K}(m,n)\) is \(\mathsf{Conj}(G)\)-colorable if and only if there is a prime factor \(q\) of \(n\) such that \(\mathsf{K}(m,q)\) is \(\mathsf{Conj}(G)\)-colorable._ Proof.: One direction follows from Proposition 3.7. For the other, write the prime factorization \(n=\prod_{i=1}^{k}p_{i}^{e_{i}}\) and conclude with a simple induction argument using Propositions 3.14 and 3.15. **Theorem 3.17**.: _Let \(m,n\in\mathbb{N}\) be such that \(\mathsf{gcd}(m,n)=1\). Then, \(\mathsf{K}(m,n)\) is \(\mathsf{Conj}(G)\)-colorable if and only if there is a prime factor \(p\) of \(m\) and a prime factor \(q\) of \(n\) such that \(\mathsf{K}(p,q)\) is \(\mathsf{Conj}(G)\)-colorable._ Proof.: One direction follows from Propositions 3.5 and 3.7. For the converse, we can use Proposition 3.16, together with Theorem 2.9, to infer that \[\mathsf{K}(m,n)\text{ is colorable} \implies\mathsf{K}(m,q)\text{ is colorable for some }q\mid n\] \[\implies\mathsf{K}(q,m)\text{ is colorable for some }q\mid n\] \[\implies\mathsf{K}(q,p)\text{ is colorable for some }q\mid n\text{ and }p\mid m\] \[\implies\mathsf{K}(p,q)\text{ is colorable for some }p\mid m\text{ and }q\mid n.\qed\] **Remark 3.18**.: Due to Theorem 3.17, when examining the colorability of \(\mathsf{K}(m,n)\) we can assume that \(n\) is prime and that \(n\nmid m\). We will see that assuming \(m\) to be prime as well is inconsequential. It is important to note that if \(m=1\) then \(\mathsf{K}(1,n)\) is only trivially colorable, so we can also assume \(m\neq 1\). We present now the main results of this paper, related to the study of conjugation quandle coloring of torus knots. **Theorem 3.19**.: _Let \(G\) be a group, and let \(m,p\in\mathbb{N}\) be such that \(m\geq 2\) and \(p\) is prime with \(p\nmid m\). Then \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(G)\)-colorable if and only if there is \(u\in G\) such that \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\)._ Proof.: We prove that \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(G)\)-colorable if and only if there are \(x_{0},u\in G\) such that \[\begin{cases}ux_{0}u^{-1}\neq x_{0}\\ u^{p}x_{0}u^{-p}=x_{0},\end{cases}\] which is equivalent to the condition on the centralizers in the statement. Let \((x_{0},\ldots,x_{p-1})\) extend to a non-trivial coloring of \(\mathsf{K}(m,p)\) with harlequin \(u\). Because of Corollary 3.12, all the colors must be distinct. Because of Theorem 3.3, for every \(t\in\{0,\ldots,p-1\}\) we have \(x_{0}\rhd u^{k}=x_{t}\) for \(k=-m^{-1}t\pmod{p}\), that is, all the colors can be obtained from \(x_{0}\) and (a suitable power of) \(u\).
Therefore, we have \(x_{0},u\in G\) such that \[\begin{cases}ux_{0}u^{-1}=x_{-m}\\ ux_{-m}u^{-1}=x_{-2m}\\ \vdots\\ ux_{-(p-1)m}u^{-1}=x_{-pm}=x_{0}\end{cases}\] which is equivalent to \[\begin{cases}ux_{0}u^{-1}=x_{-m}\neq x_{0}\\ u^{p}x_{0}u^{-p}=x_{0}\end{cases}\] where the second equation is obtained by combining all the previous ones. Note that, since all the colors are different, in this setting we do not need to require also \(u^{i}x_{0}u^{-i}\neq x_{0}\) for all \(i=1,\ldots,p-1\): if we had both \(u^{i},u^{p}\in\mathsf{C}_{G}(x_{0})\), the centralizer being a subgroup of \(G\), we would also have \(u^{\mathsf{gcd}(i,p)}=u^{1}=u\in\mathsf{C}_{G}(x_{0})\), which is forbidden by the first equation. Conversely, let \(x_{0},u\in G\) be as in the statement, and define \(x_{-im}=u^{i}x_{0}u^{-i}\) for \(i\in\{0,\ldots,p-1\}\). This is indeed a coloring because, by definition of conjugation quandle, we have \(x_{-im}\rhd u=x_{-im-m}\), and \(x_{-pm}=x_{0}\). Moreover, since \(\mathsf{gcd}(m,p)=1\), this is enough to define all the colors. **Remark 3.20**.: Observe that the colorability of \(\mathsf{K}(m,p)\) is independent of \(m\). This allows us to take the first parameter to be any convenient number, provided that the second parameter is assumed to be prime. **Remark 3.21**.: In the notation of Theorem 3.19, note that if there is an element \(u\in G\) of order \(p\) such that \(u\not\in\mathsf{Z}(G)\), then \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(G)\)-colorable.
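Before turning to particular groups, note that the criterion of Theorem 3.19 is directly machine-checkable for small groups. The following brute-force Python sketch (an illustration of ours, not from the original text) encodes \(\mathsf{S}_{3}\) as permutation tuples and confirms both Example 3.10 (\(p=3\)) and the \(\mathsf{GAP}\) computation quoted there (\(p=5\)).

```python
from itertools import permutations

def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))

def power(f, k):
    r = tuple(range(len(f)))
    for _ in range(k):
        r = compose(r, f)
    return r

def conj_colorable(G, p):
    # Theorem 3.19: K(m, p) is Conj(G)-colorable iff some u in G has
    # C_G(u^p) \ C_G(u) nonempty; m is irrelevant (Remark 3.20)
    for u in G:
        up = power(u, p)
        if any(compose(g, up) == compose(up, g) and
               compose(g, u) != compose(u, g) for g in G):
            return True
    return False

S3 = list(permutations(range(3)))
print(conj_colorable(S3, 3))  # True:  K(2, 3) is Conj(S_3)-colorable
print(conj_colorable(S3, 5))  # False: K(2, 5) is not, matching the GAP check
```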
## 4 Coloring with particular groups

We now apply the theorems mentioned above to specific cases. We initiate the analysis by considering matrix groups, and then proceed with dihedral and symmetric groups. 

### General linear group

In this subsection, with \(p\) and \(q\) we denote two prime numbers, and the group \(\mathsf{GL}(2,q)\) is denoted by \(G\). Our objective is to derive a numerical characterization for the colorability of \(\mathsf{K}(m,p)\) solely in terms of \(p\) and \(q\), as we have determined that the parameter \(m\) is irrelevant (see Remark 3.20). **Remark 4.1**.: The following table displays the representatives of the conjugacy classes of \(\mathsf{GL}(2,q)\), together with their centralizers. In virtue of Theorem 3.19, for a representative \(u\) of each conjugacy class, we want to compute when the condition \(\mathsf{C}_{\mathsf{GL}(2,q)}(u^{p})\setminus\mathsf{C}_{\mathsf{GL}(2,q)}(u)\neq\emptyset\) holds, so we aim to recreate the table with \(u^{p}\) instead of \(u\) and compare the results. \begin{tabular}{l|l|l|l} Type & \(u\) & & \(\mathsf{C}_{\mathsf{GL}(2,q)}(u)\) \\ \hline Type 1 & \(\begin{pmatrix}a&0\\ 0&a\end{pmatrix}\) & \(a\neq 0\) & \(\mathsf{GL}(2,q)\) \\ Type 2 & \(\begin{pmatrix}a&0\\ 0&b\end{pmatrix}\) & \(0<a<b\) & \(\left\{\begin{pmatrix}u&0\\ 0&v\end{pmatrix}\in\mathsf{GL}(2,q)\colon u,v\neq 0\right\}\) \\ Type 3 & \(\begin{pmatrix}a&1\\ 0&a\end{pmatrix}\) & \(a\neq 0\) & \(\left\{\begin{pmatrix}u&v\\ 0&u\end{pmatrix}\in\mathsf{GL}(2,q)\colon u\neq 0\right\}\) \\ Type 4 & \(\begin{pmatrix}0&1\\ a&b\end{pmatrix}\) & \(x^{2}-bx-a\) irreducible & \(\left\{\begin{pmatrix}u&v\\ au&u+bv\end{pmatrix}\in\mathsf{GL}(2,q)\colon u\neq 0\text{ or }v\neq 0\right\}\) \\ \end{tabular} **Proposition 4.2**.: _Let \(u\in G\) be a matrix of type 1. Then \(\mathsf{C}_{G}(u)=\mathsf{C}_{G}(u^{p})=G\)._ Proof.: The matrix power \(u^{p}\) is still a scalar matrix, hence its centralizer is again maximal. **Proposition 4.3**.: _Let \(u\in G\) be a matrix of type 2. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p\mid q-1\)._ Proof.: The matrix power \(u^{p}=\left(\begin{smallmatrix}a^{p}&0\\ 0&b^{p}\end{smallmatrix}\right)\) is still diagonal, so its centralizer is strictly larger if and only if \(u^{p}\) is a scalar matrix, that is, when \(a^{p}\equiv b^{p}\pmod{q}\). This happens if and only if \((ab^{-1})^{p}\equiv 1\pmod{q}\), that is, when \(\mathbb{F}_{q}^{\times}\) has an element of order \(p\) of the form \(ab^{-1}\), or equivalently, by Cauchy's Theorem, when \(p\mid q-1\). **Proposition 4.4**.: _Let \(u\in G\) be a matrix of type 3. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p=q\)._ Proof.: A direct computation shows that the matrix power \[u^{p}=\begin{pmatrix}a&1\\ 0&a\end{pmatrix}^{p}=\begin{pmatrix}a^{p}&pa^{p-1}\\ 0&a^{p}\end{pmatrix}\] has the same centralizer as the matrix \(u\), unless \(u^{p}\) is a scalar matrix. This happens when \(pa^{p-1}\equiv 0\pmod{q}\), that is (since \(a\neq 0\)), when \(q\mid p\), i.e. when \(p=q\), being both primes. **Lemma 4.5**.: _Let \(u=\left(\begin{smallmatrix}0&1\\ a&b\end{smallmatrix}\right)\in G\) be a matrix of type 4. Then for every \(n\geq 1\) we have_ \[\begin{pmatrix}0&1\\ a&b\end{pmatrix}^{n}=\begin{pmatrix}x_{n-1}&y_{n-1}\\ x_{n}&y_{n}\end{pmatrix}\] _where_ \[\begin{cases}x_{0}=0\\ y_{0}=1\end{cases}\qquad\begin{cases}x_{n}=ay_{n-1}\\ y_{n}=x_{n-1}+by_{n-1}\end{cases}\qquad n\geq 1.\] Proof.: It follows easily by induction. **Lemma 4.6**.: _In the notation of Lemma 4.5, assuming \(q\neq 2\), we have_ \[y_{n}=\frac{d^{-1}}{2}\left((c+d)^{n+1}-(c-d)^{n+1}\right)\] _where \(c=\frac{b}{2}\) and \(d=\frac{\sqrt{b^{2}+4a}}{2}\)._ Proof.: By Lemma 4.5 we have, for every \(n\in\mathbb{N}\), \[\begin{pmatrix}x_{n}\\ y_{n}\end{pmatrix}=\begin{pmatrix}0&a\\ 1&b\end{pmatrix}^{n}\begin{pmatrix}0\\ 1\end{pmatrix}.\] Let \(A=\left(\begin{smallmatrix}0&a\\ 1&b\end{smallmatrix}\right)\). Compute the matrix power \(A^{n}\) using the diagonalization technique. Let \(\lambda_{1}=c+d,\lambda_{2}=c-d\in\mathbb{F}_{q^{2}}\) be the eigenvalues of \(A\), and let \(U\in\mathsf{GL}(2,q^{2})\) be the matrix such that \(U^{-1}AU=\left(\begin{smallmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{smallmatrix}\right)\). Hence \(A^{n}=U\left(\begin{smallmatrix}\lambda_{1}^{n}&0\\ 0&\lambda_{2}^{n}\end{smallmatrix}\right)U^{-1}\) and \[\begin{pmatrix}x_{n}\\ y_{n}\end{pmatrix}=\begin{pmatrix}k&l\\ m&r\end{pmatrix}\begin{pmatrix}\lambda_{1}^{n}\\ \lambda_{2}^{n}\end{pmatrix}\] for some \(k,l,m,r\in\mathbb{F}_{q^{2}}\).
Using the known conditions \(x_{0}=0,y_{0}=1,x_{1}=a,y_{1}=b\) we obtain \[k=\frac{a}{\lambda_{1}-\lambda_{2}}=\frac{a}{2d},\qquad l=\frac{-a}{\lambda_{1}-\lambda_{2}}=\frac{-a}{2d},\qquad m=\frac{b-\lambda_{2}}{\lambda_{1}-\lambda_{2}}=\frac{d+c}{2d},\qquad r=\frac{\lambda_{1}-b}{\lambda_{1}-\lambda_{2}}=\frac{d-c}{2d}.\] Now compute the powers \(\lambda_{1}^{n}\) and \(\lambda_{2}^{n}\) using the Binomial Theorem: \[\lambda_{1}^{n}=(c+d)^{n}=\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\binom{n}{2k}d^{2k}c^{n-2k}+\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\binom{n}{2k+1}d^{2k+1}c^{n-2k-1}\] \[\lambda_{2}^{n}=(c-d)^{n}=\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\binom{n}{2k}d^{2k}c^{n-2k}-\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\binom{n}{2k+1}d^{2k+1}c^{n-2k-1}\] Therefore \[y_{n}=m\lambda_{1}^{n}+r\lambda_{2}^{n}=\ \ldots\ =d^{-1}\sum_{\begin{subarray}{c}i=0\\ i\text{ odd}\end{subarray}}^{n+1}\binom{n+1}{i}d^{i}c^{(n+1)-i}=\frac{d^{-1}}{2}\left((c+d)^{n+1}-(c-d)^{n+1}\right).\] **Lemma 4.7**.: _The condition \(p\mid q+1\) holds if and only if there are \(p-1\) elements \(u\) of multiplicative order \(p\) in \(\mathbb{F}_{q^{2}}\), all belonging to \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), and each of those can be expressed as_ \[u=\frac{c+d}{c-d}\] _where \(c\in\mathbb{F}_{q}\) and \(d\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\)._ Proof.: Assume that \(p\mid q+1\), thus \(p\mid q^{2}-1=\left\lvert\mathbb{F}_{q^{2}}^{\times}\right\rvert\). Under this assumption, it is known that in \(\mathbb{F}_{q^{2}}\) there are exactly \(p-1>0\) elements of multiplicative order \(p\). Assume, by contradiction, that one of those was in \(\mathbb{F}_{q}\). Then it would generate a multiplicative cyclic subgroup of \(\mathbb{F}_{q}^{\times}\) of order \(p\) containing all such elements, implying that \(p\mid q-1\); together with \(p\mid q+1\), this would give \(p\mid 2\), which is impossible, being \(p\neq 2\). Then every element of order \(p\) must belong to \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\). Consider now, for every \(c\in\mathbb{F}_{q^{2}}^{\times}\), the map \(\varphi_{c}\colon\mathbb{F}_{q^{2}}\to\mathbb{F}_{q^{2}}\) defined by \[\varphi_{c}(x)=\begin{cases}\frac{c+x}{c-x}&x\neq c\\ -1&x=c\end{cases}.\] It is easy to see that \(\varphi_{c}\) is bijective. Since \(p>2\), the element \(-1\) does not have order \(p\). Therefore all the \(p-1\) elements of order \(p\) are contained in \(\varphi_{c}\left(\mathbb{F}_{q^{2}}\setminus\{c\}\right)\). Note that, if for an element \(u\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) of order \(p\) we have \(u=\varphi_{c}(x)\) for some \(c\in\mathbb{F}_{q}^{\times}\), then necessarily \(x\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\); otherwise we would get \(u\in\mathbb{F}_{q}\). Conversely, the existence of elements of order \(p\) in \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) implies that \(p\mid q^{2}-1=\left\lvert\mathbb{F}_{q^{2}}^{\times}\right\rvert\). If we had \(p\mid q-1=\left\lvert\mathbb{F}_{q}^{\times}\right\rvert\), then by Cauchy's theorem \(\mathbb{F}_{q}^{\times}\) would contain one (hence all) of them, which is against the assumptions. It follows that \(p\mid q+1\). **Lemma 4.8**.: _In the notation of Lemma 4.5 and Lemma 4.6, we have \(y_{p-1}\equiv 0\pmod{q}\) if and only if \(p\mid q+1\)._ Proof.: Analyze first when \(q=2\). The only case in which the polynomial \(x^{2}-bx-a\) is irreducible in \(\mathbb{F}_{2}\) is when \(a=b=1\), and its splitting field is \(\mathbb{F}_{2}[x]/(x^{2}+x+1)\cong\mathbb{F}_{4}\).
Assuming \(a=b=1\), a simple induction argument shows that \(y_{n}\equiv 0\pmod{2}\) if and only if \(n\equiv 2\pmod{3}\); thus \(y_{p-1}\equiv 0\) if and only if \(p\equiv 0\pmod{3}\), that is, \(p=3\). Therefore, the claim holds for \(q=2\). Assume now \(q\neq 2\); then Lemma 4.6 applies and we have \[y_{p-1}\equiv 0\pmod{q}\iff\frac{d^{-1}}{2}\left((c+d)^{p}-(c-d)^{p}\right)\equiv 0\pmod{q}\iff\left((c+d)(c-d)^{-1}\right)^{p}\equiv 1\pmod{q}\] which holds if and only if \(u^{p}=1\) for some \(u\in\mathbb{F}_{q^{2}}\) of the form \(u=\frac{c+d}{c-d}\) with \(c\in\mathbb{F}_{q}\) and \(d\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\). The conclusion now follows from Lemma 4.7. **Proposition 4.9**.: _Let \(u\in G\) be a matrix of type 4. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p\mid q+1\)._ Proof.: A direct computation shows that the matrix power \(u^{p}\) has the same centralizer as the matrix \(u\) unless \(u^{p}\) is a scalar matrix, and this happens if and only if \(y_{p-1}\equiv 0\pmod{q}\). The conclusion now follows from Lemma 4.8. We summarize Propositions 4.2, 4.3, 4.4 and 4.9 in the following table. \begin{tabular}{c|c|c|c} Type & \(u^{p}\) & & \(\mathsf{C}_{\mathsf{GL}(2,q)}(u^{p})\setminus\mathsf{C}_{\mathsf{GL}(2,q)}(u)\neq\emptyset\) \\ \hline Type 1 & \(\begin{pmatrix}a^{p}&0\\ 0&a^{p}\end{pmatrix}\) & \(a\neq 0\) & Never \\ Type 2 & \(\begin{pmatrix}a^{p}&0\\ 0&b^{p}\end{pmatrix}\) & \(0<a<b\) & \(p\mid q-1\) \\ Type 3 & \(\begin{pmatrix}a^{p}&pa^{p-1}\\ 0&a^{p}\end{pmatrix}\) & \(a\neq 0\) & \(p=q\) \\ Type 4 & \(\begin{pmatrix}x_{p-1}&y_{p-1}\\ ay_{p-1}&x_{p-1}+by_{p-1}\end{pmatrix}\) & \(x^{2}-bx-a\) irreducible & \(p\mid q+1\) \\ \end{tabular} In conclusion, by combining all the information acquired from the previous results with the fact that conjugacy classes form a partition of \(G\), we obtain the main result of this subsection. **Theorem 4.10** (\(\mathsf{Conj}(\mathsf{GL}(2,q))\)-coloring of Torus Knots).: _Let \(m,p\in\mathbb{N}\) be such that \(m\geq 2\) and \(p\) is prime with \(p\nmid m\). The torus knot \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(\mathsf{GL}(2,q))\)-colorable if and only if \(p\mid q(q+1)(q-1)\)._
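As a sanity check of Theorem 4.10, here is a brute-force Python sketch (ours, not part of the original paper) that enumerates \(\mathsf{GL}(2,3)\) and tests the centralizer criterion of Theorem 3.19 for small primes; the primes found are exactly those dividing \(q(q+1)(q-1)=24\), namely \(2\) and \(3\).

```python
from itertools import product

q = 3
def mul(A, B):
    # product of 2x2 matrices over F_q, entries as nested tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q
                       for j in range(2)) for i in range(2))

def power(A, n):
    R = ((1, 0), (0, 1))
    for _ in range(n):
        R = mul(R, A)
    return R

# all invertible 2x2 matrices over F_3 (|GL(2,3)| = 48)
G = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)
     if (a * d - b * c) % q != 0]

def criterion(p):  # Theorem 3.19: exists u with C(u^p) \ C(u) nonempty
    for u in G:
        up = power(u, p)
        if any(mul(g, up) == mul(up, g) and mul(g, u) != mul(u, g) for g in G):
            return True
    return False

print({p for p in (2, 3, 5, 7, 11) if criterion(p)})  # {2, 3}: p | q(q+1)(q-1)
```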
### Special linear group

We proceed in a similar manner as in the case of \(\mathsf{GL}(2,q)\), distinguishing among representatives of conjugacy classes and their centralizers. Once again, throughout this subsection, with \(p\) and \(q\) we denote two prime numbers, and the special linear group \(\mathsf{SL}(2,q)\) is denoted by \(G\). Our objective is to obtain a numerical characterization for the colorability of \(\mathsf{K}(m,p)\) solely in terms of \(p\) and \(q\). \begin{tabular}{c|c|c|c} Type & \(u\) & & \(\mathsf{C}_{\mathsf{SL}(2,q)}(u)\) \\ \hline Type 1 & \(\begin{pmatrix}a&0\\ 0&a\end{pmatrix}\) & \(a^{2}=1\) & \(\mathsf{SL}(2,q)\) \\ Type 2 & \(\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\) & \(a\neq 0\) & \(\left\{\begin{pmatrix}u&0\\ 0&u^{-1}\end{pmatrix}\in\mathsf{SL}(2,q)\colon u\neq 0\right\}\) \\ Type 3 & \(\begin{pmatrix}a&b\\ 0&a\end{pmatrix}\) & \(a^{2}=1\), \(b=1\) or \(b\) non-square & \(\left\{\begin{pmatrix}u&v\\ 0&u\end{pmatrix}\in\mathsf{SL}(2,q)\colon u^{2}=1\right\}\) \\ Type 4 & \(\begin{pmatrix}0&1\\ -1&a\end{pmatrix}\) & \(a=r+r^{q},r\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q},r^{q+1}=1\) & \(\left\{\begin{pmatrix}u&v\\ -v&u+av\end{pmatrix}\in\mathsf{SL}(2,q)\colon u(u+av)+v^{2}=1\right\}\) \\ \end{tabular} **Remark 4.11**.: The subsequent propositions can be proven using the same techniques presented for the case of \(\mathsf{GL}(2,q)\) above. In many cases, the computations turn out to be exactly the same; however, in some instances, we encounter certain refinements due to the fewer parameters involved. The proofs are substantially identical to those presented in Section 4.1. Therefore, in the following discussion, we simply state the results for the case where \(G=\mathsf{SL}(2,q)\). **Proposition 4.12**.: _Let \(u\in G\) be a matrix of type 1. Then \(\mathsf{C}_{G}(u)=\mathsf{C}_{G}(u^{p})=G\)._ **Proposition 4.13**.: _Let \(u\in G\) be a matrix of type 2. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p\mid q-1\)._ **Proposition 4.14**.: _Let \(u\in G\) be a matrix of type 3. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p=q\)._ **Proposition 4.15**.: _Let \(u\in G\) be a matrix of type 4. Then \(\mathsf{C}_{G}(u^{p})\setminus\mathsf{C}_{G}(u)\neq\emptyset\) if and only if \(p\mid q+1\)._ Combining the previous results, we obtain exactly the same numeric condition as in Theorem 4.10. We may therefore strengthen that statement, adjoining the claim related to the special linear group. **Theorem 4.16**.: _Let \(m,p\in\mathbb{N}\) be such that \(m\geq 2\) and \(p\) is prime with \(p\nmid m\). The following are equivalent._ 1. _The torus knot_ \(\mathsf{K}(m,p)\) _is_ \(\mathsf{Conj}(\mathsf{GL}(2,q))\)_-colorable._ 2. _The torus knot_ \(\mathsf{K}(m,p)\) _is_ \(\mathsf{Conj}(\mathsf{SL}(2,q))\)_-colorable._ 3. \(p\mid q(q+1)(q-1)\)_._ 

### Dihedral group

We now proceed to discuss the dihedral groups. We denote the dihedral group of the \(n\)-gon as \(\mathsf{D}_{n}\), and employ the following presentation: \[\mathsf{D}_{n}=\left\langle r,s\colon r^{n}=s^{2}=1,\ srs=r^{-1}\right\rangle.\] Our objective is to derive a numerical characterization for the colorability of \(\mathsf{K}(m,p)\) solely in terms of \(p\) and \(n\). **Theorem 4.17** (\(\mathsf{Conj}(\mathsf{D}_{n})\)-coloring of Torus Knots).: _Let \(m,p\in\mathbb{N}\) be such that \(m\geq 2\) and \(p\) is prime with \(p\nmid m\). The torus knot \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(\mathsf{D}_{n})\)-colorable if and only if \(p\mid n\)._ Proof.: Assume \(p\mid n\). Then there is an element \(u=r^{\frac{n}{p}}\in\mathsf{D}_{n}\) of order \(p\). It is well known that \[\mathsf{Z}(\mathsf{D}_{n})=\begin{cases}\{1\}&n\text{ odd}\\ \{1,r^{\frac{n}{2}}\}&n\text{ even}\end{cases}.\] If \(n\) is odd, then trivially \(u\not\in\mathsf{Z}(\mathsf{D}_{n})\).
If \(n\) is even, we also have \(u\not\in\mathsf{Z}(\mathsf{D}_{n})\): if that were the case, we would have \(\frac{n}{2}=\frac{n}{p}\), implying \(p=2\), which is excluded. By Remark 3.21, we conclude that \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(\mathsf{D}_{n})\)-colorable. Conversely, we assume that \(p\nmid n\) and prove that \(\mathsf{K}(m,p)\) is not \(\mathsf{Conj}(\mathsf{D}_{n})\)-colorable. By Theorem 3.19, it is enough to check that the condition \(\mathsf{C}_{\mathsf{D}_{n}}(u^{p})\setminus\mathsf{C}_{\mathsf{D}_{n}}(u)\neq\emptyset\) never holds. Recall that every element \(u\in\mathsf{D}_{n}\) may be uniquely expressed as \(u=s^{t}r^{k}\) for some \(t\in\{0,1\}\) and \(k\in\{0,\ldots,n-1\}\). For \(t=1\), \(u\) is an involution, hence \(\mathsf{C}_{\mathsf{D}_{n}}(u^{p})=\mathsf{C}_{\mathsf{D}_{n}}(u)\). Assume \(t=0\), that is, \(u=r^{k}\). A direct computation shows that \[\mathsf{C}_{\mathsf{D}_{n}}(r^{k})=\mathsf{C}_{\mathsf{D}_{n}}(r^{pk})=\begin{cases}\langle r\rangle&k\neq\frac{n}{2}\\ \mathsf{D}_{n}&k=\frac{n}{2}\end{cases}\] therefore, also in this case, \(\mathsf{C}_{\mathsf{D}_{n}}(u^{p})\setminus\mathsf{C}_{\mathsf{D}_{n}}(u)\neq\emptyset\) does not hold. 

### Symmetric group

We conclude this section and the paper with a discussion on the symmetric groups. We denote the symmetric group over \(n\) letters as \(\mathsf{S}_{n}\). Our goal is, again, to derive a numerical characterization for the colorability of \(\mathsf{K}(m,p)\) solely in terms of \(p\) and \(n\). **Theorem 4.18** (\(\mathsf{Conj}(\mathsf{S}_{n})\)-coloring of Torus Knots).: _Let \(m,p\in\mathbb{N}\) be such that \(m\geq 2\) and \(p\) is prime with \(p\nmid m\). The torus knot \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(\mathsf{S}_{n})\)-colorable if and only if \(p\leq n\)._ Proof.: If \(p\leq n\), then \(p\mid n!=|\mathsf{S}_{n}|\), hence \(\mathsf{S}_{n}\) has an element of order \(p\), which is not in the centre because \(\mathsf{Z}(\mathsf{S}_{n})=\{1\}\). Remark 3.21 allows us to conclude that \(\mathsf{K}(m,p)\) is \(\mathsf{Conj}(\mathsf{S}_{n})\)-colorable. Conversely, we assume that \(p>n\) and prove that \(\mathsf{K}(m,p)\) is not \(\mathsf{Conj}(\mathsf{S}_{n})\)-colorable. By Theorem 3.19, it is enough to check that the condition \(\mathsf{C}_{\mathsf{S}_{n}}(u^{p})\setminus\mathsf{C}_{\mathsf{S}_{n}}(u)\neq\emptyset\) never holds. Let \(u\in\mathsf{S}_{n}\) and consider its complete factorization in disjoint cycles \(u=\sigma_{1}\ldots\sigma_{t}\), for some \(t\geq 1\). Since disjoint cycles commute, we have \(u^{p}=\sigma_{1}^{p}\ldots\sigma_{t}^{p}\). In particular, since \(p>n\) is prime, \(p\) is coprime to every cycle length, so if \(\sigma\) is an \(r\)-cycle, then \(\sigma^{p}\) is also an \(r\)-cycle. This implies that \(u\) and \(u^{p}\) have the same cycle structure, hence they are conjugate in \(\mathsf{S}_{n}\). Consider \(\tau\in\mathsf{S}_{n}\) such that \(u^{p}=\tau u\tau^{-1}\). Then \(\mathsf{C}_{\mathsf{S}_{n}}(u^{p})=\mathsf{C}_{\mathsf{S}_{n}}(\tau u\tau^{-1})=\tau\mathsf{C}_{\mathsf{S}_{n}}(u)\tau^{-1}\), hence \(|\mathsf{C}_{\mathsf{S}_{n}}(u^{p})|=|\mathsf{C}_{\mathsf{S}_{n}}(u)|\). Since it always holds that \(\mathsf{C}_{\mathsf{S}_{n}}(u)\leq\mathsf{C}_{\mathsf{S}_{n}}(u^{p})\), the two centralizers must be equal, hence \(\mathsf{K}(m,p)\) is not \(\mathsf{Conj}(\mathsf{S}_{n})\)-colorable if \(p>n\).
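The characterization of Theorem 4.18 is again easy to confirm computationally for small \(n\); the Python sketch below (illustrative only, not part of the original paper) reuses the centralizer criterion of Theorem 3.19 on symmetric groups encoded as permutation tuples.

```python
from itertools import permutations

def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))

def power(f, k):
    r = tuple(range(len(f)))
    for _ in range(k):
        r = compose(r, f)
    return r

def criterion(G, p):  # Theorem 3.19 condition on the group G
    for u in G:
        up = power(u, p)
        if any(compose(g, up) == compose(up, g) and
               compose(g, u) != compose(u, g) for g in G):
            return True
    return False

for n in (3, 4, 5):
    Sn = list(permutations(range(n)))
    print(n, {p for p in (2, 3, 5, 7) if criterion(Sn, p)})
# prints {2, 3}, {2, 3}, {2, 3, 5}: exactly the primes p <= n
```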
## 5 Conclusions

We end this paper by posing a few questions for potential future research. By applying the braid description of torus knots, we have derived characterization theorems for coloring torus knots by conjugation quandles over specific groups. Is it possible to extend this approach to additional small groups? **Problem 1**.: Characterize the conjugation quandle coloring of \(\mathsf{K}(m,n)\) using other small groups. Moreover, there exists a knot-theoretical tool that associates a polynomial with any given knot, encoding certain of its properties: the Alexander polynomial (see [11]). The Alexander polynomials of torus knots are well understood and easy to manipulate, involving specific divisibility conditions on their parameters. It is only natural to inquire whether there is a correlation between Alexander polynomials and colorings. **Problem 2**.: What are the relations (if any) between the conjugation quandle coloring of \(\mathsf{K}(m,n)\) and its Alexander polynomial? Furthermore, a well-known family of satellite knots is the one consisting of Whitehead doubles. For a given knot \(K\), its Whitehead double \(\mathsf{W}(K)\) is constructed by duplicating its arcs and introducing two additional crossings (see [10]). Is it feasible to formulate a characterization theorem for the quandle colorability of \(\mathsf{W}(\mathsf{K}(m,n))\) using a strategy akin to what we achieved in Theorem 3.19? Ideally, this approach would begin by simplifying the task, initially omitting divisors and following a similar pattern as seen in Theorem 3.17. **Problem 3**.: Characterize the conjugation quandle colorability of the Whitehead double of \(\mathsf{K}(m,n)\). 

## Acknowledgements

This paper is built upon research carried out during the Ph.D. studies of the author, who received partial support from both the GAUK grant (301-10/252012) and Z. Patakova's Primus grant (301-45/247107). Furthermore, the author wishes to express gratitude to their Ph.D. advisor, David Stanovsky, for his patient guidance and inquisitive approach. Additionally, appreciation is extended to Petr Vojtechovsky for offering valuable insights that have enriched specific prior results.
2303.05230
Classifying the universal coarsening dynamics of a quenched ferromagnetic condensate
Scale invariance and self-similarity in physics provide a unified framework to classify phases of matter and dynamical properties of near-equilibrium systems. However, extending this framework to far-from-equilibrium quantum many-body systems and categorizing their dynamics have remained a major challenge in physics. Here, we report on the first classification of universal coarsening dynamics in a quenched two-dimensional ferromagnetic spinor Bose gas. We observe spatiotemporal scaling of spin correlation functions with distinguishable scaling exponents, $1/z=0.58(2)$ and $1/z=0.43(2)$, characteristic, respectively, of binary and diffusive fluids. We find that the universality class of the coarsening dynamics is determined by the symmetry of the order parameters and the annihilation dynamics of the topological defects. These observations are in excellent agreement with many-body simulations. Our results represent a paradigmatic example of categorizing far-from-equilibrium dynamics in quantum many-body systems.
SeungJung Huh, Koushik Mukherjee, Kiryang Kwon, Jihoon Seo, Simeon I. Mistakidis, H. R. Sadeghpour, Jae-yoon Choi
2023-03-09T13:08:38Z
http://arxiv.org/abs/2303.05230v1
# Classifying the universal coarsening dynamics of a quenched ferromagnetic condensate ###### Abstract Scale invariance and self-similarity in physics provide a unified framework to classify phases of matter and dynamical properties of near-equilibrium systems. However, extending this framework to far-from-equilibrium quantum many-body systems and categorizing their dynamics have remained a major challenge in physics. Here, we report on the first classification of universal coarsening dynamics in a quenched two-dimensional ferromagnetic spinor Bose gas. We observe spatiotemporal scaling of spin correlation functions with distinguishable scaling exponents, \(1/z=0.58(2)\) and \(1/z=0.43(2)\), characteristic, respectively, of binary and diffusive fluids. We find that the universality class of the coarsening dynamics is determined by the symmetry of the order parameters and the annihilation dynamics of the topological defects. These observations are in excellent agreement with many-body simulations. Our results represent a paradigmatic example of categorizing far-from-equilibrium dynamics in quantum many-body systems. Critical phenomena occur at the points of second order phase transitions in classical and quantum systems, where the correlation length, which determines the fluctuations in the size of boundaries between different phases, diverges [1; 2]. Critical opalescence, first observed in the gas-liquid transition in carbon dioxide, renders the transparent liquid murky because the density fluctuations in the liquid become of the order of the wavelength of light. In quantum systems, examples are the Ising magnets, Curie transitions in ferromagnets, and Bose-Einstein condensates. Such static critical phenomena can be divided into universality classes, each class described by the same set of exponents. It was realized that two systems belonging to the same static universality class may belong to different dynamic classes [3]. The first direct measurement of the divergence of the correlation length in an inhomogeneous BEC in the thermodynamic limit was performed in [4]. Understanding and classifying the universality classes in the far from equilibrium dynamics of closed quantum many-body systems remain outstanding challenges in physics [5; 6; 7]. Although many aspects of isolated quantum systems are fundamentally different from those of traditional systems with thermal baths [3; 8], far out-of-equilibrium quantum systems can display universal behavior on approach to thermal equilibrium [9; 10; 11; 12; 13; 14; 15]. Recent theories [16; 17; 18; 19; 20; 21] have proposed a comprehensive picture of the emerging universal dynamics, where a system driven far from equilibrium undergoes critical slowing down and displays self-similar time evolution associated with nonthermal fixed points. Ultracold atomic quantum simulators have been ideal platforms for studying universal dynamics because of their high degree of isolation from the environment and exquisite controllability of parameters and interactions [5; 6; 7]. Recent experiments in one-dimensional Bose gases have observed spatiotemporal scaling of the structure factor of the spin correlation functions [11] and the momentum distribution of the atomic cloud [12]. In a three-dimensional homogeneous system, dynamic scaling is observed in both the infrared and ultraviolet regimes [13].
Despite these findings, the scaling exponents do not agree with the nonthermal universality classes [21; 22; 23; 24], leaving open the question of what determines the universal scaling exponents in nonequilibrium quantum systems and how to classify the far from equilibrium dynamics in these systems. Here, we observe universal coarsening dynamics in a quenched strongly ferromagnetic superfluid in two dimensions (2D). We demonstrate that universality can be classified by: i) the symmetry of the order parameter in the post-quench phase and ii) the merging and annihilation dynamics of the associated topological defects, such as domain walls and vortices. Quenching the quadratic Zeeman energy (QZE), magnetic domains of relatively small size are spontaneously generated and subsequently merge, entering the coarsening stage in the long-time evolution. Monitoring the spin correlation functions at various hold times, we confirm that the dynamics is self-similar regardless of varying experimental conditions. Specifically, when the ground state after the quench has \(\mathbb{Z}_{2}\) (spin inversion) symmetry, the domain growth dynamics can be described by the universal scaling exponent \(1/z\simeq 0.58(2)\). At high momentum, the so-called "Porod tail" [8] is also observed in the structure factor as an imprint of the universal character of the dynamics, associated with magnetic domain formation with sharp edges. The results show that the emergent dynamics belongs to a binary fluid universality class in the inertial hydrodynamic regime [3; 25; 26]. Tuning the Hamiltonian to have SO(3), i.e. spin rotation symmetry, the characteristics of the ensuing magnetic domain coarsening are modified. In the diffusive growth dynamics of the domain length [27], the scaling exponent is \(1/z\simeq 0.43(3)\), which belongs to the nonthermal universality class of the O(\(N\)) symmetric Hamiltonian [21; 24]. Utilizing matter-wave interferometry, we identify the formation of spin vortices and argue that their annihilation is closely related to the observed diffusive dynamics. Our experiments begin with preparing a 2D degenerate spin-1 Bose gas of \({}^{7}\)Li atoms in an optical dipole trap [28] experiencing a finite magnetic field. As such, the condensate is in the polar phase, where all atoms reside in the \(m_{z}=0\) hyperfine level [29]. To initiate the nonequilibrium dynamics, we switch on the microwave field that quenches the quadratic Zeeman energy from \(q/h=510\) Hz (polar phase) to a final value (Fig. 1a). This allows us to dynamically cross the phase boundaries and thus renders the initial polar state unstable, forming magnetic domains [30]. After a hold time \(t\), we measure the _in-situ_ atomic density for each spin state and record the magnetization either along the vertical \(M_{z}\) or the horizontal spin axis \(M_{x}\)[29]. A key feature of our system is the strongly ferromagnetic spin interactions [28], such that the characteristic time (length) scale is much shorter (smaller) compared to other alkali atomic systems. For instance, the spin interaction energy is \(c\simeq-h\times 160\) Hz and the characteristic time scale for domain formation is \(t_{s}=\hbar/2|c|\simeq 0.5\) ms [30]. Such strong interaction makes it possible to monitor the spinor gas for evolution times \(t\simeq 2\times 10^{3}\)\(t_{s}\simeq 1\) s, being long enough to study the emergent universal coarsening dynamics [25; 26; 27; 31].
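For orientation, the characteristic scales quoted above can be reproduced from standard constants. The following quick check is ours, not from the paper; the healing-length value \(\xi_{s}\simeq 2.2\ \mu\)m that it approximates is quoted further below in the text.

```python
import math

h = 6.62607015e-34            # Planck constant (J s)
hbar = h / (2 * math.pi)
c_abs = h * 160               # |c| ~ h x 160 Hz, as quoted in the text
m_Li7 = 7.016 * 1.66054e-27   # mass of a 7Li atom (kg)

t_s = hbar / (2 * c_abs)                      # characteristic spin time
xi_s = hbar / math.sqrt(2 * m_Li7 * c_abs)    # spin healing length
print(f"t_s  = {t_s*1e3:.2f} ms")             # 0.50 ms
print(f"xi_s = {xi_s*1e6:.1f} um")            # ~2.1 um (text quotes ~2.2 um)
```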
To validate the experimental observations, we perform extensive numerical simulations of the underlying Gross-Pitaevskii equations tailored to the experimental setup. The truncated Wigner approximation is employed [32], accounting for quantum and thermal fluctuations in the initial polar state [29]. We first investigate the nonequilibrium dynamics in the easy-axis ferromagnetic phase, \(q_{\rm EA}/h=-200\) Hz. The order parameter in the easy-axis has \(U(1)\times\mathbb{Z}_{2}\) symmetry, supporting the formation of magnetic domain walls as topological defects (Fig. 1b). After the quench, the polar phase is dynamically unstable and atom pairs with \(|F=1,m_{z}=\pm 1\rangle\) (\(|\pm 1\rangle\)) spin states and opposite momenta are generated. The kinetic energy, \(\epsilon_{k}\), of the created spin states stems from the post-quench QZE and the associated spin interaction energy, \(\epsilon_{k}=-q_{\rm EA}-c\)[33; 34]. Since the kinetic energy is comparable to the condensate chemical potential \(\mu/h=310\) Hz, we can be sure that the spinor gas is driven far from equilibrium. At early times, \(t<10\) ms, spin-mixing takes place and the populations of the spin \(|\pm 1\rangle\) states increase exponentially, reaching a steady value after 100 ms. In the course of the spin-mixing process, gauge vortices appear in the \(|\pm 1\rangle\) states, which either annihilate or drift out of the condensate, giving way to magnetic domains [29]. Afterward, the number of spin domains decreases and their size increases, resulting in a process known as coarsening dynamics (Fig. 1d-f). During the coarsening dynamics, the time evolution displays a self-similar behavior characterized by a universal scaling law, in which the condensate is far from both its initial and its equilibrium state. For longer evolution times (\(t\sim 2\) s), only a few domains are left, and coarsening is terminated [29]. The scaling behavior can be understood by analyzing the equal time correlation function of the longitudinal magnetization [26], \(G_{z}(\mathbf{r},t)=\frac{1}{\mathcal{N}}\int d^{2}\mathbf{r}^{\prime}\langle M_{z}(\mathbf{r}+\mathbf{r}^{\prime},t)M_{z}(\mathbf{r}^{\prime},t)\rangle\), depicted in Fig. 1g. Here, \(\mathbf{r}=(x,y)\) and \(\mathcal{N}=\int d^{2}\mathbf{r}^{\prime}\langle M_{z}(\mathbf{r}^{\prime},t)^{2}\rangle\) is the normalization factor. In the inset of Fig. 2a, we present the radial profile of the spin correlation functions \(G_{z}(r,t)\) at various hold times. The anti-correlation captured by \(G_{z}(r,t)\) indicates the creation of magnetic domains in opposite spin states. We quantify, both in theory and experiment, the average domain size \(L(t)\) as the first zero of the correlation function, \(G_{z}(L,t)=0\)[26]. Indeed, upon rescaling the radial distance, \(r\to r/L(t)\), the correlation function at various hold times collapses onto a single curve, \(\mathcal{G}[r/L(t)]\) (Fig. 2a), indicating the self-similar character of the universal dynamics. The universal growth dynamics is characterized by the power law increase of the domain length \(L(t)\sim t^{1/z}\) (Fig. 2b), where the dynamical critical exponent \(1/z\) determines the universality class of the emergent coarsening dynamics. Since in the easy-axis phase the spinor gas reduces to a binary superfluid system consisting of only the \(m_{z}=\pm 1\) components, the coarsening dynamics belongs to a binary fluid universality class or Model H [3].
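The definitions just given translate directly into a numerical recipe. Below is a minimal NumPy sketch of ours (synthetic data; the grid size and smoothing scale are arbitrary assumptions) that estimates the azimuthally averaged autocorrelation of \(M_{z}\) and extracts \(L(t)\) as its first zero:

```python
import numpy as np

def domain_length(Mz, dx=1.0):
    """L(t) from the first zero of the azimuthally averaged autocorrelation."""
    Mz = Mz - Mz.mean()                       # so the correlation decays to ~0
    F = np.fft.fft2(Mz)
    corr = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)
    corr /= corr.max()                        # normalize by the zero-lag value
    ny, nx = Mz.shape
    yy, xx = np.indices(corr.shape)
    r = np.hypot(xx - nx // 2, yy - ny // 2)
    prof = np.array([corr[(r >= b) & (r < b + 1)].mean()
                     for b in range(min(nx, ny) // 2)])
    zeros = np.flatnonzero(prof < 0)          # radii where G_z turns negative
    return (zeros[0] * dx if zeros.size else np.nan), prof

# toy magnetization: the sign of a smoothed random field gives +-1 domains
rng = np.random.default_rng(0)
k = np.fft.fftfreq(128)
kk = k[:, None] ** 2 + k[None, :] ** 2
field = np.fft.ifft2(np.fft.fft2(rng.normal(size=(128, 128)))
                     * np.exp(-kk / 2e-3)).real
L, _ = domain_length(np.sign(field))
print(L)   # a finite first-zero radius, playing the role of L(t)
```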
Previous numerical studies operating in the thermodynamic limit indeed confirmed this argument and predicted the scaling exponent to be \(1/z=2/3\)[25; 26; 31]. Figure 2b shows the power law growth of \(L(t)\) as extracted from both experiment and theory. The scaling exponent in the experiment (open circles) is \(1/z=0.57(2)\), which is in excellent agreement with our mean-field simulations, \(0.59(1)\), using the experimental parameters.

Figure 1: **Universal coarsening dynamics and topological defects.****a,** Schematic diagram of the experimental sequence. Ramping the quadratic Zeeman energy \(q\), the initially prepared polar condensate is quenched to a magnetic phase (\(q<q_{c}\)). Universal coarsening dynamics are investigated at (i) \(q_{\rm EA}<0\), the easy-axis phase with \(\mathbb{Z}_{2}\) spin symmetry, and (ii) \(q_{\rm Iso}=0\), the isotropic ferromagnetic phase with SO(3) symmetry. **b,** Magnetic domain in the easy-axis ferromagnetic phase. **c,** Spin texture of the spin vortex in the isotropic ferromagnetic phase. The magnetization vector in each regime is shown on the left side of the defects in terms of the spin sphere. **d-f,** Snapshot images of magnetization at various hold times after quenching the polar condensate to the easy-axis phase, \(q_{\rm EA}/h=-200\) Hz. **g,** Correlation function of the longitudinal magnetization \(G_{z}(x,y)\) at \(t=200\) ms. The data is averaged over 100 experimental realizations.
The \(k^{-3}\) scaling behavior originates from a linear decay of the correlation function with sharp domain wall edges among the \(m_{z}=\pm 1\) states [8], which is confirmed in our experiment by imaging the \(M_{z}\) and \(M_{x}\)[29]. Figure 3: **Dynamic scaling of the spin structure factor in the easy-axis quench.** The structure factor of longitudinal magnetization \(S_{z}(k,t)\) is rescaled by the domain length: \(S_{z}\to S_{z}/L^{2}\) and \(k\to kL\). The dashed line is the universal Porod tail \(S_{z}(k,t)\sim k^{-3}\) with an offset for clarity. The vertical line (red) represents the momentum resolution for \(t=0.2\text{ s}\). Inset shows the structure factor with compensation, \(\tilde{S}_{z}=\xi_{z}(k,t)Lk^{3}\). The solid line is the numerically calculated structure factor after rescaling. Figure 2: **Dynamic scaling and power law growth of the domain length.****a,** Scaled correlation function \(\mathcal{G}(r/L)\) at various hold times, \(t\in[0.2\text{ s},0.8\text{ s}]\). Longitudinal spin correlation functions at various hold times (inset) collapse onto a single function after rescaling the radial position by a characteristic length \(L(t)\). The domain size \(L(t)\) is set by a distance with \(G_{z}(r,t)=0\). The solid line represents the numerical result when using the experimental parameters. **b,** Power law growth of the domain length \(L(t)\). Data with closed (open) circles represent rescaled domain length after (without) deconvolution. The solid line is the theory line. The oscillatory behavior comes from the breathing motion of the condensates. Small deviations observed at long evolution times (\(t\sim 1\text{ s}\)) between the theory and experiment are attributed to atom losses by microwave dressing. The dashed line represents a power law function \(L(t)\sim t^{1/z}\) with \(1/z=0.61\), which is obtained from a linear fit in the log-log plot of the domain growth dynamics (inset). Each data point is obtained with more than 100 independent experimental runs, and one standard error of the mean (s.e.m) is smaller than the data point. To demonstrate universality, we further investigate the quench dynamics with three different experimental configurations (Fig. 4a): (i) We take an equal superposition of \(|\pm 1\rangle\) states at \(q/h=510\) Hz as the initial state and study the quench dynamics at \(q_{\mathrm{EA}}/h=-200\) Hz. The initial state has different dynamical instability from the polar condensate [33; 36], and we observe domain separation [37] instead of spin pair generation. (ii) Many vortices and anti-vortices are imprinted in the polar condensate by dragging a repulsive barrier before the quench [29], and we investigate the effect of vortices on the coarsening dynamics. (iii) We prepare the polar condensate and quench QZE to \(q_{\mathrm{EA}}/h=-120\) Hz, which is smaller than the reference experiment (\(q_{\mathrm{EA}}/h=-200\) Hz) but still in the easy-axis phase. In this case, the decay time of \(L(t)\) from the microwave dressing is increased from \(7\) s to \(40\) s. Even with such different experimental configurations, we obtain the same universal curve upon rescaling the spin correlation function (Fig. 4a), and the dynamical scaling exponents are all approximately \(1/z\simeq 0.58\) (Fig. 4b). This highlights the insensitivity of the universal coarsening dynamics to experimental details, which contrasts with the near equilibrium critical phenomena [3] that require a fine tuning of system parameters. 
Furthermore, the scaling exponent is far different from that of other universality classes of binary fluids, such as viscous hydrodynamics with \(z=1\) or diffusive dynamics with \(z=3\)[8]. We reaffirm that the coarsening dynamics of the 2D ferromagnetic superfluid in the easy-axis phase belongs to the binary fluid universality class in the inertial hydrodynamic regime [3]. We now turn our attention to examine the coarsening dynamics at \(q_{\mathrm{Iso}}=0\) (Fig. 1c), where, in contrast to the easy-axis phase, the ground state is invariant under spin rotations. Therefore, we aim to investigate the impact of the symmetry of the order parameter, here obeying SO(3) rotational symmetry, and of topological defects on the universal behavior of the spinor system. Since the first homotopy group of SO(3) is \(\pi_{1}\left[\mathrm{SO}(3)\right]=\mathbb{Z}_{2}\), the condensates support \(\mathbb{Z}_{2}\) spin vortices as topological defects [33]. A recent study shows that universal coarsening dynamics could also occur at the spin isotropic point like in the easy-axis phase, but with a different exponent \(1/z=0.5\)[27]. The result is consistent with the theory of nonthermal fixed points, which predicts the dynamical scaling exponent \(\beta\simeq 0.5\) for a bosonic scalar model with O(\(N\)) or U(\(N\)) symmetry in dimensions \(d\geq 2\)[24; 21; 38]. The numerical study [27] also reports that the domain growth dynamics could be associated with the annihilation of the \(\mathbb{Z}_{2}\) spin vortices. Figure 5 summarizes the experimental results on the coarsening dynamics under the spin isotropic Hamiltonian. Since the spin vectors can point in an arbitrary direction, domain coarsening is observed in both \(M_{\mathrm{x}}\) and \(M_{z}\)[29]. Following the same analysis as in the easy-axis quench experiment, we rescale the correlation functions by \(L(t)\) and observe their collapse into a single curve (Fig. 5a), in line with the mean-field analysis. The newly obtained universal curves are similar in each axis measurement but are distinctive from those of the easy-axis quench experiment (Fig. 5a inset), implying that the dynamics at the spin isotropic point belong to different universality classes. This can be further supported by the scaling exponent in the domain length \(L(t)\sim t^{1/z}\) (Fig. 5b). We find the scaling exponents to be \(1/z\simeq 0.45(3)\) for \(M_{x}\) and \(1/z\simeq 0.41(2)\) for \(M_{z}\). Here, the exponents are close to the prediction \(1/z=0.5\) and show good agreement with the finite size numerical simulations, \(1/z_{\mathrm{sim}}\simeq 0.40(1)\). To identify the underlying mechanism responsible for the coarsening dynamics in the SO(3) phase, we monitor the spin vortices and study their decay dynamics during the coarsening stage. For this reason, matter-wave interferometry is adopted that can identify the position of the vortex cores by reading out the relative phase winding between spin states [29]. Fig. 5c shows interference patterns at \(t=1\) s after quenching, where the fork-shaped fringes are well represented in all three spin components. We also observe images with closely bounded vortex and anti-vortex pairs (Fig. 5e). The existence of these spin vortices and vortex pairs is well reproduced in the simulated interference images (Fig. 5d,f). Assigning the position of the spin vortex (vortex pairs) to the joint point in the fork-shaped (\(H\)-shaped) patterns, we count the spin vortex number \(N_{S}\) and calculate the average distance between vortices \(l_{S}\) at various hold times (Fig. 5g). The vortex number gradually decays, while the mean distance increases as time evolves. Since the imaging resolution is larger than the spin healing length, we underestimate the vortex number when the condensate contains many vortices. Nevertheless, the vortex number scales with the domain size, such that \(N_{S}\sim 1/L(t)^{2}\) and \(l_{S}\sim L(t)\). The decay process of spin vortex pairs occurs at a similar timescale [29], hinting at an intricate connection with the universal coarsening dynamics in the isotropic SO(3) symmetric phase. This is further supported by our numerical simulations, where we are able to calculate the argument of the transverse spin vector and track the respective phase jumps [29]. In conclusion, we observe universal coarsening dynamics in two dimensions utilizing a strongly ferromagnetic spinor condensate. We find that the universal dynamics can be categorized into a well-defined universality class based on the symmetry of the order parameter and the dynamics of topological defects, such as domain walls and spin vortices. Our research demonstrates diverse capabilities of cold atom quantum simulators in characterizing nonequilibrium quantum dynamics, thus providing a steppingstone to a comprehensive understanding of the quantum thermalization process in multi-disciplinary research fields. Further extensions include the investigation of the universal dynamics mediated by other types of excitations, such as vortices in a two-dimensional superfluid [38; 39; 14], solitons in one dimension [23; 40], magnons in the Heisenberg spin model [41; 42], and chiral quantum magnetization with spin-orbit interaction [43; 44]. Moreover, our strongly interacting platform offers new opportunities to explore long-time thermalization dynamics in two dimensions, where the long-lived topological defects can slow down equilibration [45].

Figure 4: **Universal coarsening dynamics in the easy-axis ferromagnetic phase.****a,** Scaled spin correlation functions \(\mathcal{G}(r/L)\) in the coarsening stage regarding four different experimental configurations (see main text). The reference experiment refers to the coarsening dynamics at \(q_{\mathrm{EA}}/h=-200\) Hz with the polar phase as initial state (Fig. 2a). The inset shows spin-resolved absorption images for the various initial conditions. The vortices in (ii) can be identified after \(6\) ms of short time-of-flight. **b,** Dynamical scaling exponents under various configurations. Data with closed (open) circles represent the exponent after (without) deconvolution. Error bars denote the fit errors (resampling error is smaller than the fit errors). The shaded line is the dynamic exponent obtained from our finite-sized simulations, and the solid line indicates the dynamic exponent for the inertial hydrodynamic regime in the thermodynamic limit, \(1/z=2/3\). The experimental results are distinguished from other dynamic exponents in a binary fluid universality class in the viscous \(1/z=1\) (dashed line) and diffusive \(1/z=1/3\) (dashed-dot lines) regimes.

Figure 5: **Coarsening dynamics in the isotropic ferromagnetic phase.****a,** Scaled correlation functions of magnetization along the \(x\) (red square) and \(z\) axes (blue circles) in the experiment as well as from theory (solid line). The domain length \(L\) is characterized by the first zero of the correlation function for both axes, \(G_{x,z}(L,t)=0\). Inset: differences in scaled correlation functions. The solid line is \(\Delta\mathcal{G}=\mathcal{G}_{\text{q}_{\text{m}}}-\mathcal{G}_{\text{q}_{\text{EA}}}\), where \(\mathcal{G}_{\text{q}_{\text{m}}}=(\mathcal{G}_{x}+\mathcal{G}_{z})/2\) and \(\mathcal{G}_{\text{q}_{\text{EA}}}\) are obtained from interpolating data in Fig. 2a. The dashed line is \(\mathcal{G}_{x}-\mathcal{G}_{z}\). **b,** Growth dynamics of the spin domain in each axis. The solid line is the numerical result using our experimental parameters. The inset shows a log-log plot of the domain length dynamics, where we extract the scaling exponent by a linear fit (dashed lines). **c,** Matter-wave interference images during the coarsening dynamics (\(t=1\) s). The two-to-one (three-to-one) fork-shaped fringes in the spin \(|\pm 1\rangle\) (\(|0\rangle\)) state represent phase windings of \(2\pi\) (\(4\pi\)) around the vortex core. **d,** Simulated interference pattern with a spin vortex at the trap center. **e,** Interference images with spin vortex and anti-vortex pairs at \(t=1\) s (orange circle). **f,** Numerical simulations with the vortex pair located at the trap center (\(H\)-shaped pattern, orange box). **g,** Number of spin vortices \(N_{S}\) as a function of time. Inset: the intervortex distance \(l_{S}\) during time evolution. Solid lines are power-law guidelines, \(N_{S}(t)\sim 1/L(t)^{2}\) and \(l_{S}\sim L(t)\). The vortex number is the average over 40 independent experiments, and the error bars indicate 1 standard error of the mean.

###### Acknowledgements. We acknowledge discussions with Immanuel Bloch, Suk-Bum Chung, Fang Fang, Timon Hilker, Garyfallia Katsimiga, Panayotis Kevrekidis, Kyungtae Kim, Se Kwon Kim, Sonjoy Majumder, Stephanie M. Reimann, and Yong-il Shin. J. C. is supported by the Samsung Science and Technology Foundation BA1702-06, National Research Foundation of Korea (NRF) Grant under Projects No. RS-2023-00207974, and KAIST UP program. S. I. M and H. R. S. gratefully acknowledge financial support from the NSF through a grant for ITAMP at Harvard University. K.M. is financially supported by Knut and Alice Wallenberg Foundation (KAW No. 2018.0217) and the Swedish Research Council. K.M. also acknowledges MHRD, Govt. of India for a research fellowship at the early stage of this work.
2302.12465
PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction
Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations for the model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNN) have recently shown superior performance in many graph ML problems than traditional methods, and explaining them has attracted increased interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task and corresponds to web applications like recommendation and sponsored search on web. Given existing GNN explanation methods only address node/graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link) that generates explanations with connection interpretability, enjoys model scalability, and handles graph heterogeneity. Qualitatively, PaGE-Link can generate explanations as paths connecting a node pair, which naturally captures connections between the two nodes and easily transfer to human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by 9 - 35% and are chosen as better by 78.79% of responses in human evaluation.
Shichang Zhang, Jiani Zhang, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos, Yizhou Sun
2023-02-24T05:43:47Z
http://arxiv.org/abs/2302.12465v3
# PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction

###### Abstract.
Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations for the model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNN) have recently shown performance superior to traditional methods in many graph ML problems, and explaining them has attracted increased interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task and corresponds to web applications like recommendation and sponsored search on the web. Given that existing GNN explanation methods only address node/graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (_PaGE-Link_) that generates explanations with _connection interpretability_, enjoys model _scalability_, and handles graph _heterogeneity_. Qualitatively, PaGE-Link can generate explanations as paths connecting a node pair, which naturally captures connections between the two nodes and transfers easily to human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by _9 - 35%_ and are chosen as better by _78.79%_ of responses in human evaluation.

Model Transparency, Model Explanation, Graph Neural Networks, Link Prediction
2) _Scalability:_ The size of the computation graph grows from \(m\) to \(\sim 2m\) edges compared to the node prediction task because neighbors of both the source and the target are involved. Since most existing methods consider all (edge-induced) subgraphs, the increased edges will scale the number of subgraph candidates by a factor of \(O(2^{m})\), which makes finding the optimal subgraph explanation much harder. 3) _Heterogeneity:_ Practical LP is often on heterogeneous graphs with rich node and edge types, e.g., a graph for recommendations can have user->buys->item edges and item->has->attribute edges, but existing methods only work for homogeneous graphs.
In light of the importance and challenges of GNN explanation for LP, we formulate it as a post hoc and instance-level explanation problem and generate explanations for it in the form of important paths connecting the source node and the target node. Paths have played substantial roles in graph ML and are the core of many non-GNN LP methods (Gan et al., 2015; Liu et al., 2016; Wang et al., 2017; Wang et al., 2018). Paths as explanations address both the connection interpretability and the scalability challenges. Firstly, paths connecting two nodes naturally explain connections between them. Figure 1 shows an example on a graph for recommendations. Given a GNN and a predicted link between user \(u_{1}\) and item \(i_{1}\), human-interpretable explanations may be based on the user's preference for attributes (e.g., user \(u_{1}\) bought item \(i_{2}\), which shares the same attribute \(a_{1}\) as item \(i_{1}\)) or on collaborative filtering (e.g., user \(u_{1}\) has a similar preference to user \(u_{2}\) because they both bought item \(i_{3}\), and user \(u_{2}\) bought item \(i_{1}\), so user \(u_{1}\) would like item \(i_{1}\)). Both explanations boil down to paths. Secondly, paths have a considerably smaller search space than general subgraphs. As we will see in Proposition 4.1, compared to the expected number of edge-induced subgraphs, the expected number of paths grows strictly slower and becomes negligible. Therefore, path explanations exclude many less-meaningful subgraph candidates, making the explanation generation much more straightforward and accurate. To this end, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link), which achieves a better explanation AUC and scales linearly in the number of edges (see Figure 2). We first perform _k-core pruning_ (Beng et al., 2015) to help find paths and improve scalability. Then we perform _heterogeneous path-enforcing_ mask learning to determine important paths, which handles heterogeneity and enforces the explanation edges to form paths connecting the source to the target. In summary, the contributions of our method are:

* **Connection Interpretability:** PaGE-Link produces more interpretable explanations in path form and quantitatively improves explanation AUC over baselines.
* **Scalability:** PaGE-Link reduces the explanation search space by magnitudes, from subgraph finding to path finding, and scales linearly in the number of graph edges.
* **Heterogeneity:** PaGE-Link works on heterogeneous graphs and leverages edge-type information to generate better explanations.

## 2. Related Work

We review relevant research on (a) GNNs, (b) GNN explanation, (c) recommendation explanation, and (d) paths for LP. We summarize the properties of PaGE-Link vs. representative methods in Table 1.

_GNNs._ GNNs are a family of ML models on graphs (Gan et al., 2015; Wang et al., 2017; Wang et al., 2018). They take the graph structure and node/edge features as input and output node representations by transforming and aggregating the features of nodes' (multi-hop) neighbors. The node representations can be used for LP and have achieved great results in LP applications (Gan et al., 2015; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). We review GNN-based LP models in Section 3.

_GNN explanation._ GNN explanation has been studied for node and graph classification, where the explanation is defined as an important subgraph. Existing methods differ mainly in their definitions of importance and their subgraph selection methods.

Figure 1. Given a GNN model and a predicted link \((u_{1},i_{1})\) (dashed red) on a heterogeneous graph of user \(u\), item \(i\), and attribute \(a\) (left), PaGE-Link generates two path explanations (green arrows). Interpretations are illustrated on the right.

Figure 2. (a) PaGE-Link outperforms GNNExplainer and PGExplainer in terms of explanation AUC on the citation graph and the user-item graph. (b) The running time of PaGE-Link scales linearly in the number of graph edges.
GNNExplainer (GNNExplainer, 2018) selects edge-induced subgraphs by learning fully parameterized masks on graph edges and node features, where the mutual information (MI) between the masked graph and the prediction made with the original graph is maximized. PGExplainer (GNNExplainer, 2018) adopts the same MI importance but trains a mask predictor to generate a discrete mask instead. Other popular importance measures are game-theoretic values. SubgraphX (Wang et al., 2018) uses the Shapley value (Shapiro et al., 2018) and performs Monte Carlo Tree Search (MCTS) on subgraphs. GStarX (GStarX, 2018) uses a structure-aware HN value (Gan et al., 2015) to measure the importance of nodes and generates the important-node-induced subgraph. There are more studies from other perspectives that are less related to this work, e.g., surrogate models (Gan et al., 2015; Wang et al., 2018), counterfactual explanations (Wang et al., 2018), and causality (Wang et al., 2018; Wang et al., 2018), for which (Wang et al., 2018) provides a good review. While these methods produce subgraphs as explanations, what makes a good explanation is a complex topic, especially regarding how to meet "stakeholders' desiderata" (Gan et al., 2015). Our work differs from all of the above since we focus on the new task of explaining heterogeneous LP, and we generate paths instead of unrestricted subgraphs as explanations. The interpretability of paths makes our method advantageous, especially when stakeholders have less ML background.
In general, although black-box GNNs have recently outperformed path-based methods in LP accuracy, we embrace paths for their interpretability in LP explanation.

## 3. Notations and Preliminaries

In this section, we define the necessary notation, summarize it in Table 2, and review GNN-based LP models.

**Definition 3.1**.: A heterogeneous graph is defined as a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) associated with a node type mapping function \(\phi:\mathcal{V}\rightarrow\mathcal{A}\) and an edge type mapping function \(\tau:\mathcal{E}\rightarrow\mathcal{R}\). Each node \(v\in\mathcal{V}\) belongs to one node type \(\phi(v)\in\mathcal{A}\), and each edge \(e\in\mathcal{E}\) belongs to one edge type \(\tau(e)\in\mathcal{R}\).

Let \(\Phi(\cdot,\cdot)\) denote a trained GNN-based model for predicting the missing links in \(\mathcal{G}\), where \(Y=\Phi(\mathcal{G},(s,t))\) denotes the predicted link between a source node \(s\) and a target node \(t\). The model \(\Phi\) learns a conditional distribution \(P_{\Phi}(Y|\mathcal{G},(s,t))\) of the binary random variable \(Y\). The commonly used GNN-based LP models (Sohn et al., 2017; Sohn et al., 2017; Sohn et al., 2017) involve two steps. The first step is to generate node representations \((\mathbf{h}_{s},\mathbf{h}_{t})\) of \((s,t)\) with an \(L\)-hop GNN encoder. The second step is to apply a prediction head on \((\mathbf{h}_{s},\mathbf{h}_{t})\) to get the prediction of \(Y\); an example prediction head is the inner product. To explain \(\Phi(\mathcal{G},(s,t))\) with an \(L\)-layer GNN encoder, we restrict attention to the _computation graph_ \(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\). \(\mathcal{G}_{c}\) is the \(L\)-hop ego-graph of the predicted pair \((s,t)\), i.e., the subgraph with node set \(\mathcal{V}_{c}=\{v\in\mathcal{V}\,|\,dist(v,s)\leq L\text{ or }dist(v,t)\leq L\}\). It is called a computation graph because the \(L\)-layer GNN only collects messages from the \(L\)-hop neighbors of \(s\) and \(t\) to compute \(\mathbf{h}_{s}\) and \(\mathbf{h}_{t}\). The LP result is thus fully determined by \(\mathcal{G}_{c}\), i.e., \(\Phi(\mathcal{G},(s,t))\equiv\Phi(\mathcal{G}_{c},(s,t))\). Figure 3(b) shows a 2-hop ego-graph of \(u_{1}\) and \(i_{1}\), where \(u_{3}\) and \(a_{3}^{3}\) are excluded since they are more than 2 hops away from both \(u_{1}\) and \(i_{1}\).
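To make the computation-graph definition concrete, the following is a minimal Python sketch (our own illustration, not the authors' code), assuming `networkx` and ignoring node/edge types for brevity; the function name `computation_graph` is hypothetical.

```python
import networkx as nx

def computation_graph(G: nx.Graph, s, t, L: int) -> nx.Graph:
    """Return G_c, the L-hop ego-graph of the pair (s, t).

    Keeps exactly V_c = {v : dist(v, s) <= L or dist(v, t) <= L}, so an
    L-layer GNN's prediction for (s, t) is fully determined by G_c.
    """
    near_s = nx.single_source_shortest_path_length(G, s, cutoff=L)
    near_t = nx.single_source_shortest_path_length(G, t, cutoff=L)
    return G.subgraph(set(near_s) | set(near_t)).copy()
```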
## 4. Proposed Problem Formulation: Link-Prediction Explanation

In this work, we address a _post hoc_ and _instance-level_ GNN explanation problem. Post hoc means that the model \(\Phi(\cdot,\cdot)\) has already been trained; to generate explanations, we do not change its architecture or parameters. Instance-level means that we generate an explanation for the prediction of each instance \((s,t)\). Specifically, the explanation method answers the question of why a missing link is predicted by \(\Phi(\cdot,\cdot)\). In a practical web recommendation system, this question can be "_why is an item recommended to a user by the model_". An explanation for a GNN prediction should be some substructure in \(\mathcal{G}_{c}\), and it should also be concise, i.e., limited by a size budget \(B\). This is because a large explanation is often neither informative nor interpretable; an extreme case is that \(\mathcal{G}_{c}\) could be a non-informative explanation for itself. Also, a fair comparison between different explanations should consume the same budget. In the following, we define the budget \(B\) as the maximum number of edges included in the explanation.

We list three desirable properties for a GNN explanation method on heterogeneous LP: capturing the connection between the source node and the target node, scaling to large graphs, and addressing graph heterogeneity. A path-based method inherently possesses all three properties. Paths capture the connection between a pair of nodes and can be transferred to human-interpretable explanations. Besides, the search space of paths with a fixed source node and target node is greatly reduced compared to that of edge-induced subgraphs. Given the ego-graph \(\mathcal{G}_{c}\) of \(s\) and \(t\), the number of paths between \(s\) and \(t\) and the number of edge-induced subgraphs in \(\mathcal{G}_{c}\) both depend on the structure of \(\mathcal{G}_{c}\). However, they can be estimated using random graph approximations. The next proposition shows that the expected number of paths grows strictly slower than the expected number of edge-induced subgraphs as the random graph grows, so the expected number of paths becomes insignificant for large graphs.

**Proposition 4.1**.: _Let \(\mathcal{G}(n,d)\) be a random graph with \(n\) nodes and density \(d\), i.e., there are \(m=d\binom{n}{2}\) edges chosen uniformly at random from all node pairs. Let \(Z_{n,d}\) be the expected number of paths between any pair of nodes, and let \(S_{n,d}\) be the expected number of edge-induced subgraphs. Then \(Z_{n,d}=o(S_{n,d})\), i.e., \(\lim_{n\to\infty}\frac{Z_{n,d}}{S_{n,d}}=0\)._

Proof.: In Appendix A.

\begin{table}
Table 1. Methods and desired explanation properties (rows: On Graphs, Explains GNN, Explains LP, Connection, Scalability, Heterogeneity; PaGE-Link satisfies all of them). A question mark (?) means "unclear", or "maybe, after non-trivial extensions". "Rec. Exp." stands for the general recommendation explanation methods.
\end{table}

\begin{table}
\begin{tabular}{l|l}
\hline \hline
Notation & Definition and description \\
\hline
\(\mathcal{G}=(\mathcal{V},\mathcal{E})\) & a heterogeneous graph \(\mathcal{G}\), node set \(\mathcal{V}\), and edge set \(\mathcal{E}\) \\
\(\phi:\mathcal{V}\rightarrow\mathcal{A}\) & the node type mapping function \\
\(\tau:\mathcal{E}\rightarrow\mathcal{R}\) & the edge type mapping function \\
\(D_{v}\) & the degree of node \(v\in\mathcal{V}\) \\
\(\mathcal{E}^{r}\) & edges with type \(r\in\mathcal{R}\), i.e., \(\mathcal{E}^{r}=\{e\in\mathcal{E}\,|\,\tau(e)=r\}\) \\
\(\Phi(\cdot,\cdot)\) & the GNN-based LP model to explain \\
\((s,t)\) & the source and target nodes of the predicted link \\
\(\mathbf{h}_{s}\) and \(\mathbf{h}_{t}\) & the node representations of \(s\) and \(t\) \\
\(Y=\Phi(\mathcal{G},(s,t))\) & the link prediction for the node pair \((s,t)\) \\
\(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\) & the computation graph, i.e., the \(L\)-hop ego-graph of \((s,t)\) \\
\hline \hline
\end{tabular}
\end{table} Table 2. Notation table
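As a rough numerical check of Proposition 4.1 (our own illustration, assuming `networkx`; the exact counts depend on the random seed, and for some seeds \(s\) and \(t\) may even be disconnected, giving zero paths):

```python
import networkx as nx

# Tiny illustration: on one small random graph, simple s-t paths are vastly
# outnumbered by edge-induced subgraphs, of which there are 2^m.
G = nx.gnm_random_graph(10, 20, seed=0)
s, t = 0, 9
num_paths = sum(1 for _ in nx.all_simple_paths(G, s, t))
num_subgraphs = 2 ** G.number_of_edges()  # one subgraph per edge subset
print(num_paths, num_subgraphs)           # e.g., hundreds vs. 2^20 ≈ 10^6
```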
Paths are also a natural choice for LP explanations on heterogeneous graphs. On homogeneous graphs, features are important for prediction and explanation: an \(s\)-\(t\) link may be predicted because of the feature similarity of node \(s\) and node \(t\). However, the heterogeneous graphs we focus on, as defined in Definition 3.1, often do not store feature information but model it explicitly using additional node and edge types. For example, for the heterogeneous graph in Figure 3(a), instead of making it a user-item graph and assigning each item node a two-dimensional feature with attributes \(a^{1}\) and \(a^{2}\), the attribute nodes are explicitly created and connected to the item nodes. Then an explanation like "\(i_{1}\) and \(i_{2}\) share node feature \(a_{1}^{1}\)" on a homogeneous graph is transferred to "\(i_{1}\) and \(i_{2}\) are connected through the attribute node \(a_{1}^{1}\)" on a heterogeneous graph.

Given the advantages of paths over general subgraphs in connection interpretability and scalability, and their capability to capture feature similarity on heterogeneous graphs, we use paths to explain GNNs for heterogeneous LP. Our design principle is that a good explanation should be concise and informative, so we define the explanation to contain only _short_ paths _without high-degree_ nodes. Long paths are less desirable since they could correspond to unnecessarily complicated connections, making the explanation neither concise nor convincing. For example, in Figure 3(c), the long path \((u_{1},i_{3},a_{2}^{1},i_{2},a_{1}^{1},i_{1})\) is not ideal since it takes four hops to go from item \(i_{3}\) to item \(i_{1}\), making it less persuasive to interpret as "item \(i_{3}\) and item \(i_{1}\) are similar, so item \(i_{1}\) should be recommended". Paths containing high-degree nodes are also less desirable because high-degree nodes are often generic, and a path going through them is not as informative. In the same figure, all paths containing node \(a_{2}^{1}\) are less desirable because \(a_{2}^{1}\) has a high degree and connects to all the items in the graph. A real example of a generic attribute is the attribute "grocery" connecting to both "vanilla ice cream" and "vanilla cookie". When "vanilla ice cream" is recommended to a person who bought "vanilla cookie", explaining this recommendation with a path going through "grocery" is not very informative, since "grocery" connects to many items. In contrast, a good informative path explanation should go through the attribute "vanilla", which only connects to vanilla-flavored items and has a much lower degree.

We formalize GNN explanation for heterogeneous LP as:

**Problem 4.2**.: Generating path-based explanations for a predicted link between nodes \(s\) and \(t\):

* **Given** a trained GNN-based LP model \(\Phi(\cdot,\cdot)\), a heterogeneous computation graph \(\mathcal{G}_{c}\) of \(s\) and \(t\), and a budget \(B\) on the maximum number of edges in the explanation,
* **Find** an explanation \(\mathcal{P}=\{p\,|\,p\) is an \(s\)-\(t\) path with maximum length \(l_{max}\) and the degree of each node less than \(D_{max}\}\), with \(|\mathcal{P}|\,l_{max}\leq B\),
* **By optimizing** each \(p\in\mathcal{P}\) to be influential to the prediction, concise, and informative.

## 5. Proposed Method: PaGE-Link

This section details PaGE-Link. PaGE-Link has two modules: (i) a \(k\)-core pruning module to eliminate spurious neighbors and improve speed, and (ii) a heterogeneous path-enforcing mask learning module to identify important paths. An illustration is in Figure 3.
### The k-core Pruning

The _\(k\)-core pruning_ module of PaGE-Link reduces the complexity of \(\mathcal{G}_{c}\). The \(k\)-core of a graph is defined as the unique maximal subgraph with minimum node degree \(k\) (Barb et al., 2016). We use the superscript \(k\) to denote the \(k\)-core, i.e., \(\mathcal{G}_{c}^{k}=(\mathcal{V}_{c}^{k},\mathcal{E}_{c}^{k})\) for the \(k\)-core of \(\mathcal{G}_{c}\). The \(k\)-core pruning is a recursive algorithm that removes nodes \(v\in\mathcal{V}\) with degree \(D_{v}<k\) until the remaining subgraph only has nodes with \(D_{v}\geq k\), which gives the \(k\)-core. The difference in nodes between a \((k+1)\)-core and a \(k\)-core is called the \(k\)-shell. The nodes in the orange box of Figure 3(b) are an example of a 2-core pruned from the 2-hop ego-graph, where nodes \(a_{1}^{2}\) and \(a_{2}^{2}\) are pruned in the first iteration because they have degree one. Node \(i_{5}\) is recursively pruned because it becomes degree one after these nodes are pruned. All three nodes belong to the 1-shell.

Figure 3. PaGE-Link on a graph with user nodes \(u\), item nodes \(i\), and two attribute types \(a^{1}\) and \(a^{2}\). (Best viewed in color.)

We perform \(k\)-core pruning to help path finding because the pruned \(k\)-shell nodes are unlikely to be part of meaningful paths when \(k\) is small. For example, the 1-shell nodes are either leaf nodes or become leaf nodes during the recursive pruning, and they will never be part of a path unless \(s\) or \(t\) is one of them. The \(k\)-core pruning module in PaGE-Link is modified from the standard \(k\)-core pruning by adding the condition of never pruning \(s\) and \(t\). The following theorem shows that for a random graph \(\mathcal{G}(n,d)\), the \(k\)-core reduces the expected number of nodes by a factor of \(\delta_{\mathcal{V}}(n,d,k)\) and the expected number of edges by a factor of \(\delta_{\mathcal{E}}(n,d,k)\). Both factors are functions of \(n\), \(d\), and \(k\); we defer their exact expressions to Appendix B, since they are only implicitly defined through the Poisson distribution. Numerically, for a random \(\mathcal{G}(n,d)\) with average node degree \(d(n-1)=7\), its 5-core has \(\delta_{\mathcal{V}}(n,d,5)\) and \(\delta_{\mathcal{E}}(n,d,5)\) both \(\approx 0.69\).

**Theorem 5.1** (Pittel, Spencer and Wormald (Pittel, Spencer and Wormald, 2015)).: _Let \(\mathcal{G}(n,d)\) be a random graph with \(m\) edges as in Proposition 4.1. Let \(\mathcal{G}^{k}(n,d)=(\mathcal{V}^{k}(n,d),\mathcal{E}^{k}(n,d))\) be the nonempty \(k\)-core of \(\mathcal{G}(n,d)\). Then \(\mathcal{G}^{k}(n,d)\) contains \(\delta_{\mathcal{V}}(n,d,k)\,n\) nodes and \(\delta_{\mathcal{E}}(n,d,k)\,m\) edges with high probability for large \(n\), i.e., \(|\mathcal{V}^{k}(n,d)|/n\stackrel{p}{\rightarrow}\delta_{\mathcal{V}}(n,d,k)\) and \(|\mathcal{E}^{k}(n,d)|/m\stackrel{p}{\rightarrow}\delta_{\mathcal{E}}(n,d,k)\), where \(\stackrel{p}{\rightarrow}\) stands for convergence in probability._

Proof.: Please refer to Appendix B and (Pittel, Spencer and Wormald, 2015).
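A minimal sketch of this modified pruning rule (our own, not the authors' implementation; the function name `k_core_keep_st` is hypothetical):

```python
import networkx as nx

def k_core_keep_st(G: nx.Graph, s, t, k: int) -> nx.Graph:
    """Recursive k-core pruning that never removes the endpoints s and t."""
    H = G.copy()
    while True:
        # Nodes of degree < k form the current shell; s and t are protected.
        shell = [v for v in H.nodes if H.degree(v) < k and v not in (s, t)]
        if not shell:  # every remaining node (except s, t) has degree >= k
            return H
        H.remove_nodes_from(shell)
```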
The \(k\)-core pruning helps reduce the graph complexity and accelerates path finding. One concern is whether it prunes too much and disconnects \(s\) and \(t\); we found that such a situation is very unlikely in practice. To be specific, we focus on explaining positively predicted links, e.g., why an item is recommended to a user by the model. Negative predictions, e.g., why an arbitrary item is not recommended to a user by the model, are less useful in practice and thus not in the scope of our explanation. \((s,t)\) node pairs are usually connected by many paths in a practical \(\mathcal{G}\) (Kolmogorov, 2008), and positive link predictions are rarely made between disconnected or weakly-connected \((s,t)\). Empirically, we observe that there are usually too many paths connecting a positively predicted \((s,t)\) rather than none, even in the \(k\)-core. Therefore, an optional step to enhance pruning is to remove nodes with very high degrees. As discussed in Section 4, high-degree nodes are often generic and less informative; removing them complements the \(k\)-core pruning, further reducing complexity and improving path quality.

### Heterogeneous Path-Enforcing Mask Learning

The second module of PaGE-Link learns heterogeneous masks to find important path-forming edges. We perform mask learning to select edges from the \(k\)-core-pruned computation graph. For notational simplicity in this section, we use \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to denote the graph for mask learning to save superscripts and subscripts; \(\mathcal{G}_{c}^{k}\) is the actual graph in the complete version of our algorithm. The idea is to learn a mask over all edges of all edge types to select the important edges. Let \(\mathcal{E}^{r}=\{e\in\mathcal{E}\,|\,\tau(e)=r\}\) be the edges with type \(r\in\mathcal{R}\). Let \(\mathcal{M}=\{\mathcal{M}^{r}\}_{r=1}^{|\mathcal{R}|}\) be the learnable masks of all edge types, with \(\mathcal{M}^{r}\in\mathbb{R}^{|\mathcal{E}^{r}|}\) corresponding to type \(r\). We denote applying \(\mathcal{M}^{r}\) to its corresponding edge type by \(\mathcal{E}^{r}\odot\sigma(\mathcal{M}^{r})\), where \(\sigma\) is the sigmoid function and \(\odot\) is the element-wise product. Similarly, we overload the notation \(\odot\) to indicate applying the set of masks to all types of edges, i.e., \(\mathcal{E}\odot\sigma(\mathcal{M})=\cup_{r\in\mathcal{R}}\{\mathcal{E}^{r}\odot\sigma(\mathcal{M}^{r})\}\). We call the graph with edge set \(\mathcal{E}\odot\sigma(\mathcal{M})\) a _masked graph_.
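In code, the heterogeneous mask and the masked edge weights can be sketched as follows (a PyTorch sketch of our own; the edge-type names and counts are hypothetical placeholders for \(|\mathcal{E}^{r}|\)):

```python
import torch

# One learnable logit vector M^r per edge type r in R.
edge_counts = {"buys": 500, "has": 300}  # hypothetical |E^r| per type
masks = torch.nn.ParameterDict({
    r: torch.nn.Parameter(torch.zeros(n)) for r, n in edge_counts.items()
})

def masked_edge_weights(masks):
    # E ⊙ σ(M): soft weights in (0, 1) that reweight message passing per type.
    return {r: torch.sigmoid(m) for r, m in masks.items()}
```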
Applying a mask to the graph edges changes the edge weights, which makes GNNs pass more information between nodes connected by highly-weighted edges and less along others. The general idea of mask learning is to learn an \(\mathcal{M}\) that produces high weights for important edges and low weights for the others. To learn an \(\mathcal{M}\) that better fits LP explanation, we measure edge importance from two perspectives: important edges should be influential for the model prediction and should form meaningful paths. Below, we introduce two loss terms, \(\mathcal{L}_{pred}\) and \(\mathcal{L}_{path}\), for these two measurements.

\(\mathcal{L}_{pred}\) learns to select edges that are influential for the model prediction. The idea is a perturbation-based explanation: parts of the input are considered important if perturbing them changes the model prediction significantly. In the graph sense, if removing an edge \(e\) significantly influences the prediction, then \(e\) is a critical counterfactual edge that should be part of the explanation. This idea can be formalized as maximizing the mutual information between the masked graph and the original graph prediction \(Y\), which is equivalent to minimizing the prediction loss

\[\mathcal{L}_{pred}(\mathcal{M})=-\log P_{\Phi}(Y=1|\mathcal{G}=(\mathcal{V},\mathcal{E}\odot\sigma(\mathcal{M})),(s,t)). \tag{1}\]

\(\mathcal{L}_{pred}(\mathcal{M})\) has a straightforward meaning: the masked subgraph should provide as much information for predicting the missing link \((s,t)\) as the whole graph. Since the original prediction is a constant, \(\mathcal{L}_{pred}(\mathcal{M})\) can also be interpreted as the performance drop after the mask is applied to the graph; a well-learned mask should give a minimal performance drop. Regularizations of the mask entropy and the mask norm are often included in \(\mathcal{L}_{pred}(\mathcal{M})\) to encourage the mask to be discrete and sparse.

\(\mathcal{L}_{path}\) is the loss term for \(\mathcal{M}\) to learn to select path-forming edges. The idea is to first identify a set of candidate edges, denoted by \(\mathcal{E}_{path}\) (specified below), that form concise and informative paths, and then optimize \(\mathcal{L}_{path}(\mathcal{M})\) to increase the mask weights of \(e\in\mathcal{E}_{path}\) and decrease the mask weights of \(e\notin\mathcal{E}_{path}\). We consider a weighted combination of these two forces, balanced by hyperparameters \(\alpha\) and \(\beta\):

\[\mathcal{L}_{path}(\mathcal{M})=-\sum_{r\in\mathcal{R}}\Big(\alpha\sum_{\begin{subarray}{c}e\in\mathcal{E}_{path}\\ \tau(e)=r\end{subarray}}\mathcal{M}^{r}_{e}-\beta\sum_{\begin{subarray}{c}e\notin\mathcal{E}_{path}\\ \tau(e)=r\end{subarray}}\mathcal{M}^{r}_{e}\Big). \tag{2}\]

The key question in computing \(\mathcal{L}_{path}(\mathcal{M})\) is to find a good \(\mathcal{E}_{path}\) containing edges of concise and informative paths. As in Section 4, paths with these two desired properties should be short and avoid high-degree generic nodes. We thus define a score function of a path \(p\) reflecting these two properties:

\[Score(p)=\log\prod_{\begin{subarray}{c}e\in p\\ e=(u,v)\end{subarray}}\frac{P(e)}{D_{v}}=\sum_{\begin{subarray}{c}e\in p\\ e=(u,v)\end{subarray}}Score(e), \tag{3}\]
\[Score(e)=\log\sigma(\mathcal{M}^{\tau(e)}_{e})-\log(D_{v}). \tag{4}\]

In this score function, \(\mathcal{M}\) gives the probability of \(e\) being included in the explanation, i.e., \(P(e)=\sigma(\mathcal{M}^{\tau(e)}_{e})\). To get the importance of a path, we first use a mean-field approximation of the joint probability by multiplying the \(P(e)\) together, and we normalize each \(P(e)\) of an edge \(e=(u,v)\) by its target node degree \(D_{v}\). Then, we apply a log transformation, which improves numerical stability when multiplying many edges with small \(P(e)\) or large \(D_{v}\), and which breaks the path score down into a summation of edge scores \(Score(e)\) that are easier to work with. This path score captures both desired properties: a path scores high if its edges have high probabilities and link to nodes with low degrees. Finding the paths with the highest \(Score(p)\) can be implemented with Dijkstra's shortest path algorithm (Dijkstra, 2007), where the distance of each edge is set to its negative score, \(-Score(e)\). We let \(\mathcal{E}_{path}\) be the set of edges in the top five shortest paths found by Dijkstra's algorithm.
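The score-guided path search can be sketched as follows (our own illustration; for simplicity it treats the graph as undirected and keys edge probabilities by node pairs). Note that \(-Score(e)=-\log P(e)+\log D_{v}\geq 0\) since \(P(e)\leq 1\) and \(D_{v}\geq 1\), so Dijkstra-style search applies; `networkx`'s `shortest_simple_paths` (Yen's algorithm) then enumerates the top paths in order of total distance.

```python
import math
from itertools import islice
import networkx as nx

def neg_edge_score(p_e: float, deg_v: int) -> float:
    # -Score(e) = -log σ(M_e) + log D_v >= 0, a valid Dijkstra distance.
    return -math.log(p_e) + math.log(deg_v)

def top_paths(G: nx.Graph, s, t, edge_prob: dict, n_paths: int = 5):
    """Return up to n_paths s-t paths with the highest Score(p)."""
    H = nx.Graph()
    for u, v in G.edges():
        H.add_edge(u, v, w=neg_edge_score(edge_prob[(u, v)], G.degree(v)))
    # Enumerates simple paths in increasing order of total distance -Score(p).
    return list(islice(nx.shortest_simple_paths(H, s, t, weight="w"), n_paths))
```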
### Mask Optimization and Path Generation

We optimize \(\mathcal{M}\) with both \(\mathcal{L}_{pred}\) and \(\mathcal{L}_{path}\). \(\mathcal{L}_{pred}\) increases the weights of prediction-influential edges. \(\mathcal{L}_{path}\) further increases the weights of the path-forming edges that are also highly weighted by the current \(\mathcal{M}\) and decreases the other weights. Finally, after the mask learning converges, we run the shortest-path algorithm once more to generate paths from the final \(\mathcal{M}\) and select the top paths under budget \(B\) to obtain the explanation \(\mathcal{P}\) defined in Section 4. Pseudo-code for PaGE-Link is shown in Algorithm 1.

```
Input: heterogeneous graph \(\mathcal{G}\), trained GNN-based LP model \(\Phi(\cdot,\cdot)\), predicted link \((s,t)\), size budget \(B\), \(k\) for the \(k\)-core, hyperparameters \(\alpha\) and \(\beta\), learning rate \(\eta\), maximum iterations \(T\).
Output: explanation as a set of paths \(\mathcal{P}\).
Extract the computation graph \(\mathcal{G}_{c}\);
Prune \(\mathcal{G}_{c}\) to obtain the \(k\)-core \(\mathcal{G}_{c}^{k}\);
Initialize \(\mathcal{M}^{(0)}\); \(t=0\);
while \(\mathcal{M}^{(t)}\) has not converged and \(t<T\) do
    Compute \(\mathcal{L}_{pred}(\mathcal{M}^{(t)})\);    ▷ Eq. (1)
    Compute \(Score(e)\) for each edge \(e\);    ▷ Eq. (4)
    Construct \(\mathcal{E}_{path}\) by finding shortest paths on \(\mathcal{G}_{c}^{k}\) with edge distance \(-Score(e)\);
    Compute \(\mathcal{L}_{path}(\mathcal{M}^{(t)})\) according to \(\mathcal{E}_{path}\);    ▷ Eq. (2)
    \(\mathcal{M}^{(t+1)}=\mathcal{M}^{(t)}-\eta\nabla(\mathcal{L}_{pred}(\mathcal{M}^{(t)})+\mathcal{L}_{path}(\mathcal{M}^{(t)}))\);
    \(t\leftarrow t+1\);
end while
\(\mathcal{P}\) = the top shortest paths on \(\mathcal{G}_{c}^{k}\) with edge distance \(-Score(e)\), under budget \(B\);
Return: \(\mathcal{P}\).
```
**Algorithm 1** PaGE-Link
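To make the loop body of Algorithm 1 concrete, here is a minimal PyTorch-style sketch of the combined loss (our own; `model_prob` and `path_mask` are hypothetical stand-ins we define for illustration, not the authors' API):

```python
import torch

def pagelink_loss(masks, model_prob, path_mask, alpha=1.0, beta=1.0):
    """One evaluation of L_pred + L_path (a sketch).

    masks:      {edge type r: tensor of mask logits M^r}
    model_prob: callable that re-runs Phi on the masked graph, returns P(Y=1)
    path_mask:  {r: boolean tensor, True for edges currently in E_path}
    """
    weights = {r: torch.sigmoid(m) for r, m in masks.items()}
    l_pred = -torch.log(model_prob(weights))                  # Eq. (1)
    l_path = sum(-alpha * m[path_mask[r]].sum()
                 + beta * m[~path_mask[r]].sum()
                 for r, m in masks.items())                   # Eq. (2)
    return l_pred + l_path
```

Each iteration would recompute \(\mathcal{E}_{path}\) with the shortest-path search before evaluating this loss, and then take a gradient step on the mask logits, matching the while-loop above.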
### Complexity Analysis

In Table 3, we summarize the time complexity of PaGE-Link and representative existing methods for explaining a prediction with computation graph \(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\) on a full graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Let \(T\) be the number of mask-learning epochs. GNNExplainer has complexity \(O(|\mathcal{E}_{c}|T)\) as it learns a mask on \(\mathcal{E}_{c}\). PGExplainer has a training stage and an inference stage (separated by \(/\) in the table). The inference stage is linear in \(|\mathcal{E}_{c}|\), but the training stage covers edges in the entire graph and thus scales as \(O(|\mathcal{E}|T)\). SubgraphX has a much higher time complexity, exponential in \(|\mathcal{V}_{c}|\), so a size budget of \(B_{node}\) nodes is forced to replace \(|\mathcal{V}_{c}|\), where \(\hat{D}=\max_{v\in\mathcal{V}}D_{v}\) denotes the maximum degree (derivation in the Appendix). For PaGE-Link, the \(k\)-core pruning step is linear in \(|\mathcal{E}_{c}|\), and the mask learning with Dijkstra's algorithm has complexity \(|\mathcal{E}_{c}^{k}|T\). PaGE-Link has a better complexity than existing methods since \(|\mathcal{E}_{c}^{k}|\) is usually smaller than \(|\mathcal{E}_{c}|\) (see Theorem 5.1), and PaGE-Link often converges faster, i.e., has a smaller \(T\), as the space of candidate explanations is smaller (see Proposition 4.1) and noisy nodes are pruned.

\begin{table}
\begin{tabular}{c c c|c}
\hline \hline
GNNExp (Zhu et al., 2017) & PGExp (Zhu et al., 2017) & SubgraphX (Wang et al., 2018) & PaGE-Link (ours) \\
\hline
\(O(|\mathcal{E}_{c}|T)\) & \(O(|\mathcal{E}|T)\) / \(O(|\mathcal{E}_{c}|)\) & \(\Theta(|\mathcal{V}_{c}|\hat{D}^{B_{node}-e})\) & \(O(|\mathcal{E}_{c}|+|\mathcal{E}_{c}^{k}|T)\) \\
\hline \hline
\end{tabular}
\end{table} Table 3. Time complexity of PaGE-Link and other methods.

## 6. Experiments

In this section, we conduct empirical studies to evaluate the explanations generated by PaGE-Link. Evaluation is a general challenge when studying model explainability, since standard datasets do not have ground truth explanations. Many works (Zhu et al., 2017; Wang et al., 2018) use synthetic benchmarks, but no benchmarks are available for evaluating GNN explanations for heterogeneous LP. Therefore, we generate an augmented graph and a synthetic graph to evaluate explanations. They allow us to generate ground truth explanation patterns and evaluate explainers quantitatively.

### Datasets

_The augmented graph._ AugCitation is constructed by augmenting the AMiner citation network (Zhu et al., 2017). The graph schema is shown in Figure 4(a). The original AMiner graph contains four node types, author, paper, reference (ref), and field of study (fos), and edge types "cites", "writes", and "in". We construct AugCitation by augmenting the original graph with new (author, paper) edges typed "likes" and define a paper recommendation task on AugCitation for predicting the "likes" edges. A new edge \((s,t)\) is augmented if there is at least one concise and informative path \(p\) between them. In our augmentation process, we require the paths \(p\) to have length shorter than a hyperparameter \(l_{max}\) and the degrees of nodes on \(p\) (excluding \(s\) and \(t\)) bounded by a hyperparameter \(D_{max}\). We highlight these two hyperparameters because of the conciseness and informativeness principles discussed in Section 4. The augmented edge \((s,t)\) is used for prediction. The ground truth explanation is the set of paths satisfying the two hyperparameter requirements. If there are many qualified paths, we only take the top \(P_{max}\) paths with the smallest degree sums. We train a GNN-based LP model to predict these new "likes" edges and evaluate explainers by comparing their output explanations with these path patterns as ground truth.

_The synthetic graph._ UserItemAttr is generated to mimic graphs with users, items, and attributes for recommendations. Figure 4(b) shows the graph schema and illustrates the generation process. We include three node types, "user", "item", and item attribute ("attr"), in the synthetic graph, and we build the different types of edges step by step. Firstly, the "has" edges are created by randomly connecting items to attrs, and the "hidden prefers" edges are created by randomly connecting users to attrs. These edges represent items having attributes and user preferences for these attributes. Next, we randomly sample a set of items for each user and connect a (user, item) pair by a "buys" edge if the user "hidden prefers" any attr the item "has". The "hidden prefers" edges correspond to an intermediate step for generating the observable "buys" edges. We remove the "hidden prefers" edges after the "buys" edges are generated, since we cannot observe "hidden prefers" information in reality. An example of the rationale behind this generation step is that items have certain attributes, like the item "ice cream" with the attribute "vanilla".
Then, given that a user likes the attribute "vanilla" as hidden information, we observe that the user buys "vanilla ice cream". The next step is to generate more "buys" edges between randomly picked (user, item) pairs if a similar user (two users with many shared item neighbors) buys this item. The idea is like collaborative filtering, which says that similar users tend to buy similar items. The final step is generating edges for prediction and their corresponding ground truth explanations, which follows the same augmentation process described above for AugCitation. For UserItemAttr, we have "has" and "buys" as the base edges to construct the ground truth, and we create "likes" edges between users and items for prediction.

### Experiment Settings

_The GNN-based LP model._ As described in Section 3, the LP model involves a GNN encoder and a prediction head. We use RGCN (Zhu et al., 2017) as the encoder to learn node representations on heterogeneous graphs and the inner product as the prediction head. We train the model using the cross-entropy loss. On each dataset, our prediction task covers one edge type \(r\). We randomly split the observed edges of type \(r\) into train:validation:test = 7:1:2 as positive samples and draw negative samples from the unobserved edges of type \(r\). Edges of other types are used for GNN message passing but not for prediction.

_Explainer baselines._ Existing GNN explanation methods cannot be directly applied to heterogeneous LP. Thus, we extend the popular GNNExplainer (Zhu et al., 2017) and PGExplainer (Zhu et al., 2017) as our baselines. We re-implement a heterogeneous version of their mask matrix and mask predictor, similar to the heterogeneous mask learning module in PaGE-Link. For these baselines, we perform mask learning using their original objectives, and we generate edge-induced subgraph explanations from their learned masks. We refer to these two adapted explainers as GNNExp-Link and PGExp-Link below. We do not compare to other search-based explainers like SubgraphX (Zhu et al., 2017) because of their high computational complexity (see Section 5.4). They work well on small graphs, as in their original papers, but they are hard to scale to the large and dense graphs we consider for LP.

### Evaluation Results

_Quantitative evaluation._ Both the ground truth and the final explanation output of PaGE-Link are sets of paths. In contrast, the baseline explainers generate edge masks \(\mathcal{M}\). For a fair comparison, we take the intermediate result PaGE-Link learns, which is also a mask \(\mathcal{M}\), and follow (Zhu et al., 2017) to compare explainers by their masks. Specifically, for each computation graph, edges in the ground truth paths are treated as positive, and other edges are treated as negative. The weights in \(\mathcal{M}\) are treated as the prediction scores of the edges and are evaluated with the ROC-AUC metric. A high ROC-AUC score reflects that edges in the ground truth are precisely captured by the mask. The results are shown in Table 4, where PaGE-Link outperforms both baseline explainers. For scalability, we showed in Section 5.4 that PaGE-Link scales linearly in \(O(|\mathcal{E}_{c}^{k}|)\). Here we evaluate its scalability empirically by generating ten synthetic graphs with sizes from 20 to 5,500 edges in \(\mathcal{G}_{c}\). The results are shown in Figure 2(b), which suggests that the computation time scales linearly in the number of edges.
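The mask-based evaluation described above amounts to the following (a sketch assuming `scikit-learn`; the function and variable names are ours):

```python
from sklearn.metrics import roc_auc_score

def mask_auc(mask_weight: dict, gt_edges: set, all_edges: list) -> float:
    """Explanation AUC: ground-truth path edges are positives,
    learned mask weights serve as the per-edge prediction scores."""
    y_true = [int(e in gt_edges) for e in all_edges]
    y_score = [mask_weight[e] for e in all_edges]
    return roc_auc_score(y_true, y_score)
```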
_Qualitative evaluation._ A critical advantage of PaGE-Link is that it generates path explanations, which capture the connections between node pairs and enjoy better interpretability. In contrast, the top important edges found by baseline methods are often disconnected from the source, the target, or both, which makes their explanations hard for humans to interpret and investigate. We conduct case studies to visualize explanations generated by PaGE-Link for the paper recommendation task on AugCitation.

\begin{table}
\begin{tabular}{l c c|c}
\hline \hline
 & GNNExp-Link & PGExp-Link & PaGE-Link (ours) \\
\hline
AugCitation & 0.829 & 0.586 & **0.928** \\
UserItemAttr & 0.608 & 0.578 & **0.954** \\
\hline \hline
\end{tabular}
\end{table} Table 4. ROC-AUC scores on learned masks. PaGE-Link outperforms baselines.

Figure 4. The proposed augmented graph AugCitation and the synthetic graph UserItemAttr.

Figure 5 shows a case in which the model recommends to the source author "Vipin Kumar" the target paper titled "Fast and exact network trajectory similarity computation: a case-study on bicycle corridor planning". The top path explanation generated by PaGE-Link goes through the coauthor "Shashi Shekhar", which explains the recommendation as follows: Vipin Kumar and Shashi Shekhar coauthored the paper "Correlation analysis of spatial time series datasets: a filter-and-refine approach", and Shashi Shekhar wrote the recommended paper. Given the same budget of three edges, the explanations generated by the baselines are less interpretable. Figure 6 shows another example with the source author "Huan Liu" and the recommended target paper titled "Using association rules to solve the cold-start problem in recommender systems". PaGE-Link generates paths going through the common fos of the recommended paper and three other papers written by Huan Liu: _p22646_, _p25160_, and _p35294_. We show the PaGE-Link explanation with the top three paths in green. We also show other unexpected fos shared by _p22646_, _p25160_, _p35294_ and the target paper. Note that the explanation paths all have length three, even though there are many paths with length five or longer, e.g., \((a328,p22646,f4,p25260,f4134,p5670)\). Also, the explanation paths go through the fos "Redundancy (engineering)" and "User profile" instead of generic fos like "Artificial intelligence" and "Computer science". This case demonstrates that the explanation paths selected by PaGE-Link are concise and informative.

## 7. Human Evaluation

The ultimate goal of model explanation is to improve model transparency and help human decision-making. Human evaluation is thus the best way to evaluate the effectiveness of an explainer, and it has been a standard evaluation approach in previous works (Granter et al., 2017; Wang et al., 2018). We conduct a human evaluation by randomly picking 100 predicted links from the test set of AugCitation and generating explanations for each link using GNNExp-Link, PGExp-Link, and PaGE-Link. We design a survey with single-choice questions. In each question, we show respondents the predicted link and the three explanations, with both the graph structure and the node/edge type information, similar to Figure 5 but excluding method names. The survey is sent to people across graduate students, postdocs, engineers, research scientists, and professors, including people with and without background knowledge of GNNs.
We ask respondents to "please select the best explanation of _why the model predicts this author will like the recommended paper_". At least three answers from different people are collected for each question. In total, 340 evaluations are collected, and 78.79% of them selected explanations by PaGE-Link as the best.

## 8. Conclusion

In this work, we study model transparency and accountability on graphs. We investigate a new task: GNN explanation for heterogeneous LP. We identify three challenges for the task and propose a new path-based method, PaGE-Link, that produces explanations with _interpretable connections_, is _scalable_, and handles graph _heterogeneity_. PaGE-Link explanations quantitatively improve ROC-AUC by 9 - 35% over baselines and are chosen by 78.79% of responses as qualitatively more interpretable in human evaluation.

###### Acknowledgements.
We thank Ziniu Hu for the helpful discussions on this work. This work is partially supported by NSF (2211557, 1937599, 2119643), NASA, SRC, Okawa Foundation Grant, Amazon Research Awards, Cisco Research Grant, Picsart Gifts, and Snapchat Gifts.

Figure 5. Explanations (green arrows) by different explainers for the predicted link \((a2367,p16200)\) (dashed red). The PaGE-Link explanation explains the recommendation by co-authorship, whereas the baseline explanations are less interpretable.

Figure 6. Top three paths (green arrows) selected by PaGE-Link for explaining the predicted link \((a328,p5670)\) (dashed red). The selected paths are short and do not go through a generic field of study like "Computer Science".
2308.07755
Deformations and extensions of modified $\lambda$-differential $3$-Lie Algebras
In this paper, we introduce the representation of modified $\lambda$-differential $3$-Lie algebras and define the cohomology of modified $\lambda$-differential $3$-Lie algebras with coefficients in a representation. As applications of the proposed cohomology theory, we study linear deformations, abelian extensions and $T^*$-extensions of modified $\lambda$-differential $3$-Lie algebras.
Wen Teng, Hui Zhang
2023-08-14T07:34:26Z
http://arxiv.org/abs/2308.07755v1
# Deformations and extensions of modified \(\lambda\)-differential \(3\)-Lie Algebras

**Wen Teng\({}^{1}\), Hui Zhang\({}^{2,3,*}\)**

1. School of Mathematics and Statistics, Guizhou University of Finance and Economics, Guiyang 550025, P. R. of China Email: [email protected] (Wen Teng) 2. School of Information, Guizhou University of Finance and Economics, Guiyang 550005, P. R. of China 3. Postdoctoral Scientific Research Station, ShijiHengtong Technology Co., Ltd, Guiyang, 550014, P. R. of China * Corresponding author: [email protected] (Hui Zhang)

**Abstract** In this paper, we introduce the representation of modified \(\lambda\)-differential \(3\)-Lie algebras and define the cohomology of modified \(\lambda\)-differential \(3\)-Lie algebras with coefficients in a representation. As applications of the proposed cohomology theory, we study linear deformations, abelian extensions and \(T^{*}\)-extensions of modified \(\lambda\)-differential \(3\)-Lie algebras.

**Key words:** \(3\)-Lie algebras, modified \(\lambda\)-differential operator, representation, cohomology, deformation, extension.

**2020 MSC:** 17A40, 17B10, 17B40, 17B56

## 1 Introduction

\(3\)-Lie algebras play an important role in string theory; they are also used to study supersymmetry and gauge symmetry transformations of the world-volume theory of multiple coincident M2-branes [4, 12]. The concept of a \(3\)-Lie algebra, and more generally of an \(n\)-Lie algebra, was first introduced by Filippov [11]; it can be regarded as a generalization of Lie algebras to higher-order algebras. \(3\)-Lie algebras have attracted the attention of scholars from mathematics and physics [5, 19, 30]. Representation theory, cohomology theory, deformations, Nijenhuis operators and extension theory of \(n\)-Lie algebras have been widely studied [1, 21, 23, 24, 25, 28, 36, 37, 43, 45, 47].

Derivations play an important role in the study of homotopy Lie algebras [39], differential Galois theory [27], control theory and gauge theories of quantum field theory [2]. Recently, algebras with derivations have been studied in [9, 26] from the operadic point of view. In [38], the authors studied deformations and extensions of Lie algebras with derivations from the cohomological point of view. The results of [38] have been extended to 3-Lie algebras with derivations [18, 44]. More research on algebraic structures with derivations has been developed; see [6, 17, 33, 34, 41, 42] and the references cited therein.

In recent years, due to the important work of [3, 7, 13, 14, 15, 40], scholars have paid attention to structures with arbitrary weights. For \(\lambda\in\mathbf{k}\), the cohomology, extensions and deformations of Rota-Baxter 3-Lie algebras with any weight \(\lambda\) and of differential 3-Lie algebras with any weight \(\lambda\) were established in [16, 20, 35]. In addition, in [8], the author studied the cohomology and deformations of modified Rota-Baxter algebras. The cohomology and deformations of modified Rota-Baxter Leibniz algebras with weight \(\lambda\) are given in [22, 29]. In [31], the concept of a modified \(\lambda\)-differential Lie algebra was introduced. Motivated by the work mentioned above on the modified \(\lambda\)-differential operator on Lie algebras, our main objective in this paper is to study modified \(\lambda\)-differential 3-Lie algebras.
We develop a cohomology theory of modified \(\lambda\)-differential 3-Lie algebras that controls the deformations and extensions of modified \(\lambda\)-differential 3-Lie algebras.

The paper is organized as follows. In Section 2, we introduce the representation and cohomology of modified \(\lambda\)-differential 3-Lie algebras. In Section 3, we consider linear deformations of modified \(\lambda\)-differential 3-Lie algebras. In Section 4, we study abelian extensions of modified \(\lambda\)-differential 3-Lie algebras. In Section 5, we study \(T^{*}\)-extensions of modified \(\lambda\)-differential 3-Lie algebras.

Throughout this paper, \(\mathbf{k}\) denotes a field of characteristic zero. All the vector spaces and linear maps are taken over \(\mathbf{k}\).

## 2 Representations and cohomologies of modified \(\lambda\)-differential \(3\)-Lie algebras

In this section, we first introduce the concept of a modified \(\lambda\)-differential 3-Lie algebra. Then, we give representations and cohomologies of modified \(\lambda\)-differential 3-Lie algebras.

**Definition 2.1**.: [11] (i) A 3-Lie algebra is a tuple \((\mathfrak{A},[-,-,-])\) in which \(\mathfrak{A}\) is a vector space together with a skew-symmetric ternary operation \([-,-,-]:\wedge^{3}\mathfrak{A}\to\mathfrak{A}\) such that

\[[a_{1},a_{2},[a_{3},a_{4},a_{5}]]=[[a_{1},a_{2},a_{3}],a_{4},a_{5}]+[a_{3},[a_{1},a_{2},a_{4}],a_{5}]+[a_{3},a_{4},[a_{1},a_{2},a_{5}]], \tag{2.1}\]

for all \(a_{1},a_{2},a_{3},a_{4},a_{5}\in\mathfrak{A}\).

(ii) A homomorphism between two 3-Lie algebras \((\mathfrak{A}_{1},[-,-,-]_{1})\) and \((\mathfrak{A}_{2},[-,-,-]_{2})\) is a linear map \(\eta:\mathfrak{A}_{1}\to\mathfrak{A}_{2}\) satisfying \(\eta([a_{1},a_{2},a_{3}]_{1})=[\eta(a_{1}),\eta(a_{2}),\eta(a_{3})]_{2},\ \ \forall\ a_{1},a_{2},a_{3}\in\mathfrak{A}_{1}\).

**Definition 2.2**.: (i) Let \(\lambda\in\mathbf{k}\) and \((\mathfrak{A},[-,-,-])\) be a 3-Lie algebra. A modified \(\lambda\)-differential operator on the 3-Lie algebra \(\mathfrak{A}\) is a linear operator \(\mathrm{d}:\mathfrak{A}\to\mathfrak{A}\) such that

\[\mathrm{d}[a_{1},a_{2},a_{3}]=[\mathrm{d}(a_{1}),a_{2},a_{3}]+[a_{1},\mathrm{d}(a_{2}),a_{3}]+[a_{1},a_{2},\mathrm{d}(a_{3})]+\lambda[a_{1},a_{2},a_{3}],\ \forall a_{1},a_{2},a_{3}\in\mathfrak{A}. \tag{2.2}\]

(ii) A modified \(\lambda\)-differential 3-Lie algebra is a triple \((\mathfrak{A},[-,-,-],\mathrm{d})\) consisting of a 3-Lie algebra \((\mathfrak{A},[-,-,-])\) and a modified \(\lambda\)-differential operator \(\mathrm{d}\).

(iii) A homomorphism between two modified \(\lambda\)-differential 3-Lie algebras \((\mathfrak{A}_{1},[-,-,-]_{1},\mathrm{d}_{1})\) and \((\mathfrak{A}_{2},[-,-,-]_{2},\mathrm{d}_{2})\) is a 3-Lie algebra homomorphism \(\eta:(\mathfrak{A}_{1},[-,-,-]_{1})\to(\mathfrak{A}_{2},[-,-,-]_{2})\) such that \(\eta\circ\mathrm{d}_{1}=\mathrm{d}_{2}\circ\eta\). Furthermore, if \(\eta\) is bijective, then \(\eta\) is called an isomorphism from \(\mathfrak{A}_{1}\) to \(\mathfrak{A}_{2}\).

Let \((\mathfrak{A},[-,-,-])\) be a 3-Lie algebra; the elements of \(\wedge^{2}\mathfrak{A}\) are called fundamental objects of the 3-Lie algebra \((\mathfrak{A},[-,-,-])\). There is a bilinear operation \([-,-]_{\mathcal{F}}\) on \(\wedge^{2}\mathfrak{A}\), given by

\[[A,B]_{\mathcal{F}}=[a_{1},a_{2},b_{1}]\wedge b_{2}+b_{1}\wedge[a_{1},a_{2},b_{2}],\ \forall A=a_{1}\wedge a_{2},B=b_{1}\wedge b_{2}\in\wedge^{2}\mathfrak{A}.\]

It is shown in [10] that \((\wedge^{2}\mathfrak{A},[-,-]_{\mathcal{F}})\) is a Leibniz algebra.
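To make Definitions 2.1 and 2.2 concrete, the following minimal numerical sketch (ours, not part of the paper; Python/NumPy with illustrative entries) checks the fundamental identity (2.1) and the modified \(\lambda\)-differential condition (2.2) on the three-dimensional algebra with \([\mathfrak{e}_{1},\mathfrak{e}_{2},\mathfrak{e}_{3}]=\mathfrak{e}_{1}\); the shape of the matrix `K` and the weight `lam` anticipate Example 2.7 below.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])

# Bracket of the 3-dimensional 3-Lie algebra with [e1, e2, e3] = e1,
# extended trilinearly and skew-symmetrically: [x, y, z] = det(x | y | z) e1.
def bracket(x, y, z):
    return np.linalg.det(np.column_stack([x, y, z])) * e1

# An operator of the shape given in Example 2.7 (illustrative entries),
# with weight lam = -(k22 + k33).
K = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])
lam = -(K[1, 1] + K[2, 2])
d = lambda x: K @ x

rng = np.random.default_rng(0)
a1, a2, a3, a4, a5 = rng.normal(size=(5, 3))

# Fundamental identity (2.1) on random elements.
fi_lhs = bracket(a1, a2, bracket(a3, a4, a5))
fi_rhs = (bracket(bracket(a1, a2, a3), a4, a5)
          + bracket(a3, bracket(a1, a2, a4), a5)
          + bracket(a3, a4, bracket(a1, a2, a5)))
assert np.allclose(fi_lhs, fi_rhs)

# Modified lambda-differential condition (2.2) on random elements.
lhs = d(bracket(a1, a2, a3))
rhs = (bracket(d(a1), a2, a3) + bracket(a1, d(a2), a3)
       + bracket(a1, a2, d(a3)) + lam * bracket(a1, a2, a3))
assert np.allclose(lhs, rhs)
```

Both assertions hold for any choice of the free entries \(k_{ij}\), in agreement with Example 2.7 below.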
Furthermore, we have the following proposition.

**Proposition 2.3**.: _Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra. Then \((\wedge^{2}\mathfrak{A},[-,-]_{\mathcal{F}},\mathrm{d}_{\mathcal{F}})\) is a Leibniz algebra with a derivation, where \(\mathrm{d}_{\mathcal{F}}(a_{1}\wedge a_{2})=\mathrm{d}(a_{1})\wedge a_{2}+a_{1}\wedge\mathrm{d}(a_{2})+\lambda a_{1}\wedge a_{2}\), for all \(a_{1}\wedge a_{2}\in\wedge^{2}\mathfrak{A}\). See [6] for more details about Leibniz algebras with derivations._

Proof.: For any \(A=a_{1}\wedge a_{2},B=b_{1}\wedge b_{2}\in\wedge^{2}\mathfrak{A}\), by Eq. (2.2) we have

\[\mathrm{d}_{\mathcal{F}}[A,B]_{\mathcal{F}}\]
\[= \mathrm{d}[a_{1},a_{2},b_{1}]\wedge b_{2}+[a_{1},a_{2},b_{1}]\wedge\mathrm{d}(b_{2})+\lambda[a_{1},a_{2},b_{1}]\wedge b_{2}\]
\[+\mathrm{d}(b_{1})\wedge[a_{1},a_{2},b_{2}]+b_{1}\wedge\mathrm{d}[a_{1},a_{2},b_{2}]+\lambda b_{1}\wedge[a_{1},a_{2},b_{2}]\]
\[= ([\mathrm{d}(a_{1}),a_{2},b_{1}]+[a_{1},\mathrm{d}(a_{2}),b_{1}]+[a_{1},a_{2},\mathrm{d}(b_{1})]+\lambda[a_{1},a_{2},b_{1}])\wedge b_{2}\]
\[+[a_{1},a_{2},b_{1}]\wedge\mathrm{d}(b_{2})+\lambda[a_{1},a_{2},b_{1}]\wedge b_{2}+\mathrm{d}(b_{1})\wedge[a_{1},a_{2},b_{2}]\]
\[+b_{1}\wedge([\mathrm{d}(a_{1}),a_{2},b_{2}]+[a_{1},\mathrm{d}(a_{2}),b_{2}]+[a_{1},a_{2},\mathrm{d}(b_{2})]+\lambda[a_{1},a_{2},b_{2}])+\lambda b_{1}\wedge[a_{1},a_{2},b_{2}]\]
\[= ([\mathrm{d}(a_{1}),a_{2},b_{1}]+[a_{1},\mathrm{d}(a_{2}),b_{1}]+\lambda[a_{1},a_{2},b_{1}])\wedge b_{2}+b_{1}\wedge([\mathrm{d}(a_{1}),a_{2},b_{2}]+[a_{1},\mathrm{d}(a_{2}),b_{2}]+\lambda[a_{1},a_{2},b_{2}])\]
\[+[a_{1},a_{2},\mathrm{d}(b_{1})]\wedge b_{2}+[a_{1},a_{2},b_{1}]\wedge\mathrm{d}(b_{2})+\lambda[a_{1},a_{2},b_{1}]\wedge b_{2}\]
\[+b_{1}\wedge[a_{1},a_{2},\mathrm{d}(b_{2})]+\mathrm{d}(b_{1})\wedge[a_{1},a_{2},b_{2}]+\lambda b_{1}\wedge[a_{1},a_{2},b_{2}]\]
\[= [\mathrm{d}_{\mathcal{F}}(A),B]_{\mathcal{F}}+[A,\mathrm{d}_{\mathcal{F}}(B)]_{\mathcal{F}}.\]

Hence, \((\wedge^{2}\mathfrak{A},[-,-]_{\mathcal{F}},\mathrm{d}_{\mathcal{F}})\) is a Leibniz algebra with a derivation.

**Remark 2.4**.: Let \(\mathrm{d}\) be a modified \(\lambda\)-differential operator on a \(3\)-Lie algebra \((\mathfrak{A},[-,-,-])\). If \(\lambda=0\), then \(\mathrm{d}\) is a derivation of the \(3\)-Lie algebra \(\mathfrak{A}\). See [46] for various derivations of \(3\)-Lie algebras.

**Example 2.5**.: Let \((\mathfrak{A},[-,-,-])\) be a \(3\)-Lie algebra. Then, a linear operator \(\mathrm{d}:\mathfrak{A}\to\mathfrak{A}\) is a modified \(\lambda\)-differential operator if and only if \(\mathrm{d}+\frac{\lambda}{2}\mathrm{id}_{\mathfrak{A}}\) is a derivation of the \(3\)-Lie algebra \(\mathfrak{A}\).

**Example 2.6**.: Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential \(3\)-Lie algebra. Then, for \(k\in\mathbf{k}\), \((\mathfrak{A},[-,-,-],k\mathrm{d})\) is a modified \((k\lambda)\)-differential \(3\)-Lie algebra.
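Example 2.5 is easy to test numerically as well. The sketch below (ours, reusing the same illustrative algebra and operator as in the previous sketch) verifies that shifting a modified \(\lambda\)-differential operator by \(\frac{\lambda}{2}\mathrm{id}_{\mathfrak{A}}\) produces an honest derivation of the ternary bracket.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
bracket = lambda x, y, z: np.linalg.det(np.column_stack([x, y, z])) * e1

K = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])
lam = -(K[1, 1] + K[2, 2])
D = K + 0.5 * lam * np.eye(3)  # d + (lambda/2) id, as in Example 2.5

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 3))

# Derivation property: D[x,y,z] = [Dx,y,z] + [x,Dy,z] + [x,y,Dz].
lhs = D @ bracket(x, y, z)
rhs = bracket(D @ x, y, z) + bracket(x, D @ y, z) + bracket(x, y, D @ z)
assert np.allclose(lhs, rhs)
```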
**Example 2.7**.: Let \((\mathfrak{A},[-,-,-])\) be a \(3\)-dimensional \(3\)-Lie algebra with a basis \(\mathfrak{e}_{1}\), \(\mathfrak{e}_{2}\) and \(\mathfrak{e}_{3}\) defined by

\[[\mathfrak{e}_{1},\mathfrak{e}_{2},\mathfrak{e}_{3}]=\mathfrak{e}_{1}.\]

Then, any modified \((-k_{22}-k_{33})\)-differential operator \(\mathrm{d}=(k_{ij})\) has the following form:

\[\mathrm{d}(\mathfrak{e}_{1},\mathfrak{e}_{2},\mathfrak{e}_{3})=(\mathfrak{e}_{1},\mathfrak{e}_{2},\mathfrak{e}_{3})\left(\begin{array}{ccc}k_{11}&k_{12}&k_{13}\\ 0&k_{22}&k_{23}\\ 0&k_{32}&k_{33}\end{array}\right),\]

for \(k_{ij}\in\mathbf{k}\ (i,j=1,2,3)\).

**Definition 2.8**.: (i) (see [21]) A representation of a \(3\)-Lie algebra \((\mathfrak{A},[-,-,-])\) on a vector space \(\mathfrak{M}\) is a skew-symmetric bilinear map \(\rho:\mathfrak{A}\wedge\mathfrak{A}\to\mathrm{End}(\mathfrak{M})\), such that

\[\rho([a_{1},a_{2},a_{3}],a_{4})=\rho(a_{2},a_{3})\rho(a_{1},a_{4})+\rho(a_{3},a_{1})\rho(a_{2},a_{4})+\rho(a_{1},a_{2})\rho(a_{3},a_{4}), \tag{2.3}\]
\[\rho(a_{1},a_{2})\rho(a_{3},a_{4})=\rho(a_{3},a_{4})\rho(a_{1},a_{2})+\rho([a_{1},a_{2},a_{3}],a_{4})+\rho(a_{3},[a_{1},a_{2},a_{4}]), \tag{2.4}\]

for all \(a_{1},a_{2},a_{3},a_{4}\in\mathfrak{A}\). We also denote a representation of \(\mathfrak{A}\) on \(\mathfrak{M}\) by \((\mathfrak{M};\rho)\).

(ii) A representation of a modified \(\lambda\)-differential \(3\)-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) is a triple \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\), where \((\mathfrak{M};\rho)\) is a representation of the \(3\)-Lie algebra \((\mathfrak{A},[-,-,-])\) and \(\mathrm{d}_{\mathfrak{M}}\) is a linear operator on \(\mathfrak{M}\) satisfying

\[\mathrm{d}_{\mathfrak{M}}(\rho(a,b)v)=\rho(\mathrm{d}(a),b)v+\rho(a,\mathrm{d}(b))v+\rho(a,b)\mathrm{d}_{\mathfrak{M}}(v)+\lambda\rho(a,b)v, \tag{2.5}\]

for any \(a,b\in\mathfrak{A}\) and \(v\in\mathfrak{M}\).

**Remark 2.9**.: Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of the modified \(\lambda\)-differential \(3\)-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\). If \(\lambda=0\), then \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) is a representation of the \(3\)-Lie algebra with a derivation \((\mathfrak{A},[-,-,-],\mathrm{d})\). One can refer to [18, 34, 44] for more information about \(3\)-Lie algebras with derivations.

**Example 2.10**.: Let \((\mathfrak{M};\rho)\) be a representation of the 3-Lie algebra \((\mathfrak{A},[-,-,-])\). Then \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) is a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) if and only if \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}}+\frac{\lambda}{2}\mathrm{id}_{\mathfrak{M}})\) is a representation of the 3-Lie algebra with a derivation \((\mathfrak{A},[-,-,-],\mathrm{d}+\frac{\lambda}{2}\mathrm{id}_{\mathfrak{A}})\).

**Example 2.11**.: Let \((\mathfrak{M};\rho)\) be a representation of the 3-Lie algebra \((\mathfrak{A},[-,-,-])\). Then, for \(k\in\mathbf{k}\), \((\mathfrak{M};\rho,\mathrm{id}_{\mathfrak{M}})\) is a representation of the modified \((-2k)\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],k\mathrm{id}_{\mathfrak{A}})\).

**Example 2.12**.: Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\).
Then, for \(k\in\mathbf{k}\), \((\mathfrak{M};\rho,k\mathrm{d}_{\mathfrak{M}})\) is a representation of the modified \((k\lambda)\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],k\mathrm{d})\).

**Proposition 2.13**.: _Let \((\mathfrak{A},[-,-,-])\) be a 3-Lie algebra, and \((\mathfrak{M};\rho)\) be a representation of it. Then \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) is a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) if and only if \(\mathfrak{A}\oplus\mathfrak{M}\) is a modified \(\lambda\)-differential 3-Lie algebra under the following maps:_

\[[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho}:=[a_{1},a_{2},a_{3}]+\rho(a_{1},a_{2})u_{3}+\rho(a_{3},a_{1})u_{2}+\rho(a_{2},a_{3})u_{1},\]
\[\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{1}+u_{1}):=\mathrm{d}(a_{1})+\mathrm{d}_{\mathfrak{M}}(u_{1}),\]

_for all \(a_{1},a_{2},a_{3}\in\mathfrak{A}\) and \(u_{1},u_{2},u_{3}\in\mathfrak{M}\)._

Proof.: First, it is known that \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho})\) is a 3-Lie algebra. Next, for any \(a_{1},a_{2},a_{3}\in\mathfrak{A},u_{1},u_{2},u_{3}\in\mathfrak{M}\), by Eqs. (2.2) and (2.5), we have

\[\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}([a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho})\]
\[= \mathrm{d}[a_{1},a_{2},a_{3}]+\mathrm{d}_{\mathfrak{M}}(\rho(a_{1},a_{2})u_{3})+\mathrm{d}_{\mathfrak{M}}(\rho(a_{3},a_{1})u_{2})+\mathrm{d}_{\mathfrak{M}}(\rho(a_{2},a_{3})u_{1})\]
\[= [\mathrm{d}(a_{1}),a_{2},a_{3}]+[a_{1},\mathrm{d}(a_{2}),a_{3}]+[a_{1},a_{2},\mathrm{d}(a_{3})]+\lambda[a_{1},a_{2},a_{3}]\]
\[+\rho(\mathrm{d}(a_{1}),a_{2})u_{3}+\rho(a_{1},\mathrm{d}(a_{2}))u_{3}+\rho(a_{1},a_{2})\mathrm{d}_{\mathfrak{M}}(u_{3})+\lambda\rho(a_{1},a_{2})u_{3}\]
\[+\rho(\mathrm{d}(a_{3}),a_{1})u_{2}+\rho(a_{3},\mathrm{d}(a_{1}))u_{2}+\rho(a_{3},a_{1})\mathrm{d}_{\mathfrak{M}}(u_{2})+\lambda\rho(a_{3},a_{1})u_{2}\]
\[+\rho(\mathrm{d}(a_{2}),a_{3})u_{1}+\rho(a_{2},\mathrm{d}(a_{3}))u_{1}+\rho(a_{2},a_{3})\mathrm{d}_{\mathfrak{M}}(u_{1})+\lambda\rho(a_{2},a_{3})u_{1}\]
\[= [\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{1}+u_{1}),a_{2}+u_{2},a_{3}+u_{3}]_{\rho}+[a_{1}+u_{1},\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{2}+u_{2}),a_{3}+u_{3}]_{\rho}\]
\[+[a_{1}+u_{1},a_{2}+u_{2},\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{3}+u_{3})]_{\rho}+\lambda[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho}.\]

Hence, \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho},\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})\) is a modified \(\lambda\)-differential 3-Lie algebra.

Conversely, assume that \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho},\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})\) is a modified \(\lambda\)-differential 3-Lie algebra. For any \(a_{1},a_{2}\in\mathfrak{A}\) and \(u_{3}\in\mathfrak{M}\), we have

\[\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}[a_{1}+0,a_{2}+0,0+u_{3}]_{\rho}\]
\[= [\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{1}+0),a_{2}+0,0+u_{3}]_{\rho}+[a_{1}+0,\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(a_{2}+0),0+u_{3}]_{\rho}\]
\[+[a_{1}+0,a_{2}+0,\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}}(0+u_{3})]_{\rho}+\lambda[a_{1}+0,a_{2}+0,0+u_{3}]_{\rho},\]

which implies that

\[\mathrm{d}_{\mathfrak{M}}(\rho(a_{1},a_{2})u_{3})=\rho(\mathrm{d}(a_{1}),a_{2})u_{3}+\rho(a_{1},\mathrm{d}(a_{2}))u_{3}+\rho(a_{1},a_{2})\mathrm{d}_{\mathfrak{M}}(u_{3})+\lambda\rho(a_{1},a_{2})u_{3}.\]

Therefore, \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) is a representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\).
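Proposition 2.13 can likewise be checked numerically. In the sketch below (ours), we take \(\mathfrak{M}=\mathfrak{A}\) with \(\rho(a,b)v=[a,b,v]\) and \(\mathrm{d}_{\mathfrak{M}}=\mathrm{d}\) — the adjoint representation, which the paper records in Example 2.15 below — and verify Eq. (2.2) for the semidirect product structure on \(\mathfrak{A}\oplus\mathfrak{M}\).

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
bracket = lambda x, y, z: np.linalg.det(np.column_stack([x, y, z])) * e1
rho = lambda a, b: np.outer(e1, np.cross(a, b))  # rho(a,b)v = [a,b,v]

K = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])
lam = -(K[1, 1] + K[2, 2])

# Semidirect-product data on A (+) M with M = A and d_M = d.
def big_bracket(x, y, z):
    a1, u1 = x[:3], x[3:]
    a2, u2 = y[:3], y[3:]
    a3, u3 = z[:3], z[3:]
    alg = bracket(a1, a2, a3)
    mod = rho(a1, a2) @ u3 + rho(a3, a1) @ u2 + rho(a2, a3) @ u1
    return np.concatenate([alg, mod])

big_d = lambda x: np.concatenate([K @ x[:3], K @ x[3:]])

rng = np.random.default_rng(4)
x, y, z = rng.normal(size=(3, 6))

# Eq. (2.2) for the semidirect product.
lhs = big_d(big_bracket(x, y, z))
rhs = (big_bracket(big_d(x), y, z) + big_bracket(x, big_d(y), z)
       + big_bracket(x, y, big_d(z)) + lam * big_bracket(x, y, z))
assert np.allclose(lhs, rhs)
```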
Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\), and let \(\mathfrak{M}^{*}:=\mathrm{Hom}(\mathfrak{M},\mathbf{k})\) be the dual space of \(\mathfrak{M}\). We define a bilinear map \(\rho^{*}:\wedge^{2}\mathfrak{A}\to\mathrm{End}(\mathfrak{M}^{*})\) and a linear map \(\mathrm{d}_{\mathfrak{M}}^{*}:\mathfrak{M}^{*}\to\mathfrak{M}^{*}\), respectively, by

\[\langle\rho^{*}(a_{1},a_{2})u^{*},v\rangle=-\langle u^{*},\rho(a_{1},a_{2})v\rangle,\text{ and }\langle\mathrm{d}_{\mathfrak{M}}^{*}u^{*},v\rangle=\langle u^{*},\mathrm{d}_{\mathfrak{M}}(v)\rangle, \tag{2.6}\]

for any \(a_{1},a_{2}\in\mathfrak{A},v\in\mathfrak{M}\) and \(u^{*}\in\mathfrak{M}^{*}\).

**Proposition 2.14**.: _With the above notations, \((\mathfrak{M}^{*};\rho^{*},-\mathrm{d}_{\mathfrak{M}}^{*})\) is a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\)._

Proof.: First, it has been proved in [32] that \((\mathfrak{M}^{*};\rho^{*})\) is a representation of the 3-Lie algebra \((\mathfrak{A},[-,-,-])\). Furthermore, for any \(a_{1},a_{2}\in\mathfrak{A},v\in\mathfrak{M}\) and \(u^{*}\in\mathfrak{M}^{*}\), by Eqs. (2.5) and (2.6), we have

\[\langle\rho^{*}(\mathrm{d}(a_{1}),a_{2})u^{*},v\rangle+\langle\rho^{*}(a_{1},\mathrm{d}(a_{2}))u^{*},v\rangle+\langle\rho^{*}(a_{1},a_{2})(-\mathrm{d}_{\mathfrak{M}}^{*})u^{*},v\rangle+\langle\lambda\rho^{*}(a_{1},a_{2})u^{*},v\rangle\]
\[-\langle(-\mathrm{d}_{\mathfrak{M}}^{*})\rho^{*}(a_{1},a_{2})u^{*},v\rangle\]
\[= -\langle u^{*},\rho(\mathrm{d}(a_{1}),a_{2})v\rangle-\langle u^{*},\rho(a_{1},\mathrm{d}(a_{2}))v\rangle-\langle(-\mathrm{d}_{\mathfrak{M}}^{*})u^{*},\rho(a_{1},a_{2})v\rangle-\langle u^{*},\lambda\rho(a_{1},a_{2})v\rangle\]
\[+\langle\rho^{*}(a_{1},a_{2})u^{*},\mathrm{d}_{\mathfrak{M}}(v)\rangle\]
\[= -\langle u^{*},\rho(\mathrm{d}(a_{1}),a_{2})v\rangle-\langle u^{*},\rho(a_{1},\mathrm{d}(a_{2}))v\rangle+\langle u^{*},\mathrm{d}_{\mathfrak{M}}(\rho(a_{1},a_{2})v)\rangle-\langle u^{*},\lambda\rho(a_{1},a_{2})v\rangle\]
\[-\langle u^{*},\rho(a_{1},a_{2})\mathrm{d}_{\mathfrak{M}}(v)\rangle\]
\[= -\langle u^{*},\rho(\mathrm{d}(a_{1}),a_{2})v+\rho(a_{1},\mathrm{d}(a_{2}))v-\mathrm{d}_{\mathfrak{M}}(\rho(a_{1},a_{2})v)+\lambda\rho(a_{1},a_{2})v+\rho(a_{1},a_{2})\mathrm{d}_{\mathfrak{M}}(v)\rangle\]
\[= 0,\]

which implies that \(\rho^{*}(\mathrm{d}(a_{1}),a_{2})u^{*}+\rho^{*}(a_{1},\mathrm{d}(a_{2}))u^{*}+\rho^{*}(a_{1},a_{2})(-\mathrm{d}_{\mathfrak{M}}^{*})u^{*}+\lambda\rho^{*}(a_{1},a_{2})u^{*}-(-\mathrm{d}_{\mathfrak{M}}^{*})\rho^{*}(a_{1},a_{2})u^{*}=0\). So we get the result.

**Example 2.15**.: Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra and define \(ad:\mathfrak{A}\wedge\mathfrak{A}\to\mathrm{End}(\mathfrak{A})\) by \(ad(a_{1},a_{2})(a)=[a_{1},a_{2},a],\forall a_{1},a_{2},a\in\mathfrak{A}\). Then \((\mathfrak{A};ad,\mathrm{d})\) is a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\), which is called the adjoint representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\). Furthermore, \((\mathfrak{A}^{*};ad^{*},-\mathrm{d}^{*})\) is called the dual adjoint representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\).
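As a numerical check of Proposition 2.14 combined with Example 2.15 (a sketch of ours, continuing the running example): in matrix form, \(ad^{*}(a,b)=-ad(a,b)^{\top}\) and \(-\mathrm{d}^{*}=-K^{\top}\), and Eq. (2.5) for the dual adjoint representation becomes an identity of \(3\times 3\) matrices.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
# Adjoint action on the algebra of Example 2.7:
# ad(a, b)v = [a, b, v] = det(a | b | v) e1 = (cross(a, b) . v) e1.
ad = lambda a, b: np.outer(e1, np.cross(a, b))

K = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])
lam = -(K[1, 1] + K[2, 2])

# Dual adjoint representation (A*; ad*, -d*) in matrix form.
rho_star = lambda a, b: -ad(a, b).T
D_star = -K.T

rng = np.random.default_rng(5)
a, b = rng.normal(size=(2, 3))

# Eq. (2.5) for the dual representation, as an identity of matrices.
lhs = D_star @ rho_star(a, b)
rhs = (rho_star(K @ a, b) + rho_star(a, K @ b)
       + rho_star(a, b) @ D_star + lam * rho_star(a, b))
assert np.allclose(lhs, rhs)
```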
Next, we study the cohomology of a modified \(\lambda\)-differential 3-Lie algebra with coefficients in a representation.

Recall from [37] that if \((\mathfrak{M};\rho)\) is a representation of a 3-Lie algebra \((\mathfrak{A},[-,-,-])\), the space of \(n\)-cochains of \(\mathfrak{A}\) with coefficients in \((\mathfrak{M};\rho)\) is

\[\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M}):=\operatorname{Hom}((\wedge^{2}\mathfrak{A})^{\otimes n-1}\wedge\mathfrak{A},\mathfrak{M}),\ n\geq 1.\]

The coboundary operator \(\delta:\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\to\mathcal{C}^{n+1}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\) is defined, for \(A_{i}=a_{i}\wedge b_{i}\in\wedge^{2}\mathfrak{A},a_{n+1}\in\mathfrak{A}\) and \(f\in\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\), by

\[\delta f(A_{1},\cdots,A_{n},a_{n+1})\]
\[= (-1)^{n+1}\big{(}\rho(b_{n},a_{n+1})f(A_{1},\cdots,A_{n-1},a_{n})+\rho(a_{n+1},a_{n})f(A_{1},\cdots,A_{n-1},b_{n})\big{)}\]
\[+\sum_{i=1}^{n}(-1)^{i+1}\rho(a_{i},b_{i})f(A_{1},\cdots,A_{i-1},A_{i+1},\cdots,A_{n},a_{n+1})\]
\[+\sum_{i=1}^{n}(-1)^{i}f(A_{1},\cdots,A_{i-1},A_{i+1},\cdots,A_{n},[a_{i},b_{i},a_{n+1}])\]
\[+\sum_{1\leq i<k\leq n}(-1)^{i}f(A_{1},\cdots,A_{i-1},A_{i+1},\cdots,A_{k-1},[a_{i},b_{i},a_{k}]\wedge b_{k}+a_{k}\wedge[a_{i},b_{i},b_{k}],\cdots,A_{n},a_{n+1}).\]

It was proved in [37] that \(\delta\circ\delta=0\).

**Lemma 2.16**.: _Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\). For any \(n\geq 1\), we define a linear map \(\Phi:\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\to\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\) by_

\[\Phi f(A_{1},\cdots,A_{n-1},a_{n})= \sum_{i=1}^{n-1}f(A_{1},\cdots,A_{i-1},\mathrm{d}(a_{i})\wedge b_{i}+a_{i}\wedge\mathrm{d}(b_{i}),A_{i+1},\cdots,A_{n-1},a_{n})\]
\[+f(A_{1},\cdots,A_{n-1},\mathrm{d}(a_{n}))+(n-1)\lambda f(A_{1},\cdots,A_{n-1},a_{n})-\mathrm{d}_{\mathfrak{M}}(f(A_{1},\cdots,A_{n-1},a_{n})),\]

_for any \(f\in\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\) and \(A_{i}=a_{i}\wedge b_{i}\in\wedge^{2}\mathfrak{A},i=1,\cdots,n-1,a_{n}\in\mathfrak{A}\). Then \(\Phi\) is a cochain map, i.e., \(\Phi\circ\delta=\delta\circ\Phi\)._

Proof.: This follows by straightforward but tedious calculations.

Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\). We define the \(n\)-cochains of the modified \(\lambda\)-differential 3-Lie algebra as follows:

\[\mathcal{C}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M}):=\begin{cases}\mathcal{C}^{n}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M})\oplus\mathcal{C}^{n-1}_{\rm 3Lie}(\mathfrak{A},\mathfrak{M}),&n\geq 2,\\ \operatorname{Hom}(\mathfrak{A},\mathfrak{M}),&n=1.\end{cases}\]

We define a linear map \(\partial:\mathcal{C}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\to\mathcal{C}^{n+1}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\) by

\[\partial(f)=(\delta f,-\Phi f),\qquad\text{if }\ f\in\mathcal{C}^{1}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M});\]
\[\partial(f,g)=(\delta f,\delta g+(-1)^{n}\Phi f),\qquad\text{if }\ (f,g)\in\mathcal{C}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M}).\]

In view of Lemma 2.16, we have the following theorem.
**Theorem 2.17**.: _The linear map \(\partial\) is a coboundary operator, that is, \(\partial\circ\partial=0\)._

Therefore, we obtain a cochain complex \((\mathcal{C}^{*}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M}),\partial)\). For \(n\geq 2\), we denote the set of \(n\)-cocycles by \(\mathcal{Z}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})=\big{\{}(f,g)\in\mathcal{C}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\ |\ \partial(f,g)=0\big{\}}\), the set of \(n\)-coboundaries by \(\mathcal{B}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})=\{\partial(f,g)\ |\ (f,g)\in\mathcal{C}^{n-1}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\}\), and the \(n\)-th cohomology group of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in the representation \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) by \(\mathcal{H}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})=\mathcal{Z}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})/\mathcal{B}^{n}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\).

Lastly, we write out the 1-cocycle and 2-cocycle conditions. For \(f\in\mathcal{C}^{1}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\), \(f\) is a 1-cocycle if \(\partial(f)=(\delta f,-\Phi f)=0\), i.e.,

\[\rho(b_{1},a_{2})f(a_{1})+\rho(a_{2},a_{1})f(b_{1})+\rho(a_{1},b_{1})f(a_{2})-f([a_{1},b_{1},a_{2}])=0\]

and

\[\mathrm{d}_{\mathfrak{M}}(f(a_{1}))-f(\mathrm{d}(a_{1}))=0.\]

For \((f,g)\in\mathcal{C}^{2}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\), \((f,g)\) is a 2-cocycle if \(\partial(f,g)=(\delta f,\delta g+\Phi f)=0\), i.e.,

\[-\rho(b_{2},a_{3})f(a_{1},b_{1},a_{2})-\rho(a_{3},a_{2})f(a_{1},b_{1},b_{2})+\rho(a_{1},b_{1})f(a_{2},b_{2},a_{3})-\rho(a_{2},b_{2})f(a_{1},b_{1},a_{3})\]
\[-f(a_{2},b_{2},[a_{1},b_{1},a_{3}])+f(a_{1},b_{1},[a_{2},b_{2},a_{3}])-f([a_{1},b_{1},a_{2}],b_{2},a_{3})-f(a_{2},[a_{1},b_{1},b_{2}],a_{3})=0\]

and

\[\rho(b_{1},a_{2})g(a_{1})+\rho(a_{2},a_{1})g(b_{1})+\rho(a_{1},b_{1})g(a_{2})-g([a_{1},b_{1},a_{2}])\]
\[+f(\mathrm{d}(a_{1}),b_{1},a_{2})+f(a_{1},\mathrm{d}(b_{1}),a_{2})+f(a_{1},b_{1},\mathrm{d}(a_{2}))+\lambda f(a_{1},b_{1},a_{2})-\mathrm{d}_{\mathfrak{M}}(f(a_{1},b_{1},a_{2}))=0.\]

## 3 Deformations of modified \(\lambda\)-differential 3-Lie algebras

In this section, we consider linear deformations of modified \(\lambda\)-differential 3-Lie algebras.

Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra. Denote \(\nu_{0}=[-,-,-]\) and \(\mathrm{d}_{0}=\mathrm{d}\). Consider a family of linear maps:

\[\nu_{t}=\nu_{0}+t\nu_{1}+t^{2}\nu_{2},\ \nu_{1},\nu_{2}\in\mathcal{C}^{2}_{\rm 3Lie}(\mathfrak{A},\mathfrak{A}),\quad\mathrm{d}_{t}=\mathrm{d}_{0}+t\mathrm{d}_{1},\ \mathrm{d}_{1}\in\mathcal{C}^{1}_{\rm 3Lie}(\mathfrak{A},\mathfrak{A}).\]

**Definition 3.1**.: A linear deformation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) is a pair \((\nu_{t},\mathrm{d}_{t})\) that endows \(\mathfrak{A}[[t]]\) with the structure of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A}[[t]],\nu_{t},\mathrm{d}_{t})\).
**Proposition 3.2**.: _The pair \((\nu_{t},{\rm d}_{t})\) generates a linear deformation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],{\rm d})\) if and only if the following equations hold:_ \[\sum_{i+j=n}\nu_{i}(a_{1},a_{2},\nu_{j}(a_{3},a_{4},a_{5}))= \sum_{i+j=n}\nu_{i}(\nu_{j}(a_{1},a_{2},a_{3}),a_{4},a_{5})+\sum_{i +j=n}\nu_{i}(a_{3},\nu_{j}(a_{1},a_{2},a_{4}),a_{5})\] \[+\sum_{i+j=n}\nu_{i}(a_{3},a_{4},\nu_{j}(a_{1},a_{2},a_{5})), \tag{3.1}\] \[\sum_{i+l=n}{\rm d}_{l}(\nu_{i}(a_{1},a_{2},a_{3}))= \sum_{i+l=n}\nu_{i}({\rm d}_{l}(a_{1}),a_{2},a_{3})+\sum_{i+l=n} \nu_{i}(a_{1},{\rm d}_{l}(a_{2}),a_{3})\] \[+\sum_{i+l=n}\nu_{i}(a_{1},a_{2},{\rm d}_{l}(a_{3}))+\lambda\nu_ {n}(a_{1},a_{2},a_{3}), \tag{3.2}\] _for any \(a_{1},a_{2},a_{3},a_{4},a_{5}\in\mathfrak{A}\) and \(i,j=0,1,2,l=0,1\)._ Proof.: \((\mathfrak{A}[[t]],\nu_{t},{\rm d}_{t})\) is a modified \(\lambda\)-differential 3-Lie algebra if and only if \[\nu_{t}(a_{1},a_{2},\nu_{t}(a_{3},a_{4},a_{5}))\] \[=\nu_{t}(\nu_{t}(a_{1},a_{2},a_{3}),a_{4},a_{5})+\nu_{t}(a_{3}, \nu_{t}(a_{1},a_{2},a_{4}),a_{5})+\nu_{t}(a_{3},a_{4},\nu_{t}(a_{1},a_{2},a_{5 })), \tag{3.3}\] \[{\rm d}_{t}(\nu_{t}(a_{1},a_{2},a_{3}))\] \[=\nu_{t}({\rm d}_{t}(a_{1}),a_{2},a_{3})+\nu_{t}(a_{1},{\rm d}_{ t}(a_{2}),a_{3})+\nu_{t}(a_{1},a_{2},{\rm d}_{t}(a_{3}))+\lambda\nu_{t}(a_{1},a_{2 },a_{3}). \tag{3.4}\] Comparing the coefficients of \(t^{n}\) on both sides of the above equations, Eqs. (3.3) and (3.4) are equivalent to Eqs. (3.1) and (3.2) respectively. **Corollary 3.3**.: _Let \((\mathfrak{A}[[t]],\nu_{t},{\rm d}_{t})\) be a linear deformation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},\nu_{0},{\rm d})\). Then \((\nu_{1},{\rm d}_{1})\) is a 2-cocycle of \((\mathfrak{A},[-,-,-],{\rm d})\) with the coefficient in the adjoint representation \((\mathfrak{A};ad,{\rm d})\)._ Proof.: For \(n=1\), Eqs. (3.1) and (3.2) are equivalent to \[\nu_{1}(a_{1},a_{2},[a_{3},a_{4},a_{5}])+[a_{1},a_{2},\nu_{1}(a_{ 3},a_{4},a_{5})]\] \[= \nu_{1}([a_{1},a_{2},a_{3}],a_{4},a_{5})+[\nu_{1}(a_{1},a_{2},a_ {3}),a_{4},a_{5}]+\nu_{1}(a_{3},[a_{1},a_{2},a_{4}],a_{5})+[a_{3},\nu_{1}(a_{1 },a_{2},a_{4}),a_{5}]\] \[+\nu_{1}(a_{3},a_{4},[a_{1},a_{2},a_{5}])+[a_{3},a_{4},\nu_{1}(a_ {1},a_{2},a_{5})],\] \[{\rm d}_{1}([a_{1},a_{2},a_{3}])+{\rm d}(\nu_{1}(a_{1},a_{2},a_{ 3}))\] \[= [{\rm d}_{1}(a_{1}),a_{2},a_{3}]+\nu_{1}({\rm d}(a_{1}),a_{2},a_{ 3})+[a_{1},{\rm d}_{1}(a_{2}),a_{3}]+\nu_{1}(a_{1},{\rm d}(a_{2}),a_{3})+[a_{ 1},a_{2},{\rm d}_{1}(a_{3})]\] \[+\nu_{1}(a_{1},a_{2},{\rm d}(a_{3}))+\lambda\nu_{1}(a_{1},a_{2}, a_{3}),\] which imply that \(\delta\nu_{1}=0,\delta{\rm d}_{1}+\Phi\nu_{1}=0\) respectively. Hence, \((\nu_{1},{\rm d}_{1})\) is a 2-cocycle of \((\mathfrak{A},[-,-,-],{\rm d})\) with the coefficient in the adjoint representation \((\mathfrak{A};ad,{\rm d})\). **Definition 3.4**.: The 2-cocycle \((\nu_{1},{\rm d}_{1})\) is called the infinitesimal of the linear deformation \((\mathfrak{A}[[t]],\nu_{t},{\rm d}_{t})\) of \((\mathfrak{A},[-,-,-],{\rm d})\). 
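By Corollary 3.3, the infinitesimal of a linear deformation is a 2-cocycle, and Theorem 3.6 below shows that equivalent deformations change it by a coboundary \(\partial(\iota)=(\delta\iota,-\Phi\iota)\). The following sketch (ours) numerically confirms, on the running example with the adjoint representation, that such a coboundary satisfies both 2-cocycle equations displayed at the end of Section 2 — a low-degree instance of \(\partial\circ\partial=0\) (Theorem 2.17).

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
bracket = lambda x, y, z: np.linalg.det(np.column_stack([x, y, z])) * e1
rho = lambda a, b: np.outer(e1, np.cross(a, b))  # adjoint rep: rho(a,b)v = [a,b,v]

K = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 6.0, 7.0]])
lam = -(K[1, 1] + K[2, 2])
d = lambda x: K @ x

rng = np.random.default_rng(2)
iota = rng.normal(size=(3, 3))  # an arbitrary 1-cochain

# (f, g) = partial(iota) = (delta(iota), -Phi(iota)).
def f(x, y, z):
    return (rho(y, z) @ (iota @ x) + rho(z, x) @ (iota @ y)
            + rho(x, y) @ (iota @ z) - iota @ bracket(x, y, z))

g = lambda x: K @ (iota @ x) - iota @ (K @ x)  # -Phi(iota), since d_M = d

a1, b1, a2, b2, a3 = rng.normal(size=(5, 3))

# First 2-cocycle equation (delta f = 0).
eq1 = (-rho(b2, a3) @ f(a1, b1, a2) - rho(a3, a2) @ f(a1, b1, b2)
       + rho(a1, b1) @ f(a2, b2, a3) - rho(a2, b2) @ f(a1, b1, a3)
       - f(a2, b2, bracket(a1, b1, a3)) + f(a1, b1, bracket(a2, b2, a3))
       - f(bracket(a1, b1, a2), b2, a3) - f(a2, bracket(a1, b1, b2), a3))

# Second 2-cocycle equation (delta g + Phi f = 0).
eq2 = (rho(b1, a2) @ g(a1) + rho(a2, a1) @ g(b1) + rho(a1, b1) @ g(a2)
       - g(bracket(a1, b1, a2))
       + f(d(a1), b1, a2) + f(a1, d(b1), a2) + f(a1, b1, d(a2))
       + lam * f(a1, b1, a2) - K @ f(a1, b1, a2))

assert np.allclose(eq1, 0) and np.allclose(eq2, 0)
```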
**Definition 3.5**.: (i) Two linear deformations \((\mathfrak{A}[[t]],\nu_{t},\mathrm{d}_{t})\) and \((\mathfrak{A}[[t]],\nu^{\prime}_{t},\mathrm{d}^{\prime}_{t})\) of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) are said to be equivalent if there exists a linear map \(N:\mathfrak{A}\to\mathfrak{A}\) such that \(N_{t}=\mathrm{id}_{\mathfrak{A}}+tN\) satisfies

\[N_{t}(\mathrm{d}_{t}(a_{1}))=\mathrm{d}^{\prime}_{t}(N_{t}(a_{1})), \tag{3.5}\]
\[N_{t}\nu_{t}(a_{1},a_{2},a_{3})=\nu^{\prime}_{t}(N_{t}(a_{1}),N_{t}(a_{2}),N_{t}(a_{3})), \tag{3.6}\]

for any \(a_{1},a_{2},a_{3}\in\mathfrak{A}\).

(ii) A linear deformation \((\mathfrak{A}[[t]],\nu_{t},\mathrm{d}_{t})\) of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) is said to be trivial if \((\mathfrak{A}[[t]],\nu_{t},\mathrm{d}_{t})\) is equivalent to \((\mathfrak{A},[-,-,-],\mathrm{d})\).

Comparing the coefficients of \(t\) on both sides of Eqs. (3.5) and (3.6), we have

\[\nu_{1}(a_{1},a_{2},a_{3})-\nu^{\prime}_{1}(a_{1},a_{2},a_{3})=[Na_{1},a_{2},a_{3}]+[a_{1},Na_{2},a_{3}]+[a_{1},a_{2},Na_{3}]-N[a_{1},a_{2},a_{3}],\]
\[\mathrm{d}_{1}(a)-\mathrm{d}^{\prime}_{1}(a)=\mathrm{d}(Na)-N\mathrm{d}(a).\]

Thus, we have the following theorem.

**Theorem 3.6**.: _The infinitesimals of two equivalent linear deformations of \((\mathfrak{A},[-,-,-],\mathrm{d})\) are in the same cohomology class in \(\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{A})\)._

Let \((\mathfrak{A}[[t]],\nu_{t},\mathrm{d}_{t})\) be a trivial deformation of \((\mathfrak{A},[-,-,-],\mathrm{d})\). Then there exists a linear map \(N:\mathfrak{A}\to\mathfrak{A}\) such that \(N_{t}=\mathrm{id}_{\mathfrak{A}}+tN\) satisfies

\[N_{t}(\mathrm{d}(a_{1}))=\mathrm{d}(N_{t}(a_{1})), \tag{3.7}\]
\[N_{t}\nu_{t}(a_{1},a_{2},a_{3})=[N_{t}(a_{1}),N_{t}(a_{2}),N_{t}(a_{3})]. \tag{3.8}\]

Comparing the coefficients of \(t^{i}\ (1\leq i\leq 3)\) on both sides of Eqs. (3.7) and (3.8), we get

\[N\mathrm{d}(a_{1})=\mathrm{d}(Na_{1}), \tag{3.9}\]
\[\nu_{1}(a_{1},a_{2},a_{3})+N[a_{1},a_{2},a_{3}]=[Na_{1},a_{2},a_{3}]+[a_{1},Na_{2},a_{3}]+[a_{1},a_{2},Na_{3}], \tag{3.10}\]
\[\nu_{2}(a_{1},a_{2},a_{3})+N\nu_{1}(a_{1},a_{2},a_{3})=[Na_{1},Na_{2},a_{3}]+[a_{1},Na_{2},Na_{3}]+[Na_{1},a_{2},Na_{3}], \tag{3.11}\]
\[N\nu_{2}(a_{1},a_{2},a_{3})=[Na_{1},Na_{2},Na_{3}]. \tag{3.12}\]

Thus, from a trivial deformation, we obtain the following definition of a Nijenhuis operator.

**Definition 3.7**.: Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra. A linear map \(N:\mathfrak{A}\to\mathfrak{A}\) is called a Nijenhuis operator if the following equations hold:

\[N\circ\mathrm{d}=\mathrm{d}\circ N, \tag{3.13}\]
\[[Na_{1},Na_{2},Na_{3}]= N([a_{1},Na_{2},Na_{3}]+[Na_{1},a_{2},Na_{3}]+[Na_{1},Na_{2},a_{3}])\]
\[-N^{2}([Na_{1},a_{2},a_{3}]+[a_{1},Na_{2},a_{3}]+[a_{1},a_{2},Na_{3}])+N^{3}[a_{1},a_{2},a_{3}],\]

for any \(a_{1},a_{2},a_{3}\in\mathfrak{A}\).

**Proposition 3.8**.: _Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra, and \(N:\mathfrak{A}\to\mathfrak{A}\) a Nijenhuis operator.
Then \((\mathfrak{A},[-,-,-]_{N},\mathrm{d})\) is a modified \(\lambda\)-differential 3-Lie algebra, where_

\[[a_{1},a_{2},a_{3}]_{N}= [a_{1},Na_{2},Na_{3}]+[Na_{1},a_{2},Na_{3}]+[Na_{1},Na_{2},a_{3}]\]
\[-N([Na_{1},a_{2},a_{3}]+[a_{1},Na_{2},a_{3}]+[a_{1},a_{2},Na_{3}])+N^{2}[a_{1},a_{2},a_{3}].\]

Proof.: In the light of [23], \((\mathfrak{A},[-,-,-]_{N})\) is a 3-Lie algebra. Next we prove that \(\mathrm{d}\) is a modified \(\lambda\)-differential operator of \((\mathfrak{A},[-,-,-]_{N})\). For any \(a_{1},a_{2},a_{3}\in\mathfrak{A}\), by Eqs. (2.2) and (3.13), we have

\[\mathrm{d}[a_{1},a_{2},a_{3}]_{N}\]
\[= \mathrm{d}[a_{1},Na_{2},Na_{3}]+\mathrm{d}[Na_{1},a_{2},Na_{3}]+\mathrm{d}[Na_{1},Na_{2},a_{3}]\]
\[-N(\mathrm{d}[Na_{1},a_{2},a_{3}]+\mathrm{d}[a_{1},Na_{2},a_{3}]+\mathrm{d}[a_{1},a_{2},Na_{3}])+N^{2}\mathrm{d}[a_{1},a_{2},a_{3}]\]
\[= [\mathrm{d}(a_{1}),Na_{2},Na_{3}]+[a_{1},N\mathrm{d}(a_{2}),Na_{3}]+[a_{1},Na_{2},N\mathrm{d}(a_{3})]+\lambda[a_{1},Na_{2},Na_{3}]\]
\[+[N\mathrm{d}(a_{1}),a_{2},Na_{3}]+[Na_{1},\mathrm{d}(a_{2}),Na_{3}]+[Na_{1},a_{2},N\mathrm{d}(a_{3})]+\lambda[Na_{1},a_{2},Na_{3}]\]
\[+[N\mathrm{d}(a_{1}),Na_{2},a_{3}]+[Na_{1},N\mathrm{d}(a_{2}),a_{3}]+[Na_{1},Na_{2},\mathrm{d}(a_{3})]+\lambda[Na_{1},Na_{2},a_{3}]\]
\[-N([N\mathrm{d}(a_{1}),a_{2},a_{3}]+[Na_{1},\mathrm{d}(a_{2}),a_{3}]+[Na_{1},a_{2},\mathrm{d}(a_{3})]+\lambda[Na_{1},a_{2},a_{3}])\]
\[-N([\mathrm{d}(a_{1}),Na_{2},a_{3}]+[a_{1},N\mathrm{d}(a_{2}),a_{3}]+[a_{1},Na_{2},\mathrm{d}(a_{3})]+\lambda[a_{1},Na_{2},a_{3}])\]
\[-N([\mathrm{d}(a_{1}),a_{2},Na_{3}]+[a_{1},\mathrm{d}(a_{2}),Na_{3}]+[a_{1},a_{2},N\mathrm{d}(a_{3})]+\lambda[a_{1},a_{2},Na_{3}])\]
\[+N^{2}([\mathrm{d}(a_{1}),a_{2},a_{3}]+[a_{1},\mathrm{d}(a_{2}),a_{3}]+[a_{1},a_{2},\mathrm{d}(a_{3})]+\lambda[a_{1},a_{2},a_{3}])\]
\[= [\mathrm{d}(a_{1}),a_{2},a_{3}]_{N}+[a_{1},\mathrm{d}(a_{2}),a_{3}]_{N}+[a_{1},a_{2},\mathrm{d}(a_{3})]_{N}+\lambda[a_{1},a_{2},a_{3}]_{N}.\]

So we get the conclusion.

**Definition 3.9**.: A linear map \(R:\mathfrak{M}\to\mathfrak{A}\) is called an \(\mathcal{O}\)-operator on the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with respect to the representation \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) if the following equations hold:

\[R\circ\mathrm{d}_{\mathfrak{M}}=\mathrm{d}\circ R,\]
\[[Rv_{1},Rv_{2},Rv_{3}]=R(\rho(Rv_{1},Rv_{2})v_{3}+\rho(Rv_{2},Rv_{3})v_{1}+\rho(Rv_{3},Rv_{1})v_{2}),\]

for any \(v_{1},v_{2},v_{3}\in\mathfrak{M}\).

**Remark 3.10**.: Obviously, an invertible linear map \(R:\mathfrak{M}\to\mathfrak{A}\) is an \(\mathcal{O}\)-operator if and only if \(R^{-1}\) is a 1-cocycle of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in the representation \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\).

**Proposition 3.11**.: _Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\).
Then \(R:\mathfrak{M}\to\mathfrak{A}\) is an \(\mathcal{O}\)-operator if and only if \(\overline{R}=\left(\begin{array}{cc}0&R\\ 0&0\end{array}\right):\mathfrak{A}\oplus\mathfrak{M}\to\mathfrak{A}\oplus\mathfrak{M}\) is a Nijenhuis operator on the semidirect product modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho},\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})\)._

Proof.: For any \(a_{1},a_{2},a_{3}\in\mathfrak{A}\) and \(u_{1},u_{2},u_{3}\in\mathfrak{M}\), by \(\overline{R}^{2}=0\), we have

\[(\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})\overline{R}(a_{1}+u_{1})=(\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})(Ru_{1}+0)=\mathrm{d}(Ru_{1})+0,\]
\[\overline{R}(\mathrm{d}\oplus\mathrm{d}_{\mathfrak{M}})(a_{1}+u_{1})=\overline{R}(\mathrm{d}(a_{1})+\mathrm{d}_{\mathfrak{M}}(u_{1}))=R\mathrm{d}_{\mathfrak{M}}(u_{1})+0,\]
\[\overline{R}([a_{1}+u_{1},\overline{R}(a_{2}+u_{2}),\overline{R}(a_{3}+u_{3})]_{\rho}+[\overline{R}(a_{1}+u_{1}),a_{2}+u_{2},\overline{R}(a_{3}+u_{3})]_{\rho}\]
\[+[\overline{R}(a_{1}+u_{1}),\overline{R}(a_{2}+u_{2}),a_{3}+u_{3}]_{\rho})-[\overline{R}(a_{1}+u_{1}),\overline{R}(a_{2}+u_{2}),\overline{R}(a_{3}+u_{3})]_{\rho}\]
\[= \overline{R}([a_{1}+u_{1},Ru_{2}+0,Ru_{3}+0]_{\rho}+[Ru_{1}+0,a_{2}+u_{2},Ru_{3}+0]_{\rho}+[Ru_{1}+0,Ru_{2}+0,a_{3}+u_{3}]_{\rho})\]
\[-[Ru_{1}+0,Ru_{2}+0,Ru_{3}+0]_{\rho}\]
\[= \overline{R}([a_{1},Ru_{2},Ru_{3}]+\rho(Ru_{2},Ru_{3})u_{1}+[Ru_{1},a_{2},Ru_{3}]+\rho(Ru_{3},Ru_{1})u_{2}+[Ru_{1},Ru_{2},a_{3}]+\rho(Ru_{1},Ru_{2})u_{3})\]
\[-[Ru_{1},Ru_{2},Ru_{3}]+0\]
\[= R(\rho(Ru_{2},Ru_{3})u_{1}+\rho(Ru_{3},Ru_{1})u_{2}+\rho(Ru_{1},Ru_{2})u_{3})-[Ru_{1},Ru_{2},Ru_{3}]+0,\]

which implies that \(R\) is an \(\mathcal{O}\)-operator if and only if \(\overline{R}\) is a Nijenhuis operator.

## 4 Abelian extensions of modified \(\lambda\)-differential 3-Lie algebras

In this section, we study abelian extensions of modified \(\lambda\)-differential 3-Lie algebras and show that they are classified by the second cohomology group.

Notice that a vector space \(\mathfrak{M}\) together with a linear map \(\mathrm{d}_{\mathfrak{M}}\) is naturally a modified \(\lambda\)-differential 3-Lie algebra where the bracket on \(\mathfrak{M}\) is defined to be \([-,-,-]_{\mathfrak{M}}=0\).

**Definition 4.1**.: An abelian extension of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) is a short exact sequence of homomorphisms of modified \(\lambda\)-differential 3-Lie algebras

\[0\to(\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\overset{i}{\longrightarrow}(\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\overset{p}{\longrightarrow}(\mathfrak{A},[-,-,-],\mathrm{d})\to 0,\]

i.e., there exists a commutative diagram:

\[\begin{CD}0@>{}>{}>\mathfrak{M}@>{i}>{}>\widehat{\mathfrak{A}}@>{p}>{}>\mathfrak{A}@>{}>{}>0\\ @V{\mathrm{d}_{\mathfrak{M}}}V{}V@V{\widehat{\mathrm{d}}}V{}V@V{\mathrm{d}}V{}V\\ 0@>{}>{}>\mathfrak{M}@>{i}>{}>\widehat{\mathfrak{A}}@>{p}>{}>\mathfrak{A}@>{}>{}>0,\end{CD}\]

where the modified \(\lambda\)-differential 3-Lie algebra \((\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\) satisfies \([-,u,v]_{\widehat{\mathfrak{A}}}=0\), for all \(u,v\in\mathfrak{M}\).

We will call \((\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\) an abelian extension of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\).
A section of an abelian extension \((\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\) of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) is a linear map \(s:\mathfrak{A}\to\widehat{\mathfrak{A}}\) such that \(p\circ s=\mathrm{id}_{\mathfrak{A}}\).

Let \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\) be a representation of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\). Assume that \((f,g)\in\mathcal{C}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\). Define \([-,-,-]_{\rho f}:\wedge^{3}(\mathfrak{A}\oplus\mathfrak{M})\to\mathfrak{A}\oplus\mathfrak{M}\) and \(\mathrm{d}_{g}:\mathfrak{A}\oplus\mathfrak{M}\to\mathfrak{A}\oplus\mathfrak{M}\) respectively by

\[[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho f}=[a_{1},a_{2},a_{3}]+\rho(a_{2},a_{3})u_{1}+\rho(a_{3},a_{1})u_{2}+\rho(a_{1},a_{2})u_{3}+f(a_{1},a_{2},a_{3}), \tag{4.1}\]
\[\mathrm{d}_{g}(a_{1}+u_{1})=\mathrm{d}(a_{1})+\mathrm{d}_{\mathfrak{M}}(u_{1})+g(a_{1}),\ \forall a_{1},a_{2},a_{3}\in\mathfrak{A},\ u_{1},u_{2},u_{3}\in\mathfrak{M}. \tag{4.2}\]

**Proposition 4.2**.: _The triple \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f},\mathrm{d}_{g})\) is a modified \(\lambda\)-differential 3-Lie algebra if and only if \((f,g)\) is a 2-cocycle in the cohomology of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in \((\mathfrak{M};\rho,\mathrm{d}_{\mathfrak{M}})\). In this case,_

\[0\to(\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\hookrightarrow(\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f},\mathrm{d}_{g})\stackrel{{p}}{{\longrightarrow}}(\mathfrak{A},[-,-,-],\mathrm{d})\to 0\]

_is an abelian extension._

Proof.: \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f},\mathrm{d}_{g})\) is a modified \(\lambda\)-differential 3-Lie algebra if and only if

\[[[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho f},a_{4}+u_{4},a_{5}+u_{5}]_{\rho f}+[a_{3}+u_{3},[a_{1}+u_{1},a_{2}+u_{2},a_{4}+u_{4}]_{\rho f},a_{5}+u_{5}]_{\rho f}\]
\[+[a_{3}+u_{3},a_{4}+u_{4},[a_{1}+u_{1},a_{2}+u_{2},a_{5}+u_{5}]_{\rho f}]_{\rho f}-[a_{1}+u_{1},a_{2}+u_{2},[a_{3}+u_{3},a_{4}+u_{4},a_{5}+u_{5}]_{\rho f}]_{\rho f}=0, \tag{4.3}\]
\[[\mathrm{d}_{g}(a_{1}+u_{1}),a_{2}+u_{2},a_{3}+u_{3}]_{\rho f}+[a_{1}+u_{1},\mathrm{d}_{g}(a_{2}+u_{2}),a_{3}+u_{3}]_{\rho f}+[a_{1}+u_{1},a_{2}+u_{2},\mathrm{d}_{g}(a_{3}+u_{3})]_{\rho f}\]
\[+\lambda[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho f}-\mathrm{d}_{g}[a_{1}+u_{1},a_{2}+u_{2},a_{3}+u_{3}]_{\rho f}=0, \tag{4.4}\]

for any \(a_{1},a_{2},a_{3},a_{4},a_{5}\in\mathfrak{A},\ u_{1},u_{2},u_{3},u_{4},u_{5}\in\mathfrak{M}\). Furthermore, Eqs. (4.3) and (4.4) are equivalent to

\[\rho(a_{4},a_{5})f(a_{1},a_{2},a_{3})+f([a_{1},a_{2},a_{3}],a_{4},a_{5})+\rho(a_{5},a_{3})f(a_{1},a_{2},a_{4})+f(a_{3},[a_{1},a_{2},a_{4}],a_{5})\]
\[+\rho(a_{3},a_{4})f(a_{1},a_{2},a_{5})+f(a_{3},a_{4},[a_{1},a_{2},a_{5}])-\rho(a_{1},a_{2})f(a_{3},a_{4},a_{5})-f(a_{1},a_{2},[a_{3},a_{4},a_{5}])=0 \tag{4.5}\]

and

\[\rho(a_{2},a_{3})g(a_{1})+f(\mathrm{d}(a_{1}),a_{2},a_{3})+\rho(a_{3},a_{1})g(a_{2})+f(a_{1},\mathrm{d}(a_{2}),a_{3})+\rho(a_{1},a_{2})g(a_{3})\]
\[+f(a_{1},a_{2},\mathrm{d}(a_{3}))+\lambda f(a_{1},a_{2},a_{3})-\mathrm{d}_{\mathfrak{M}}(f(a_{1},a_{2},a_{3}))-g([a_{1},a_{2},a_{3}])=0, \tag{4.6}\]

which say precisely that \(\delta f=0\) and \(\delta g+\Phi f=0\), respectively.
Therefore, \(\partial(f,g)=(\delta f,\delta g+\Phi f)=0\), which implies that \((f,g)\) is a 2-cocycle. Conversely, if \((f,g)\) satisfies Eqs. (4.5) and (4.6), then \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f},\mathrm{d}_{g})\) is a modified \(\lambda\)-differential 3-Lie algebra.

Let \((\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\) be an abelian extension of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) and \(s:\mathfrak{A}\to\widehat{\mathfrak{A}}\) a section. Define \(\varrho:\wedge^{2}\mathfrak{A}\to\mathrm{End}(\mathfrak{M})\), \(\upsilon:\wedge^{3}\mathfrak{A}\to\mathfrak{M}\) and \(\mu:\mathfrak{A}\to\mathfrak{M}\) respectively by

\[\varrho(a_{1},a_{2})u:=[s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}},\]
\[\upsilon(a_{1},a_{2},a_{3}):=[s(a_{1}),s(a_{2}),s(a_{3})]_{\widehat{\mathfrak{A}}}-s([a_{1},a_{2},a_{3}]),\]
\[\mu(a_{1}):=\widehat{\mathrm{d}}(s(a_{1}))-s(\mathrm{d}(a_{1})),\quad\forall a_{1},a_{2},a_{3}\in\mathfrak{A},u\in\mathfrak{M}.\]

Note that \(\varrho\) is independent of the choice of \(s\).

**Proposition 4.3**.: _With the above notations, \((\mathfrak{M},\varrho,\mathrm{d}_{\mathfrak{M}})\) is a representation of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\), and \((\upsilon,\mu)\) is a 2-cocycle in the cohomology of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in \((\mathfrak{M};\varrho,\mathrm{d}_{\mathfrak{M}})\). Furthermore, the cohomology class of the 2-cocycle \([(\upsilon,\mu)]\in\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\) is independent of the choice of sections of \(p\)._

Proof.: First, for any \(a_{1},a_{2},a_{3},a_{4}\in\mathfrak{A},u\in\mathfrak{M}\), by Eq. (2.1), we get
\[\varrho(a_{2},a_{3})\varrho(a_{1},a_{4})u+\varrho(a_{3},a_{1})\varrho(a_{2},a_{4})u+\varrho(a_{1},a_{2})\varrho(a_{3},a_{4})u-\varrho([a_{1},a_{2},a_{3}],a_{4})u\]
\[= [s(a_{2}),s(a_{3}),[s(a_{1}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}+[s(a_{3}),s(a_{1}),[s(a_{2}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}\]
\[+[s(a_{1}),s(a_{2}),[s(a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}-[s([a_{1},a_{2},a_{3}]),s(a_{4}),u]_{\widehat{\mathfrak{A}}}\]
\[= [s(a_{2}),s(a_{3}),[s(a_{1}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}+[s(a_{3}),s(a_{1}),[s(a_{2}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}\]
\[+[s(a_{1}),s(a_{2}),[s(a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}-[[s(a_{1}),s(a_{2}),s(a_{3})]_{\widehat{\mathfrak{A}}}-\upsilon(a_{1},a_{2},a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}\]
\[= 0,\]
\[\varrho(a_{3},a_{4})\varrho(a_{1},a_{2})u+\varrho([a_{1},a_{2},a_{3}],a_{4})u+\varrho(a_{3},[a_{1},a_{2},a_{4}])u-\varrho(a_{1},a_{2})\varrho(a_{3},a_{4})u\]
\[= [s(a_{3}),s(a_{4}),[s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}+[s([a_{1},a_{2},a_{3}]),s(a_{4}),u]_{\widehat{\mathfrak{A}}}+[s(a_{3}),s([a_{1},a_{2},a_{4}]),u]_{\widehat{\mathfrak{A}}}\]
\[-[s(a_{1}),s(a_{2}),[s(a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}\]
\[= [s(a_{3}),s(a_{4}),[s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}+[[s(a_{1}),s(a_{2}),s(a_{3})]_{\widehat{\mathfrak{A}}}-\upsilon(a_{1},a_{2},a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}\]
\[+[s(a_{3}),[s(a_{1}),s(a_{2}),s(a_{4})]_{\widehat{\mathfrak{A}}}-\upsilon(a_{1},a_{2},a_{4}),u]_{\widehat{\mathfrak{A}}}-[s(a_{1}),s(a_{2}),[s(a_{3}),s(a_{4}),u]_{\widehat{\mathfrak{A}}}]_{\widehat{\mathfrak{A}}}\]
\[= 0.\]

In addition, by Eq. (2.2), we have

\[\mathrm{d}_{\mathfrak{M}}(\varrho(a_{1},a_{2})u)=\mathrm{d}_{\mathfrak{M}}([s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}})\]
\[= [\widehat{\mathrm{d}}(s(a_{1})),s(a_{2}),u]_{\widehat{\mathfrak{A}}}+[s(a_{1}),\widehat{\mathrm{d}}(s(a_{2})),u]_{\widehat{\mathfrak{A}}}+[s(a_{1}),s(a_{2}),\mathrm{d}_{\mathfrak{M}}(u)]_{\widehat{\mathfrak{A}}}+\lambda[s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}}\]
\[= [s(\mathrm{d}(a_{1}))+\mu(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}}+[s(a_{1}),s(\mathrm{d}(a_{2}))+\mu(a_{2}),u]_{\widehat{\mathfrak{A}}}+[s(a_{1}),s(a_{2}),\mathrm{d}_{\mathfrak{M}}(u)]_{\widehat{\mathfrak{A}}}\]
\[+\lambda[s(a_{1}),s(a_{2}),u]_{\widehat{\mathfrak{A}}}\]
\[= \varrho(\mathrm{d}(a_{1}),a_{2})u+\varrho(a_{1},\mathrm{d}(a_{2}))u+\varrho(a_{1},a_{2})\mathrm{d}_{\mathfrak{M}}(u)+\lambda\varrho(a_{1},a_{2})u.\]

Hence, \((\mathfrak{M},\varrho,\mathrm{d}_{\mathfrak{M}})\) is a representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\).

Since \((\widehat{\mathfrak{A}},[-,-,-]_{\widehat{\mathfrak{A}}},\widehat{\mathrm{d}})\) is an abelian extension of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\), by Proposition 4.2, \((\upsilon,\mu)\) is a 2-cocycle.

Moreover, let \(s_{1},s_{2}:\mathfrak{A}\to\widehat{\mathfrak{A}}\) be two distinct sections providing 2-cocycles \((\upsilon_{1},\mu_{1})\) and \((\upsilon_{2},\mu_{2})\) respectively. Define the linear map \(\iota:\mathfrak{A}\to\mathfrak{M}\) by \(\iota(a_{1})=s_{1}(a_{1})-s_{2}(a_{1})\).
Then

\[\upsilon_{1}(a_{1},a_{2},a_{3})\]
\[= [s_{1}(a_{1}),s_{1}(a_{2}),s_{1}(a_{3})]_{\widehat{\mathfrak{A}}}-s_{1}([a_{1},a_{2},a_{3}])\]
\[= [s_{2}(a_{1})+\iota(a_{1}),s_{2}(a_{2})+\iota(a_{2}),s_{2}(a_{3})+\iota(a_{3})]_{\widehat{\mathfrak{A}}}-(s_{2}([a_{1},a_{2},a_{3}])+\iota([a_{1},a_{2},a_{3}]))\]
\[= [s_{2}(a_{1}),s_{2}(a_{2}),s_{2}(a_{3})]_{\widehat{\mathfrak{A}}}+\varrho(a_{2},a_{3})\iota(a_{1})+\varrho(a_{3},a_{1})\iota(a_{2})+\varrho(a_{1},a_{2})\iota(a_{3})\]
\[-s_{2}([a_{1},a_{2},a_{3}])-\iota([a_{1},a_{2},a_{3}])\]
\[= [s_{2}(a_{1}),s_{2}(a_{2}),s_{2}(a_{3})]_{\widehat{\mathfrak{A}}}-s_{2}([a_{1},a_{2},a_{3}])\]
\[+\varrho(a_{2},a_{3})\iota(a_{1})+\varrho(a_{3},a_{1})\iota(a_{2})+\varrho(a_{1},a_{2})\iota(a_{3})-\iota([a_{1},a_{2},a_{3}])\]
\[= \upsilon_{2}(a_{1},a_{2},a_{3})+\delta\iota(a_{1},a_{2},a_{3})\]

and

\[\mu_{1}(a_{1}) =\widehat{\mathrm{d}}(s_{1}(a_{1}))-s_{1}(\mathrm{d}(a_{1}))\]
\[=\widehat{\mathrm{d}}(s_{2}(a_{1})+\iota(a_{1}))-\big{(}s_{2}(\mathrm{d}(a_{1}))+\iota(\mathrm{d}(a_{1}))\big{)}\]
\[=\big{(}\widehat{\mathrm{d}}(s_{2}(a_{1}))-s_{2}(\mathrm{d}(a_{1}))\big{)}+\widehat{\mathrm{d}}(\iota(a_{1}))-\iota(\mathrm{d}(a_{1}))\]
\[=\mu_{2}(a_{1})+\mathrm{d}_{\mathfrak{M}}(\iota(a_{1}))-\iota(\mathrm{d}(a_{1}))\]
\[=\mu_{2}(a_{1})-\Phi\iota(a_{1}),\]

which implies that \((\upsilon_{1},\mu_{1})-(\upsilon_{2},\mu_{2})=(\delta\iota,-\Phi\iota)=\partial(\iota)\in\mathcal{B}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\). So \([(\upsilon_{1},\mu_{1})]=[(\upsilon_{2},\mu_{2})]\in\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\).

**Definition 4.4**.: Let \((\widehat{\mathfrak{A}}_{1},[-,-,-]_{\widehat{\mathfrak{A}}_{1}},\widehat{\mathrm{d}}_{1})\) and \((\widehat{\mathfrak{A}}_{2},[-,-,-]_{\widehat{\mathfrak{A}}_{2}},\widehat{\mathrm{d}}_{2})\) be two abelian extensions of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\). They are said to be equivalent if there is an isomorphism of modified \(\lambda\)-differential 3-Lie algebras \(\eta:(\widehat{\mathfrak{A}}_{1},[-,-,-]_{\widehat{\mathfrak{A}}_{1}},\widehat{\mathrm{d}}_{1})\to(\widehat{\mathfrak{A}}_{2},[-,-,-]_{\widehat{\mathfrak{A}}_{2}},\widehat{\mathrm{d}}_{2})\) such that the following diagram is commutative:

\[\begin{CD}0@>{}>{}>(\mathfrak{M},\mathrm{d}_{\mathfrak{M}})@>{i_{1}}>{}>(\widehat{\mathfrak{A}}_{1},\widehat{\mathrm{d}}_{1})@>{p_{1}}>{}>(\mathfrak{A},\mathrm{d})@>{}>{}>0\\ @V{}V{\eta}V@V{}V{}V\\ 0@>{}>{}>(\mathfrak{M},\mathrm{d}_{\mathfrak{M}})@>{i_{2}}>{}>(\widehat{\mathfrak{A}}_{2},\widehat{\mathrm{d}}_{2})@>{p_{2}}>{}>(\mathfrak{A},\mathrm{d})@>{}>{}>0.\end{CD}\]

Next we are ready to classify abelian extensions of a modified \(\lambda\)-differential 3-Lie algebra.
**Theorem 4.5**.: _There is a one-to-one correspondence between equivalence classes of abelian extensions of a modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) and the second cohomology group \(\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\) of \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in the representation \((\mathfrak{M},\varrho,\mathrm{d}_{\mathfrak{M}})\)._

Proof.: Assume that \((\widehat{\mathfrak{A}}_{1},[-,-,-]_{\widehat{\mathfrak{A}}_{1}},\widehat{\mathrm{d}}_{1})\) and \((\widehat{\mathfrak{A}}_{2},[-,-,-]_{\widehat{\mathfrak{A}}_{2}},\widehat{\mathrm{d}}_{2})\) are two equivalent abelian extensions of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) with the associated isomorphism \(\eta:(\widehat{\mathfrak{A}}_{1},[-,-,-]_{\widehat{\mathfrak{A}}_{1}},\widehat{\mathrm{d}}_{1})\to(\widehat{\mathfrak{A}}_{2},[-,-,-]_{\widehat{\mathfrak{A}}_{2}},\widehat{\mathrm{d}}_{2})\). Let \(s_{1}\) be a section of \((\widehat{\mathfrak{A}}_{1},[-,-,-]_{\widehat{\mathfrak{A}}_{1}},\widehat{\mathrm{d}}_{1})\). As \(p_{2}\circ\eta=p_{1}\), we have

\[p_{2}\circ(\eta\circ s_{1})=p_{1}\circ s_{1}=\mathrm{id}_{\mathfrak{A}}.\]

That is, \(\eta\circ s_{1}\) is a section of \((\widehat{\mathfrak{A}}_{2},[-,-,-]_{\widehat{\mathfrak{A}}_{2}},\widehat{\mathrm{d}}_{2})\). Denote \(s_{2}:=\eta\circ s_{1}\). Since \(\eta\) is an isomorphism of modified \(\lambda\)-differential 3-Lie algebras such that \(\eta|_{\mathfrak{M}}=\mathrm{id}_{\mathfrak{M}}\), we get

\[\upsilon_{2}(a_{1},a_{2},a_{3}) =[s_{2}(a_{1}),s_{2}(a_{2}),s_{2}(a_{3})]_{\widehat{\mathfrak{A}}_{2}}-s_{2}([a_{1},a_{2},a_{3}])\]
\[=[\eta(s_{1}(a_{1})),\eta(s_{1}(a_{2})),\eta(s_{1}(a_{3}))]_{\widehat{\mathfrak{A}}_{2}}-\eta(s_{1}([a_{1},a_{2},a_{3}]))\]
\[=\eta\big{(}[s_{1}(a_{1}),s_{1}(a_{2}),s_{1}(a_{3})]_{\widehat{\mathfrak{A}}_{1}}-s_{1}([a_{1},a_{2},a_{3}])\big{)}\]
\[=\eta(\upsilon_{1}(a_{1},a_{2},a_{3}))\]
\[=\upsilon_{1}(a_{1},a_{2},a_{3})\]

and

\[\mu_{2}(a_{1}) =\widehat{\mathrm{d}}_{2}(s_{2}(a_{1}))-s_{2}(\mathrm{d}(a_{1}))=\widehat{\mathrm{d}}_{2}\big{(}\eta(s_{1}(a_{1}))\big{)}-\eta\big{(}s_{1}(\mathrm{d}(a_{1}))\big{)}\]
\[=\eta\big{(}\widehat{\mathrm{d}}_{1}(s_{1}(a_{1}))-s_{1}(\mathrm{d}(a_{1}))\big{)}\]
\[=\eta(\mu_{1}(a_{1}))\]
\[=\mu_{1}(a_{1}).\]

Hence, all equivalent abelian extensions give rise to the same element in \(\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\).

Conversely, suppose that \([(f_{1},g_{1})]=[(f_{2},g_{2})]\in\mathcal{H}^{2}_{\mathrm{md3Lie}_{\lambda}}(\mathfrak{A},\mathfrak{M})\). Then we can construct two abelian extensions \(0\to(\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\hookrightarrow(\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f_{1}},\mathrm{d}_{g_{1}})\stackrel{{p_{1}}}{{\longrightarrow}}(\mathfrak{A},[-,-,-],\mathrm{d})\to 0\) and \(0\to(\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\hookrightarrow(\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f_{2}},\mathrm{d}_{g_{2}})\stackrel{{p_{2}}}{{\longrightarrow}}(\mathfrak{A},[-,-,-],\mathrm{d})\to 0\) via Eqs. (4.1) and (4.2).
Then there exists a linear map \(\iota:\mathfrak{A}\to\mathfrak{M}\) such that

\[(f_{2},g_{2})=(f_{1},g_{1})+\partial(\iota).\]

Define a linear map \(\eta_{\iota}:\mathfrak{A}\oplus\mathfrak{M}\to\mathfrak{A}\oplus\mathfrak{M}\) by \(\eta_{\iota}(a_{1},u_{1}):=a_{1}+\iota(a_{1})+u_{1},\ a_{1}\in\mathfrak{A},u_{1}\in\mathfrak{M}\). Then \(\eta_{\iota}\) is an isomorphism of the two abelian extensions \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f_{1}},\mathrm{d}_{g_{1}})\) and \((\mathfrak{A}\oplus\mathfrak{M},[-,-,-]_{\rho f_{2}},\mathrm{d}_{g_{2}})\).

**Remark 4.6**.: In particular, any vector space \(\mathfrak{M}\) with a linear transformation \(\mathrm{d}_{\mathfrak{M}}\) can serve as a trivial representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\). In this situation, central extensions of \((\mathfrak{A},[-,-,-],\mathrm{d})\) by \((\mathfrak{M},[-,-,-]_{\mathfrak{M}},\mathrm{d}_{\mathfrak{M}})\) are classified by the second cohomology group \(\mathcal{H}^{2}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{M})\) of \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in the trivial representation \((\mathfrak{M},\rho=0,\mathrm{d}_{\mathfrak{M}})\).

## 5 \(T^{*}\)-extensions of modified \(\lambda\)-differential 3-Lie algebras

\(T^{*}\)-extensions of 3-Lie algebras were studied in [25]. In this section, we consider \(T^{*}\)-extensions of modified \(\lambda\)-differential 3-Lie algebras, constructed from 2-cocycles with coefficients in the dual adjoint representation.

Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra and \(\mathfrak{A}^{*}\) be the dual space of \(\mathfrak{A}\). By Example 2.15, \((\mathfrak{A}^{*};ad^{*},-\mathrm{d}^{*})\) is a representation of \((\mathfrak{A},[-,-,-],\mathrm{d})\). Suppose that \((f,g)\in\mathcal{C}^{2}_{\rm md3Lie_{\lambda}}(\mathfrak{A},\mathfrak{A}^{*})\). Define a trilinear map \([-,-,-]_{f}:\wedge^{3}(\mathfrak{A}\oplus\mathfrak{A}^{*})\to\mathfrak{A}\oplus\mathfrak{A}^{*}\) and a linear map \(\mathrm{d}_{g}:\mathfrak{A}\oplus\mathfrak{A}^{*}\to\mathfrak{A}\oplus\mathfrak{A}^{*}\) respectively by

\[[a_{1}+\alpha_{1},a_{2}+\alpha_{2},a_{3}+\alpha_{3}]_{f}=[a_{1},a_{2},a_{3}]+ad^{*}(a_{2},a_{3})\alpha_{1}+ad^{*}(a_{3},a_{1})\alpha_{2}+ad^{*}(a_{1},a_{2})\alpha_{3}+f(a_{1},a_{2},a_{3}), \tag{5.1}\]
\[\mathrm{d}_{g}(a_{1}+\alpha_{1})=\mathrm{d}(a_{1})-\mathrm{d}^{*}(\alpha_{1})+g(a_{1}),\ \forall a_{1},a_{2},a_{3}\in\mathfrak{A},\ \alpha_{1},\alpha_{2},\alpha_{3}\in\mathfrak{A}^{*}. \tag{5.2}\]

Similar to Proposition 4.2, we have the following result.

**Proposition 5.1**.: _With the above notations, \((\mathfrak{A}\oplus\mathfrak{A}^{*},[-,-,-]_{f},\mathrm{d}_{g})\) is a modified \(\lambda\)-differential 3-Lie algebra if and only if \((f,g)\) is a 2-cocycle in the cohomology of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\) with coefficients in the representation \((\mathfrak{A}^{*};ad^{*},-\mathrm{d}^{*})\)._

**Definition 5.2**.: The modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A}\oplus\mathfrak{A}^{*},[-,-,-]_{f},\mathrm{d}_{g})\) is called the \(T^{*}\)-extension of the modified \(\lambda\)-differential 3-Lie algebra \((\mathfrak{A},[-,-,-],\mathrm{d})\). We denote the \(T^{*}\)-extension by \(T^{*}_{(f,g)}(\mathfrak{A})=(T^{*}(\mathfrak{A})=\mathfrak{A}\oplus\mathfrak{A}^{*},[-,-,-]_{f},\mathrm{d}_{g})\).

**Definition 5.3**.: Let \((\mathfrak{A},[-,-,-],\mathrm{d})\) be a modified \(\lambda\)-differential 3-Lie algebra.
\((\mathfrak{A},[-,-,-],\mathrm{d})\) is said to be metrised if it has a non-degenerate symmetric bilinear form \(\varpi_{\mathfrak{A}}\) satisfying \[\varpi_{\mathfrak{A}}([a_{1},a_{2},a_{3}],a_{4})+\varpi_{\mathfrak{A}}(a_{3},[a_{1},a_{2},a_{4}])=0, \tag{5.3}\] \[\varpi_{\mathfrak{A}}(\mathrm{d}(a_{1}),a_{2})+\varpi_{\mathfrak{A}}(a_{1},\mathrm{d}(a_{2}))=0,\ \ \forall a_{1},a_{2},a_{3},a_{4}\in\mathfrak{A}. \tag{5.4}\] We may also say that \((\mathfrak{A},[-,-,-],\mathrm{d},\varpi_{\mathfrak{A}})\) is a metrised modified \(\lambda\)-differential 3-Lie algebra. Define a symmetric bilinear form \(\varpi\) on \(T^{*}(\mathfrak{A})\) by \[\varpi(a_{1}+\alpha_{1},a_{2}+\alpha_{2})=\alpha_{1}(a_{2})+\alpha_{2}(a_{1}),\ \ \forall a_{1},a_{2}\in\mathfrak{A},\alpha_{1},\alpha_{2}\in\mathfrak{A}^{*}. \tag{5.5}\] **Proposition 5.4**.: _With the above notations, \((T^{*}_{(f,g)}(\mathfrak{A}),\varpi)\) is a metrised modified \(\lambda\)-differential 3-Lie algebra if and only if_ \[f(a_{1},a_{2},a_{3})(a_{4})+f(a_{1},a_{2},a_{4})(a_{3})=0,\ \ g(a_{1})(a_{2})+g(a_{2})(a_{1})=0,\ \ \forall a_{1},a_{2},a_{3},a_{4}\in\mathfrak{A}.\] Proof.: For any \(a_{1},a_{2},a_{3},a_{4}\in\mathfrak{A},\ \alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\in\mathfrak{A}^{*}\), using Eqs. (2.6) and (5.1)-(5.5), we have \[\varpi([a_{1}+\alpha_{1},a_{2}+\alpha_{2},a_{3}+\alpha_{3}]_{f},a_{4}+\alpha_{4})+\varpi(a_{3}+\alpha_{3},[a_{1}+\alpha_{1},a_{2}+\alpha_{2},a_{4}+\alpha_{4}]_{f})\] \[= \varpi([a_{1},a_{2},a_{3}]+ad^{*}(a_{2},a_{3})\alpha_{1}+ad^{*}(a_{3},a_{1})\alpha_{2}+ad^{*}(a_{1},a_{2})\alpha_{3}+f(a_{1},a_{2},a_{3}),a_{4}+\alpha_{4})\] \[+\varpi(a_{3}+\alpha_{3},[a_{1},a_{2},a_{4}]+ad^{*}(a_{2},a_{4})\alpha_{1}+ad^{*}(a_{4},a_{1})\alpha_{2}+ad^{*}(a_{1},a_{2})\alpha_{4}+f(a_{1},a_{2},a_{4}))\] \[= \alpha_{4}([a_{1},a_{2},a_{3}])+ad^{*}(a_{2},a_{3})\alpha_{1}(a_{4})+ad^{*}(a_{3},a_{1})\alpha_{2}(a_{4})+ad^{*}(a_{1},a_{2})\alpha_{3}(a_{4})+f(a_{1},a_{2},a_{3})(a_{4})\] \[+\alpha_{3}([a_{1},a_{2},a_{4}])+ad^{*}(a_{2},a_{4})\alpha_{1}(a_{3})+ad^{*}(a_{4},a_{1})\alpha_{2}(a_{3})+ad^{*}(a_{1},a_{2})\alpha_{4}(a_{3})+f(a_{1},a_{2},a_{4})(a_{3})\] \[= \alpha_{4}([a_{1},a_{2},a_{3}])-\alpha_{1}([a_{2},a_{3},a_{4}])-\alpha_{2}([a_{3},a_{1},a_{4}])-\alpha_{3}([a_{1},a_{2},a_{4}])+f(a_{1},a_{2},a_{3})(a_{4})\] \[+\alpha_{3}([a_{1},a_{2},a_{4}])-\alpha_{1}([a_{2},a_{4},a_{3}])-\alpha_{2}([a_{4},a_{1},a_{3}])-\alpha_{4}([a_{1},a_{2},a_{3}])+f(a_{1},a_{2},a_{4})(a_{3})\] \[= f(a_{1},a_{2},a_{3})(a_{4})+f(a_{1},a_{2},a_{4})(a_{3})\] \[= 0,\] \[\varpi(\mathrm{d}_{g}(a_{1}+\alpha_{1}),a_{2}+\alpha_{2})+\varpi(a_{1}+\alpha_{1},\mathrm{d}_{g}(a_{2}+\alpha_{2}))\] \[= \varpi(\mathrm{d}(a_{1})-\mathrm{d}^{*}(\alpha_{1})+g(a_{1}),a_{2}+\alpha_{2})+\varpi(a_{1}+\alpha_{1},\mathrm{d}(a_{2})-\mathrm{d}^{*}(\alpha_{2})+g(a_{2}))\] \[= -\mathrm{d}^{*}(\alpha_{1})(a_{2})+g(a_{1})(a_{2})+\alpha_{2}(\mathrm{d}(a_{1}))+\alpha_{1}(\mathrm{d}(a_{2}))-\mathrm{d}^{*}(\alpha_{2})(a_{1})+g(a_{2})(a_{1})\] \[= -\alpha_{1}(\mathrm{d}(a_{2}))+g(a_{1})(a_{2})+\alpha_{2}(\mathrm{d}(a_{1}))+\alpha_{1}(\mathrm{d}(a_{2}))-\alpha_{2}(\mathrm{d}(a_{1}))+g(a_{2})(a_{1})\] \[= g(a_{1})(a_{2})+g(a_{2})(a_{1})\] \[= 0.\] Thus, we get the result.
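For the reader's convenience, the second step of each computation above substitutes the defining identities of the dual representation \((\mathfrak{A}^{*};ad^{*},-\mathrm{d}^{*})\), namely (as can be read off from how Eq. (2.6) is applied there) \[(ad^{*}(a_{1},a_{2})\alpha)(a_{3})=-\alpha([a_{1},a_{2},a_{3}]),\qquad\mathrm{d}^{*}(\alpha)(a_{1})=\alpha(\mathrm{d}(a_{1})),\qquad\forall a_{1},a_{2},a_{3}\in\mathfrak{A},\ \alpha\in\mathfrak{A}^{*}.\] After these substitutions, the bracket terms cancel in pairs by the skew-symmetry of \([-,-,-]\), leaving exactly the \(f\)- and \(g\)-terms displayed above.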
**Acknowledgments.** The paper is supported by the Science and Technology Program of Guizhou Province (Grant Nos. QKHZC[2023]372), the Scientific Research Foundation for Science & Technology Innovation Talent Team of the Intelligent Computing and Monitoring of Guizhou Province (Grant No. QJJ[2023]063), and the National Natural Science Foundation of China (Grant No. 12161013).
2306.10785
Multitrack Music Transcription with a Time-Frequency Perceiver
Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously. It is a very challenging task that typically requires a more complex model to achieve a satisfactory result. In addition, prior works mostly focus on transcriptions of regular instruments while neglecting vocals, which are usually the most important signal source if present in a piece of music. In this paper, we propose a novel deep neural network architecture, Perceiver TF, to model the time-frequency representation of audio input for multitrack transcription. Perceiver TF augments the Perceiver architecture by introducing a hierarchical expansion with an additional Transformer layer to model temporal coherence. Accordingly, our model inherits the benefits of Perceiver, possessing better scalability that allows it to handle transcriptions of many instruments well in a single model. In experiments, we train a Perceiver TF to model 12 instrument classes as well as vocals in a multi-task learning manner. Our results demonstrate that the proposed system outperforms the state-of-the-art counterparts (e.g., MT3 and SpecTNT) on various public datasets.
Wei-Tsung Lu, Ju-Chiang Wang, Yun-Ning Hung
2023-06-19T08:58:26Z
http://arxiv.org/abs/2306.10785v1
# Multitrack Music Transcription with a Time-Frequency Perceiver ###### Abstract Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously. It is a very challenging task that typically requires a more complex model to achieve a satisfactory result. In addition, prior works mostly focus on transcriptions of regular instruments while neglecting vocals, which are usually the most important signal source if present in a piece of music. In this paper, we propose a novel deep neural network architecture, _Perceiver TF_, to model the time-frequency representation of audio input for multitrack transcription. Perceiver TF augments the Perceiver architecture by introducing a hierarchical expansion with an additional Transformer layer to model temporal coherence. Accordingly, our model inherits the benefits of Perceiver, possessing better scalability that allows it to handle transcriptions of many instruments well in a single model. In experiments, we train a Perceiver TF to model 12 instrument classes as well as vocals in a multi-task learning manner. Our results demonstrate that the proposed system outperforms the state-of-the-art counterparts (e.g., MT3 and SpecTNT) on various public datasets. Wei-Tsung Lu, Ju-Chiang Wang, and Yun-Ning Hung SAMI, ByteDance, Mountain View, CA, USA {weitsung.lu, ju-chiang.wang, yunning.hung}@tiktok.com Time-frequency, Perceiver, automatic music transcription, multi-task learning, random-mixing augmentation. ## 1 Introduction Automatic music transcription (AMT) is a Music Information Retrieval (MIR) task that aims to transcribe a music audio input into a sequence of musical notes where each note contains attributes of onset, pitch, duration, and velocity. The output is typically delivered in the format of MIDI. In a multitrack setting, an AMT system should identify every instrument that is present in the input, and estimate the associated notes accordingly into a channel of the MIDI output. Ideally speaking, using the identified instrument for each corresponding channel, the synthesized audio mixture from the output MIDI should resemble the original input audio in a musically plausible way. Although recent years have seen significant progress using deep learning techniques [1, 2], our analysis and review indicate that two major challenges have not yet been addressed effectively: _model scalability_ and _instrument discrimination_. Multitrack AMT is generally regarded as a very challenging task. The number of commonly used instruments can be up to 100. Among them, musical notes of regular instruments like guitar, violin, and synthesizers are difficult to characterize due to their tremendous variations in timbre, expressivity, and playing techniques. Other than that, vocals, which usually are the most predominant instrument if present, vary their timbre and pitch to convey lyrics and expressions. Handling all instruments simultaneously requires better _model scalability_. Our observations on existing multitrack AMT systems reveal that they oftentimes result in many false positive notes for popular pitched instruments like piano and guitar. For instance, notes of a string ensemble are massively captured by piano. This might be because the system does not provide clear timbre-dependent features or it is not robust to timbral variations across different instruments.
We believe this problem can be mitigated if the system can _discriminate_ each instrument source from the mixture while making inference. To address model scalability, we propose _Perceiver TF_, which is an augmented variant of the Perceiver [3]. Perceiver has been well-known for its better scalability in the Transformer family to tackle high-dimensional data input. In this work, we adopt the spectrogram as the audio input, with \(T\) and \(F\) representing the lengths of the time- and frequency-axes, respectively. For multitrack AMT, the capability to model the timbre-dependent pitches of multiple instruments is crucial, so more comprehensive operations are needed to capture useful features along the high-resolution frequency axis. Recently, the SpecTNT architecture [4] was proposed for this purpose and achieved state-of-the-art performance in vocal melody extraction (a sub-task of AMT). SpecTNT consists of two Transformers in a hierarchical structure, where the lower-level Transformer performs self-attention directly on the spectrum of a frame. However, such a design leads to a cubic complexity of attention computation, i.e., \(\mathcal{O}(TF^{2}+T^{2})\), limiting its expandability for more complex tasks. To this end, we conceive a non-trivial combination of Perceiver and SpecTNT: expanding Perceiver to be hierarchical. The resulting _Perceiver TF_ takes advantage of the cross-attention to extract spectral features into a latent bottleneck for each frame, and adds an additional Transformer for self-attention along the time axis, overall resulting in a quadratic complexity of \(\mathcal{O}(TF+T^{2})\). Since \(F\) is typically large, this complexity reduction is significant, allowing the model to handle more instruments simultaneously. To address _instrument discrimination_, we adopt the _random-mixing_ augmentation technique learned from music source separation (MSS) [5, 6], which aims to separate each instrument stem from the input audio mixture [7]. Moreover, we train our AMT model in a multi-task learning fashion, with each sub-task modeling the transcription of an instrument. This multi-task design along with the random-mixing technique allows more flexibility to train with an enormous amount of augmented training samples. Our strategy differs from previous works that jointly train the AMT task with instrument recognition [8] or MSS [9] to help inform the model of instrument-dependent features. To our knowledge, little work has been done using the random-mixing technique to improve multitrack AMT. ## 2 Related Work Multi-instrument AMT has been explored in several previous works. Wu et al. [10] and Hung et al. [8] trained a transcription model with related tasks in a multi-task learning fashion. Tanaka et al. used clustering approaches to separate transcribed instruments [11], while Cheuk et al. used unsupervised learning techniques to improve transcription on low-resource datasets [1, 12]. These prior examples demonstrated that models based on the pianoroll representation are able to capture instrument-dependent onset, pitch, and duration of notes. Different from the pianoroll approach, Gardner et al. [2] created a new paradigm that proposes a sequence-to-sequence model, called MT3, to tackle multitrack AMT. They trained a standard encoder-decoder Transformer to model multitrack MIDI tokens from audio, and demonstrated state-of-the-art performance on several public datasets.
By contrast, vocal transcription is usually treated as an independent task in the literature, even though it shares the same goal of AMT. Due to the lack of training data, few works have focused on transcribing note-level outputs from polyphonic music audio. Recently, Wang et al. released a human-annotated dataset including 500 Chinese songs [13]. They provide a CNN-based model (EFN) as a baseline for the task. In [14], a teacher-student training scheme is proposed to utilize pseudo labels derived from F0 estimations of vocals. Lately, [15] proposed a vocal transcription system that requires an MSS as a front-end. In this work, we propose a unified framework that combines vocal and multi-instrument transcriptions, and it does not rely on pre-trained modules such as an MSS front-end. ## 3 Methodology In this work, we adopt the pianoroll approach instead of the sequence-to-sequence (seq-to-seq) approach for two major reasons. First, it is easier to manipulate the loss computation to learn from partially labeled data. For example, it is non-trivial to train a seq-to-seq model that jointly trains on a vocal transcription dataset where the MIDI ground truth of accompaniments is not available. Second, the inference time complexity of seq-to-seq depends on the number of notes (tokens) due to the auto-regressive nature. If the audio input contains many instruments with complex, dense polyphonic notes, the inference will be very slow. Although our proposed model is also a Transformer-oriented architecture, we focus on the encoder part to predict the pianoroll directly. The following sections explain the proposed model architecture (Sections 3.1 - 3.3) and the random-mixing augmentation technique (Section 3.4). Our model consists of three sub-modules: convolutional module, Perceiver TF module, and output module. The input spectrogram is first passed through the convolution module for local feature aggregation. Then, the Perceiver TF module, which includes multiple Perceiver TF blocks, extracts the features and outputs the temporal embeddings at each time step. Lastly, the output module projects the temporal embeddings into desired dimensions for pianoroll outputs. ### Convolutional Module Using a convolutional neural network (CNN) as the front-end of Transformer-based models has become a common design choice in speech recognition pipelines [16]. Previous works have also found that the CNN front-end plays a crucial role in SpecTNT and MIRTransformer for many MIR tasks [4, 17, 18, 19]. Following this practice, we stack multiple residual units [20] with average pooling to reduce the dimensionality of the frequency axis. We denote the resulting time-frequency representation as \(\mathcal{S}=[S_{0},S_{1},\ldots,S_{T-1}]\in\mathbb{R}^{T\times F\times C}\), where \(T\), \(F\), and \(C\) represent the dimensions of time, frequency, and channel, respectively. ### Perceiver TF Module A conventional Perceiver architecture contains two major components [3]: (i) a cross-attention module that maps the input data and a latent array into a latent array; (ii) a Transformer tower that maps a latent array into a latent array. Upon this structure, our design principle to expand Perceiver is twofold. (1) We consider the spectral representation of a time step, \(S_{t}\), to be pivotal in carrying the pitch and timbral information, so it serves as the input data for the cross-attention module to project the spectral information into a latent array for the time step \(t\). Each latent array is responsible for extracting the local spectral features.
(2) Having a sequence of latent arrays of different time steps, we need a Transformer to exchange the local spectral information along the time axis to learn their temporal coherence. The Perceiver TF architecture is illustrated in Fig. 1. A Perceiver TF block contains three Transformer-style modules: _spectral cross-attention_, _latent Transformer_, and _temporal Transformer_, which are responsible for modeling the spectral, channel-wise, and temporal information, respectively. Each of them includes the attention mechanism and a position-wise feed-forward network. The _spectral cross-attention_ (SCA) module operates directly on an input spectral representation \(S_{t}\) and projects it into the Key (\(K\)) and Value (\(V\)) matrices. Unlike the traditional Transformer, the cross-attention module in Perceiver maps a latent array into the Query (\(Q\)) matrix and then performs the \(QKV\) self-attention accordingly. We follow the Perceiver design to initialize a set of \(K\) learnable latent arrays \(\Theta^{0}\in\mathbb{R}^{K\times D}\), where \(K\) is the index dimension, and \(D\) is the channel dimension. Then, we repeat \(\Theta^{0}\) for \(T\) times and associate each to a time step \(t\), which is then denoted as \(\Theta^{0}_{t}\), such that \(\Theta^{0}_{0}=\Theta^{0}_{1}=\ldots=\Theta^{0}_{T-1}\), meaning that all latent arrays are from the same initialization across the time axis. This \(\Theta^{h}_{t}\) plays an important role in carrying the spectral information from the first Perceiver TF block throughout the entire stack of blocks. The query-key-value (\(QKV\)) attention of our SCA of the \(h\)-th iteration can be written as: \(f_{\text{SCA}}:\{\Theta^{h}_{t},S_{t}\}\rightarrow\Theta^{(h+1)}_{t}\), and this process will be repeated as the Perceiver TF block repeats in order to maintain the connection between \(\Theta_{t}^{h}\) and the input \(S_{t}\). Figure 1: The block diagram of the Perceiver TF module. Positional embedding is first added to the latent arrays, denoted as \(\Theta^{0}_{t}\). The Spectral Cross-Attention module projects the spectral input \(S_{t}\) into \(\Theta^{h}_{t}\), followed by the Latent Transformer module. The Temporal Transformer processes \(\Theta^{h}_{t}\) of all time steps to model the temporal coherence. The details are explained in Section 3.2. The design of the cross-attention module is the key that significantly improves the computational scalability of Perceiver. For instance, our SCA results in \(\mathcal{O}(FK)\), which is much cheaper than \(\mathcal{O}(F^{2})\) of the spectral Transformer in SpecTNT [4], given that \(K\) (dimension of the latent array) is typically small (i.e., \(K\ll F\)). The _latent Transformer_ module takes place after the SCA module. It contains a stack of \(N\) Transformers to perform standard self-attention on the latent arrays of \(\Theta_{t}^{h}\). The resulting complexity \(\mathcal{O}(NK^{2})\) is efficient as well. In the context of AMT, this process means the interactions among the onsets, pitches, and instruments are explicitly modeled. To perform multitrack AMT, we initialize \(K\) latent arrays and train each latent array to handle one specific task. Following [21], for an instrument, we arrange two latent arrays to model the onset and frame-wise (pitch) activations, respectively. This leads to \(K=2J\), where \(J\) is the number of target instruments. The _temporal Transformer_ module is placed to enable the communication between any pairs of \(\Theta_{t}^{h}\) of different time steps.
To make the _temporal Transformer_ understand the time positions of each latent array, we add a trainable positional embedding to each \(\Theta_{t}^{0}\) during the initialization. Let \(\theta_{t}^{h}(k)\), \(k\) = 0,..., \(K\)-1, denote each latent array in \(\Theta_{t}^{h}\); we arrange \(K\) parallel standard Transformers in which each serves the corresponding input sequence of latent arrays: \([\theta_{0}^{h}(k),\theta_{1}^{h}(k),\ldots,\theta_{T-1}^{h}(k)]\). The module is repeated \(M\) times, yielding a complexity of \(\mathcal{O}(MT^{2})\). Finally, we repeat \(L\) times the Perceiver TF block to form the overall module. Note that, different from the original Perceiver, the weights of _spectral cross-attention_ and _latent Transformer_ are not shared across the repeated blocks. ### Output Module We utilize two GRU modules [22] with sigmoid activation function for the onset and frame-wise latent array outputs, respectively. We follow prior work [21] that uses the onset outputs to condition the frame-wise activation. ### Multi-task Training Loss We formulate the loss function for training the proposed model: \[\mathcal{L}=\sum_{j=0}^{J-1}(l^{j}_{\text{onset}}+l^{j}_{\text{frame}}) \tag{1}\] where \(l\) is the binary cross-entropy loss between the ground-truth and prediction, and \(l^{j}_{\text{onset}}\) and \(l^{j}_{\text{frame}}\) are respectively the onset and frame activation losses for instrument \(j\). Note that the losses for all \(J\) instruments should be computed, regardless of whether the corresponding instruments are active or not in a training sample. Therefore, a zero output is expected for instruments that are not present in the sample. ## 4 Experiments ### Datasets We use four public datasets for evaluation. **Slakh2100**[23] contains 2100 pieces of multitrack MIDI and the corresponding synthesized audio. The MIDI files are a subset of the Lakh dataset [24], and the audio samples were synthesized by professional-grade software. Instruments were grouped into 12 MIDI classes defined in the Slakh dataset.1 We used the official train/validation/test splits in our experiments. **MAESTROv3**[25] contains about 200 hours of piano solo recordings with the aligned note annotations acquired by the MIDI capturing device on the piano. We follow the official train/validation/test splits. **GuitarSet**[26] contains 360 high-quality guitar recordings and their synchronized note annotations. Since there are no official splits for this dataset, we follow the setting in [2]. The first two progressions of each style are used for training, and the last one is for testing. **MIR-ST500**[13] contains 500 Chinese-pop songs with note annotations for the lead vocal melody. We used the official train-test split. Although around 10% of the training set is missing due to failure, we ensure the testing set is complete for fair comparison. Footnote 1: There are no “Sound Effects”, “Percussive”, or “Ethnic” instruments. We grouped “Strings” and “Ensemble” into one instrument class. ### Data Augmentations Annotating data for multitrack AMT is labor intensive. To better exploit the data at hand, we apply two data augmentation techniques during training. Following previous works [4, 27], pitch-shifting is randomly performed on all the non-percussive instruments during training. We introduce the cross-dataset _random-mixing_ (RM) technique.
Let us first define three types of datasets: * _Multi-track_: each sample contains multi-tracks of instrument-wise audio stems with polyphonic notes (e.g., Slakh), and no vocal signals are present. * _Single-track_: each sample contains only a single non-vocal stem with polyphonic notes (e.g., MAESTRO and GuitarSet). * _Vocal-mixture_: each sample is a full mixture of music with monophonic notes only for the lead vocal (e.g., MIR-ST500). We employ an MSS tool [28] to separate each sample into vocal and accompaniment stems. Each training sample is excerpted from a random moment of its original song with a duration depending on the model input length (e.g., 6 seconds). Suppose we want to transcribe \(J\) classes of instruments, and the corresponding instrument set is denoted as \(\boldsymbol{\Omega}=\{\omega_{j}\}_{j=0}^{J-1}\). Then, we apply three treatments to the three mentioned types of datasets respectively as follows. First, for a training sample \(s_{i}\) from a _multi-track_ dataset, we denote its instrumentation template as \(\boldsymbol{\mu}_{i}\subseteq\boldsymbol{\Omega}\), indicating the instruments present in \(s_{i}\). Then, for each instrument \(\omega_{j}\) in \(\boldsymbol{\mu}_{i}\), it has a \(p\%\) chance to be replaced by a \(\omega_{j}\) in \(\boldsymbol{\mu}_{u}\), where \(i\neq u\) (i.e., a different sample). Second, for a sample \(s_{i}\) from a _single-track_ dataset, we randomly pick an existing instrumentation template \(\boldsymbol{\mu}_{u}\) (\(i\neq u\)) as its background. If the instrument of \(s_{i}\) is present in \(\boldsymbol{\mu}_{u}\), that stem will be removed from \(\boldsymbol{\mu}_{u}\). For instance, if \(s_{i}\) is a piano solo, then we will remove the piano stem from \(\boldsymbol{\mu}_{u}\). From our preliminary experiments, presenting a solo example to model training without mixing it with a background can degrade the performance. Lastly, for a sample \(s_{i}\) from a _vocal-mixture_ dataset, it has a \(q\%\) chance to have its background replaced by two methods: (i) like the _single-track_ treatment, we randomly pick an existing \(\boldsymbol{\mu}_{u}\) (\(i\neq u\)) as its background; or (ii) we randomly pick an accompaniment stem separated from \(s_{v}\), where \(i\neq v\). For the second method, since the selected accompaniment stem does not have the ground-truth notes, we mask the instrument outputs and only count the loss for the vocal output (see Eq. 1).
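A minimal sketch of these three treatments might look as follows. It is illustrative only, not the authors' implementation: the data layout, the function name `random_mix`, and the pool arguments are our own assumptions.

```python
import random

# p and q follow Section 4.3 of the paper: p = 25%, q = 50%.
P_REPLACE, Q_REPLACE = 0.25, 0.50

def random_mix(sample, template_pool, accomp_pool):
    """Assemble the stems of one training example following the three
    treatments described above. Assumed (hypothetical) fields:
      sample["kind"]  in {"multi_track", "single_track", "vocal_mixture"}
      sample["stems"] maps instrument name -> waveform; vocal-mixture
                      samples carry MSS-separated "vocal"/"accompaniment".
    `template_pool` holds instrument->stem dicts from other multi-track
    samples; `accomp_pool` holds MSS accompaniments of other songs."""
    stems = dict(sample["stems"])
    if sample["kind"] == "multi_track":
        # Each present instrument has a p% chance of being replaced by the
        # same instrument taken from a different sample's template.
        for inst in list(stems):
            donor = random.choice(template_pool)
            if inst in donor and random.random() < P_REPLACE:
                stems[inst] = donor[inst]
    elif sample["kind"] == "single_track":
        # Mix the solo over a random background template, first removing
        # the clashing stem (e.g. drop the template's piano for a piano solo).
        inst, solo = next(iter(stems.items()))
        background = dict(random.choice(template_pool))
        background.pop(inst, None)
        stems = {**background, inst: solo}
    else:  # vocal mixture
        if random.random() < Q_REPLACE:
            # Replace the background either by a template or by an MSS
            # accompaniment of another song (chosen uniformly here; in the
            # latter case the instrument losses are masked, see Eq. 1).
            background = random.choice(
                [dict(random.choice(template_pool)),
                 {"accompaniment": random.choice(accomp_pool)}])
            stems = {"vocal": stems["vocal"], **background}
    # All stems of the example are linearly summed to form the model input.
    return sum(stems.values())
```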
### Implementation Details We implemented our system using PyTorch [29]. The audio waveform is re-sampled to a 16kHz sampling rate. We set the model input length to be 6 seconds. The log-magnitude spectrogram is then computed using 2048 samples of Hann window and a hop size of 320 samples (i.e., 20 ms). The convolutional module contains 3 residual blocks, each of which has 128 channels and is followed by an average pooling layer with a time-frequency filter of (1, 2). For the Perceiver TF module, we use the following parameters (referring to Fig. 1): (i) depending on different experiment configurations, initialize \(2J\) latent arrays, each with a dimension of 128; (ii) stack \(L=3\) Perceiver TF blocks; (iii) for each Perceiver TF block, use 1 spectral cross-attention layer, \(N=2\) latent Transformer layers, and \(M=2\) temporal Transformer layers. All the Transformer layers have a hidden size of 128 with 8 heads for the multi-head attention. Finally, the output module is a 2-layer Bi-directional GRU with 128 hidden units. All of the Transformer modules in the Perceiver TF include dropout with a rate of 0.15. The output dimensions for the onset and frame activations are 128 and 129, respectively, where 128 corresponds to the MIDI pitches, and the additional dimension in the frame activation is for silence. We use AdamW [30] as the optimizer. The initial learning rate and weight decay rate are set to \(10^{-3}\) and \(5\times 10^{-3}\), respectively. For the final output, we take a threshold of 0.25 for both the onset and frame probability outputs to get the binary representations, so the frame-wise activations can be merged to generate each note in a piano-roll representation. No further post-processing is applied. For data augmentation, all of the non-percussive instruments of a training example have a 100\(\%\) probability to be pitch-shifted up or down by at most 3 semitones. For random-mixing, we use \(p=25\%\) and \(q=50\%\) for data from _multi-track_ and _vocal-mixture_ datasets, respectively. To generate an input sample, all the instrument stems in each training example are linearly summed up. ### Baselines Two state-of-the-art models, MT3 [2] and SpecTNT [4], are selected as the baselines. For MT3, we replicated the model following [2],2 which includes the official model checkpoint and inference pipeline on the test set. For SpecTNT, we adopted the configuration used for vocal melody extraction reported in [4]. In the preliminary experiments, we found it non-trivial to successfully train the original SpecTNT on Slakh2100 under the multi-instrument setting, so we skip this experiment. For vocal transcription, the best results of EFN [13] and JDC\({}_{note}\)(L+U) [14] are reported. Footnote 2: https://github.com/magenta/mt3/blob/main/mt3/colab/music_transcription_with_transformers.ipynb ### Evaluation Metrics We use "Onset F1" score, which indicates the correctness of both pitches and onset timestamps, as the evaluation metric for comparison with previous work [2]. To further evaluate the performance of multi-instrument transcription, we report the "Multi-instrument Onset F1" score for the Slakh dataset. The outputs from our replicated MT3 model are grouped into 12 instrument classes based on their program numbers. The Multi-instrument Onset F1 score we used only counts Onset F1, which is similar to the MV2H metric [31]. It could be slightly different from the one used in [2], since the "Drums" outputs do not contain clear offset information. ### Result and Discussion Table 1 shows the comparison in terms of Onset F1 between the proposed model and the baselines. The proposed model and SpecTNT, which directly model the spectral inputs with the attention mechanism, show higher performance even when trained on the low resources of a single dataset, such as GuitarSet. On MIR-ST500, the proposed model significantly outperforms the baselines. Although SpecTNT (Single) performs slightly better than our model on MAESTRO, we still consider Perceiver TF to be more advantageous for practical use owing to its better inference efficiency. Table 2 presents the Multi-instrument Onset F1 (instrument-weighted average) and the Onset F1 scores of individual instrument classes on Slakh2100 to reveal instrument-wise performance. Compared to MT3\({}^{\dagger}\), our model without the random-mixing augmentation (No-RM) performs significantly better on less-common instruments such as "Pipe" (the Onset F1 score is higher by over 100%).
Applying random-mixing in training can further boost the performance in all cases, indicating that the technique indeed improves the model's robustness in discriminating between different instruments. Finally, we observe that combining multi-instrument and vocal transcriptions can improve vocal transcription alone, as the combined model is trained with more randomly mixed vocal-accompaniment samples. ## 5 Conclusion We have presented Perceiver TF, a novel architecture that adequately addresses the _model scalability_ problem for multitrack AMT. To address the _instrument discrimination_ issue, we have proposed the random-mixing augmentation technique, which significantly facilitates the data usability across datasets. Our system has demonstrated state-of-the-art performance on different public datasets. We believe Perceiver TF is generic and can be applied to other analogous tasks. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Dataset & Slakh & MAESTRO & GuitarSet & MIR-ST500 \\ \hline SpecTNT (Single) & - & **.969** & .907 & .778 \\ MT3 (Single) & .760 & .960 & .830 & - \\ MT3 (Mix) & .760 & .950 & .900 & - \\ MT3\({}^{\dagger}\) (Mix) & .763 & .958 & .891 & - \\ \hline Ours (Single) & .808 & .967 & .903 & .777 \\ Ours (Mix+Vocal) & **.819** & .968 & **.911** & **.785** \\ \hline EFN & - & - & - & .666 \\ JDC\({}_{note}\)(L+U) & - & - & - & .697 \\ \hline \hline \end{tabular} \end{table} Table 1: The results of Onset F1 scores. MT3\({}^{\dagger}\) is our replication. Models with (Mix) or (Mix+Vocal) are trained on the mixture of datasets, while models with (Single) are trained on a single dataset. \begin{table} \begin{tabular}{|l||c c c c c c c c c c c c c|} \hline \hline Slakh & All & Piano & Bass & Drums & Guitar & Strings & Brass & Organ & Pipe & Reed & S.lead & S.pad & C.perc. \\ \hline MT3\({}^{\dagger}\) & .743 & .780 & .906 & .773 & .732 & .551 & .433 & .363 & .282 & .440 & .409 & .234 & .353 \\ \hline \hline Ours (No-RM) & .763 & .809 & .921 & .759 & .727 & .699 & .632 & .562 & .578 & .649 & .677 & .358 & .458 \\ Ours & **.798** & **.854** & **.930** & **.785** & **.777** & **.744** & **.732** & **.694** & **.666** & **.725** & **.769** & **.474** & **.575** \\ \hline \hline \end{tabular} \end{table} Table 2: The results of different models trained on (Mix) datasets and tested on Slakh2100. MT3\({}^{\dagger}\) is our replication, as the instrument-wise results are not reported in [2]. “All” presents the Multi-instrument Onset F1 scores. The following columns show the Onset F1 scores for individual instruments. “S.lead”, “S.pad”, and “C.perc.” stand for Synth Lead, Synth Pad, and Chromatic Percussion, respectively.
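To make the architecture of Section 3 and the training objective of Eq. (1) concrete, here is a minimal PyTorch sketch of one Perceiver TF block and the multi-task loss. It is our own illustrative simplification, not the authors' code: normalisation placement, feed-forward sizes, positional embeddings, and the GRU output heads of Section 3.3 are not fully specified by the paper, so the choices and names below are assumptions.

```python
import torch
import torch.nn as nn

class PerceiverTFBlock(nn.Module):
    """One Perceiver TF block: spectral cross-attention, then a latent
    Transformer over the K latent arrays of each frame, then a temporal
    Transformer along the T time steps (cf. Fig. 1)."""

    def __init__(self, d=128, heads=8, n_latent=2, m_temporal=2):
        super().__init__()
        self.sca = nn.MultiheadAttention(d, heads, batch_first=True)
        layer = lambda: nn.TransformerEncoderLayer(d, heads, 4 * d,
                                                   batch_first=True)
        self.latent = nn.TransformerEncoder(layer(), num_layers=n_latent)
        self.temporal = nn.TransformerEncoder(layer(), num_layers=m_temporal)

    def forward(self, theta, spec):
        # theta: (B, T, K, D) latent arrays; spec: (B, T, F, D) spectral input.
        B, T, K, D = theta.shape
        th = theta.reshape(B * T, K, D)
        sp = spec.reshape(B * T, -1, D)
        th = th + self.sca(th, sp, sp)[0]   # frame-wise spectral cross-attention
        th = self.latent(th)                # self-attention over the K latents
        th = th.reshape(B, T, K, D).permute(0, 2, 1, 3).reshape(B * K, T, D)
        th = self.temporal(th)              # self-attention along the time axis
        return th.reshape(B, K, T, D).permute(0, 2, 1, 3)

def multitask_loss(onset_logits, frame_logits, onset_gt, frame_gt):
    """Eq. (1): binary cross-entropy on onset and frame activations for all
    J instruments, computed whether or not an instrument is active
    (mean-reduced here rather than summed per instrument)."""
    bce = nn.BCEWithLogitsLoss()
    return bce(onset_logits, onset_gt) + bce(frame_logits, frame_gt)
```

Stacking \(L\) such blocks and reading the \(2J\) latent arrays out through the GRU heads would yield the onset and frame logits consumed by `multitask_loss`.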
2302.14014
The formal theory of relative monads
We develop the theory of relative monads and relative adjunctions in a virtual equipment, extending the theory of monads and adjunctions in a 2-category. The theory of relative comonads and relative coadjunctions follows by duality. While some aspects of the theory behave analogously to the non-relative setting, others require new insights. In particular, the universal properties that define the algebra object and the opalgebra object for a monad in a virtual equipment are stronger than the classical notions of algebra object and opalgebra object for a monad in a 2-category. Inter alia, we prove a number of representation theorems for relative monads, establishing the unity of several concepts in the literature, including the devices of Walters, the $j$-monads of Diers, and the relative monads of Altenkirch, Chapman, and Uustalu. A motivating setting is the virtual equipment $\mathbb{V}\text{-}\mathbf{Cat}$ of categories enriched in a monoidal category $\mathbb{V}$, though many of our results are new even for $\mathbb{V} = \mathbf{Set}$.
Nathanael Arkor, Dylan McDermott
2023-02-27T18:13:23Z
http://arxiv.org/abs/2302.14014v3
# The formal theory of relative monads ###### Abstract. We develop the theory of relative monads and relative adjunctions in a virtual equipment, extending the theory of monads and adjunctions in a 2-category. The theory of relative comonads and relative coadjunctions follows by duality. While some aspects of the theory behave analogously to the non-relative setting, others require new insights. In particular, the universal properties that define the algebra-object and the opalgebra-object for a monad qua trivial relative monad are stronger than the classical notions of algebra-object and opalgebra-object for a monad qua monad. Inter alia, we prove a number of representation theorems for relative monads, establishing the unity of several concepts in the literature, including the devices of Walters, the \(j\)-monads of Diers, and the relative monads of Altenkirch, Chapman, and Uustalu. A motivating setting is the virtual equipment \(\mathbb{V}\)-\(\mathbf{Cat}\) of categories enriched in a monoidal category \(\mathbb{V}\), though many of our results are new even for \(\mathbb{V}=\mathbf{Set}\). Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Czech Republic Department of Computer Science, Reykjavik University, Iceland 28 February 2023 ###### Contents * 1 Introduction * 2 Virtual equipments * 3 Formal category theory * 4 Relative monads * 5 Relative adjunctions * 6 Algebras and opalgebras * 7 Relative comonads and relative coadjunctions * 8 Enriched relative monads ## 1. Introduction The definition of a monad, being 2-diagrammatic in nature - expressed purely in terms of categories, functors, natural transformations, and equations therebetween - may be internalised in any 2-category [1], and much of the theory of ordinary monads on categories continues to hold in this context [21]. This permits a unified treatment of monads on ordinary categories, enriched categories, internal categories, and so on. A monad on a category is in particular a structured _endofunctor_. It is natural to ask whether this restriction might be relaxed, permitting monads whose domains may be distinct from their codomains. This is precisely the notion of relative monad [1]. Given a fixed functor \(j\colon A\to E\), a _\(j\)-relative monad_ comprises a functor \(t\colon A\to E\) equipped with natural transformations - the _unit_\(\eta\colon j\Rightarrow t\) and the _extension operator_\(\dagger\colon E(j,t)\Rightarrow E(t,t)\) - subject to laws expressing unitality and associativity. Much of the theory of monads extends, with appropriate modifications, to the context of relative monads. Herein, we develop the theory of relative monads in a 2-dimensional setting, analogous to the theory of monads in a 2-category. However, unlike the definition of a monad, the definition of a relative monad is not 2-diagrammatic: the extension operator \(\dagger\colon E(j,t)\Rightarrow E(t,t)\) involves a transformation between homs and cannot be captured by the structure of a 2-category. It is therefore necessary to work in a context for formal category theory, which axiomatises the structure of such transformations. In particular, we work within the context of a virtual equipment [1]. While we work throughout at this level of generality, many of our results are new even in the classical setting of relative monads in \(\mathbf{Cat}\). For instance, the following results are likely to be of interest even to readers who are not concerned with the formal aspects of the theory. 
* Relative monads are always monoids, permitting one to drop the left extension existence assumptions of [1, 1], provided one is willing to work with skew-multicategories rather than skew-monoidal categories (Theorems 4.16 and 4.29). * Relative adjunctions may be presented by means of a unit and a counit, in addition to the classical isomorphism of hom-sets (Lemma 5.5). * Left relative adjoints may be computed by (pointwise) left lifts (Proposition 5.8), and right relative adjoints by (pointwise) left extensions (Proposition 5.10). * Relative monads and relative adjunctions may be composed with suitable relative adjunctions (Propositions 5.28 and 5.30), recovering several known constructions of relative monads and relative adjunctions. * In addition to forming initial and terminal resolutions, the Kleisli and Eilenberg-Moore categories for a relative monad satisfy stronger universal properties with respect to morphisms of relative adjunctions (Theorems 6.37 and 6.46). * Relative monads embed faithfully into categories of slices and coslices via their Kleisli and Eilenberg-Moore constructions (Corollaries 6.38 and 6.47). * The Kleisli categories of arbitrary relative monads may be constructed from Kleisli categories of trivial relative monads (Proposition 6.53). As part of our development, we prove a number of representation theorems for relative monads (Theorems 4.16, 4.19, 4.22 and 4.29 and Proposition 4.26). In doing so, we unify several concepts that have arisen in the categorical literature, such as the _devices_ of Walters [20, 21], the _\(j\)-monads_ of Diers [17], and the _relative monads_ and _skew monoids_ of Altenkirch, Chapman and Uustalu [1, 1] (Example 8.10).
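For readers unfamiliar with the last of these notions: in \(\mathbf{Cat}\), a \(j\)-relative monad in the sense of Altenkirch, Chapman and Uustalu amounts to the data \((t,\eta,(-)^{\dagger})\) described in the introduction, subject to the following unitality and associativity laws, stated elementwise for all \(f\in E(ja,tb)\) and \(g\in E(jb,tc)\): \[f^{\dagger}\circ\eta_{a}=f,\qquad\eta_{a}^{\dagger}=1_{ta},\qquad(g^{\dagger}\circ f)^{\dagger}=g^{\dagger}\circ f^{\dagger}.\] These are the laws that the formal definition below internalises.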
### Outline of the paper In Section 2 we recall the definition of virtual equipment [19], and in Section 3 develop some basic category theory in this setting, such as the theory of weighted limits and colimits, pointwise extensions and lifts, and full faithfulness and density. In Section 4, we introduce relative monads (Definition 4.13). We motivate the definition by identifying relative monads with monoids in a skew-multicategory structure on the hom-categories of a virtual equipment (Theorem 4.16), which we introduce in Theorem 4.4. We furthermore establish a number of equivalent definitions of relative monad (Theorems 4.19, 4.22 and 4.29), recovering notions of monad-like structures that have arisen in the literature. In Section 5, we introduce relative adjunctions (Definition 5.1), giving several equivalent characterisations akin to those for (non-relative) adjunctions (Lemma 5.5), establish their limit and colimit preservation properties (Propositions 5.11 and 5.12), and explain their relation to relative monads. In Section 6, we introduce algebras (Definition 6.8) and opalgebras (Definition 6.18) for relative monads as left- and right-actions of monoids in skew-multicategories, and consider universal algebras (Definition 6.31) and opalgebras (Definition 6.42), which generalise the notions of algebra-object (or _Eilenberg-Moore object_) and opalgebra-object (or _Kleisli object_) for a monad. In particular, we prove that every algebra-object forms a relatively monadic resolution (Corollary 6.39), and that every opalgebra-object forms a relatively opmonadic resolution (Corollary 6.48). In Section 7, we briefly discuss the dual theory of relative comonads and relative coadjunctions. Finally, in Section 8, we consider the special case of relative monads in the virtual equipment \(\mathbb{V}\)**-Cat** of categories enriched in a monoidal category \(\mathbb{V}\). In particular, we show that the definition of relative monad in that setting may be simplified (Theorem 8.8), and construct (co)algebra-objects (Theorems 8.15 and 8.19) and (co)opalgebra-objects (Theorems 8.17 and 8.20). Previous notions of enriched relative monad in the literature are recovered as special cases. ### Deferrals It is worth highlighting some aspects of the formal theory of relative monads we have chosen not to pursue in this paper. First, in this paper, we study \(1\)-categories of relative monads - namely, the \(1\)-category of \(j\)-relative monads for a fixed root \(j\colon A\to E\) - and do not consider the \(2\)-dimensional structure formed by relative monads with different roots. This is in contrast to the seminal paper of Street on the formal theory of monads [10]. There are two reasons for this choice. The first is that we are motivated by applications for which the root \(j\) is fixed; and the second is that, contrary to morphisms of monads, the appropriate definition of morphism between arbitrary relative monads is not evident. Second, we do not consider the relationship between relative monads and non-relative monads, or, more generally, between relative monads with different roots, as studied by Walters [14] and Altenkirch, Chapman and Uustalu [1, 1]. While this is an essential aspect of the theory of relative monads, it has been omitted from the present paper for reasons of space. Third, though we focus herein only on enriched relative monads, there are several examples of structures resembling relative monads that we expect may be seen as relative monads in particular equipments, such as the _spectral algebraic theories_ of Jarzembski [11, Definition 2.1]; the _strong relative monads_ of Uustalu [12]; the _enriched abstract clones_ of Fiore [13, Definition 1.1]; and the _relative monads_ of [14, Definition 2.1]. These aspects, and others, shall be developed in forthcoming work. ### Related work The study of relative monads in a formal setting has been previously proposed. Maillard [15] and Arkor [1] independently defined relative monads in a representable virtual equipment (a _proarrow equipment_ in the sense of Wood [16, 17]): their definition coincides with ours in that setting. However, our treatment is more general, and addresses several deficiencies with these previous approaches: we give a more detailed comparison throughout. A different approach was proposed by Lobbia [14], who defined a notion of relative monad in any 2-category, generalising the _extension systems_ in a 2-category defined by Marmolejo and Wood [14]. While it is possible to capture relative monads for ordinary and internal categories in this setting, it is not possible to capture relative monads for enriched categories, and it is therefore inadequate for our purposes. ### Acknowledgements The authors thank John Bourke, Gabriele Lobbia, and Tarmo Uustalu for discussions about relative monads and skew-multicategories; and Christian Williams for introducing the authors to string diagrams for double categories, which simplify many of the proofs. The paper has benefitted from comments by Marcelo Fiore, Richard Garner, and Martin Hyland on an earlier development of the theory [1]. The second author was supported by Icelandic Research Fund grant No 228684-052.
## 2 Virtual equipments There are many flavours of category theory - enriched, internal, indexed and fibred, and so on - each of which admits much of the same theory as ordinary category theory, such as the study of limits and colimits, adjunctions and monads, presheaves, pointwise extensions, and so on. To avoid the repetition inherent in proving the same theorems in each setting - for instance that every adjunction induces a monad, or that left adjoints preserve colimits - it is desirable to work in a general context in which these theorems may be proven and for which each of these flavours of category theory is merely an example. This is the study of _formal category theory_ [10]. A fundamental question is then: what is an appropriate setting for formal category theory? In other words: what structure of categories is fundamental to their study? An evident choice is the 2-categorical structure possessed by categories, functors, and natural transformations, and early attempts to study formal category theory took place in the setting of 2-categories equipped with various property-like structure [10, 11, 12, 13]. This setting is apt for studying some kinds of categorical structure, in particular monads and adjunctions [11, 12], which are essentially 2-categorical in nature. However, it was clear from the beginning that this setting was not expressive enough to capture many fundamental concepts in enriched category theory. The shortcoming with 2-categories as a setting for formal category theory is the absence of a notion of _hom_ (such as hom-sets for ordinary categories, or hom-objects for enriched categories), which are crucial in defining concepts such as weighted limits and colimits, presheaves, pointwise extensions, and (crucially for our purposes) relative monads and relative adjunctions. While in some settings (notably for internal categories), homs may be captured faithfully using comma objects, justifying the use of 2-categories in these cases, this is not possible for enriched categories. Instead, homs must be provided as extra structure on a 2-category: this was the central insight of Street and Walters [14], who introduced _Yoneda structures_ as a setting for formal category theory that captures enriched categories in addition to internal categories. A Yoneda structure axiomatises the presheaf construction together with the existence of nerves for suitably small functors. However, a shortcoming of the notion of Yoneda structure is that there are flavours of category theory that do not admit a presheaf construction: for instance, \(\mathbb{V}\)-enriched category theory for non-closed monoidal categories \(\mathbb{V}\). Shortly following the paper of Street and Walters, Wood [13] introduced _proarrow equipments_ as a simplification of Yoneda structures. Proarrow equipments axiomatise the structure of distributors (also called _profunctors_ or _(bi)modules_), rather than the presheaf construction. A distributor from \(A\) to \(B\), denoted \(A\xrightarrow{}B\), is simply a functor \(B^{\mathrm{op}}\times A\to\mathbf{Set}\). Distributors capture the structure of the hom-sets of a category: for every locally-small category \(A\), the Yoneda embedding forms a distributor \(A(-_{1},-_{2})\colon A^{\mathrm{op}}\times A\to\mathbf{Set}\) (in fact, this forms the identity distributor on \(A\)).
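When it exists, the composite of distributors \(p\colon A\xrightarrow{}B\) and \(q\colon B\xrightarrow{}C\) is computed by the standard coend formula \[(q\odot p)(c,a)=\int^{b\in B}q(c,b)\times p(b,a),\] a colimit that is available in \(\mathbf{Set}\) but, for a general base of enrichment, need not exist; this is precisely the obstruction that motivates the virtual setting discussed below.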
Every Yoneda structure induces a proarrow equipment by considering a distributor to be a cocontinuous \(1\)-cell between presheaf objects, and in this sense proarrow equipments generalise Yoneda structures. Furthermore, since the existence of a presheaf construction is not required, proarrow equipments capture more general bases of enrichment than Yoneda structures. However, the setting of proarrow equipments is not quite general enough to capture \(\mathbb{V}\)-enriched category theory for arbitrary \(\mathbb{V}\). In particular, to compose distributors requires sufficient colimits in \(\mathbb{V}\), which may not exist in general. This motivated Cruttwell and Shulman [10] to introduce _virtual equipments_, which are a generalisation of proarrow equipments that do not require the existence of composite distributors. In contrast to previous approaches, virtual equipments are general enough to capture enriched category theory for arbitrary bases of enrichment. For this reason, we view it as the appropriate setting in which to develop formal category theory, and it is the setting in which we work. Our main example is the virtual equipment \(\mathbb{V}\)**-Cat** of categories enriched in a monoidal category \(\mathbb{V}\), which we discuss in Section 8. The reader may wish to look at that section to see how to instantiate the general theory we present here. ### Virtual double categories A virtual equipment is in particular a virtual double category, so we begin by recalling the definition and introducing the notation we shall use. A virtual double category is a generalisation of a pseudo-double category whose morphisms in one axis (the _loose_ axis) do not necessarily have composites, and whose morphisms in the other axis (the _tight_ axis) compose strictly. We shall employ a string diagram notation for virtual double categories and equipments, which aids the readability of diagrammatic proofs. Our notation is based on that of Myers [15, 16], though we have made some alterations. For the convenience of readers unfamiliar with string diagrams, we generally present definitions in terms both of pasting diagrams and of string diagrams, but use either as convenient in proofs. **Definition 2.1** ([1, p. 61; 10, Definition 1; 10, Definition 2.1]).: A _virtual double category_ \(\mathbb{X}\) comprises the following data. 1. A category \(\mathbf{X}\) of _objects_ and _tight-cells_. We will occasionally elide object names where unimportant in pasting diagrams, denoting each (potentially distinct) object by \(\cdot\). In string diagrammatic notation, we denote an object by a region (diagram omitted). In practice, we elide the object names in string diagrams, which may be inferred from context. To aid readability, we will often colour regions, using a different colour for each object. The colours are not essential for interpreting the string diagrams. We denote a tight-cell \(f\) from an object \(a\) to an object \(b\) by an arrow \(f\colon a\to b\); denote the composition of tight-cells \(f\colon A\to B\) and \(g\colon B\to C\) both by \(f\mathbin{;}g\colon A\to C\) and by \(gf\colon A\to C\); and denote the identity of an object \(A\) by \(1_{A}\colon A\to A\), or simply by \(=\) in pasting diagrams. In string diagrammatic notation, we denote a tight-cell \(f\colon A\to B\) by a horizontal line with an arrow. (The purpose of the arrow will be explained in Definition 2.6.) Composition of tight-cells \(f\,;g\) is denoted by vertical conjunction.
(Diagram omitted.) Identity tight-cells are implicit in string diagrams. 2. For each pair of objects \(a,b\in\mathbf{X}\), a class of _loose-cells_. We denote a loose-cell \(p\) from \(a\) to \(b\) by an arrow with a vertical stroke \(p\colon a\xrightarrow{}b\). In string diagrammatic notation, we denote such a loose-cell by a vertical line (diagram omitted). 3. For each chain of loose-cells \(p_{1},\dots,p_{n}\) (\(n\geq 0\)) and compatible tight-cells \(f_{0}\) and \(f_{n}\) (together forming a _frame_), a class of \(2\)-cells (frame diagram omitted). 4. For every configuration of \(2\)-cells of the following shape (pasting diagram omitted), a composite \(2\)-cell, together with an identity \(2\)-cell for each loose-cell, such that composition of \(2\)-cells is associative and unital. Our motivating example of a virtual double category will be the virtual double category of \(\mathbb{V}\)-enriched categories, in which the tight-cells are \(\mathbb{V}\)-functors, the loose-cells are \(\mathbb{V}\)-distributors, and the \(2\)-cells are \(\mathbb{V}\)-forms.
We defer an explicit definition to Definition 8.1.

#### 2.1.1. Composites

While loose-cells do not admit composites in general, a given virtual double category may admit some composites, which are characterised by a universal property, analogous to the characterisation of tensors in multicategories.

**Definition 2.4** ([10, Definition 5.1]).: A \(2\)-cell in a virtual double category is _opcartesian_ if any \(2\)-cell factors uniquely therethrough. In this case, we call \(q\) the _(loose-)composite_ \(q_{1}\odot\cdots\odot q_{m}\) of \(q_{1},\dots,q_{m}\) (note that we write composites in nondiagrammatic order). To aid readability, we shall often elide the distinction between \(\phi\) and \(\tilde{\phi}\).

In string diagrammatic notation, we denote the opcartesian \(2\)-cell above by horizontal conjunction. We denote by \(\phi_{1},\dots,\phi_{m}\) a \(2\)-cell of the following form, assuming the composite exists. When \(m=0\), we call \(q\colon A\nrightarrow A\) the _loose-identity_ and denote it by \(A(1,1)\), or simply by \(\rightleftharpoons\) in pasting diagrams. Identity loose-cells are implicit in string diagrams. We denote a nullary \(2\)-cell with loose-identity codomain by \(\phi\colon f\Rightarrow g\).

A virtual double category is _representable_ when it admits all loose-composites (including loose-identities). Loose-composites are unique up to isomorphism and are essentially associative and unital. Representable virtual double categories are equivalent to pseudo-double categories [10, Theorem 5.2].

As an intuition for opcartesian \(2\)-cells, observe that it does not make sense to ask whether a non-unary \(2\)-cell in a virtual double category is invertible, since \(2\)-cells have unary codomain. Opcartesian \(2\)-cells act as a universal unary approximant for a chain of loose-cells, and thus behave much as an invertible \(2\)-cell would (in particular, opcartesian \(2\)-cells with unary domain are invertible). Due to our string diagram notation for opcartesian \(2\)-cells, we may draw string diagrams that have multiple loose-cells at the bottom, but only when these loose-cells have a composite; a pasting diagram corresponding to a string diagram having multiple loose-cells at the bottom has an opcartesian \(2\)-cell at the bottom.

When a virtual double category admits loose-identities, the tight-cells form a \(2\)-category.

**Definition 2.5** ([10, Proposition 6.1]).: Let \(\mathbb{X}\) be a virtual double category with loose-identities. Denote by \(\underline{\mathbb{X}}\) the _tight \(2\)-category_ associated to \(\mathbb{X}\), having

1. objects: those of \(\mathbb{X}\);
2. \(1\)-cells: tight-cells in \(\mathbb{X}\);
3. \(2\)-cells \(\phi\colon f\Rightarrow g\): nullary \(2\)-cells with loose-identity codomain in \(\mathbb{X}\) as follows.

Given objects \(A\) and \(B\) in such an \(\mathbb{X}\), we denote by \(\mathbb{X}[A,B]\) the hom-category \(\underline{\mathbb{X}}(A,B)\). Identities and composition of \(2\)-cells in \(\underline{\mathbb{X}}\) are given by composition of \(2\)-cells in \(\mathbb{X}\) as follows.

For instance, the tight \(2\)-category \(\underline{\mathbf{Cat}}\) associated to the virtual double category \(\mathbf{Cat}\) is the usual \(2\)-category of categories, functors, and natural transformations.

### Virtual equipments

A crucial property of the virtual double categories with which we shall be concerned is the ability to restrict loose-cells along adjacent tight-cells.
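In \(\mathbf{Cat}\), the restriction operation of the following definition amounts to reindexing by precomposition. As an illustration only (under the convention, assumed here purely for exposition, that a distributor \(p\colon B\nrightarrow C\) is a functor \(C^{\mathrm{op}}\times B\to\mathbf{Set}\)):
\[p(f,g)(d,a)\;=\;p(fd,ga)\qquad\text{for tight-cells }f\colon D\to C\text{ and }g\colon A\to B.\]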
**Definition 2.6** ([10, Definition 7.1]).: A \(2\)-cell in a virtual double category is _cartesian_ if any \(2\)-cell factors uniquely therethrough. In this case, we call \(p\) the _restriction_ \(q(f,g)\). If \(q\) is a loose-identity \(A(1,1)\), we denote \(p=A(1,1)(f,g)\) simply by \(A(f,g)\). We denote the factorisation of a \(2\)-cell through a cartesian \(2\)-cell by and consequently elide the distinction between \(\phi\) and \(\hat{\phi}\). We denote the cartesian \(2\)-cell by

Restrictions are unique up to isomorphism and are pseudofunctorial: given a loose-cell \(p\colon B\nrightarrow C\) and tight-cells \(f\colon D\to C\) and \(g\colon A\to B\), each pair of \(2\)-cells \(\phi\colon f^{\prime}\Rightarrow f\) and \(\gamma\colon g\Rightarrow g^{\prime}\) induces a \(2\)-cell \(p(\phi,\gamma)\colon p(f,g)\Rightarrow p(f^{\prime},g^{\prime})\), assuming both restrictions exist.

Our string diagram notation is justified by the fact that the restriction \(q(f,g)\) is the composite \(B(f,1)\odot q\odot B^{\prime}(1,g)\) when \(\mathbb{X}\) admits the loose-identities \(B(1,1)\) and \(B^{\prime}(1,1)\) ([10, Theorem 7.16]). When \(\mathbb{X}\) does not admit loose-identities, we use the same notation: however, in this case, the labels \(B(f,1)\) and \(B^{\prime}(1,g)\) do not represent isolated loose-cells, and must be read together with \(q\) as the restriction \(q(f,g)\).

Observe that, as a special case of the string diagram notation for restrictions, if a tight-cell \(f\colon A\to B\) admits the restriction \(B(1,f)\colon A\nrightarrow B\) (called the _companion_ of \(f\)), then it may be bent down; while if it admits the restriction \(B(f,1)\colon B\nrightarrow A\) (called the _conjoint_ of \(f\)), then it may be bent up. Note that the existence of the companion and conjoint for \(f\) is predicated on the existence of a loose-identity for the codomain. This explains the use of the arrow notation: in a virtual double category with companions and conjoints, lines annotated with arrows in a string diagram may be bent (they may point right, up, or down, but not left).

The universal property of restriction ensures that the following _zig-zag laws_ hold [10, p. 618; 11, 12]. From this observation, we may easily deduce that, given parallel tight-cells \(f,g\colon A\to B\), the following are in natural bijection (cf. [10, Corollary 7.22]), and hence that restriction is fully faithful.
\[f\Rightarrow g\qquad\qquad B(1,f)\Rightarrow B(1,g)\qquad\qquad B(g,1)\Rightarrow B(f,1)\]
Consequently, \(f\cong g\) if and only if \(B(1,f)\cong B(1,g)\) if and only if \(B(g,1)\cong B(f,1)\). This permits us to view tight-cells as special loose-cells by taking either companions or conjoints: this process may be thought of as _loosening_ a tight-cell (cf. [10]).

**Notation 2.7**.: Let \(f\colon A\to B\) be a tight-cell. Assuming \(f\) admits the restrictions \(B(1,f)\) and \(B(f,1)\), denote by \(\smile_{f}\) and \(\frown_{f}\) the following \(2\)-cells. Furthermore, given a \(2\)-cell and a tight-cell \(k\colon C\to D\), denote by the \(2\)-cell defined by

The existence of companions and conjoints typically holds for virtual double categories of category-like structures (cf. [10, Examples 7.7]).
For instance, in \(\mathbf{Cat}\), the companion of a tight-cell \(f\colon A\to B\) is the distributor \(B(-_{1},f-_{2})\colon A\nrightarrow B\) given by postcomposition by \(f\), while the conjoint of \(f\) is the distributor \(B(f-_{1},-_{2})\colon B\nrightarrow A\) given by precomposition by \(f\). The importance of this structure motivates the following definition.

**Definition 2.8** ([10, Definition 7.6]).: A _virtual equipment_ (or simply _equipment_) is a virtual double category that admits all loose-identities and restrictions.

It will be useful to have terminology for those loose-cells induced by tight-cells via restriction.

**Definition 2.9**.: Let \(j\colon A_{0}\to B\) and \(i\colon A_{n}\to B\) be tight-cells, and consider a chain \(p_{1}\colon A_{1}\nrightarrow A_{0},\dots,p_{n}\colon A_{n}\nrightarrow A_{n-1}\) of loose-cells. If \(B(j,i)\) exists and forms the loose-composite of the chain, then we say that \(p_{1},\dots,p_{n}\) is _\(j\)-represented by \(i\)_, is _\(i\)-corepresented by \(j\)_, is _\(j\)-representable_, and is _\(i\)-corepresentable_. We omit the prefixes \(j\)- and \(i\)- when the respective tight-cells are the identity.

A loose-cell is thus representable precisely when it is the companion of a tight-cell, and is corepresentable precisely when it is the conjoint of a tight-cell.

### Duality

**Definition 2.10**.: The _dual_ \(\mathbb{X}^{\mathrm{co}}\) of a virtual double category \(\mathbb{X}\) is the virtual double category with the same objects and tight-cells as \(\mathbb{X}\), whose loose-cells \(A\nrightarrow B\) are the loose-cells \(B\nrightarrow A\) in \(\mathbb{X}\), and whose \(2\)-cells with a given frame are the \(2\)-cells in \(\mathbb{X}\) with the horizontally reflected frame. String diagrammatically, \(\mathbb{X}^{\mathrm{co}}\) arises from \(\mathbb{X}\) by horizontal reflection.

**Remark 2.11**.: A virtual double category has only one notion of dual, which combines the two notions of duality for a \(2\)-category or bicategory. In particular, the duality acts as \((-)^{\mathrm{co}}\) on the tight-cells and as \((-)^{\mathrm{op}}\) on the loose-cells; the two dualities coincide for the \(2\)-cells. The tight \(2\)-category of \(\mathbb{X}^{\mathrm{co}}\) is \((\underline{\mathbb{X}})^{\mathrm{co}}\), where the latter is formed in the usual way by reversing \(2\)-cells.

**Lemma 2.12**.: _Let \(\mathbb{X}\) be a virtual double category. Then \(\mathbb{X}^{\mathrm{co}}\) is an equipment if and only if \(\mathbb{X}\) is an equipment. Furthermore, \(\mathbb{X}^{\mathrm{co}}\) admits a loose-composite of a chain \(p_{n},\dots,p_{1}\) if and only if \(\mathbb{X}\) admits a loose-composite of \(p_{1},\dots,p_{n}\)._

Proof.: Loose-identities \(A(1,1)\) in \(\mathbb{X}^{\mathrm{co}}\) are loose-identities \(A(1,1)\) in \(\mathbb{X}\) (and conversely). Restrictions \(p(g,f)\) in \(\mathbb{X}^{\mathrm{co}}\) are restrictions \(p(f,g)\) in \(\mathbb{X}\) (and conversely). Loose-composites \(p_{n}\odot\dots\odot p_{1}\) in \(\mathbb{X}^{\mathrm{co}}\) are loose-composites \(p_{1}\odot\dots\odot p_{n}\) in \(\mathbb{X}\). They satisfy their respective universal properties by definition.

### Monads and adjunctions

Since a virtual double category has two kinds of morphism - tight-cells and loose-cells - there are two kinds of monad one may consider in a virtual double category, assuming the existence of loose-identities. Monads formed from tight-cells (which we simply call _monads_, or _tight-monads_ to disambiguate), and their generalisation to relative monads, will be of primary interest throughout the paper.
However, monads formed from loose-cells (which we call _loose-monads_) are of secondary interest in certain representation theorems, and it will be useful to introduce them here.

**Definition 2.13**.: Let \(\mathbb{X}\) be a virtual double category admitting loose-identities and let \(A\) be an object. A _monad_ on \(A\) is a monad on \(A\) in the tight \(2\)-category \(\underline{\mathbb{X}}\) [1, Definition 5.4.1]. Denote by \(\mathbf{Mnd}(A)\) the category of monads on \(A\), and by \(U_{A}\colon\mathbf{Mnd}(A)\to\mathbb{X}[A,A]\) the forgetful functor.

In the above definition, the only loose-identity we require is \(A(1,1)\), since, if one expands the definition of \(\underline{\mathbb{X}}\), this is the only one that is used. We assume all loose-identities exist for simplicity, so that we can state the definition in terms of the tight \(2\)-category. A similar consideration applies to several of the definitions below.

**Definition 2.14**.: A _loose-monad_ ([1, §2.6]) in a virtual double category comprises

1. an object \(A\), the _base_;
2. a loose-cell \(t\colon A\nrightarrow A\), the _underlying loose-cell_;
3. a \(2\)-cell \(\mu\colon t,t\Rightarrow t\), the _multiplication_;
4. a \(2\)-cell \(\eta\colon\;\Rightarrow t\), the _unit_,

satisfying the following equations.
\[(\mu,1_{t})\,;\mu=(1_{t},\mu)\,;\mu\qquad(\eta,1_{t})\,;\mu=1_{t}\qquad(1_{t},\eta)\,;\mu=1_{t}\]
A _loose-monad on \(A\)_ is a loose-monad with base \(A\). A morphism of loose-monads\({}^{1}\) on \(A\) from \((t,\mu,\eta)\) to \((t^{\prime},\mu^{\prime},\eta^{\prime})\) is a \(2\)-cell \(\tau\colon t\Rightarrow t^{\prime}\) satisfying the following equations.

Footnote 1: The loose-monad morphisms we consider are a special case of those of [10, §2.6] and [11, Definition 8.3], which permit morphisms between loose-monads with different bases.

\[\eta\,;\tau=\eta^{\prime}\qquad\mu\,;\tau=(\tau,\tau)\,;\mu^{\prime}\]
Loose-monads on \(A\) and their morphisms form a category \(\curlywedge\mathbf{Mnd}(A)\). Denote by \(U^{\curlywedge}_{A}\) the faithful functor sending a loose-monad \((t,\mu,\eta)\) to its underlying loose-cell \(t\).

In \(\mathbf{Cat}\), tight-monads are the classical notion of monad on a category. Loose-monads are monads in the bicategory of distributors, which are known to correspond to bijective-on-objects functors [12, p. 6.22].

Just as with monads in a \(2\)-category, we have a corresponding notion of adjunction for (tight) monads and loose-monads.

**Definition 2.15**.: Let \(\mathbb{X}\) be a virtual double category admitting loose-identities. An _adjunction_ in \(\mathbb{X}\) is an adjunction in the tight \(2\)-category \(\underline{\mathbb{X}}\) [10, §2].

**Definition 2.16** ([12, Definition 5.31]).: A _loose-adjunction_ in a virtual double category comprises

1. an object \(A\), the _base_;
2. an object \(C\), the _apex_, admitting a loose-identity;
3. a loose-cell \(\ell\colon A\nrightarrow C\), the _left loose-adjoint_;
4. a loose-cell \(r\colon C\nrightarrow A\), the _right loose-adjoint_, admitting a composite \(r\odot\ell\colon A\nrightarrow A\);
5. a \(2\)-cell \(\eta\colon\;\Rightarrow r\odot\ell\), the _unit_;
6. a \(2\)-cell \(\varepsilon\colon\ell,r\Rightarrow C(1,1)\), the _counit_,

satisfying the two zig-zag equations.

The motivating example of a loose-adjunction is the relationship between the representable and corepresentable loose-cells induced by a tight-cell.

**Lemma 2.17**.: _Let \(\ell\colon A\to C\) be a tight-cell in a virtual equipment.
Then \(C(1,\ell)\dashv C(\ell,1)\)._

Proof.: First, observe that \(C(\ell,\ell)\cong C(\ell,1)\odot C(1,\ell)\). The unit is given by \(\smile_{\ell}\) and the counit is given by \(\frown_{\ell}\). The zig-zag laws follow from those for restriction.

**Remark 2.18**.: Following Lemma 2.17, representable loose-cells in an equipment are left adjoints (often simply called _maps_). However, the converse is not generally true: for instance, the left adjoint distributors \(A\nrightarrow E\) are equivalent not to functors \(A\to E\), but to functors from \(A\) to the cocompletion of \(E\) under absolute colimits (cf. [11, §6]). The distinction between representables and maps is a crucial aspect of the insufficiency of \(2\)-categories as a setting for formal category theory (cf. Remark 4.23).

As expected, loose-adjunctions induce loose-monads.

**Lemma 2.19**.: _Every loose-adjunction \(\ell\dashv r\) induces a loose-monad._

Proof.: Let \((\ell,r,\eta,\varepsilon)\) be a loose-adjunction. We define a loose-monad by
\[(r\odot\ell,\;r\odot\varepsilon\odot\ell,\;\eta)\]
The unit laws follow from the zig-zag laws for a loose-adjunction, while the associativity law follows from associativity of composition of \(2\)-cells in \(\mathbb{X}\).

As a consequence of Lemmas 2.17 and 2.19, we have that every tight-cell \(\ell\colon A\to C\) in a virtual equipment induces a loose-monad \(C(\ell,\ell)\) (cf. [10, Lemma 8.4]).

**Definition 2.20**.: Let \(\ell\colon A\to C\) be a tight-cell. A loose-monad \(T\) is _induced by \(\ell\)_ if the loose-adjunction \(C(1,\ell)\dashv C(\ell,1)\) induces \(T\) via Lemma 2.19.

Monads and adjunctions in an equipment induce loose-monads and loose-adjunctions: this will be discussed in the greater generality of relative monads and relative adjunctions in Sections 4 and 5. Note that, while (tight) comonads may also be defined in a virtual double category with loose-identities, loose-comonads may not without the assumption of loose-composites, since their definition involves \(2\)-cells with non-unary codomain (cf. Section 7).

## 3. Formal category theory

We shall now introduce some basic concepts and results, well known in ordinary category theory, in the formal setting of equipments. Many of these results are known in the context of Yoneda structures [11] or proarrow equipments [25, 26], but have not yet been generalised to the context of virtual double categories. The reader interested primarily in relative monads is recommended to skip directly to Section 4, and refer back to this section for definitions and lemmas where necessary. The remainder of the paper may be read as if it applied only to enriched categories: our terminology has been chosen to align with the standard terminology in \(\mathbb{V}\)-\(\mathbf{Cat}\), as will be established in Section 8.1.

Throughout, we work in the context of an arbitrary virtual double category \(\mathbb{X}\) with restrictions. When we discuss right lifts in Section 3.1, we do not assume that \(\mathbb{X}\) admits loose-identities, but assume the existence of loose-identities when discussing colimits, from Section 3.2 onwards.

### Right lifts

A fundamental structure in an equipment is a _right lift_, which generalises the usual notion of right lift in a \(2\)-category. Right lifts will be used to define weighted colimits and (pointwise) left extensions.

**Definition 3.1**.: Let \(p\colon Y\nrightarrow Z\) and \(q\colon X\nrightarrow Z\) be loose-cells.
A loose-cell \(q\blacktriangleleft p\colon X\nrightarrow Y\) equipped with a \(2\)-cell, the _counit_, is the _right lift_ of \(q\) through \(p\) when every \(2\)-cell of the form on the left below factors uniquely as a diagram of the form on the right below.

**Remark 3.2**.: The definition of right lift in [11, Definition 9.1.2] only requires the above factorisations of \(2\)-cells when \(f\) and \(g\) are identities. When \(\mathbb{X}\) admits loose-identities (as assumed ibid.), this definition is equivalent to ours, since the following forms of \(2\)-cell are in bijection, permitting every \(2\)-cell to be expressed as a \(2\)-cell with trivial tight-cells. However, in the absence of loose-identities, the universal property of Definition 3.1 is stronger. The stronger universal property is required in the proof of Lemma 3.4 below.

We prove some basic useful results concerning right lifts. A useful intuition is that when \(\mathbb{X}\) is the delooping of a monoidal category (so that loose-cells in \(\mathbb{X}\) correspond to the objects of a monoidal category), a right lift corresponds to a right-hom. From this perspective, the following lemma expresses unitality of the right-hom and currying.

**Lemma 3.3**.: _Let \(q\colon X\nrightarrow Z\) be a loose-cell._

1. _If the loose-identity_ \(Z(1,1)\) _exists, then_ \(q\) _forms the right lift_ \(q\blacktriangleleft Z(1,1)\)_._
2. _If_ \(Y^{\prime}\xrightarrow{p^{\prime}}Y\xrightarrow{p}Z\) _are loose-cells such that the composite_ \(p\odot p^{\prime}\colon Y^{\prime}\nrightarrow Z\) _and right lift_ \(q\blacktriangleleft p\colon X\nrightarrow Y\) _exist, then, if either side of the following exists, so does the other, in which case they are isomorphic._
\[q\blacktriangleleft(p\odot p^{\prime})\cong(q\blacktriangleleft p)\blacktriangleleft p^{\prime}\]

Proof.: For (1), the universal cell \(\varpi\) is induced from the identity on \(q\) by opcartesianness of the nullary \(2\)-cell \(\;\Rightarrow Z(1,1)\). For each tight-cell \(f\colon W_{0}\to Z\) there is a unique \(2\)-cell \(\phi_{f}\) as follows. Composition with \(\phi_{f}\) then yields a bijection between \(2\)-cells with the following two frames.

For (2), there are bijections between \(2\)-cells with the following three frames, using that \((p\odot p^{\prime})(1,f)\cong p\odot(p^{\prime}(1,f))\), and the universal property of \(q\blacktriangleleft p\). The universal property of \(q\blacktriangleleft(p\odot p^{\prime})\) is therefore equivalent to that of \((q\blacktriangleleft p)\blacktriangleleft p^{\prime}\).

Restrictions preserve right lifts as follows.

**Lemma 3.4**.: _Let \(q\colon X\nrightarrow Z\) and \(p\colon Y\nrightarrow Z\) be loose-cells such that the right lift \(q\blacktriangleleft p\colon X\nrightarrow Y\) exists, and let \(x\colon X^{\prime}\to X\) and \(y\colon Y^{\prime}\to Y\) be tight-cells.
Then the loose-cell \((q\blacktriangleleft p)(y,x)\colon X^{\prime}\nrightarrow Y^{\prime}\) forms the right lift of \(q(1,x)\) through \(p(1,y)\)._
\[(q\blacktriangleleft p)(y,x)\cong q(1,x)\blacktriangleleft p(1,y)\]
_The universal \(2\)-cell is given by factoring the following \(2\)-cell through the cartesian \(2\)-cell associated to the restriction \(q(1,x)\)._

Proof.: Consider the following four forms of \(2\)-cell. The two at the top are in bijection with each other, as are the two on the bottom, via the universal property of restrictions. The two on the right are in bijection with each other by the universal property of \(q\blacktriangleleft p\). Hence the two forms of \(2\)-cell on the left are in bijection with each other, so \((q\blacktriangleleft p)(y,x)\) is the required right lift. We obtain the counit by calculating the action of the bijections on the identity of \((q\blacktriangleleft p)(y,x)\).

### Weighted colimits

We use right lifts to define the notion of _weighted colimit_ in an equipment \(\mathbb{X}\). The definition involves loose-identities, so henceforth we assume their existence in \(\mathbb{X}\). While, in enriched category theory, weights are often taken to be presheaves [10, (3.5)], in a formal context the appropriate notion of weight is a loose-cell (cf. [11, §4; 12, §2])\({}^{2}\).

Footnote 2: Note that we follow modern practice in using the term _weighted colimit_. Older texts such as [11, 12, 13] instead use the term _indexed colimit_.

**Definition 3.5**.: Let \(p\colon Y\nrightarrow Z\) be a loose-cell, and \(f\colon Z\to X\) be a tight-cell. A \(p\)_-weighted cylinder_ (or simply \(p\)_-cylinder_) for \(f\) is a pair \((c,\gamma)\) of a tight-cell \(c\colon Y\to X\) and a \(2\)-cell \(\gamma\colon p\Rightarrow X(f,c)\). A cylinder \((p*f,\lambda)\) is the \(p\)_-weighted colimit_ (or simply \(p\)_-colimit_) of \(f\) when the induced \(2\)-cell exhibits \(X(p*f,1)\) as the right lift \(X(f,1)\blacktriangleleft p\). A tight-cell \(g\colon X\to X^{\prime}\) _preserves_ the colimit \(p*f\) when the cylinder \((((p*f)\,;g),(\lambda\,;g))\) is the \(p\)-colimit of \((f\,;g)\colon Z\to X^{\prime}\).

As with right lifts, weighted colimits interact nicely with loose-identities and composites.

**Lemma 3.6**.: _Let \(f\colon Z\to X\) be a tight-cell._

1. _The colimit_ \(Z(1,1)*f\) _exists and is isomorphic to_ \(f\)_._
2.
_If_ \(Y^{\prime}\xrightarrow{p^{\prime}}Y\xrightarrow{p}Z\) _are loose-cells such that the composite_ \(p\odot p^{\prime}\colon Y^{\prime}\nrightarrow Z\) _and colimit_ \(p*f\colon Y\to X\) _exist, then, if either side of the following exists, so does the other, in which case they are isomorphic._
\[(p\odot p^{\prime})*f\cong p^{\prime}*(p*f)\]

Proof.: Immediate from Lemma 3.3.

Weighted colimits interact with restriction as follows.

**Lemma 3.7**.: _Let \(p\colon Y\nrightarrow Z\) be a loose-cell and \(f\colon Z\to X\) be a tight-cell, such that the colimit \(p*f\colon Y\to X\) exists. For each \(x\colon X^{\prime}\to X\) and \(y\colon Y^{\prime}\to Y\), the following \(2\)-cell exhibits \(X((p*f)y,x)\) as the right lift \(X(f,x)\blacktriangleleft p(1,y)\)._

_In particular, for each tight-cell \(y\colon Y^{\prime}\to Y\), the \(2\)-cell_
\[(\lambda,Y(1,y))\colon p(1,y)\Rightarrow X(f,(p*f)y)\]
_exhibits \((y\,;(p*f))\colon Y^{\prime}\to X\) as the colimit \(p(1,y)*f\)._

Proof.: Using Lemma 3.4, we have
\[X((p*f)y,x)\cong X(p*f,1)(y,x)\cong(X(f,1)\blacktriangleleft p)(y,x)\cong X(f,x)\blacktriangleleft p(1,y)\]
from which we may calculate the universal \(2\)-cell. The second part follows by taking \(x=1_{X}\).

### Pointwise left extensions

We specialise the definition of weighted colimit to obtain a definition of (pointwise) _left extension_. In enriched category theory, there are two notions of left extension: nonpointwise extensions, which are defined by a \(2\)-categorical universal property; and pointwise extensions, which are typically defined by a universal property involving presheaves. In a formal context, the nonpointwise notion is appropriate for loose-cells (cf. Definition 3.18), whereas the pointwise notion is appropriate for tight-cells. Concretely, in \(\mathbf{Cat}\), it is generally appropriate only to consider nonpointwise extensions of distributors, and to consider pointwise extensions of functors. We shall therefore drop the qualifiers _pointwise_ and _nonpointwise_ except to disambiguate.

**Definition 3.8**.: Let \(j\colon Z\to Y\) and \(f\colon Z\to X\) be tight-cells. A tight-cell \(j\triangleright f\colon Y\to X\) equipped with a \(2\)-cell \(\pi\colon f\Rightarrow j\,;(j\triangleright f)\) is the _left extension_ of \(f\) along \(j\) when the \(2\)-cell
\[Y(j,1)\xRightarrow{Y(j,1),\,\smile_{j\triangleright f}}X((j\triangleright f)j,\,j\triangleright f)\xRightarrow{X(\pi,\,j\triangleright f)}X(f,\,j\triangleright f)\]
exhibits \(j\triangleright f\) as the \(Y(j,1)\)-colimit of \(f\).

As with right lifts and weighted colimits, left extensions interact nicely with identities and composites, and with restriction.

**Lemma 3.9**.: _Let \(f\colon Z\to X\) be a tight-cell._

1. _The left extension_ \(1_{Z}\triangleright f\) _exists and is isomorphic to_ \(f\)_._
2.
_If_ \(Z\xrightarrow{j}Y\xrightarrow{j^{\prime}}Y^{\prime}\) _are tight-cells such that the left extension_ \(j\triangleright f\colon Y\to X\) _exists, then, if either side of the following exists, so does the other, in which case they are isomorphic._
\[(j^{\prime}j)\triangleright f\cong j^{\prime}\triangleright(j\triangleright f)\]

Proof.: Immediate from Lemma 3.6, since \(Y(j,1)\odot Y^{\prime}(j^{\prime},1)\cong Y^{\prime}(j^{\prime}j,1)\).

**Lemma 3.10**.: _Let \(j\colon Z\to Y\) and \(f\colon Z\to X\) be tight-cells such that the left extension \(j\triangleright f\colon Y\to X\) exists. For each \(x\colon X^{\prime}\to X\) and \(y\colon Y^{\prime}\to Y\), the following \(2\)-cell exhibits \(X((j\triangleright f)y,x)\) as the right lift \(X(f,x)\blacktriangleleft Y(j,y)\)._

_In particular, for each \(y\colon Y^{\prime}\to Y\), the \(2\)-cell_
\[Y(j,y)\xRightarrow{Y(j,1),\,\smile_{j\triangleright f},\,Y(1,y)}X((j\triangleright f)j,(j\triangleright f)y)\xRightarrow{X(\pi,1)}X(f,(j\triangleright f)y)\]
_exhibits \((y\,;(j\triangleright f))\colon Y^{\prime}\to X\) as the colimit \(Y(j,y)*f\)._

Proof.: Immediate from Lemma 3.7.

Pointwise extensions in particular satisfy the \(2\)-categorical universal property of nonpointwise extensions.

**Lemma 3.11**.: _Let \(j\colon Z\to Y\) and \(f\colon Z\to X\) be tight-cells such that the left extension \(j\triangleright f\colon Y\to X\) exists. There is a natural bijection of \(2\)-cells_
\[\frac{r_{1},\ldots,r_{n}\Rightarrow X((j\triangleright f)y,x)}{Y(j,y),r_{1},\ldots,r_{n}\Rightarrow X(f,x)}\]
_In particular, there is a natural bijection of \(2\)-cells_
\[\frac{j\triangleright f\Rightarrow x}{f\Rightarrow j\,;x}\]
_so that \(\pi\colon f\Rightarrow j\,;(j\triangleright f)\) exhibits \(j\triangleright f\) as the (nonpointwise) left extension of \(f\) along \(j\) in the tight \(2\)-category \(\underline{\mathbb{X}}\)._

Proof.: By Lemma 3.10, \(X((j\triangleright f)y,x)\) is the right lift \(X(f,x)\blacktriangleleft Y(j,y)\). The first bijection is immediate from the universal property of this right lift. The second bijection follows by taking \(y=1_{Y}\) and \(n=0\), since we have natural bijections
\[\begin{array}{c}j\triangleright f\Rightarrow x\\ \hline\hline\Rightarrow X(j\triangleright f,x)\\ \hline\hline Y(j,1)\Rightarrow X(f,x)\\ \hline\hline f\Rightarrow j\,;x\end{array}\]
using the first bijection, and the universal properties of the restrictions.

### Density and absolute colimits

Restriction is fully faithful, so that \(2\)-cells \(E(1,f)\Rightarrow E(1,g)\) between representable loose-cells are in bijection with \(2\)-cells \(f\Rightarrow g\) between tight-cells. We shall often desire a similar property with respect to \(2\)-cells between \(j\)-representable loose-cells, i.e. a bijection between \(2\)-cells \(E(j,f)\Rightarrow E(j,g)\) and \(2\)-cells \(f\Rightarrow g\). This holds provided that \(j\) is dense.

**Definition 3.12**.: A tight-cell \(j\colon A\to E\) is _dense_ when the identity \(2\)-cell \(1_{j}\colon j\Rightarrow j\) exhibits \(1_{E}\colon E\to E\) as the left extension \(j\triangleright j\).

**Lemma 3.13**.: _Let \(j\colon A\to E\) be a dense tight-cell.
There is a natural bijection of \(2\)-cells_
\[\frac{r_{1},\ldots,r_{n}\Rightarrow E(g,h)}{E(j,g),r_{1},\ldots,r_{n}\Rightarrow E(j,h)}\]
_In particular, there is a natural bijection of \(2\)-cells_
\[\frac{g\Rightarrow h}{E(j,g)\Rightarrow E(j,h)}\]

Proof.: Density of \(j\) implies \(E(g,h)\cong E((j\triangleright j)g,h)\), so the first bijection is immediate from Lemma 3.11. The second bijection follows therefrom by taking \(n=0\) and using the bijection
\[\frac{g\Rightarrow h}{\;\Rightarrow E(g,h)}\qed\]

_Absoluteness_ with respect to a tight-cell \(j\colon A\to E\) is a well-behavedness condition for colimits that permits the calculation of a weighted colimit via loose-composition.

**Definition 3.14**.: Let \(p\colon Y\nrightarrow Z\) be a loose-cell and let \(j\colon A\to E\) and \(f\colon Z\to E\) be tight-cells. The colimit \((p*f,\lambda)\) is _\(j\)-absolute_ if the \(2\)-cell \(E(j,f),p\Rightarrow E(j,p*f)\) induced by \(\lambda\) is opcartesian. Hence \(p*f\) is \(j\)-absolute just when the composite \(E(j,f)\odot p\) exists, and the canonical \(2\)-cell \(E(j,f)\odot p\Rightarrow E(j,p*f)\) is an isomorphism.

**Lemma 3.15**.: _Let \(j\colon A\to E\) and \(j^{\prime}\colon E\to E^{\prime}\) be tight-cells. Every \(j^{\prime}\)-absolute colimit is \((j\,;j^{\prime})\)-absolute._

Proof.: Immediate by pasting \(E(j,1)\) on to the \(2\)-cell defining \(j^{\prime}\)-absoluteness.

**Lemma 3.16**.: _Each left extension \(j\triangleright f\) preserves \(j\)-absolute colimits._

Proof.: Let \(p*g\) be a \(j\)-absolute colimit. We have
\[\begin{aligned}p*(g\,;(j\triangleright f))&\cong p*(E(j,g)*f)&&\text{(Lemma 3.10)}\\ &\cong(E(j,g)\odot p)*f&&\text{(Lemma 3.6)}\\ &\cong E(j,p*g)*f&&\text{(\(p*g\) is \(j\)-absolute)}\\ &\cong(p*g)\,;(j\triangleright f)&&\text{(Lemma 3.10)}\end{aligned}\]
A simple calculation shows that the \(2\)-cell \(p\Rightarrow E((j\triangleright f)g,(j\triangleright f)(p*g))\) induced by these isomorphisms is the canonical one.

In general, to show that a tight-cell \(f\) forms a \(j\)-absolute colimit, we must show both that it forms the colimit and that a particular \(2\)-cell is opcartesian. When \(j\) is dense, it is enough to establish the existence of an opcartesian \(2\)-cell, which then implies that \(f\) forms a colimit, as we show in the following lemma.

**Lemma 3.17**.: _Let \(p\colon Y\nrightarrow Z\) be a loose-cell and let \(j\colon A\to E\) and \(f\colon Z\to E\) be tight-cells. If \(j\) is dense, then a tight-cell \(f^{\prime}\colon Y\to E\) forms the \(j\)-absolute \(p\)-colimit of \(f\) if and only if there is an isomorphism_
\[E(j,f)\odot p\cong E(j,f^{\prime})\]

Proof.: The only if direction is trivial. For the other direction, assume there is such an isomorphism. Then \(f^{\prime}\) forms the colimit \(p*f\) because
\[\begin{aligned}f^{\prime}&\cong E(j,f^{\prime})*j&&\text{(Lemma 3.10, using density of \(j\))}\\ &\cong(E(j,f)\odot p)*j&&\text{(assumption)}\\ &\cong p*(E(j,f)*j)&&\text{(Lemma 3.6)}\\ &\cong p*f&&\text{(Lemma 3.10, using density of \(j\))}\end{aligned}\]
The universal \(2\)-cell \(\lambda\colon p\Rightarrow E(f,f^{\prime})\) is the unique \(2\)-cell such that the opcartesian \(2\)-cell \(E(j,f),p\Rightarrow E(j,f^{\prime})\) witnessing the isomorphism above is equal to that in the definition of \(j\)-absoluteness (Definition 3.14). Hence this colimit is \(j\)-absolute.

### Right extensions

Above, we defined the notion of right lift to capture colimit-like notions. We also define the dual notion of _right extension_, to capture limit-like notions.
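As orientation for the definition that follows, it may help to record what right extensions amount to in \(\mathbf{Cat}\) (a sketch of our own, under the illustrative convention that a distributor \(p\colon X\nrightarrow Y\) is a functor \(Y^{\mathrm{op}}\times X\to\mathbf{Set}\)): the right extension of \(q\colon X\nrightarrow Z\) along \(p\colon X\nrightarrow Y\) is computed by an end of hom-sets,
\[(p\blacktriangleright q)(z,y)\;\cong\;\int_{x\in X}\mathbf{Set}\bigl(p(y,x),\,q(z,x)\bigr),\]
just as composition of distributors is computed by a coend.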
**Definition 3.18**.: Let \(p\colon X\nrightarrow Y\) and \(q\colon X\nrightarrow Z\) be loose-cells. A loose-cell \(p\blacktriangleright q\colon Y\nrightarrow Z\) equipped with a \(2\)-cell, the _counit_, is the _right extension_ of \(q\) along \(p\) when every \(2\)-cell of the corresponding form factors uniquely therethrough; equivalently, when it forms the right lift \(q\blacktriangleleft p\) in \(\mathbb{X}^{\mathrm{co}}\). Right extensions consequently satisfy properties dual to those of Lemmas 3.3 and 3.4.

Dually to Definition 3.5, for a loose-cell \(p\colon X\nrightarrow Y\) and a tight-cell \(g\colon X\to Z\), the \(p\)_-weighted limit_ (or simply \(p\)_-limit_) \(\{p,g\}\colon Y\to Z\) is the \(p\)-weighted colimit of \(g\) in \(\mathbb{X}^{\mathrm{co}}\); it is characterised by \(Z(1,\{p,g\})\cong p\blacktriangleright Z(1,g)\) and, dually to Lemma 3.7, satisfies \(Z(f,\{p,g\})\cong p\blacktriangleright Z(f,g)\) for each tight-cell \(f\colon Y\to Z\).

**Lemma 3.21** (cf. [13, Proposition 8.5]).: _Let \(p\colon X\nrightarrow Y\) be a loose-cell, and \(f\colon Y\to Z\) and \(g\colon X\to Z\) be tight-cells. Supposing that the \(p\)-colimit \(p*f\) and \(p\)-limit \(\{p,g\}\) exist, there is a bijection of \(2\)-cells_
\[\frac{p*f\Rightarrow g}{f\Rightarrow\{p,g\}}\]

Proof.: We have \(Z(p*f,g)\cong Z(f,g)\blacktriangleleft p\) by Lemma 3.7, and dually \(Z(f,\{p,g\})\cong p\blacktriangleright Z(f,g)\), so both sides of the bijection correspond to \(2\)-cells \(p\Rightarrow Z(f,g)\).

### Pointwise left lifts

We define a notion of _pointwise_ left lift. This notion has not explicitly appeared in the literature previously. It is the appropriate pointwise notion of left lift, in the same sense that Definition 3.8 is the appropriate pointwise notion of left extension. Pointwise left lifts are closely related to relative adjunctions, as we explain in Section 5.

**Definition 3.22**.: Let \(j\colon Z\to X\) and \(f\colon Y\to X\) be tight-cells.
A tight-cell \(j\triangleleft f\colon Z\to Y\) equipped with a \(2\)-cell \(\eta\colon j\Rightarrow(j\triangleleft f)\,;f\) is the _left lift_ of \(j\) through \(f\) when the following \(2\)-cell exhibits \(Y(j\triangleleft f,1)\) as the right extension \(X(f,1)\blacktriangleright X(j,1)\).

We give the definition in the above form for ease of comparison with the definition of left extension (Definition 3.8). However, since \(X(f,1)\odot Y(j\triangleleft f,1)\cong X(f(j\triangleleft f),1)\), the \(2\)-cell above can be written equivalently as
\[Y(j\triangleleft f,1),X(f,1)\xRightarrow{\mathrm{opcart}}X(f(j\triangleleft f),1)\xRightarrow{X(\eta,1)}X(j,1)\]

**Lemma 3.23**.: _Let \(j\colon Z\to X\) and \(f\colon Y\to X\) be tight-cells and suppose that the left lift \(j\triangleleft f\colon Z\to Y\) exists. Then the \(2\)-cell_
\[Y(j\triangleleft f,1)\xRightarrow{Y(j\triangleleft f,1),\,\frown_{f}}X(f(j\triangleleft f),f)\xRightarrow{X(\eta,f)}X(j,f)\]
_is an isomorphism._

Proof.: Restrictions \(p(y,x)\) in \(\mathbb{X}^{\mathrm{co}}\) are restrictions \(p(x,y)\) in \(\mathbb{X}\), so we have the following isomorphisms, which compose to the required \(2\)-cell.
\[\begin{aligned}Y(j\triangleleft f,1)&\cong X(f,1)\blacktriangleright X(j,1)&&\text{(Definition 3.22)}\\ &\cong(X(1,1)\blacktriangleright X(j,1))(1,f)&&\text{(Lemma 3.4)}\\ &\cong X(j,f)&&\text{(Lemma 3.3)}\end{aligned}\]

In Lemma 3.11, we observed that every pointwise left extension in \(\mathbb{X}\) was in particular a nonpointwise left extension in \(\underline{\mathbb{X}}\). The following shows that pointwise left lifts in \(\mathbb{X}\) satisfy an analogous, but stronger, universal property.

**Proposition 3.24**.: _Let \(j\colon Z\to X\) and \(f\colon Y\to X\) be tight-cells and suppose the left lift \(j\triangleleft f\colon Z\to Y\) exists. Then \(j\triangleleft f\) is an absolute (nonpointwise) left lift in the tight \(2\)-category \(\underline{\mathbb{X}}\)._

Proof.: We show that, for every \(z\colon Z^{\prime}\to Z\), the tight-cell \((z\,;(j\triangleleft f))\colon Z^{\prime}\to Y\) equipped with the \(2\)-cell \((z\,;\eta)\colon(z\,;j)\Rightarrow(z\,;(j\triangleleft f)\,;f)\) is the left lift of \((z\,;j)\) through \(f\) in \(\underline{\mathbb{X}}\). For each \(y\colon Y^{\prime}\to Y\) we have
\[Y((j\triangleleft f)z,y)\cong Y(j\triangleleft f,1)(z,y)\cong X(j,f)(z,y)\cong X(jz,fy)\]
by Lemma 3.23. Hence there are bijections
\[\frac{z\,;(j\triangleleft f)\Rightarrow y}{z\,;j\Rightarrow y\,;f}\]
that send the identity on \(z\,;(j\triangleleft f)\) to \((z\,;\eta)\), as required.

We summarise the relationship between pointwise left and right extensions and lifts in the table below; pointwise right extensions and lifts in \(\mathbb{X}\) are defined to be pointwise left extensions and lifts in \(\mathbb{X}^{\mathrm{co}}\).
\[\begin{array}{l|lll}\text{Pointwise}&\text{are characterised by}&&\\ \hline\text{left extensions}&X(j\triangleright f,1)&\cong&X(f,1)\blacktriangleleft Y(j,1)\\ \text{left lifts}&Y(j\triangleleft f,1)&\cong&X(f,1)\blacktriangleright X(j,1)\\ \text{right extensions}&X(1,j\blacktriangleright f)&\cong&Y(1,j)\blacktriangleright X(1,f)\\ \text{right lifts}&Y(1,j\blacktriangleleft f)&\cong&X(1,j)\blacktriangleleft X(1,f)\end{array}\]
Note that pointwise left extensions and pointwise left lifts in \(\mathbb{X}\), like pointwise right lifts and pointwise right extensions in \(\mathbb{X}\), are characterised in terms of _right_ lifts and extensions in \(\mathbb{X}\).

### Full faithfulness

**Definition 3.25**.: A tight-cell \(j\colon A\to E\) is _fully faithful_ when the \(2\)-cell \(\smile_{j}\colon\;\Rightarrow E(j,j)\) is opcartesian; equivalently, when the induced \(2\)-cell \(A(1,1)\Rightarrow E(j,j)\) is invertible.

**Lemma 3.26**.: _Let \(j\colon A\to E\) be a tight-cell. If \(j\) is fully faithful, then for every tight-cell \(f\colon A\to X\) for which the left extension \((j\triangleright f,\pi)\) exists, the \(2\)-cell \(\pi\colon f\Rightarrow j\,;(j\triangleright f)\) is invertible._

Proof.: The \(2\)-cell \(\pi\) is equal to the composite of the following isomorphisms.
\[\begin{aligned}f&\cong A(1,1)*f&&\text{(Lemma 3.6)}\\ &\cong E(j,j)*f&&\text{(\(j\) is fully faithful)}\\ &\cong j\,;(j\triangleright f)&&\text{(Lemma 3.10)}\end{aligned}\]

## 4. Relative monads

Relative monads were introduced as a generalisation of monads to arbitrary functors [1]. However, despite this motivation, the traditional definition of relative monad ([1, Definition 1]) does not immediately appear monad-esque. We therefore begin not with the traditional definition (which appears as Definition 4.13), but by justifying the definition from an alternative perspective.

For an object \(A\) in a \(2\)-category \(\mathcal{K}\), the hom-category \(\mathcal{K}(A,A)\) is canonically equipped with the structure of a strict monoidal category, whose tensor product is given by composition of endo-\(1\)-cells, and whose unit is given by the identity \(1\)-cell on \(A\). A monoid in \(\mathcal{K}(A,A)\) is precisely a monad on \(A\).

More generally, we may consider monads in a virtual double category \(\mathbb{X}\). In this context there are two notions of monad: loose-monads and tight-monads (Section 2.4). For an object \(A\) in \(\mathbb{X}\), we may consider both loose-monads and tight-monads on \(A\) as monoids. As with monads in \(2\)-categories, endo-tight-cells on \(A\) form a strict monoidal category \(\mathbb{X}[A,A]\) (assuming \(A\) admits a loose-identity), and a monoid therein is precisely a tight-monad in the sense of Definition 2.13. However, since loose-cells do not admit composites in general, endo-loose-cells on \(A\) form not a monoidal category, but a multicategory \(\mathbb{X}[\![A,A]\!]\), the objects of which are loose-cells \(A\nrightarrow A\), and the multimorphisms of which are \(2\)-cells

A monoid in \(\mathbb{X}[\![A,A]\!]\) is then precisely a loose-monad in the sense of Definition 2.14. Furthermore, when \(\mathbb{X}\) is an equipment, the monoidal category \(\mathbb{X}[A,A]\) forms a full sub-multicategory of \(\mathbb{X}[\![A,A]\!]\), each tight-monad \((t,\mu,\eta)\) being represented by a loose-monad \((A(1,t),A(1,\mu),A(1,\eta))\).
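To make the monoids-as-monads perspective concrete, we record the following unpacking (ours, included for orientation only). Taking \(\mathcal{K}=\underline{\mathbf{Cat}}\), a monoid in \(\mathcal{K}(A,A)\) consists of an endofunctor \(t\colon A\to A\) together with natural transformations \(\mu\colon tt\Rightarrow t\) and \(\eta\colon 1_{A}\Rightarrow t\); since the tensor product is composition, the monoid axioms read
\[\mu\circ t\mu=\mu\circ\mu t,\qquad\mu\circ\eta t=1_{t}=\mu\circ t\eta,\]
which are exactly the classical monad laws.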
We should like to generalise this situation to relative monads by considering arbitrary hom-categories, for which the tight- and loose-cells may have a domain different to their codomain. On the face of it, such a proposition makes little sense, since it is not possible to form a chain of two loose-cells \(p,q\colon A\nrightarrow E\) unless \(A=E\). However, supposing we were given a loose-cell \(j^{*}\colon E\nrightarrow A\), we could form a chain of loose-cells
\[A\overset{q}{\nrightarrow}E\overset{j^{*}}{\nrightarrow}A\overset{p}{\nrightarrow}E\]
which acts as a form of composition _relative to \(j^{*}\)_. For this composition to be associative and unital in an appropriate sense, we cannot simply take any \(1\)-cell \(j^{*}\colon E\nrightarrow A\) relative to which to compose. However, it is enough to assume that \(j^{*}\) is the right adjoint of a loose-adjunction \(j_{*}\dashv j^{*}\). In particular, in the context of an equipment \(\mathbb{X}\), we may take \(j^{*}:=E(j,1)\) to be the conjoint of a tight-cell \(j\colon A\to E\), which is right-adjoint to the companion \(j_{*}:=E(1,j)\) by Lemma 2.17. We may then define a notion of multimorphism between loose-cells \(A\nrightarrow E\), given by \(2\)-cells

Though this does not quite suffice to define an appropriate multicategory structure on the loose-cells \(A\nrightarrow E\) of \(\mathbb{X}\), it does form a weaker notion of multicategory that generalises the skew-monoidal categories of Szlachányi [10] in the same way that multicategories generalise monoidal categories (cf. Remark 4.2). Thereafter, it is natural to consider monoids in this generalised multicategory as a notion of _\(j\)-relative_ monad. By restricting to the monoids that are representable in a sense analogous to that of tight-monads above, we shall show that this recovers the traditional definition of relative monad.

### Associative-normal left-skew-multicategories

We begin by defining the generalised notion of multicategory required to describe the skew composition described above: these generalised multicategories are similar to multicategories, but in which multimorphisms may additionally have _markers_ in their domain, denoted by \(\bullet\), which represent the unit in a left-skew-monoidal category. Below, we write \(\bullet^{m}\) as an abbreviation for \(\underbrace{\bullet,\ldots,\bullet}_{m}\) (where \(m\geq 0\)).

**Definition 4.1**.: An _associative-normal left-skew-multicategory_ \(\mathbf{M}\) comprises

1. a class \(|\mathbf{M}|\) of _objects_;
2. a class \(\mathbf{M}(A_{1},\ldots,A_{n};B)\) of _multimorphisms_ for each \(n>0\), \(A_{1},\ldots,A_{n}\in|\mathbf{M}|+\{\bullet\}\) and \(B\in|\mathbf{M}|\);
3. an _identity_ multimorphism \(1_{A}\in\mathbf{M}(A;A)\) for each \(A\in|\mathbf{M}|\);
4. for each multimorphism \(g\colon\bullet^{m_{0}},B_{1},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},B_{n},\bullet^{m_{n}}\to C\) where \(B_{1},\ldots,B_{n},C\in|\mathbf{M}|\) and \(n,m_{i}\geq 0\), and multimorphisms \(f_{1}\colon\overrightarrow{A_{1}}\to B_{1}\), ..., \(f_{n}\colon\overrightarrow{A_{n}}\to B_{n}\) where \(\overrightarrow{A_{i}}\in(|\mathbf{M}|+\{\bullet\})^{*}\), a _composite_ multimorphism
\[\bullet^{m_{0}},\overrightarrow{A_{1}},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},\overrightarrow{A_{n}},\bullet^{m_{n}}\xrightarrow{(f_{1},\ldots,f_{n})\,;g}C\]
5.
a _left-unitor_ function
\[\lambda_{(\overrightarrow{A};B),k}\colon\mathbf{M}(A_{1},\ldots,A_{k},A_{k+1},\ldots,A_{n};B)\to\mathbf{M}(A_{1},\ldots,A_{k},\bullet,A_{k+1},\ldots,A_{n};B)\]
for each \(0\leq k<n\);
6. a _right-unitor_ function
\[\rho_{(\overrightarrow{A};B),k}\colon\mathbf{M}(A_{1},\ldots,A_{k},\bullet,A_{k+1},\ldots,A_{n};B)\to\mathbf{M}(A_{1},\ldots,A_{k},A_{k+1},\ldots,A_{n};B)\]
for each \(0<k\leq n\),

such that composition is associative and unital; that the left- and right-unitors cohere with pre- and postcomposition; and that the right-unitor is a retraction of the left-unitor, in the following sense.
\[(f_{1},\ldots,\lambda_{(\overrightarrow{A}_{i};B_{i}),k}f_{i},\ldots,f_{n})\,;g=\lambda_{(\overrightarrow{A};C),(\sum_{0\leq j<i}m_{j})+(\sum_{0<j<i}|\overrightarrow{A}_{j}|)+k}\bigl((f_{1},\ldots,f_{n})\,;g\bigr)\qquad(0\leq k<|\overrightarrow{A}_{i}|)\]
\[(f_{1},\ldots,\rho_{(\overrightarrow{A}_{i};B_{i}),k}f_{i},\ldots,f_{n})\,;g=\rho_{(\overrightarrow{A};C),(\sum_{0\leq j<i}m_{j})+(\sum_{0<j<i}|\overrightarrow{A}_{j}|)+k}\bigl((f_{1},\ldots,f_{n})\,;g\bigr)\qquad(0<k\leq|\overrightarrow{A}_{i}|)\]
\[(f_{1},\ldots,f_{n})\,;\bigl(\lambda_{(\overrightarrow{B};C),(\sum_{0\leq j<k}m_{j})+k+\ell}\,g\bigr)=\lambda_{(\overrightarrow{A};C),(\sum_{0\leq j<k}m_{j})+(\sum_{0<j\leq k}|\overrightarrow{A}_{j}|)+\ell}\bigl((f_{1},\ldots,f_{n})\,;g\bigr)\qquad(0\leq k\leq n,\ 0\leq\ell\leq m_{k},\ k<n\vee\ell<m_{k})\]
\[(f_{1},\ldots,f_{n})\,;\bigl(\rho_{(\overrightarrow{B};C),(\sum_{0\leq j<k}m_{j})+k+\ell}\,g\bigr)=\rho_{(\overrightarrow{A};C),(\sum_{0\leq j<k}m_{j})+(\sum_{0<j\leq k}|\overrightarrow{A}_{j}|)+\ell}\bigl((f_{1},\ldots,f_{n})\,;g\bigr)\qquad(0\leq k\leq n,\ 0\leq\ell<m_{k},\ 0<k\lor 0<\ell)\]
\[\lambda_{(\overrightarrow{A};B),k}\,;\rho_{(\overrightarrow{A};B),k}=1_{\mathbf{M}(\overrightarrow{A};B)}\qquad(0<k<n)\]
Above, \(\overrightarrow{A}\) is shorthand for the domain of a multimorphism:
\[\bullet^{m_{0}},\overrightarrow{A}_{1},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},\overrightarrow{A}_{n},\bullet^{m_{n}}\]

\(\mathbf{M}\) is _left-normal_ when \(\lambda\) is invertible, and is _right-normal_ when \(\rho\) is invertible. A _functor_ between associative-normal left-skew-multicategories is a homomorphism of associative-normal left-skew-multicategories.

**Remark 4.2**.: Associative-normal left-skew-multicategories are part of a larger story, which we briefly outline. The construction of the free left-skew monoidal category described in [1] extends to a virtual double monad \(\mathbb{S}\) on \(\mathbf{Cat}\) via convolution in the usual way (cf. [11, §11; 12, Theorem 7.3]). Normalised \(\mathbb{S}\)-monoids in the sense of Cruttwell and Shulman [10, Definition 8.3] might then naturally be called _left-skew-multicategories_. The construction \(\mathbb{S}\) of the free left-skew monoidal category restricts to give constructions \(\mathbb{S}_{N}\) of free _strict partially-normal_ left-skew monoidal categories (cf. [1, §1; 13, Definition 3.1]), where some subset \(N\subseteq\{\alpha,\lambda,\rho\}\) of the structural transformations for associativity, and left- and right-unitality of a left-skew monoidal category are taken to be identities. Correspondingly, normalised \(\mathbb{S}_{N}\)-monoids give notions of _\(N\)-normal left-skew-multicategories_: in particular,

* \(\emptyset\)-normal left-skew-multicategories are left-skew-multicategories in the aforementioned sense;
* \(\{\alpha\}\)-normal left-skew-multicategories are the associative-normal left-skew-multicategories of Definition 4.1;
* \(\{\alpha,\rho\}\)-normal left-skew-multicategories are the _skew-multicategories_ of [1, Definition 4.2] (cf.
[1, §3, Alternative perspective 2]);
* \(\{\alpha,\lambda,\rho\}\)-normal left-skew-multicategories are multicategories [1, p. 106].

For \(N\subseteq N^{\prime}\subseteq\{\alpha,\lambda,\rho\}\), each \(N\)-normal left-skew-multicategory \(\mathbf{M}\) has an underlying wide \(N^{\prime}\)-normal left-skew-multicategory \(\mathbf{M}_{N^{\prime}\setminus N}\), given by restricting to the _\((N^{\prime}\setminus N)\)-normal_ multimorphisms. In particular, every associative-normal left-skew-multicategory has an underlying \(\{\alpha,\rho\}\)-normal left-skew-multicategory, which permits us the later use of the theory of left-representability developed in [1].

The associative-normal left-skew-multicategories with which we are concerned satisfy an additional representability property: namely, the existence of a nullary tensor product.

**Definition 4.3**.: A _unit_ in an associative-normal left-skew-multicategory \(\mathbf{M}\) comprises an object \(J\in\mathbf{M}\) and a multimorphism \(\bullet\to J\) such that the function
\[\mathbf{M}(X_{1},\ldots,X_{k},J,X_{k+1},\ldots,X_{n};Y)\to\mathbf{M}(X_{1},\ldots,X_{k},\bullet,X_{k+1},\ldots,X_{n};Y)\]
induced by precomposition is a bijection for all objects \(X_{1},\dots,X_{n},Y\in\mathbf{M}\) and \(0\leq k\leq n\). An associative-normal left-skew-multicategory with a unit is called _unital_.

We now make precise the claim of the section introduction by constructing a skew-multicategorical structure on the hom-categories of a virtual double category.

**Theorem 4.4**.: _Let \(\mathbb{X}\) be a virtual double category with a loose-adjunction \(j_{*}\dashv j^{*}\colon E\nrightarrow A\). The loose-cells \(A\nrightarrow E\) in \(\mathbb{X}\), together with \(2\)-cells of the form \(p_{1},j^{*},p_{2},j^{*},\dots,j^{*},p_{n}\Rightarrow q\), form a multicategory, which extends to a unital associative-normal left-skew-multicategory \(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\)._

Proof.: We define a multicategory \(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\) as follows. The class of objects is given by those of \(\mathbb{X}[\![A,E]\!]\). The multimorphisms \(p_{1},\dots,p_{n}\to q\) for \(n>0\) are \(2\)-cells \(p_{1},j^{*},p_{2},j^{*},\dots,j^{*},p_{n}\Rightarrow q\). There are no nullary multimorphisms. The identity multimorphism on \(p\) is given by the identity \(2\)-cell \(1_{p}\). Composition is given by pasting, associativity and unitality being inherited from that of composition of \(2\)-cells in \(\mathbb{X}\).

Next, we derive an associative-normal left-skew-multicategory structure on \(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\). We define a function
\[[1_{\mathbb{X}[\![A,E]\!]},(-)\mapsto j_{*}]\colon\mathbb{X}[\![A,E]\!]+\{\bullet\}\to\mathbb{X}[\![A,E]\!]\]
sending the marker \(\bullet\) to the loose-cell \(j_{*}\). This defines a multicategory with objects \(\mathbb{X}[\![A,E]\!]+\{\bullet\}\). We define a family
\[\lambda_{(p_{1},\dots,p_{n};q),k}\colon\mathbb{X}[\![j_{*}\dashv j^{*}]\!](p_{1},\dots,p_{n};q)\to\mathbb{X}[\![j_{*}\dashv j^{*}]\!](p_{1},\dots,\bullet,\dots,p_{n};q)\]
by pasting the counit of the loose-adjunction, and a family
\[\rho_{(p_{1},\dots,p_{n};q),k}\colon\mathbb{X}[\![j_{*}\dashv j^{*}]\!](p_{1},\dots,\bullet,\dots,p_{n};q)\to\mathbb{X}[\![j_{*}\dashv j^{*}]\!](p_{1},\dots,p_{n};q)\]
by pasting the unit of the loose-adjunction. That these cohere with composition follows from associativity of composition in \(\mathbb{X}\).
The compatibility condition between \(\lambda\) and \(\rho\) follows from the zig-zag condition associated to the loose-adjunction. The left-adjoint loose-cell \(j_{*}\) provides a unit for \(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\) by definition.

**Remark 4.5**.: Theorem 4.4 generalises the construction of [12, §7] from bicategories and skew-monoidal categories to virtual double categories and skew-multicategories. In fact, the construction of the associative-normal left-skew-multicategory \(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\) fits into a more general context, which we briefly outline. Recall from [12, §3] that skew-monoidal structures on a category \(C\) are often induced from monoidal structures on \(C\) by tensoring with a skew-warping \(T\), via \(a\otimes b:=a\otimes Tb\). The construction in [12, §7] is essentially a categorification of this idea, where the monoidal category \(C\) is replaced by a bicategory \(\mathcal{K}\). In this context, the tensor \(\otimes\) becomes composition \(\odot\) of loose-cells, and the skew-warping is given by \(j^{*}\odot(-)\). However, although Lack and Street do consider a bicategorical notion of skew-warping in §4 ibid., they do not exhibit the skew-monoidal structure on \(\mathcal{K}[A,E]\) as an instance of this construction. To do so would require the consideration of a notion of _relative skew-warping_, to capture the skew-monoidal structure induced by a single right-adjoint 1-cell \(j^{*}\colon E\twoheadrightarrow A\), rather than by a family of right-adjoint 1-cells indexed by the objects of \(\mathcal{K}\). We have chosen to follow Lack and Street [12] in giving an explicit construction of the skew-multicategorical structure in Theorem 4.4, since a formalisation of the approach outlined above would require a further generalisation of the theory ibid. to virtual double categories.

**Definition 4.6**.: Let \(\mathbb{X}\) be an equipment with a tight-cell \(j\colon A\to E\). Denote by \(\mathbb{X}[\![j]\!]\) the skew-multicategory \(\mathbb{X}[\![E(1,j)\dashv E(j,1)]\!]\).

**Definition 4.7**.: Given a skew-multicategory \(\mathbf{M}\), denote by \(\mathbf{M}_{1}\) the category of unary multimorphisms, i.e. the category whose objects are those of \(\mathbf{M}\) and whose hom-set \(\mathbf{M}_{1}(X,Y):=\mathbf{M}(X;Y)\). In particular \(\mathbb{X}[\![j]\!]_{1}=\mathbb{X}[\![A,E]\!]\).

With the intention of obtaining the classical definition of relative monad, we examine the monoids in the skew-multicategory \(\mathbb{X}[\![j]\!]\): it will turn out that \(j\)-relative monads are equivalent to monoids whose underlying loose-cell is representable.

**Definition 4.8**.: Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory. A _monoid_ in \(\mathbf{M}\) comprises 1. an object \(M\in\mathbf{M}\), the _carrier_; 2. a multimorphism \(m\colon M,M\to M\), the _multiplication_; 3. a multimorphism \(u\colon\bullet\to M\), the _unit_, satisfying the following equations. \[(u,1_{M})\,;m=\lambda_{(M;M),0}(1_{M})\qquad\rho_{(M;M),1}((1_{M},u)\,;m)=1_{M}\qquad(m,1_{M})\,;m=(1_{M},m)\,;m\] A _monoid homomorphism_ from \((M,m,u)\) to \((M^{\prime},m^{\prime},u^{\prime})\) is a multimorphism \(f\colon M\to M^{\prime}\) satisfying the following equations. \[u\,;f=u^{\prime}\qquad m\,;f=(f,f)\,;m^{\prime}\] Monoids in \(\mathbf{M}\) and their homomorphisms form a category \(\mathbf{Mon}(\mathbf{M})\) functorial in \(\mathbf{M}\).
Denote by \(U_{\mathbf{M}}\colon\mathbf{Mon}(\mathbf{M})\to\mathbf{M}_{1}\) the faithful functor sending each monoid \((M,m,u)\) to its carrier \(M\). Any unital skew-multicategory contains an initial monoid, which we shall later show to be of particular interest for relative monads.

**Proposition 4.9**.: _Let \(\mathbf{M}\) be a unital skew-multicategory. The unit \(J\) forms a monoid, which is initial amongst monoids in \(\mathbf{M}\)._

Proof.: Unitality of \(\mathbf{M}\) gives a multimorphism \(u\colon\bullet\to J\), and induces from \(\lambda_{(J;J),0}(1_{J})\colon\bullet,J\to J\) a multimorphism \(m\colon J,J\to J\). The left unit law follows from unitality and the definition of \(m\). The right unit law follows from compatibility of the unitors with composition, the \(\lambda\)-\(\rho\) interaction law, and unitality. The associativity law follows from naturality of precomposition of \(u\colon\bullet\to J\). Given any monoid \((M^{\prime},m^{\prime},u^{\prime})\) in \(\mathbf{M}\), the unit \(u^{\prime}\colon\bullet\to M^{\prime}\) induces a multimorphism \(J\to M^{\prime}\) which forms a monoid homomorphism: the unit law follows from the unitality bijection; while the multiplication law follows from unitality and the unit laws for \(M^{\prime}\).

Before introducing the notion of relative monad we shall study henceforth, we first introduce the slightly more general notion of _loose relative monad_, which stands to the notion of relative monad in the same relation that loose-monads stand to (tight) monads (Section 2.4), and will be used to simplify some later proofs.

**Definition 4.10**.: For a tight-cell \(j\colon A\to E\), denote by \(\curlywedge\mathbf{RMnd}(j):=\mathbf{Mon}(\mathbb{X}[\![j]\!])\) the category of _loose \(j\)-relative monads_, and denote by \(U_{j}\colon\curlywedge\mathbf{RMnd}(j)\to\mathbb{X}[\![A,E]\!]\) the forgetful functor.

Every loose \(j\)-relative monad induces a loose-monad on its domain by restricting along \(j\). As a consequence, loose-monads relative to identity tight-cells are simply loose-monads.

**Lemma 4.11**.: _Restriction along \(j\) induces a functor \((-)(j,1)\colon\curlywedge\mathbf{RMnd}(j)\to\curlywedge\mathbf{Mnd}(A)\) commuting with the forgetful functors._

_Furthermore, when \(j\) is the identity, this functor is an isomorphism._

Proof.: There is a functor of associative-normal left-skew-multicategories \((-)(j,1)\colon\mathbb{X}[\![j]\!]\to\mathbb{X}[\![A;A]\!]\) sending each loose-cell \(p\colon A\twoheadrightarrow E\) to \(p(j,1)\colon A\twoheadrightarrow A\) and each \(2\)-cell \[\bullet^{m_{0}},p_{1},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},p_{n},\bullet^{m_{n}}\Rightarrow q\] to the pasting of \(E(j,1)\) with the \(2\)-cell \(p_{1},\ldots,p_{n}\Rightarrow q\) given by precomposing by \(\frown_{j}\) for each \(\bullet\). By functoriality of \(\mathbf{Mon}\), there is therefore a functor \(\mathbf{Mon}((-)(j,1))\colon\mathbf{Mon}(\mathbb{X}[\![j]\!])\to\mathbf{Mon}(\mathbb{X}[\![A;A]\!])\) commuting with the forgetful functors, which by definition is a functor \(\curlywedge\mathbf{RMnd}(j)\to\curlywedge\mathbf{Mnd}(A)\). When \(j\) is the identity, \((-)(j,1)\) is invertible, since restriction is pseudofunctorial.

While loose relative monads are of interest in their own right, herein we shall be interested in restricting to those monoids in \(\mathbb{X}[\![j]\!]\) whose underlying loose-cells are representable.
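In \(\mathbb{X}=\mathbf{Cat}\), the monoids we are about to single out (Definition 4.13 and Theorem 4.16 below) specialise to the relative monads familiar from functional programming. As a concrete orientation, the following Haskell sketch is purely illustrative and not part of the formal development: it takes the root \(j\) to be the (implicit) inclusion of types equipped with decidable ordering, represented by the `Ord` constraint, and the class name `RMonad` and its method names are our own.

```haskell
import qualified Data.Set as Set

-- A relative monad "in extension form": a unit and an extension operator,
-- with the root j appearing only as the Ord constraint on objects.
class RMonad t where
  unit :: Ord a => a -> t a                             -- eta : j => t
  ext  :: (Ord a, Ord b) => (a -> t b) -> (t a -> t b)  -- the extension operator

-- Set is not a Haskell Monad (its fmap would require Ord), but it is a
-- relative monad along the inclusion of Ord-types:
instance RMonad Set.Set where
  unit    = Set.singleton
  ext k s = Set.unions [ k x | x <- Set.toList s ]

-- Expected laws, mirroring the unit and associativity laws:
--   ext k . unit = k,  ext unit = id,  ext k . ext h = ext (ext k . h)
```

In monoid form, `ext` corresponds to the multiplication \(\mu\), in the sense of Theorem 4.16 below.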
**Definition 4.12**.: Let \(\mathbb{X}\) be a virtual equipment with a tight-cell \(j\colon A\to E\). Define \(\mathbb{X}[j]\) to be the (unital) full associative-normal sub-left-skew-multicategory of \(\mathbb{X}[\![j]\!]\) spanned by the representable loose-cells.

We shall unwrap the definition of a monoid in \(\mathbb{X}[j]\) to compare it with the classical definition of relative monad. Explicitly, a monoid in \(\mathbb{X}[j]\) comprises 1. a tight-cell \(t\colon A\to E\); 2. a \(2\)-cell \(\mu\colon E(1,t),E(j,1),E(1,t)\Rightarrow E(1,t)\); 3. a \(2\)-cell \(E(1,\eta)\colon E(1,j)\Rightarrow E(1,t)\), satisfying two unit laws and an associativity law, expressed as equalities of pasting diagrams. A monoid homomorphism is a 2-cell \(E(1,\tau)\colon E(1,t)\Rightarrow E(1,t^{\prime})\) satisfying the evident compatibility laws with the units and multiplications.

To compare this with the classical definition of relative monad, we now define a relative monad in an arbitrary equipment.

**Definition 4.13**.: Let \(\mathbb{X}\) be an equipment. A _relative monad_ in \(\mathbb{X}\) comprises 1. a tight-cell \(j\colon A\to E\), the _root_; 2. a tight-cell \(t\colon A\to E\), the _underlying tight-cell_; 3. a 2-cell \(\dagger\colon E(j,t)\Rightarrow E(t,t)\), the _extension operator_; 4. a 2-cell \(\eta\colon j\Rightarrow t\), the _unit_, satisfying two unit laws and an associativity law, expressed as equalities of pasting diagrams.

**Definition 4.14**.: A _morphism_ of relative monads from \((t,\dagger,\eta)\) to \((t^{\prime},\dagger^{\prime},\eta^{\prime})\) with common root \(j\) is a 2-cell \(\tau\colon t\Rightarrow t^{\prime}\) preserving the units and the extension operators. \(j\)-relative monads (or simply _\(j\)-monads_) and their morphisms form a category \(\mathbf{RMnd}(j)\), with a faithful forgetful functor to \(\mathbb{X}[A,E]\).

**Example 4.15**.: For any tight-cell \(j\colon A\to E\), the triple \[(j,1_{j},1_{E(j,j)})\] forms a \(j\)-monad, the _trivial \(j\)-monad_, which is furthermore initial in \(\mathbf{RMnd}(j)\) by Proposition 4.9.

We may now exhibit \(j\)-monads as monoids in \(\mathbb{X}[j]\): conceptually, the equivalence arises from transposing the loose-cell \(E(1,t)\) in the domain of the multiplication operator \(\mu\) to the loose-cell \(E(t,1)\) in the codomain of the extension operator \(\dagger\), via the loose-adjunction \(E(1,t)\dashv E(t,1)\). String diagrammatically, this corresponds to bending the string associated with the tight-cell \(t\) from pointing down to pointing up.

**Theorem 4.16**.: _There is an isomorphism of categories rendering the following diagram commutative._

Proof.: Observe that, given a multiplication, we can define an extension operator, and conversely. It is immediate that the laws for a relative monad (morphism) are precisely those for a monoid (homomorphism) under these transformations.

**Remark 4.17**.: Levy suggests that relative monads ought to be monoids in skew-multicategories [10, p. 21], but does not give an explicit definition.

To compare our definition of relative monad to a similar definition in the literature, we observe that, while \(j\)-monads are precisely monoids in \(\mathbb{X}[j]\), they may also be viewed as particular monoids in \(\mathbb{X}[\![j]\!]\): namely those for which the underlying loose-cell is representable. First, we make the following observation.

**Lemma 4.18**.: _Let \(\mathbf{M}\) be a skew-multicategory and let \(\mathbf{M}^{\prime}\hookrightarrow\mathbf{M}\) be a full sub-skew-multicategory. The following square forms a pullback in \(\mathbf{Cat}\)._

Proof.: By definition, a monoid in \(\mathbf{M}^{\prime}\) is a monoid in \(\mathbf{M}\) whose carrier is in \(\mathbf{M}^{\prime}\) and, since \(\mathbf{M}^{\prime}\) is a full sub-skew-multicategory, the homomorphisms are identical.

We then have the following, identifying (tight) relative monads as representable loose relative monads.
**Theorem 4.19**.: _The following square forms a pullback in \(\mathbf{Cat}\)._

Proof.: Direct by composing Theorem 4.16 with Lemma 4.18, considering the inclusion \(\mathbb{X}[j]\hookrightarrow\mathbb{X}[\![j]\!]\).

Consequently, it is evident that relative monads generalise monads.

**Corollary 4.20**.: _Let \(A\) be an object of \(\mathbb{X}\). There is an isomorphism of categories rendering the following diagram commutative._

Proof.: We have the following pullbacks by Theorem 4.19 and Lemma 4.18. Hence both categories are pullbacks of the same cospan, by Lemma 4.11.

**Remark 4.21**.: The data of a monad relative to the identity is reminiscent of the definition of _extension system_ of Marmolejo and Wood [14, p. 2.3], a generalisation of the _algebraic theories in extension form_ of [13, Exercise 1.3.12] to arbitrary \(2\)-categories, each of which comprises a \(1\)-cell \(t\colon A\to A\), a \(2\)-cell \(\eta\colon 1\Rightarrow t\), and a family of functions \[\{\mathbb{X}[\cdot,A](x,ty)\to\mathbb{X}[\cdot,A](tx,ty)\}_{x,y\colon\cdot\to A}\] that is well-behaved in the sense of [14, Definition 2.1], and subject to unitality and associativity laws. From [14, Lemma 2.2], it follows that such families are equivalent to \(2\)-cells \(t\circ t\Rightarrow t\), and hence to \(2\)-cells \(A(1,t)\Rightarrow A(t,t)\). Thus, when \(j=1\), extension systems are essentially the same as \(j\)-relative monads (in extension form). In a similar fashion, the definition of monoid in \(\mathbb{X}[1_{A}]\) is reminiscent of (an analogous generalisation of) the definition of _algebraic theory in clone form_ of Manes [13, p. 3.2], which comprises a \(1\)-cell \(t\colon A\to A\), a \(2\)-cell \(\eta\colon 1\Rightarrow t\), and a well-behaved family of functions \[\{\mathbb{X}[\cdot,A](x,ty)\times\mathbb{X}[\cdot,A](y,tz)\to\mathbb{X}[\cdot,A](x,tz)\}_{x,y,z\colon\cdot\to A}\] subject to unitality and associativity laws. As above, such families are equivalent to \(2\)-cells \(t\circ t\Rightarrow t\), and hence to \(2\)-cells \(A(1,t),A(1,t)\Rightarrow A(1,t)\). Thus, when \(j=1\), algebraic theories in clone form are essentially the same as \(j\)-relative monads (in monoid form). The approach of Marmolejo and Wood [14] was generalised by Lobbia [15] to capture relative monads in \(\mathbf{Cat}\). However, the failure of an analogue of [14, Lemma 2.2] in that setting means that the definition of _relative monad_ of [15, Definition 2.1] is not in general equivalent to our definition. We will show in Section 8 that, in contrast to the definition of Lobbia [15, Example 2.2(iii)], our definition recovers the expected notion of enriched relative monad.

Relative monads behave particularly nicely when their roots are dense. An example of this behaviour is given by the following, which permits the representation of \(j\)-monads as _\(j\)-representable_ loose-monads: i.e. those loose-monads for which the underlying loose-cell is of the form \(E(j,t)\colon A\twoheadrightarrow A\) for some tight-cell \(t\colon A\to E\).

**Theorem 4.22**.: _There is a functor \(E(j,-)\colon\mathbf{RMnd}(j)\to\curlywedge\mathbf{Mnd}(A)\), fully faithful if \(j\) is dense, in which case the following forms a pullback square in \(\mathbf{Cat}\)._

Proof.: Using Lemma 4.11 and Theorem 4.16, we have the following diagram in \(\mathbf{Cat}\).
The composite functor \(\mathbb{X}[A,E]\to\mathbb{X}[\![A;A]\!]\) is \(E(j,1)\), which, when \(j\) is dense, is fully faithful by Lemma 3.13, and hence the outer rectangle is also a pullback by Lemma 4.18. In this case, since fully faithful functors are stable under pullback, the composite functor \(\mathbf{RMnd}(j)\to\curlywedge\mathbf{Mnd}(A)\) is also fully faithful.

This characterisation will be related in Example 8.10 to several notions appearing in the literature.

**Remark 4.23**.: In the terminology of Lack and Street [14], loose relative monads are _formal mv-monads_. In their setting it is not possible to recover (tight) relative monads along the lines of Theorem 4.19, since restricting to the monoids whose underlying loose-cell is left adjoint does not precisely recover the tight-cells (cf. Remark 2.18). Lack and Street note the similarity of their definition to that of relative monads, but do not make this relationship precise.

**Remark 4.24**.: Let \(j^{*}\colon E\twoheadrightarrow A\) be a loose-cell. The definition of loose relative monad in Definition 4.10 admits a natural generalisation to a structure comprising a loose-cell \(t_{*}\colon A\twoheadrightarrow E\) admitting a composite \(j^{*}\odot t_{*}\colon A\twoheadrightarrow A\), equipped with a \(2\)-cell \(\mu\colon t_{*},j^{*},t_{*}\Rightarrow t_{*}\) and a \(2\)-cell \(\eta\) with empty domain and codomain \(j^{*}\odot t_{*}\), satisfying unit and associativity laws. This recovers as special cases various generalisations of (relative) monads that have appeared in the literature. * When \(j^{*}\) is corepresentable, this is precisely the definition of loose relative monad in Definition 4.10; when \(t_{*}\) is furthermore representable, this is precisely the definition of relative monad in Definition 4.13. * When \(t_{*}\) is representable, we recover a generalisation of relative monad proposed by Levy [14, p. 20]; when \(j^{*}\) is furthermore representable (rather than corepresentable as might be expected), this is precisely the definition of _\(E\)-monad on \(A\)_ of Spivey [20, Definition 1]. In particular, the correspondence between relative monads with left-adjoint roots and \(E\)-monads on \(A\) with right-adjoint roots observed in [1, §6] follows immediately. The study of these generalised loose relative monads is deferred to future work.

### Relative monads as monoids in a multicategory

While in general we require the generality of skew-multicategories to capture relative monads, it is natural to wonder whether there are situations in which it suffices to consider simpler structures. In this section, we shall show that it often suffices to consider a (non-skew) multicategory; in the following section, we shall show that it often suffices to consider a skew-monoidal category. In proving the latter, we shall give a conceptual explanation for the skew-monoidal categories of functors studied by Altenkirch, Chapman and Uustalu [1]. First, we observe that, to capture relative monads, it suffices to consider monoids in the underlying right-normal sub-left-skew-multicategory of an associative-normal left-skew-multicategory, which is formed by restricting to those multimorphisms where \(\bullet\) may appear only in the first position (cf. Remark 4.2).
**Lemma 4.25**.: _Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory, and denote by \(\mathbf{M}_{\rho}\) its wide (associative- and) right-normal sub-skew-multicategory. Then there is an isomorphism of categories rendering the following diagram commutative._

Proof.: The data for a monoid (homomorphism) in \(\mathbf{M}\) only involves the data of the underlying \(\{\alpha,\rho\}\)-normal left-skew-multicategory.

While it is not possible to directly represent relative monads as monoids in the underlying (non-skew) multicategory of \(\mathbb{X}[j]\), since the data of a monoid involves a multimorphism with domain \(\bullet\), it is possible to characterise when \(\mathbb{X}[j]_{\rho}\) is equivalent to a multicategory.

**Proposition 4.26**.: _Let \(j\colon A\to E\) be a tight-cell. \(\mathbb{X}[j]_{\rho}\) is left-normal if \(j\) is dense._

Proof.: 2-cells \(E(1,f_{1}),E(j,f_{2}),\ldots,E(j,f_{n})\Rightarrow E(1,g)\) are in bijection with 2-cells \(E(j,f_{1}),\ldots,E(j,f_{n})\Rightarrow E(j,g)\) when \(j\) is dense by Lemma 3.13, and hence with 2-cells \(E(1,j),E(j,f_{1}),\ldots,E(j,f_{n})\Rightarrow E(1,g)\), exhibiting the left-unitor of \(\mathbb{X}[j]_{\rho}\) as invertible.

Therefore, relative monads with dense roots may be represented as monoids in the multicategory \(\mathbb{X}[j]_{\rho}\) whose multimorphisms \(f_{1},\ldots,f_{n}\to g\) are the 2-cells \(E(1,j),E(j,f_{1}),\ldots,E(j,f_{n})\Rightarrow E(1,g)\).

**Corollary 4.27**.: _Let \(j\colon A\to E\) be a dense tight-cell. Then there is an isomorphism of categories rendering the following diagram commutative._

Proof.: Since \(j\) is dense, \(\mathbb{X}[j]_{\rho}\) is left-normal by Proposition 4.26, so that a monoid (homomorphism) in the multicategory \(\mathbb{X}[j]_{\rho}\) is equivalently a monoid (homomorphism) in the skew-multicategory \(\mathbb{X}[j]_{\rho}\), from which the result follows by Lemma 4.25.

### Relative monads as monoids in a left-skew-monoidal category

To relate our characterisation of relative monads as monoids in a skew-multicategory to the characterisation of Altenkirch, Chapman and Uustalu [1, 1] of relative monads as monoids in a skew-monoidal category, we consider representability of the skew-multicategory \(\mathbb{X}[j]_{\rho}\). The appropriate notion of representability turns out to be the _left-representability_ of [1, Definition 4.4]: in particular, when \(\mathbb{X}[j]\) admits a tensor product satisfying a certain universal property with respect to right-normal multimorphisms, it is possible to equip the category \(\mathbb{X}[j]_{1}\) with skew-monoidal structure \((\otimes,j)\), such that monoids in \((\mathbb{X}[j]_{1},\otimes,j)\) are equivalent to monoids in \(\mathbb{X}[j]\). Since the definition of monoid in a skew-monoidal category has not yet appeared explicitly in the literature, we give the definition here.

**Definition 4.28** (cf. [1, Theorem 5; 1, Theorems 3.4 & 3.5]).: Let \((\mathbf{M},\otimes,J,\alpha,\lambda,\rho)\) be a left-skew-monoidal category [2, Definition 2.1]. A _monoid_ in \(\mathbf{M}\) comprises 1. an object \(M\in\mathbf{M}\), the _carrier_; 2. a morphism \(m\colon M\otimes M\to M\), the _multiplication_; 3.
a morphism \(u\colon J\to M\), the _unit_, rendering commutative the following diagrams: A _monoid homomorphism_ from \((M,m,u)\) to \((M^{\prime},m^{\prime},u^{\prime})\) is a morphism \(f\colon M\to M^{\prime}\) rendering commutative the following diagrams: Monoids in \(\mathbf{M}\) and their homomorphisms form a category \(\mathbf{Mon}(\mathbf{M})\) functorial in \(\mathbf{M}\). Denote by \(U_{\mathbf{M}}\colon\mathbf{Mon}(\mathbf{M})\to\mathbf{M}\) the faithful functor sending each monoid \((M,m,u)\) to its carrier \(M\).

**Theorem 4.29**.: _Suppose that \(\mathbb{X}\) admits left extensions of tight-cells \(A\to E\) along a tight-cell \(j\colon A\to E\). Then the category \(\mathbb{X}[j]_{1}\) is equipped with left-skew-monoidal structure for which there is an isomorphism of categories rendering the following diagram commutative._

_Furthermore, \(\mathbb{X}[j]_{1}\) is_ 1. _associative-normal if every such left extension_ \(j\vDash f\colon E\to E\) _is_ \(j\)_-absolute;_ 2. _left-normal if_ \(j\) _is dense;_ 3. _right-normal if_ \(j\) _is fully faithful._

Proof.: By the universal property of the left extension \(j\vDash f\), there is a \(2\)-cell \(E(1,f),E(j,1)\Rightarrow E(1,j\vDash f)\), and hence a \(2\)-cell \(E(1,f),E(j,1),E(1,g)\Rightarrow E(1,(j\vDash f)g)\) for all tight-cells \(f,g\colon A\to E\). \(2\)-cells \(g\,;(j\vDash f)\Rightarrow h\) are in bijection with \(2\)-cells \(E(1,f),E(j,1),E(1,g)\Rightarrow E(1,h)\) by Lemma 3.11. Thus, \((j\vDash-)\circ(-)\) together with the unit \(E(1,j)\) exhibits \(\mathbb{X}[j]_{\rho}\) as left-representable in the sense of [1, Definition 4.4]. Thus, by [1, Theorem 6.1], \((\mathbb{X}[j]_{\rho})_{1}=\mathbb{X}[j]_{1}\) is left-skew-monoidal. The data of a monoid (homomorphism) in \(\mathbb{X}[j]_{1}\) coincides with the data of a monoid (homomorphism) in \(\mathbb{X}[j]\) by the above; that the laws coincide follows from the definitions of the structural transformations in the left-skew-monoidal category induced by an associative-normal left-skew-multicategory, observing that the laws in Definition 4.28 are precisely the internalisations of the laws in Definition 4.8. Furthermore, 1. the associator \((f\otimes g)\otimes h\to f\otimes(g\otimes h)\) is given by the canonical \(2\)-cell \(h\,;(j\vDash(g\,;(j\vDash f)))\Rightarrow(h\,;(j\vDash g))\,;(j\vDash f)\) induced by precomposition of \(j\vDash(g\,;(j\vDash f))\Rightarrow(j\vDash g)\,;(j\vDash f)\) by \(h\), which is hence invertible if left extensions along \(j\) are \(j\)-absolute, since \(j\vDash f\) preserves \(j\)-absolute colimits by Lemma 3.16; 2. invertibility of the left-unitor follows from Proposition 4.26, since in this case \(\mathbb{X}[j]_{\rho}\) is a multicategory, and left-representable multicategories are left-normal by [1, Theorem 6.3]; 3. the right-unitor \(f\to f\otimes j\) is given by the canonical \(2\)-cell \(f\to j\,;(j\vDash f)\), which is hence invertible if \(j\) is fully faithful by Lemma 3.26.

**Remark 4.30**.: From Theorem 4.29, we recover [1, Theorem 4; 1, Theorem 3.1] regarding skew-monoidality of \(\mathbf{Cat}[A,E]\) given a well-behaved functor \(j\colon A\to E\), [1, Theorem 6; 1, Theorem 4.4] regarding sufficient conditions for monoidality, and [2, Example 3.6] regarding sufficient conditions for normality; and in conjunction with Theorem 4.16 recover [1, Theorem 5; 1, Theorems 3.4 & 3.5] regarding the equivalence between \(j\)-monads and monoids in \(\mathbf{Cat}[A,E]\).
Note that [2, Example 3.6] states that the sufficient conditions for normality are also necessary. However, this is not true: for a counterexample to right-normality, consider the following diagram in \(\mathbf{Cat}\). The unique (identity) \(2\)-cell \(\langle\rangle\Rightarrow\langle\rangle\,;1_{1}\) exhibits \(1_{1}\) as the (pointwise) left extension \(\langle\rangle\vDash\langle\rangle\). Therefore, the skew-monoidal category \(\mathbf{Cat}[\langle\rangle]\) is right-normal. However, \(\langle\rangle\) is not fully faithful.

**Remark 4.31**.: The conditions of Theorem 4.29 correspond to the _well-behavedness_ conditions of [1, Definition 4; 1, Definition 4.1], and the _eleuthericity_ conditions of [1, §7.3]. We observe in passing that these conditions essentially characterise cocompletions under classes of weights, as observed by Szlachanyi [1, §8] for well-behavedness, and by Lucyshyn-Wright [1, Theorem 7.8] for eleuthericity. We shall prove in future work that this holds more generally in our formal setting, and thereby deduce that relative monads in such cases are equivalent to monads preserving classes of colimits.

## 5. Relative adjunctions

The study of monads is inseparable from the study of adjunctions. An adjunction is the structure obtained by splitting the underlying endo-\(1\)-cell \(A\xrightarrow{t}A\) of a monad into a composable pair of \(1\)-cells \(A\xrightarrow{\ell}C\xrightarrow{r}A\) in such a way that the monad structure may be recovered from corresponding structure on \((\ell,r)\). It is often convenient to study properties of monads in terms of the adjunctions that induce them: in this way, an adjunction acts as a notion of presentation for a monad. In this section, we examine the concept of relative adjunction and show that it behaves in many ways analogously to the non-relative concept, though there are subtleties in the theory not present in the non-relative setting.

**Definition 5.1**.: Let \(\mathbb{X}\) be an equipment. A _relative adjunction_ in \(\mathbb{X}\) comprises 1. a tight-cell \(j\colon A\to E\), the _root_; 2. a tight-cell \(\ell\colon A\to C\), the _left (relative) adjoint_; 3. a tight-cell \(r\colon C\to E\), the _right (relative) adjoint_; 4. an isomorphism \(\sharp\colon C(\ell,1)\cong E(j,r)\colon\flat\), the _(left- and right-) transposition operators_. We denote such data by \(\ell\,_{j}\dashv r\) (by convention leaving the transposition operators implicit), and call \(C\) the _apex_. A _\(j\)-relative adjunction_ (alternatively _adjunction relative to \(j\)_, or simply _\(j\)-adjunction_) is a relative adjunction with root \(j\).

**Remark 5.2**.: Our definition of relative adjunction coincides with that of [20, §3] in a representable equipment.

**Remark 5.3**.: Relative adjunctions whose roots are fully faithful are sometimes called _partial adjunctions_ (e.g. in [18, §1.11]), as in this case we may view the left adjoint \(\ell\colon A\to C\) as being a _partial morphism_ from \(E\) to \(C\).

**Example 5.4**.: Consider tight-cells \(\ell,j\colon A\to E\). We have that \(\ell\,_{j}\dashv 1_{E}\) if and only if \(\ell\cong j\). If \(j\) is fully faithful, then \(1_{A}\,_{j}\dashv j\) (though the converse does not hold).

There are several equivalent formulations of adjunctions [14, Theorem IV.1.2], for which analogues exist for relative adjunctions. A subtlety is that the definition of counit is not immediately evident for relative adjunctions.
We may resolve this difficulty via the techniques of Section 4, using the loose-cell \(E(j,1)\colon E\twoheadrightarrow A\) to facilitate composition of tight-cells \(r\colon C\to E\) and \(\ell\colon A\to C\).

**Lemma 5.5**.: _Let \(j\colon A\to E\), \(\ell\colon A\to C\), and \(r\colon C\to E\) be tight-cells. The following data are equivalent, each exhibiting a relative adjunction \(\ell\,_{j}\dashv r\)._

1. _(Hom isomorphism) An isomorphism \(\sharp\colon C(\ell,1)\cong E(j,r)\colon\flat\)._
2. _(Universal arrow) A 2-cell \(\eta\colon j\Rightarrow\ell\,;r\), the_ unit_, and a 2-cell \(\flat\colon E(j,r)\Rightarrow C(\ell,1)\), rendering commutative two diagrams expressing that \(\flat\) is mutually inverse to the transposition operator \(\sharp\) induced by \(\eta\)._
3. _(Unit-counit) A 2-cell \(\eta\colon j\Rightarrow\ell\,;r\), the_ unit_, and a 2-cell \(\varepsilon\colon C(1,\ell),E(j,r)\Rightarrow C(1,1)\), the_ counit_, satisfying two zig-zag equations._
4. _(Couniversal arrow) A 2-cell \(\varepsilon\colon C(1,\ell),E(j,r)\Rightarrow C(1,1)\), the_ counit_, and a 2-cell \(\sharp\colon C(\ell,1)\Rightarrow E(j,r)\), rendering commutative the two diagrams dual to those of (2)._
5. _(Loose-adjunction) A loose-adjunction \(C(1,\ell)\dashv E(j,r)\)._

Proof.: Given the isomorphism of (1), the data of (2)-(4) are obtained by transposition.
Conversely, given a 2-cell \(\eta\colon j\Rightarrow\ell\,;r\), we define a left-transposition operator \(\sharp\colon C(\ell,1)\Rightarrow E(j,r)\) by the 2-cell on the left below; and given a 2-cell \(\varepsilon\colon C(1,\ell),E(j,r)\Rightarrow C(1,1)\), we define a right-transposition operator \(\flat\colon E(j,r)\Rightarrow C(\ell,1)\) by the 2-cell on the right below. That these definitions induce a bijection between 2-cells of the form \(\sharp\) and \(\eta\), and \(\flat\) and \(\varepsilon\), follows from the zig-zag laws for restriction. That the conditions (1)-(4) are then equivalent follows by elementary string diagrammatic reasoning. Finally, since \(E(j,r\ell)\cong E(j,r)\odot C(1,\ell)\), by essential uniqueness of adjoints, \(C(1,\ell)\dashv E(j,r)\) if and only if \(C(\ell,1)\cong E(j,r)\), so that (5) is equivalent to (1).

Henceforth, in the context of a relative adjunction \(\ell\,_{j}\dashv r\), we shall use \(\sharp\), \(\eta\), \(\flat\), and \(\varepsilon\) to denote the 2-cells defined above. When \(j\) is the identity, we should anticipate that \(j\)-adjunctions are precisely (non-relative) adjunctions. This is indeed so.

**Corollary 5.6**.: _Let \(\ell\colon A\to C\) and \(r\colon C\to A\) be tight-cells. The following are equivalent._ 1. \(\ell\,_{1_{A}}\dashv r\) _(Definition 5.1)._ 2. \(\ell\dashv r\) _(Definition 2.15)._

Proof.: When \(j\) is the identity, condition (3) of Lemma 5.5 is precisely the classical \(\eta\)-\(\varepsilon\) definition of adjunction in a 2-category.

As with ordinary adjunctions, left relative adjoints are unique up to isomorphism, though, in general, it is not true that relative right adjoints are essentially unique: for instance every functor \(r\colon C\to E\) (for arbitrary categories \(C\) and \(E\)) is right adjoint to the unique functor \([\,]_{C}\colon 0\to C\) relative to the unique functor \([\,]_{E}\colon 0\to E\), but there are typically many such (non-isomorphic) functors. However, when the root \(j\) is dense, right \(j\)-adjoints are unique up to isomorphism. In practice, many results of interest for relative adjunctions and relative monads hold only for those with dense roots.

**Lemma 5.7**.: _If \(\ell\,_{j}\dashv r\) and \(\ell^{\prime}\,_{j}\dashv r\), then \(\ell\cong\ell^{\prime}\). If \(\ell\,_{j}\dashv r\) and \(\ell\,_{j}\dashv r^{\prime}\) and \(j\) is dense, then \(r\cong r^{\prime}\)._

Proof.: If \(\ell\,_{j}\dashv r\) and \(\ell^{\prime}\,_{j}\dashv r\), then \(C(\ell,1)\cong E(j,r)\cong C(\ell^{\prime},1)\), hence \(\ell\cong\ell^{\prime}\). If \(\ell\,_{j}\dashv r\) and \(\ell\,_{j}\dashv r^{\prime}\), then \(E(j,r)\cong C(\ell,1)\cong E(j,r^{\prime})\), hence if \(j\) is dense then \(r\cong r^{\prime}\) by Lemma 3.13.
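To ground Corollary 5.6, the following Haskell sketch records the classical case: the transposition operators \(\sharp\) and \(\flat\) of Definition 5.1 become the two directions of currying for the adjunction \((s\times-)\dashv(s\to-)\). The class and all names below are illustrative assumptions rather than established API; the laws (that `sharp` and `flat` be mutually inverse and natural) are not enforced by the types.

```haskell
-- Transposition operators for an ordinary adjunction l -| r, i.e. a
-- relative adjunction whose root is an identity (Corollary 5.6).
class Adjunction l r where
  sharp :: (l a -> c) -> (a -> r c)  -- C(l a, c) -> E(j a, r c), with j = Id
  flat  :: (a -> r c) -> (l a -> c)

newtype Pair   s a = Pair   (s, a)    -- the left adjoint  (s, -)
newtype Reader s c = Reader (s -> c)  -- the right adjoint (s -> -)

-- The currying adjunction (s, -) -| (s -> -):
instance Adjunction (Pair s) (Reader s) where
  sharp f a            = Reader (\s -> f (Pair (s, a)))
  flat g (Pair (s, a)) = case g a of Reader h -> h s
```

Propositions 5.11 and 5.12 below specialise, for this adjunction, to the familiar facts that \((s\to-)\) preserves limits and \((s\times-)\) preserves colimits.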
Non-relative adjoints may be computed by means of absolute lifts and extensions [11, Proposition 2]. An analogous statement is true of relative adjoints, though we must replace the notion of _absolute (nonpointwise) lift_ in a 2-category with the notion of _(pointwise) lift_ in an equipment.

**Proposition 5.8**.: _Let \(j\colon A\to E\) and \(r\colon C\to E\) be tight-cells. A tight-cell \(\ell\colon A\to C\) is left \(j\)-adjoint to \(r\) if and only if there is a 2-cell \(\eta\colon j\Rightarrow\ell\,;r\) exhibiting \(\ell\) as the left lift \(j\trianglelefteq r\) of \(j\) through \(r\)._

Proof.: Immediate from Lemma 3.23.

**Remark 5.9**.: The distinction between pointwise and nonpointwise extensions is well appreciated in the categorical literature (cf. [10]). However, the notion of _pointwise lift_ has not been explicitly identified in the literature. Proposition 5.8 provides an explanation for this seeming omission: pointwise lifts are precisely relative adjoints. (Conversely, pointwise extensions are not always relative adjoints, though often are in practice, cf. Proposition 5.10.) Furthermore, observe that, by Proposition 3.24, for every relative adjunction \(\ell\,_{j}\dashv r\), the left adjoint is the absolute left lift of \(j\) through \(r\) in the tight \(2\)-category. In \(\mathbf{Cat}\), the converse also holds: that is, \(\ell\) is the left \(j\)-adjoint of \(r\) if and only if \(\ell\) is the absolute left lift of \(j\) through \(r\). In other words, every absolute not-necessarily-pointwise left lift is automatically pointwise in \(\mathbf{Cat}\). However, this is not true for a general virtual equipment (cf. [11, Proposition 7]). The definition of _relative adjunction_ of [10, Definition 1.4], which is equivalent to an absolute left lift [10, Proposition 1.6], therefore suffers from similar issues to that of relative monads ibid. (cf. Remark 4.21).

Proposition 5.8 gives a way to compute a left relative adjoint given the right relative adjoint. We should like a converse. A subtlety is that relative right adjoints are not unique when the root is not dense.

**Proposition 5.10**.: _Let \(j\colon A\to E\) and \(\ell\colon A\to C\) be tight-cells, and suppose that \(j\) is fully faithful._ 1. _Suppose that the left extension_ \(\ell\vartriangleleft j\) _exists and is_ \(j\)_-absolute. Then_ \(\ell\,_{j}\dashv\ell\vartriangleleft j\)_._ 2. _Suppose that_ \(j\) _is dense and that_ \(\ell\) _has a right_ \(j\)_-adjoint_ \(r\)_. Then_ \(r\) _exhibits the left extension_ \(\ell\vartriangleleft j\) _and this extension is_ \(j\)_-absolute._

Proof.: First observe that, since \(j\) is fully faithful, there is an isomorphism \[C(\ell,1)\cong E(j,j)\odot C(\ell,1)\] For (1), \(j\)-absoluteness of \(\ell\vartriangleleft j\) implies there is an isomorphism \(E(j,\ell\vartriangleleft j)\cong E(j,j)\odot C(\ell,1)\), so we have \(C(\ell,1)\cong E(j,\ell\vartriangleleft j)\) as required. For (2), if \(\ell\,_{j}\dashv r\) then there is an isomorphism \(E(j,j)\odot C(\ell,1)\cong E(j,r)\), so we can conclude by applying Lemma 3.17.

Left relative adjoints preserve those colimits preserved by the root; while right relative adjoints preserve all limits when the root is dense.

**Proposition 5.11**.: _If \(\ell\colon A\to C\) is a \(j\)-relative left adjoint, then \(\ell\) preserves every colimit that \(j\) preserves._

Proof.: Suppose that \(\ell\,_{j}\dashv r\), let \(p\colon Y\twoheadrightarrow Z\) be a loose-cell, and let \(f\colon Z\to A\) be a tight-cell admitting a \(p\)-colimit \(p*f\colon Y\to A\).
If \(j\) preserves \(p*f\), then we have the following isomorphisms. \[C(\ell(p*f),1)\cong E(j(p*f),r)\qquad(\ell\,_{j}\dashv r)\] \[\cong E(p*(jf),r)\qquad(j\text{ preserves }p*f)\] \[\cong E(jf,r)\blacktriangleleft p\qquad(\text{Lemma 3.7})\] \[\cong C(\ell f,1)\blacktriangleleft p\qquad(\ell\,_{j}\dashv r)\] Hence \((p*f)\,;\ell\) forms the colimit \(p*(f\,;\ell)\); a simple calculation shows the universal \(2\)-cell is the canonical one.

**Proposition 5.12**.: _If \(j\) is dense and \(r\colon C\to E\) is a \(j\)-relative right adjoint, then \(r\) preserves limits._

Proof.: Suppose that \(\ell\,_{j}\dashv r\), let \(p\colon X\twoheadrightarrow Y\) be a loose-cell, and let \(f\colon X\to C\) be a tight-cell admitting a \(p\)-limit \(\{p,f\}\colon Y\to C\). We have the following isomorphisms. \[E(1,r\{p,f\})\cong E(j,r\{p,f\})\blacktriangleleft E(j,1)\qquad(\text{Lemma 3.10, using density of }j)\] \[\cong C(\ell,\{p,f\})\blacktriangleleft E(j,1)\qquad(\ell\,_{j}\dashv r)\] \[\cong(p\blacktriangleright C(\ell,f))\blacktriangleleft E(j,1)\qquad(\text{Lemma 3.4})\] \[\cong(p\blacktriangleright E(j,rf))\blacktriangleleft E(j,1)\qquad(\ell\,_{j}\dashv r)\] \[\cong p\blacktriangleright(E(j,rf)\blacktriangleleft E(j,1))\qquad(\text{Lemma 3.19})\] \[\cong p\blacktriangleright E(1,rf)\qquad(\text{Lemma 3.10, using density of }j)\] Hence \(\{p,f\}\,;r\) forms the limit \(\{p,(f\,;r)\}\); a simple calculation using the density of \(j\) shows the universal \(2\)-cell is the canonical one.

**Remark 5.13**.: From Proposition 5.11, we recover [11, Theorem 2.13] and, in particular, the fact that left adjoints preserve colimits, since identities trivially preserve colimits. From Proposition 5.12, we recover footnote 13 of [11, p. 90] and, in particular, the fact that right adjoints preserve limits, since identities are trivially dense.

### Morphisms of relative adjunctions

Just as relative monads are presented by relative adjunctions, so too are morphisms of relative monads presented by morphisms of relative adjunctions. In fact, there are two natural notions of morphisms of relative adjunctions, corresponding to each of the two tight-cells \(\ell\) and \(r\), which we term _left-morphisms_ and _right-morphisms_ respectively.

**Definition 5.14**.: Let \(j\colon A\to E\) be a tight-cell. A _left-morphism_ of \(j\)-adjunctions from \(\ell\,_{j}\dashv r\) to \(\ell^{\prime}\,_{j}\dashv r^{\prime}\) comprises 1. a tight-cell \(c\colon C\to C^{\prime}\) such that \(r=c\,;r^{\prime}\); 2. a 2-cell \(\lambda\colon\ell^{\prime}\Rightarrow\ell\,;c\), rendering the following diagram commutative. It is _strict_ when \(\lambda\) is the identity. \(j\)-adjunctions and their left-morphisms form a category \(\mathbf{RAdj}_{l}(j)\).

While the data of a left-morphism in Definition 5.14 involves both a tight-cell \(c\) and a \(2\)-cell \(\lambda\), the following lemma shows that the \(2\)-cell \(\lambda\) is redundant, being uniquely determined by the tight-cell \(c\). However, the analogous statement for right-morphisms is not true in general; we make the \(2\)-cell \(\lambda\) explicit for symmetry with Definition 5.18.

**Definition 5.15**.: For each object \(E\) of \(\mathbb{X}\), denote by \(\underline{\mathbb{X}}/E\) the category of strict slices over \(E\), whose objects are tight-cells \(\cdot\to E\) and whose morphisms are commutative triangles.
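As an aside, the preservation properties of Propositions 5.11 and 5.12 (cf. Remark 5.13) can be witnessed directly in Haskell for the currying adjunction of the previous sketch. The function names are our own; each pair below consists of mutually inverse canonical comparisons.

```haskell
-- Right adjoints preserve limits (Proposition 5.12): for r = (s ->),
-- the comparison into the binary product is invertible.
prodTo :: (s -> (a, b)) -> (s -> a, s -> b)
prodTo f = (fst . f, snd . f)

prodFrom :: (s -> a, s -> b) -> (s -> (a, b))
prodFrom (g, h) = \s -> (g s, h s)

-- Left adjoints preserve colimits (Proposition 5.11): for l = (s ,),
-- the comparison out of the binary coproduct is invertible.
coprodTo :: (s, Either a b) -> Either (s, a) (s, b)
coprodTo (s, Left a)  = Left  (s, a)
coprodTo (s, Right b) = Right (s, b)

coprodFrom :: Either (s, a) (s, b) -> (s, Either a b)
coprodFrom (Left  (s, a)) = (s, Left a)
coprodFrom (Right (s, b)) = (s, Right b)
```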
**Lemma 5.16**.: _Let \(j\colon A\to E\) be a tight-cell. The functor \(\mathbf{RAdj}_{l}(j)\to\underline{\mathbb{X}}/E\) sending each \(j\)-adjunction \(\ell\,_{j}\dashv r\) to its right adjoint \(r\), and sending each left-morphism \((c,\lambda)\) to its tight-cell \(c\), is fully faithful._

Proof.: By pasting \(\eta^{\prime}\) and bending \(c\), the compatibility condition for a left-morphism states that \(\lambda\) is equal to the following 2-cell.
Thus, for any tight-cell \(c\colon C\to C^{\prime}\) between the apices of \(j\)-adjunctions \(\ell\,_{j}\dashv r\) and \(\ell^{\prime}\,_{j}\dashv r^{\prime}\) satisfying \(r=c\,;r^{\prime}\), the 2-cell above defines a unique left-morphism \((\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\) with tight-cell \(c\).

Each left-morphism induces a relative adjunction.

**Proposition 5.17**.: _Let \(j\colon A\to E\) be a tight-cell, and let \((c,\lambda)\colon(\ell\,_{j}\dashv r)\to(\ell^{\prime}\,_{j}\dashv r^{\prime})\) be a left-morphism between \(j\)-adjunctions. Then \(\ell\,_{\ell^{\prime}}\dashv c\)._

Proof.: \[C(\ell,1)\cong E(j,r)=E(j,r^{\prime}c)\cong C^{\prime}(\ell^{\prime},c)\qed\]

**Definition 5.18**.: Let \(j\colon A\to E\) be a tight-cell in \(\mathbb{X}\). A _right-morphism_ of \(j\)-adjunctions from \(\ell\,_{j}\dashv r\) to \(\ell^{\prime}\,_{j}\dashv r^{\prime}\) comprises 1. a tight-cell \(c\colon C\to C^{\prime}\) such that \(\ell\,;c=\ell^{\prime}\); 2. a 2-cell \(\rho\colon r\Rightarrow c\,;r^{\prime}\), rendering commutative the following diagram: It is _strict_ when \(\rho\) is the identity. \(j\)-adjunctions and right-morphisms form a category \(\mathbf{RAdj}_{r}(j)\).

As mentioned above, the 2-cell \(\rho\) in the data of a right-morphism \((c,\rho)\) is not uniquely determined by the tight-cell \(c\) in general. However, in analogy with the essential uniqueness of relative adjoints (Lemma 5.7), it is uniquely determined when \(j\) is dense.

**Definition 5.19**.: For each object \(A\) of \(\mathbb{X}\), denote by \(A/\underline{\mathbb{X}}\) the category of strict coslices under \(A\), whose objects are tight-cells \(A\to\cdot\) and whose morphisms are commutative triangles.

**Remark 5.20**.: The compatibility condition between \(\sharp\) and \(\sharp^{\prime}\) in the definitions of left-morphisms and right-morphisms may be reexpressed in terms of \(\flat\), \(\eta\), or \(\varepsilon\) as in Lemma 5.5: we leave the elementary details to the reader.

**Remark 5.21**.: Our definitions of left- and right-morphisms of relative adjunctions coincide with those of [1, Definitions 5.2.20, 5.2.12] in a representable equipment by the preceding remark. Strict morphisms of (relative) adjunctions appear more commonly in the literature (e.g. [11, §IV.7]) than general left- and right-morphisms, and play an important role in the study of relative monads as shall be shown in the following section.

**Definition 5.22**.: A _strict morphism_ of relative adjunctions is a strict left- (equivalently, right-) morphism of relative adjunctions. Denote by \(\mathbf{RAdj}(j)\) the category of \(j\)-adjunctions and their strict morphisms.

### Resolutions of relative monads

Our motivation for introducing relative adjunctions is our interest in relative monads.
The connection between relative adjunctions and relative monads is analogous to the connection between non-relative adjunctions and monads: just as every adjunction induces a monad, every relative adjunction induces a relative monad. The converse is not necessarily true in an arbitrary equipment, though in Section 6 we give sufficient conditions for a relative monad to be induced by a canonical relative adjunction, which in particular hold in \(\mathbf{Cat}\).

**Theorem 5.23**.: _Every relative adjunction \(\ell\,_{j}\dashv r\) induces a relative monad \(\Diamond_{j}(\ell\,_{j}\dashv r)\) with underlying tight-cell \(\ell\,;r\). Furthermore, this assignment extends to functors_ \[\Diamond_{j}\colon\mathbf{RAdj}_{l}(j)\to\mathbf{RMnd}(j)^{\mathrm{op}}\] \[\Diamond_{j}\colon\mathbf{RAdj}_{r}(j)\to\mathbf{RMnd}(j)\]

Proof.: Define \(\eta\) as in Lemma 5.5 and \(\dagger\colon E(j,r\ell)\Rightarrow E(r\ell,r\ell)\) to be the evident pasting. The unit laws follow from the \(\sharp\)-\(\flat\) isomorphism. The associativity law follows by elementary string diagrammatic reasoning, in particular observing the following two identities. Given a left-morphism of \(j\)-adjunctions \((c,\lambda)\), define \(\tau\colon\ell^{\prime}\,;r^{\prime}\Rightarrow\ell\,;r\) by \(\lambda\,;r^{\prime}\). Given a right-morphism of \(j\)-adjunctions \((c,\rho)\), define \(\tau\colon\ell\,;r\Rightarrow\ell^{\prime}\,;r^{\prime}\) by \(\ell\,;\rho\). In both cases, \(\tau\) is a \(j\)-monad morphism, the unit preservation condition following from the reformulation of the compatibility condition for the \(j\)-adjunction morphism in terms of \(\eta\) and \(\eta^{\prime}\); and the extension operator preservation condition following from the reformulation in terms of \(\flat\) and \(\flat^{\prime}\). Preservation of identities and composites in both cases is trivial.

**Definition 5.24**.: Let \(j\colon A\to E\) be a tight-cell, and let \(T\) be a \(j\)-monad. A _resolution_3 of \(T\) is a \(j\)-adjunction \(\ell\,_{j}\dashv r\) for which \(T\) is equal to the \(j\)-monad \(\Diamond_{j}(\ell\,_{j}\dashv r)\) constructed in Theorem 5.23. Footnote 3: Resolutions of monads were introduced by [20], called there _factorisations_ of a monad. Resolutions of relative monads were called _splittings_ in [1]. We follow the terminology of Lambek and Scott [18, Definition 0.6.4]. A _morphism_ of resolutions of \(T\) from \(\ell\,_{j}\dashv r\) to \(\ell^{\prime}\,_{j}\dashv r^{\prime}\) is a tight-cell \(c\colon C\to C^{\prime}\) between the apices such that \(\ell\,;c=\ell^{\prime}\) and \(c\,;r^{\prime}=r\).

**Corollary 5.27**.: _Let \(\ell\,_{j}\dashv r\) be a resolution of a \(j\)-monad \(T\). Then there is an isomorphism of loose-monads \(E(j,T)\cong C(\ell,\ell)\)._

Proof.: By Lemma 5.26, \(E(j,T)\) is induced by the loose-adjunction \(C(1,\ell)\dashv E(j,r)\). By definition, \(E(j,r)\cong C(\ell,1)\). Hence, \(E(j,T)\) is isomorphic to the loose-monad induced by \(C(1,\ell)\dashv C(\ell,1)\), which is precisely \(C(\ell,\ell)\).

### Composition of relative adjunctions

In contrast to non-relative adjunctions, we cannot in general compose relative adjunctions simply by composing the left adjoints and the right adjoints.
However, we may compose relative adjunctions when one of the left adjoints factors through the root of the other relative adjunction.

**Proposition 5.28**.: _Let \(\ell\,_{j}\dashv r\) and \(\ell^{\prime}\,;j\,_{j^{\prime}}\dashv r^{\prime}\) be relative adjunctions as below._

_Then \(\ell^{\prime}\,;\ell\,_{j^{\prime}}\dashv r\,;r^{\prime}\). Furthermore, this assignment extends to functors_ \[\ell^{\prime}\,;(-)\,;r^{\prime}\colon\mathbf{RAdj}_{l}(j)\to\mathbf{RAdj}_{l}(j^{\prime})\] \[\ell^{\prime}\,;(-)\,;r^{\prime}\colon\mathbf{RAdj}_{r}(j)\to\mathbf{RAdj}_{r}(j^{\prime})\]

Proof.: We have \(C(\ell\ell^{\prime},1)\cong D(j\ell^{\prime},r)\cong E(j^{\prime},r^{\prime}r)\) using that \(\ell\,_{j}\dashv r\) and \(\ell^{\prime}\,;j\,_{j^{\prime}}\dashv r^{\prime}\). Given a left-morphism \((c,\lambda)\) or right-morphism \((c,\rho)\) from \(\ell_{1}\,_{j}\dashv r_{1}\) to \(\ell_{2}\,_{j}\dashv r_{2}\), the pairs \((c,(\ell^{\prime}\,;\lambda))\) and \((c,(\rho\,;r^{\prime}))\) respectively define left- and right-morphisms from \(\ell^{\prime}\,;\ell_{1}\,_{j^{\prime}}\dashv r_{1}\,;r^{\prime}\) to \(\ell^{\prime}\,;\ell_{2}\,_{j^{\prime}}\dashv r_{2}\,;r^{\prime}\), the compatibility condition following from that of \((c,\lambda)\) and \((c,\rho)\) respectively. Preservation of identities and composites is trivial in both cases.

**Example 5.29**.: There are several useful special cases of Proposition 5.28. 1. Taking \(j=1\) above, it follows that we may compose relative adjunctions with adjunctions on their apices, recovering [10, Lemma 2.10; 17, Lemma 18]. 2. Taking \(j^{\prime}=\ell^{\prime}\,;j\) and \(r^{\prime}=1\) (Example 5.4), it follows that every \(j\)-adjunction \(\ell\,_{j}\dashv r\) induces an \((\ell^{\prime}\,;j)\)-adjunction \(\ell^{\prime}\,;\ell\,_{\ell^{\prime};j}\dashv r\) by precomposition, recovering [10, Lemma 2.6]. 3. Taking \(\ell^{\prime}=1\), \(j^{\prime}=j\,;r^{\prime}\), and \(r^{\prime}\) fully faithful (Example 5.4), it follows that every \(j\)-adjunction induces a \((j\,;r^{\prime})\)-adjunction \(\ell\,_{j;r^{\prime}}\dashv r\,;r^{\prime}\) by postcomposition.

Given a relative adjunction \(\ell^{\prime}\,;j\,_{j^{\prime}}\dashv r^{\prime}\) as in Proposition 5.28, the proposition in particular provides a construction of a \(j^{\prime}\)-monad from any \(j\)-monad that admits a resolution \(\ell\,_{j}\dashv r\), taking the \(j^{\prime}\)-monad to be that induced by the \(j^{\prime}\)-adjunction \(\ell^{\prime}\,;\ell\,_{j^{\prime}}\dashv r\,;r^{\prime}\). We show in the following proposition that the assumption that the \(j\)-monad admits a resolution may be dropped.

**Proposition 5.30**.: _Let \(\ell^{\prime}\,;j\,_{j^{\prime}}\dashv r^{\prime}\) be a relative adjunction, and let \(T=(t,\dagger,\eta)\) be a \(j\)-monad. Then the tight-cell \(\ell^{\prime}\,;t\,;r^{\prime}\) carries the structure of a \(j^{\prime}\)-monad, and this assignment extends to a functor \(\mathbf{RMnd}(j)\to\mathbf{RMnd}(j^{\prime})\) rendering the following squares commutative._

Proof.: The extension operator of the induced \(j^{\prime}\)-monad is the \(2\)-cell on the left below; the unit is the \(2\)-cell on the right below. The first and second unit laws follow from the first and second unit laws for \(T\) respectively, together with the \(\sharp^{\prime}\)-\(\flat^{\prime}\) isomorphism. The associativity law follows from the associativity law for \(T\). Functoriality of the assignment, given by precomposing \(\ell^{\prime}\) and postcomposing \(r^{\prime}\), is trivial.
The square on the left and the square on the right agree on objects, so to show commutativity of the object assignments, it suffices to show that the following diagram commutes in \(\mathbf{Set}\). That the assignments in the diagram above agree on the underlying tight-cell and unit is trivial; that they agree on the extension operator follows from the \(\sharp^{\prime}\)-\(\flat^{\prime}\) isomorphism. Finally, that the two squares commute on morphisms is trivial.

**Example 5.31**.: As with Proposition 5.28, there are several useful special cases of Proposition 5.30. 1. Taking \(j=1\) above, it follows that every monad on the apex of a \(j^{\prime}\)-adjunction induces a \(j^{\prime}\)-monad. Taking furthermore \(j^{\prime}=1\), we recover [10, Theorem 4.2]. 2. Taking \(j^{\prime}=\ell^{\prime}\,;j\) and \(r^{\prime}=1\) (Example 5.4), it follows that every \(j\)-monad \(T=(t,\dagger,\eta)\) induces an \((\ell^{\prime}\,;j)\)-monad structure on \(\ell^{\prime}\,;t\) by precomposition, recovering [10, Definition 1.3.1; Voe23, Construction 2.1.15]. Taking furthermore \(j=1\), we recover [1, Theorem 1; 1, Proposition 2.3]. 3. Taking \(\ell^{\prime}=1\), \(j^{\prime}=j\,;r^{\prime}\), and \(r^{\prime}\) fully faithful (Example 5.4), it follows that every \(j\)-monad \(T=(t,\dagger,\eta)\) induces a \((j\,;r^{\prime})\)-monad structure on \(t\,;r^{\prime}\) by postcomposition. 4. Taking \(j=j^{\prime}\), we recover [1, Theorem 5.5].

## 6. Algebras and opalgebras

Algebras and opalgebras for a monad are \(1\)-cells equipped with actions compatible with the monad structure. Recall that a monad \(T\) on an object \(A\) of a \(2\)-category \(\mathcal{K}\) is a monoid in \(\mathcal{K}[A,A]\). For any object \(D\in\mathcal{K}\), the hom-category \(\mathcal{K}[D,A]\) forms a left-\(\mathcal{K}[A,A]\)-actegory via precomposition. A \(T\)-action in \(\mathcal{K}[D,A]\) is precisely a \(T\)-algebra with domain \(D\). Conversely, for any object \(B\in\mathcal{K}\), the hom-category \(\mathcal{K}[A,B]\) forms a right-\(\mathcal{K}[A,A]\)-actegory via postcomposition. A \(T\)-action in \(\mathcal{K}[A,B]\) is precisely a \(T\)-opalgebra with codomain \(B\). We should like to define algebras and opalgebras for relative monads similarly, following our treatment of relative monads as monoids in skew-multicategories (Section 4). However, we must generalise the notion of action accordingly, to account for skewness: in particular, we introduce the notion of _skew-multiactegory_. A skew-multiactegory may be thought of as that on which a skew-multicategory acts, in the same way that an actegory is that on which a monoidal category acts. Since a skew-multicategory has multimorphisms, rather than a tensor \(\otimes\colon\mathbf{M}\times\mathbf{M}\to\mathbf{M}\), a skew-multiactegory also has multimorphisms, rather than an action \(\odot\colon\mathbf{M}\times\mathbf{A}\to\mathbf{A}\). In the following, Sections 6.1 and 6.4, which treat algebras and algebra-objects respectively, proceed analogously to Sections 6.2 and 6.5, which treat opalgebras and opalgebra-objects respectively. The sections are structured analogously to Section 4, which treated relative monads. However, since opalgebras are not formally dual to algebras, they must be treated separately. This is reflected in their theory, which, though similar, is not identical.
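As orientation for the formal development, we note how the notion of algebra we are working towards specialises in \(\mathbb{X}=\mathbf{Cat}\), continuing the `Set` example of Section 4. For the `Set` relative monad, an algebra (cf. Definition 6.4 below, with \(D=1\)) amounts to a carrier together with an extension-style action; the Haskell rendering below is an illustrative sketch under the same assumptions as before, with names of our own choosing.

```haskell
import qualified Data.Set as Set

-- An algebra for the Set relative monad: a carrier x together with an
-- action interpreting finite sets of generators in x, given a valuation
-- of the generators.
class SetAlgebra x where
  act :: Ord a => (a -> x) -> Set.Set a -> x

-- The free algebra on b: interpret by direct images and union.
instance Ord b => SetAlgebra (Set.Set b) where
  act k s = Set.unions [ k x | x <- Set.toList s ]

-- Expected laws, mirroring the action laws:
--   act k . unit = k   and   act k . ext h = act (act k . h)
```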
**Remark 6.1**.: It appears likely there exists a 2-dimensional treatment of relative monads, in which algebras and opalgebras become special cases of relative monad morphisms, analogously to the formal theory of monads [10]. However, since the setting of virtual equipments and skew-multicategories already incurs a significant increase in complexity over 2-categories and monoidal categories, we defer such a 2-dimensional treatment to future work.

### Algebras

We start by introducing the analogue of an actegory for a skew-multicategory. There are two variants, acting on the left and on the right respectively. Since the definitions are almost identical, we define them simultaneously.

**Definition 6.2**.: Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory. A _left-_ (respectively _right-_) \(\mathbf{M}\)_-multiactegory_ \(\mathbf{A}\) comprises
1. a class \(|\mathbf{A}|\) of _objects_;
2. a class \(\mathbf{A}(A,M_{1},\ldots,M_{n};A^{\prime})\) of _multimorphisms_ for each \(n\geq 0\), \(M_{1},\ldots,M_{n}\in|\mathbf{M}|+\{\bullet\}\) and \(A,A^{\prime}\in|\mathbf{A}|\);
3. for each multimorphism \(g\colon A,\bullet^{m_{0}},M_{1},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},M_{n},\bullet^{m_{n}}\to A^{\prime}\), where \(M_{1},\ldots,M_{n}\in|\mathbf{M}|\), \(A,A^{\prime}\in|\mathbf{A}|\) and \(n,m_{i}\geq 0\), and multimorphisms \(f_{1}\colon\overrightarrow{M_{1}}\to M_{1}\), ..., \(f_{n}\colon\overrightarrow{M_{n}}\to M_{n}\) in \(\mathbf{M}\), a _composite_ multimorphism \((f_{1},\ldots,f_{n})\,;g\colon A,\bullet^{m_{0}},\overrightarrow{M_{1}},\bullet^{m_{1}},\ldots,\bullet^{m_{n-1}},\overrightarrow{M_{n}},\bullet^{m_{n}}\to A^{\prime}\) in \(\mathbf{A}\);
4. a _left-unitor_ function
\[\lambda_{(A,\overrightarrow{M};A^{\prime}),k}\colon\mathbf{A}(A,M_{1},\ldots,M_{k},M_{k+1},\ldots,M_{n};A^{\prime})\to\mathbf{A}(A,M_{1},\ldots,M_{k},\bullet,M_{k+1},\ldots,M_{n};A^{\prime})\]
for each \(0\leq k\leq n\) (respectively for each \(0\leq k<n\));
5. a _right-unitor_ function
\[\rho_{(A,\overrightarrow{M};A^{\prime}),k}\colon\mathbf{A}(A,M_{1},\ldots,M_{k},\bullet,M_{k+1},\ldots,M_{n};A^{\prime})\to\mathbf{A}(A,M_{1},\ldots,M_{k},M_{k+1},\ldots,M_{n};A^{\prime})\]
for each \(0<k\leq n\) (respectively for each \(0\leq k\leq n\)),

such that composition in \(\mathbf{A}\) coheres with identities and composites in \(\mathbf{M}\) in the following sense,
\[(1_{M_{1}},\ldots,1_{M_{n}})\,;g=g\]
\[((f_{1}^{1},\ldots,f_{1}^{m_{1}})\,;f_{1},\ldots,(f_{n}^{1},\ldots,f_{n}^{m_{n}})\,;f_{n})\,;g=(f_{1}^{1},\ldots,f_{1}^{m_{1}},\ldots,f_{n}^{1},\ldots,f_{n}^{m_{n}})\,;((f_{1},\ldots,f_{n})\,;g)\]
and such that the left- and right-unitors cohere with composition, and the right-unitor is inverse to the left-unitor, in the sense of Definition 4.1. \(\mathbf{A}\) is _left-normal_ when \(\lambda\) is invertible; and is _right-normal_ when \(\rho\) is invertible. A _functor_ between left/right-\(\mathbf{M}\)-multiactegories is a homomorphism of left/right-\(\mathbf{M}\)-multiactegories. By notational convention, we will write the hom-sets of a right-multiactegory \(\mathbf{A}\) in the form \(\mathbf{A}(M_{1},\ldots,M_{n},A;A^{\prime})\).

**Definition 6.3**.: Given a left/right-skew-multiactegory \(\mathbf{A}\), denote by \(\mathbf{A}_{1}\) the category of unary multimorphisms, i.e. the category whose objects are those of \(\mathbf{A}\) and whose hom-sets are \(\mathbf{A}_{1}(X,Y):=\mathbf{A}(X;Y)\).
**Definition 6.4**.: Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory and let \(\mathbf{A}\) be a left-\(\mathbf{M}\)-multiactegory. An _action_ in \(\mathbf{A}\) for a monoid \((M,m,u)\) (or simply \((M,m,u)\)_-action_) in \(\mathbf{M}\) comprises
1. an object \(A\in\mathbf{A}\), the _carrier_;
2. a multimorphism \(a\colon M,A\to A\), the _action_,
satisfying unit and multiplication laws dual to those displayed in Definition 6.14 below: the composite \((u,1_{A})\,;a\) is sent by the unitor to \(1_{A}\), and \((1_{M},a)\,;a=(m,1_{A})\,;a\).

An _action homomorphism_ from \((A,a)\) to \((A^{\prime},a^{\prime})\) is a multimorphism \(f\colon A\to A^{\prime}\) satisfying the following equation.
\[a\,;f=(1_{M},f)\,;a^{\prime}\]
\((M,m,u)\)-actions and their homomorphisms form a category \(\mathbf{Act}((M,m,u),\mathbf{A})\), functorial covariantly in \(\mathbf{A}\) and contravariantly in \((M,m,u)\). Denote by \(U_{\mathbf{A},(M,m,u)}\colon\mathbf{Act}((M,m,u),\mathbf{A})\to\mathbf{A}_{1}\) the faithful functor sending each left-action \((A,a)\) to its carrier \(A\).

Any skew-multicategory acts on itself trivially.

**Proposition 6.5**.: _Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory. Then \(\mathbf{M}\) forms a left-\(\mathbf{M}\)-multiactegory. Furthermore, any monoid \((M,m,u)\) in \(\mathbf{M}\) forms an \((M,m,u)\)-action therein._

Proof.: The left-\(\mathbf{M}\)-multiactegory structure is defined to have the same objects, multimorphisms, and composition as \(\mathbf{M}\), from which the laws hold trivially. Given a monoid \((M,m,u)\), we define an action \((M,m)\): the unit and multiplication laws follow from those of the monoid.

**Proposition 6.6**.: _Let \(\mathbb{X}\) be a virtual double category with a loose-adjunction \(j_{*}\dashv j^{*}\colon E\twoheadrightarrow A\) and an object \(D\). The loose-cells \(D\twoheadrightarrow E\) in \(\mathbb{X}\), together with 2-cells of the form \(p_{1},j^{*},p_{2},j^{*},\ldots,j^{*},p_{n},j^{*},e\Rightarrow e^{\prime}\), form a left-\(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\)-multiactegory._

Proof.: We define a left-\(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\)-multiactegory \(\mathbb{X}[\![D,j_{*}\dashv j^{*}]\!]\) as follows. The class of objects is given by \(\mathbb{X}[\![D,E]\!]\). The left- and right-normal multimorphisms \(p_{1},\ldots,p_{n},e\to e^{\prime}\) (\(n\geq 0\)) are 2-cells \(p_{1},j^{*},p_{2},j^{*},\ldots,j^{*},p_{n},j^{*},e\Rightarrow e^{\prime}\). The general multimorphisms, composition structure, and left- and right-unitors are defined as in Theorem 4.4, and satisfy the laws for the same reasons. Functoriality in \(D\) follows from pasting on the left.

**Definition 6.7**.: Let \(\mathbb{X}\) be an equipment with a tight-cell \(j\colon A\to E\). Denote by \(\mathbb{X}[\![D,j]\!]\) the \(\mathbb{X}[\![j]\!]\)-multiactegory \(\mathbb{X}[\![D,E(1,j)\dashv E(j,1)]\!]\). Define \(\mathbb{X}[D,j]\) to be the full sub-multiactegory of \(\mathbb{X}[\![D,j]\!]\) spanned by the representable loose-cells. In particular \(\mathbb{X}[\![D,j]\!]_{1}=\mathbb{X}[\![D,E]\!]\) and \(\mathbb{X}[D,j]_{1}=\mathbb{X}[D,E]\).

We shall unwrap the definition of an action in \(\mathbb{X}[D,j]\) to motivate the definition of algebra for a relative monad. Explicitly, an action in \(\mathbb{X}[D,j]\) comprises
1. a tight-cell \(e\colon D\to E\);
2. a 2-cell \(\lambda\colon E(1,t),E(j,1),E(1,e)\Rightarrow E(1,e)\),
satisfying equations expressing compatibility with the unit and with the extension operator of \(T\).
An action homomorphism comprises a 2-cell \(E(1,\epsilon)\colon E(1,e)\Rightarrow E(1,e^{\prime})\) commuting with the actions.

**Definition 6.8**.: Let \(T=(t,\dagger,\eta)\) be a \(j\)-monad and let \(D\) be an object of \(\mathbb{X}\). A _\(T\)-algebra with domain \(D\)_ comprises
1. a tight-cell \(e\colon D\to E\), the _carrier_;
2. a 2-cell \(\rtimes\colon E(j,e)\Rightarrow E(t,e)\), the _extension operator_,
such that \(\rtimes\,;E(\eta,e)=1_{E(j,e)}\), and such that \(\rtimes\) is compatible with the extension operator \(\dagger\) of \(T\) in the evident associativity sense. Given \(T\)-algebras \((e,\rtimes)\) and \((e^{\prime},\rtimes^{\prime})\) with domain \(D\), a _\(T\)-algebra morphism_ from \((e,\rtimes)\) to \((e^{\prime},\rtimes^{\prime})\) is a 2-cell \(\epsilon\colon e\Rightarrow e^{\prime}\) commuting with the extension operators.

\(T\)-algebras with domain \(D\) and their morphisms form a category \(T\)-\(\mathbf{Alg}_{D}\), functorial contravariantly in \(D\) and \(T\). Denote by \(U_{T,D}\colon T\text{-}\mathbf{Alg}_{D}\to\mathbb{X}[D,E]\) the faithful functor sending each \(T\)-algebra \((e,\rtimes)\) to its underlying tight-cell \(e\).

**Example 6.9**.: Let \(T\) be a relative monad. Then \((t,\dagger)\) forms a \(T\)-algebra by Proposition 6.5.

**Remark 6.10**.: Algebras for relative monads, and their morphisms, have been studied under that name by Maillard [16, Definitions 3.5.3 & 3.5.4] and as _relative left modules_ by Lobbia [14, Definitions 4.1 & 4.2].

We may now exhibit \(T\)-algebras as actions in \(\mathbb{X}[D,j]\).

**Theorem 6.11**.: _There is an isomorphism of categories \(T\text{-}\mathbf{Alg}_{D}\cong\mathbf{Act}(T,\mathbb{X}[D,j])\) commuting with the forgetful functors to \(\mathbb{X}[D,E]\), natural in \(D\) and \(T\)._

Proof.: Observe that, given an action, we can define an algebra structure, and conversely, by transposition. It is immediate that the laws for an algebra (morphism) are precisely those for an action (homomorphism) under these transformations.

**Remark 6.12**.: While we shall not formally introduce representability for (skew) multicategories, the evident generalisation of Theorem 4.29 suggests that the skew-multiactegory \(\mathbb{X}[D,j]\) of Definition 6.7 will be representable in an appropriate sense when \(\mathbb{X}\) admits left extensions of tight-cells \(A\to E\) along \(j\), in which case we recover [1, Theorem 3.7].

**Corollary 6.13**.: _An algebra (morphism) for a \(1_{E}\)-monad is precisely an algebra (morphism) for the corresponding monad on \(E\) (Corollary 4.20)._

Proof.: By Theorem 6.11, it suffices to consider action (morphisms) in place of algebra (morphisms). An action in \(\mathbb{X}[D,1_{E}]\) for a monad \(T=(t,\mu,\eta)\) comprises a tight-cell \(e\colon D\to E\) and a \(2\)-cell \(\lambda\colon e\,;t\Rightarrow e\) satisfying \((e\eta)\,;\lambda=1_{e}\) and \((\lambda t)\,;\lambda=(e\mu)\,;\lambda\). An action morphism in \(\mathbb{X}[D,1_{E}]\) comprises a \(2\)-cell \(\epsilon\colon e\Rightarrow e^{\prime}\) satisfying \(\lambda\,;\epsilon=(\epsilon t)\,;\lambda^{\prime}\). This is precisely the definition of an algebra (morphism) for a monad (cf. [1, §3.1]).
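In the terms of the Haskell sketch above, Definition 6.8 and (anticipating Section 6.4) the objects of \(\mathbf{Alg}(T)\) admit a direct rendering. This is again only an illustration under the assumptions of that sketch: the class, its stated laws, and all names are ours.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A T-algebra with carrier e (Definition 6.8): an operator extending
-- maps out of j to maps out of t, subject to
--   ralg k . runit  =  k
--   ralg k . rext f =  ralg (ralg k . f)
class RelativeMonad j t => RAlgebra j t e where
  ralg :: (j a -> e b) -> t a -> e b

-- Example 6.9: the underlying functor t, with its extension operator,
-- is itself a T-algebra.
newtype Self t a = Self { getSelf :: t a }

instance RelativeMonad j t => RAlgebra j t (Self t) where
  ralg k = Self . rext (getSelf . k)

-- An object of the Eilenberg-Moore object Alg(T): a type x together
-- with an extension operator for maps into x.
data AlgObj j t x = AlgObj { algExt :: forall a. (j a -> x) -> t a -> x }

-- The free algebra on a0, which underlies the resolution constructed
-- later (Lemma 6.34): its operator is rext itself.
freeAlg :: RelativeMonad j t => AlgObj j t (t a0)
freeAlg = AlgObj rext
```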
### Opalgebras

**Definition 6.14**.: Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory and let \(\mathbf{A}\) be a right-\(\mathbf{M}\)-multiactegory. An _action_ in \(\mathbf{A}\) for a monoid \((M,m,u)\) (or simply \((M,m,u)\)_-action_) in \(\mathbf{M}\) comprises
1. an object \(A\in\mathbf{A}\), the _carrier_;
2. a multimorphism \(a\colon A,M\to A\), the _action_,
satisfying the following equations.
\[\rho_{(A;A),1}((1_{A},u)\,;a)=1_{A}\hskip 56.905512pt(a,1_{M})\,;a=(1_{A},m)\,;a\]
An _action homomorphism_ from \((A,a)\) to \((A^{\prime},a^{\prime})\) is a multimorphism \(f\colon A\to A^{\prime}\) satisfying the following equation.
\[a\,;f=(f,1_{M})\,;a^{\prime}\]
\((M,m,u)\)-actions and their homomorphisms form a category \(\mathbf{Act}(\mathbf{A},(M,m,u))\), functorial covariantly in \(\mathbf{A}\) and contravariantly in \((M,m,u)\). Denote by \(U_{\mathbf{A},(M,m,u)}\colon\mathbf{Act}(\mathbf{A},(M,m,u))\to\mathbf{A}_{1}\) the faithful functor sending each right-action \((A,a)\) to its carrier \(A\).

**Proposition 6.15**.: _Let \(\mathbf{M}\) be an associative-normal left-skew-multicategory. Then \(\mathbf{M}\) forms a right-\(\mathbf{M}\)-multiactegory. Furthermore, any monoid \((M,m,u)\) in \(\mathbf{M}\) forms an \((M,m,u)\)-action therein._

Proof.: The right-\(\mathbf{M}\)-multiactegory structure is defined to have the same objects, multimorphisms, and composition as \(\mathbf{M}\), from which the laws hold trivially. Given a monoid \((M,m,u)\), we define an action \((M,m)\): the unit and multiplication laws follow from those of the monoid.

**Proposition 6.16**.: _Let \(\mathbb{X}\) be a virtual double category with a loose-adjunction \(j_{*}\dashv j^{*}\colon E\twoheadrightarrow A\) and an object \(B\). The loose-cells \(A\twoheadrightarrow B\) in \(\mathbb{X}\), together with 2-cells of the form \(a,j^{*},p_{1},j^{*},p_{2},j^{*},\ldots,j^{*},p_{n}\Rightarrow a^{\prime}\), form a right-\(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\)-multiactegory._

Proof.: We define a right-\(\mathbb{X}[\![j_{*}\dashv j^{*}]\!]\)-multiactegory \(\mathbb{X}[\![j_{*}\dashv j^{*},B]\!]\) as follows. The class of objects is given by \(\mathbb{X}[\![A,B]\!]\). The left- and right-normal multimorphisms \(a,p_{1},\ldots,p_{n}\to a^{\prime}\) (\(n\geq 0\)) are 2-cells \(a,j^{*},p_{1},j^{*},\ldots,j^{*},p_{n}\Rightarrow a^{\prime}\). The general multimorphisms, composition structure, and left- and right-unitors are defined as in Theorem 4.4, and satisfy the laws for the same reasons. Functoriality in \(B\) follows from pasting on the right.

**Definition 6.17**.: Let \(\mathbb{X}\) be an equipment with a tight-cell \(j\colon A\to E\). Denote by \(\mathbb{X}[\![j,B]\!]\) the \(\mathbb{X}[\![j]\!]\)-multiactegory \(\mathbb{X}[\![E(1,j)\dashv E(j,1),B]\!]\). Define \(\mathbb{X}[j,B]\) to be the full sub-multiactegory of \(\mathbb{X}[\![j,B]\!]\) spanned by the representable loose-cells. In particular \(\mathbb{X}[\![j,B]\!]_{1}=\mathbb{X}[\![A,B]\!]\) and \(\mathbb{X}[j,B]_{1}=\mathbb{X}[A,B]\).

We shall unwrap the definition of an action in \(\mathbb{X}[j,B]\) to motivate the definition of opalgebra for a relative monad. Explicitly, an action in \(\mathbb{X}[j,B]\) comprises
1. a tight-cell \(a\colon A\to B\),
2. a 2-cell \(\rho\colon B(1,a),E(j,1),E(1,t)\Rightarrow B(1,a)\),
satisfying equations expressing compatibility with the unit and with the extension operator of \(T\). An action homomorphism comprises a 2-cell \(B(1,\alpha)\colon B(1,a)\Rightarrow B(1,a^{\prime})\) commuting with the actions.
**Definition 6.18**.: Let \(T=(t,\dagger,\eta)\) be a \(j\)-monad and let \(B\) be an object of \(\mathbb{X}\). A _\(T\)-opalgebra with codomain \(B\)_ comprises
1. a tight-cell \(a\colon A\to B\), the _carrier_;
2. a 2-cell \(\ltimes\colon E(j,t)\Rightarrow B(a,a)\), the _action_,
compatible with the unit and extension operator of \(T\). Suppose that \((a,\ltimes)\) and \((a^{\prime},\ltimes^{\prime})\) are \(T\)-opalgebras with codomain \(B\). A _\(T\)-opalgebra morphism_ from \(a\) to \(a^{\prime}\) is a \(2\)-cell \(\alpha\colon a\Rightarrow a^{\prime}\) commuting with the actions \(\ltimes\) and \(\ltimes^{\prime}\).

\(T\)-opalgebras with codomain \(B\) and their morphisms form a category \(T\)-\(\mathbf{Opalg}_{B}\), functorial covariantly in \(B\) and contravariantly in \(T\). Denote by \(U_{T,B}\colon T\text{-}\mathbf{Opalg}_{B}\to\mathbb{X}[A,B]\) the faithful functor sending each \(T\)-opalgebra \((a,\ltimes)\) to its underlying tight-cell \(a\).

**Remark 6.19**.: By Theorem 4.22, every \(j\)-monad \(T\) induces a loose-monad \(E(j,T)\). A \(T\)-opalgebra is then precisely a tight-cell \(a\colon A\to B\) equipped with a loose-monad morphism \(E(j,T)\Rightarrow B(a,a)\). Therefore \(j\)-\(\mathbf{Opalg}_{B}\) forms the category of _extraordinary transformations from \(j\)_ of [22, p. 369]. The connection to the _bijective-on-objects_ tight-cells loc. cit. will be discussed in Section 6.5.

**Example 6.20**.: Let \(T\) be a relative monad. Then \((t,\dagger)\) forms a \(T\)-opalgebra by Proposition 6.15.

**Remark 6.21**.: Opalgebras for relative monads, and their morphisms, have been studied as _modules over a relative monad_ by Ahrens [1, Definitions 2.90 & 2.94; 16, Definitions 9 & 14], as _Kleisli algebras_ by Maillard [19, Definitions 3.5.6 & 3.5.7], and as _relative right modules_ by Lobbia [18, Definitions 6.1 & 6.2].

We may now exhibit \(T\)-opalgebras as actions in \(\mathbb{X}[j,B]\).

**Theorem 6.22**.: _There is an isomorphism of categories \(T\text{-}\mathbf{Opalg}_{B}\cong\mathbf{Act}(\mathbb{X}[j,B],T)\) commuting with the forgetful functors to \(\mathbb{X}[A,B]\), natural in \(B\) and \(T\)._

Proof.: Observe that, given an action, we can define an opalgebra structure, and conversely, by transposition. It is immediate that the laws for an opalgebra (morphism) are precisely those for an action (homomorphism) under these transformations.

**Remark 6.23**.: Similarly to Remark 6.12, the evident generalisation of Theorem 4.29 suggests that the skew-multiactegory \(\mathbb{X}[j,B]\) of Definition 6.17 will be representable in an appropriate sense when \(\mathbb{X}\) admits left extensions of tight-cells \(A\to B\) along \(j\). However, Altenkirch, Chapman and Uustalu [1] do not give a characterisation of opalgebras for relative monads as right-actions for monoids in the skew-monoidal category \(\mathbb{X}[j]\).

**Corollary 6.24**.: _An opalgebra (morphism) for a \(1_{A}\)-monad is precisely an opalgebra (morphism) for the corresponding monad on \(A\) (Corollary 4.20)._

Proof.: By Theorem 6.22, it suffices to consider action (morphisms) in place of opalgebra (morphisms). An action in \(\mathbb{X}[1_{A},B]\) for a monad \(T=(t,\mu,\eta)\) comprises a tight-cell \(a\colon A\to B\) and a \(2\)-cell \(\rho\colon t\,;a\Rightarrow a\) satisfying \((\eta a)\,;\rho=1_{a}\) and \((t\rho)\,;\rho=(\mu a)\,;\rho\). An action morphism in \(\mathbb{X}[1_{A},B]\) comprises a \(2\)-cell \(\alpha\colon a\Rightarrow a^{\prime}\) satisfying \(\rho\,;\alpha=(t\alpha)\,;\rho^{\prime}\). This is precisely the definition of an opalgebra (morphism) for a monad (cf. [10, §3.1]).

### Relative adjunctions and (op)algebras

Just as a relative adjunction induces a relative monad (Theorem 5.23), so too does it induce an algebra and opalgebra for the induced relative monad.

**Proposition 6.25**.: _Let \(\ell\;{}_{j}\!\dashv r\) be a relative adjunction, inducing the \(j\)-monad \(T\). Then \(r\) forms a \(T\)-algebra, and \(\ell\) forms a \(T\)-opalgebra._
**Proposition 6.26**.: _Let \(\ell^{\prime}\,;j\;{}_{j^{\prime}}\!\dashv r^{\prime}\) be a relative adjunction and let \(T\) be a \(j\)-monad, as in Proposition 5.30. Then every \(T\)-opalgebra \((a,\ltimes)\) induces an \((\ell^{\prime}\,;T\,;r^{\prime})\)-opalgebra structure on \(\ell^{\prime}\,;a\), and every \(T\)-algebra \((d,\rtimes)\) induces an \((\ell^{\prime}\,;T\,;r^{\prime})\)-algebra structure on \(d\,;r^{\prime}\). These assignments are functorial and render the evident diagram of forgetful functors commutative._

Proof.: Given a \(T\)-opalgebra \((a,\ltimes)\), define a \(2\)-cell \(E^{\prime}(j^{\prime},r^{\prime}t\ell^{\prime})\Rightarrow B(a\ell^{\prime},a\ell^{\prime})\) as on the left below; given a \(T\)-algebra \((d,\rtimes)\), define a \(2\)-cell \(E^{\prime}(j^{\prime},r^{\prime}d)\Rightarrow E^{\prime}(r^{\prime}t\ell^{\prime},r^{\prime}d)\) as on the right below. The proof that these \(2\)-cells define an \((\ell^{\prime}\,;T\,;r^{\prime})\)-opalgebra and an \((\ell^{\prime}\,;T\,;r^{\prime})\)-algebra respectively is analogous to the proof that \(\ell^{\prime}\,;T\,;r^{\prime}\) forms a \(j^{\prime}\)-monad (Proposition 5.30). Functoriality of the assignments, given by precomposing \(\ell^{\prime}\) and postcomposing \(r^{\prime}\) respectively, together with naturality, is trivial. That the specified diagram commutes follows directly from the definitions of the respective opalgebra and algebra structures.

**Remark 6.27**.: From Proposition 6.26, taking \(j^{\prime}=\ell^{\prime}\,;j\) and \(r^{\prime}=1\) (Example 5.4), we recover [23, Construction 2.2.10].

The converse to Proposition 6.25 is not generally true: that is, not every (op)algebra arises from a relative adjunction. However, we might be led to wonder whether there are any natural conditions on an algebra or opalgebra that ensure they arise from a relative adjunction. In the case of non-relative monads, the answer is affirmative: namely, an algebra-object for a monad \(T\) is always induced by a resolution of \(T\) [10, Theorems 2 & 3]; and the same is true for an opalgebra-object by duality. We should like to deduce something similar for relative monads. However, the naive definition of (op)algebra-objects, defined to be (op)algebras universal with respect to (op)algebra morphisms, turns out to be insufficient. Instead, it is necessary to give a stronger universal property making use of the equipment structure.

### Algebra-objects

The definition of algebra morphism in Definition 6.8 is given only between algebras with the same domain. We now give a more general definition of (graded) morphism between any algebras of a relative monad, which is necessary to express the universal property of algebra-objects for relative monads.

**Definition 6.28**.: Let \((e\colon D\to E,\rtimes)\) and \((e^{\prime}\colon D^{\prime}\to E,\rtimes^{\prime})\) be \(T\)-algebras. A _\((p_{1},\ldots,p_{n})\)-graded \(T\)-algebra morphism_ from \((e,\rtimes)\) to \((e^{\prime},\rtimes^{\prime})\) is a \(2\)-cell
\[\epsilon\colon E(1,e),p_{1},\ldots,p_{n}\Rightarrow E(1,e^{\prime})\]
satisfying the following equation (defining \(\lambda\) and \(\lambda^{\prime}\) as in Theorem 6.11).
The equation asserts that \(\epsilon\) commutes with the actions \(\lambda\) and \(\lambda^{\prime}\) corresponding to \(\rtimes\) and \(\rtimes^{\prime}\). When \(n=0\), we call such a morphism _ungraded_. In particular, ungraded algebra morphisms are precisely those given in Definition 6.8.

**Remark 6.29**.: It will be convenient to have the following alternative description: a \((p_{1},\ldots,p_{n})\)-graded \(T\)-algebra morphism may equivalently be presented, by transposition, as a \(2\)-cell satisfying a correspondingly transposed equation.

**Remark 6.30**.: For each relative monad \(T\), the \(|\mathbb{X}|\)-indexed family of categories \(T\)-\(\mathbf{Alg}_{(-)}\) (Definition 6.8) assembles into a category _locally graded_ by \(\mathbb{X}\). Its morphisms, which are graded by chains of loose-cells in \(\mathbb{X}\), are the graded algebra morphisms of Definition 6.28; the same is true of the graded opalgebra morphisms of Definition 6.41. The construction of an \((\ell^{\prime}\,;T\,;r^{\prime})\)-algebra from a \(T\)-algebra (Proposition 6.26) extends accordingly to graded algebra morphisms.

**Definition 6.31**.: Let \(T\) be a relative monad. A \(T\)-algebra \((u_{T}\colon\mathbf{Alg}(T)\to E,\rtimes_{T})\) is called an _algebra-object_ for \(T\) when
1. for every \(T\)-algebra \((e\colon D\to E,\rtimes)\), there is a unique tight-cell \(\langle\rangle_{\rtimes}\colon D\to\mathbf{Alg}(T)\) such that \(\langle\rangle_{\rtimes}\,;u_{T}=e\) and \(\langle\rangle_{\rtimes}\,;\rtimes_{T}=\rtimes\);
2. for every graded \(T\)-algebra morphism \(\epsilon\colon E(1,e),p_{1},\ldots,p_{n}\Rightarrow E(1,e^{\prime})\), there is a unique \(2\)-cell \(\langle\rangle_{\epsilon}\colon\mathbf{Alg}(T)(1,\langle\rangle_{\rtimes}),p_{1},\ldots,p_{n}\Rightarrow\mathbf{Alg}(T)(1,\langle\rangle_{\rtimes^{\prime}})\) such that \(\epsilon\) is the composite of \(\langle\rangle_{\epsilon}\) with \(E(1,u_{T})\).

**Remark 6.32**.: As observed in Corollary 6.13, algebras for \(1_{A}\)-monads, and their ungraded morphisms, are algebras for monads, and their morphisms, in the classical sense. However, the definition of graded algebra morphism cannot be stated in an arbitrary \(2\)-category. Therefore, algebra-objects in the sense of Definition 6.31 have a stronger universal property than the classical notion of algebra-object for a monad in a \(2\)-category [10, §3.3]. In Section 8, we will show that algebra-objects for relative monads exist in \(\mathbb{V}\)-\(\mathbf{Cat}\), and thus that algebra-objects for enriched (relative) monads satisfy a stronger universal property with respect to distributors than has traditionally been recognised. Similar considerations apply to the definition of opalgebra-object given in the next section.

**Definition 6.33**.: Let \(T\) be a relative monad admitting an algebra-object. Denote by \(f_{T}\colon A\to\mathbf{Alg}(T)\) the mediating tight-cell \(\langle\rangle_{\dagger}\) induced by the \(T\)-algebra \((t,\dagger)\) (Example 6.9).

**Lemma 6.34**.: _Let \(T\) be a relative monad.
If \(T\) admits an algebra-object, then \(T\) admits a resolution._

Proof.: The unit of \(T\) is a \(2\)-cell \(\eta\colon j\Rightarrow t=f_{T}\,;u_{T}\). By Theorem 6.11, we may express \(\rtimes_{T}\) as a \(2\)-cell \(\lambda_{T}\colon E(1,t),E(j,u_{T})\Rightarrow E(1,u_{T})\). The compatibility law between \(\rtimes_{T}\) and \(\dagger=f_{T}\,;\rtimes_{T}\) is then precisely the condition that \(\lambda_{T}\) be an \((E(j,u_{T}))\)-graded algebra morphism from \((t,\dagger)\) to \((u_{T},\rtimes_{T})\). Hence, the universal property of the algebra-object induces a \(2\)-cell \(\langle\rangle_{\rtimes_{T}}\colon\mathbf{Alg}(T)(1,f_{T}),E(j,u_{T})\Rightarrow\mathbf{Alg}(T)(1,1)\). We prove that \(\eta\) together with \(\langle\rangle_{\rtimes_{T}}\) forms a \(j\)-adjunction (in unit-counit form). The first zig-zag identity follows by bending \(f_{T}\), using the definition of \(\rtimes_{T}\), and the unit law for the algebra \((u_{T},\rtimes_{T})\); the second follows by a similar pasting argument. Hence \(f_{T}\;{}_{j}\!\dashv u_{T}\), and the induced extension operator is \(\dagger\), so that \(f_{T}\;{}_{j}\!\dashv u_{T}\) is a resolution of \(T\).

**Theorem 6.37**.: _Let \(j\colon A\to E\) be a tight-cell. The functor \(\odot_{j}\colon\mathbf{RAdj}_{l}(j)\to\mathbf{RMnd}(j)^{\mathrm{op}}\) admits a partial right-adjoint section, defined on those \(j\)-monads admitting algebra-objects._
\[\mathbf{RAdj}_{l}(j)\underset{f_{(-)}\;{}_{j}\dashv\,u_{(-)}}{\overset{\odot_{j}}{\rightleftarrows}}\mathbf{RMnd}(j)^{\mathrm{op}}\]
_Moreover, a left-morphism is strict if and only if its transpose is the identity \(j\)-monad morphism._

Proof.: We shall use Lemma 6.36, which permits us to elide details of functoriality and naturality. Lemma 6.34 gives a partial assignment \(\mathbf{RMnd}(j)^{\mathrm{op}}\to\mathbf{RAdj}_{l}(j)\) on objects. Let \(T\) and \(T^{\prime}\) be \(j\)-monads, and denote by \(\ell\;{}_{j}\!\dashv r\) a resolution of \(T\). Assume that \(T^{\prime}\) admits an algebra-object. We aim to define an inverse to the function
\[(\odot_{j})_{(\ell\;{}_{j}\dashv r),(f_{T^{\prime}}\;{}_{j}\dashv u_{T^{\prime}})}\colon\mathbf{RAdj}_{l}(j)(\ell\;{}_{j}\!\dashv r,f_{T^{\prime}}\;{}_{j}\!\dashv u_{T^{\prime}})\to\mathbf{RMnd}(j)(T^{\prime},T)\]
Recall that \(r\) and \(t\) form \(T\)-algebras by Proposition 6.25 and Example 6.9 respectively, so that a \(j\)-monad morphism \(\tau\colon T^{\prime}\to T\) induces \(T^{\prime}\)-algebra structures on each by functoriality of \((-)\)-\(\mathbf{Alg}\). The universal property of \(\mathbf{Alg}(T^{\prime})\) thus induces a unique tight-cell \(\langle\rangle_{\ell\;{}_{j}\dashv r}\colon C\to\mathbf{Alg}(T^{\prime})\) such that \(r=\langle\rangle_{\ell\;{}_{j}\dashv r}\,;u_{T^{\prime}}\) and such that \(\langle\rangle_{\ell\;{}_{j}\dashv r}\) commutes with the \(T^{\prime}\)-algebra structures on \(r\) and on \(u_{T^{\prime}}\). Furthermore, the \(2\)-cell \(\tau\) forms an ungraded \(T^{\prime}\)-algebra morphism from \((t^{\prime},\dagger^{\prime})\) to the induced \(T^{\prime}\)-algebra structure on \(t\), the compatibility law following from the extension-operator law for \(\tau\), and hence induces a \(2\)-cell \(\langle\rangle_{\tau}\colon f_{T^{\prime}}\Rightarrow\ell\,;\langle\rangle_{\ell\;{}_{j}\dashv r}\) by the universal property of \(\mathbf{Alg}(T^{\prime})\). The pair \((\langle\rangle_{\ell\;{}_{j}\dashv r},\langle\rangle_{\tau})\) forms a left-morphism, the compatibility law following from the unit law for \(\tau\).
This assignment defines a function
\[\langle\rangle_{(-)}\colon\mathbf{RMnd}(j)(T^{\prime},T)\to\mathbf{RAdj}_{l}(j)(\ell\;{}_{j}\!\dashv r,f_{T^{\prime}}\;{}_{j}\!\dashv u_{T^{\prime}})\]
To establish that these functions are inverse, let \((c,\lambda)\) be a left-morphism from \(\ell\;{}_{j}\!\dashv r\) to \(f_{T^{\prime}}\;{}_{j}\!\dashv u_{T^{\prime}}\), inducing the \(j\)-monad morphism \(\lambda\,;u_{T^{\prime}}\). We have that \(r=c\,;u_{T^{\prime}}\), and that \(c\) is compatible with the \(T^{\prime}\)-algebra structures on \(r\) and on \(u_{T^{\prime}}\), by definition of a left-morphism, so that \(c=\langle\rangle_{\ell\;{}_{j}\dashv r}\) by uniqueness of the universal property; that \(\langle\rangle_{\lambda;u_{T^{\prime}}}=\lambda\) is trivial. Conversely, let \(\tau\) be a \(j\)-monad morphism from \(T^{\prime}\) to \(T\), inducing a left-morphism \((\langle\rangle_{\ell\;{}_{j}\dashv r},\langle\rangle_{\tau})\). We have that \(\langle\rangle_{\tau}\,;u_{T^{\prime}}=\tau\) by definition. Thus \(\odot_{j}\) admits a partial right-adjoint section.

Finally, let \((c,\lambda)\) be a left-morphism from \(\ell\;{}_{j}\!\dashv r\) to \(f_{T^{\prime}}\;{}_{j}\!\dashv u_{T^{\prime}}\). If \(\lambda\) is the identity, then the induced \(j\)-monad morphism is trivially also the identity. Conversely, suppose that the induced \(j\)-monad morphism is the identity. Then we have \((\ell\,;c)\,;u_{T^{\prime}}=\ell\,;(c\,;u_{T^{\prime}})=\ell\,;r=t^{\prime}\), and \(\ell\,;c\) is compatible with the extension operators, so that \(\ell\,;c=f_{T^{\prime}}\) by uniqueness of the mediating tight-cell for \(\mathbf{Alg}(T^{\prime})\). The universal property of \(\mathbf{Alg}(T^{\prime})\) on algebra morphisms thus implies that \(\lambda\) is the identity, so that the left-morphism is necessarily strict.

**Corollary 6.38**.: _Let \(j\colon A\to E\) be a tight-cell. The partial functor \(u_{(-)}\colon\mathbf{RMnd}(j)^{\mathrm{op}}\to\underline{\mathbb{X}}/E\), defined on those \(j\)-monads \(T\) admitting algebra-objects, is fully faithful._

Proof.: Direct by composing the fully faithful functors of Theorem 6.37 and Lemma 5.16.

**Corollary 6.39**.: _Let \(T\) be a relative monad admitting an algebra-object. The resolution \(f_{T}\;{}_{j}\!\dashv u_{T}\) is \(j\)-monadic._

Proof.: Suppose \(\ell\;{}_{j}\!\dashv r\) is a resolution of \(T\). From Theorem 6.37, we have that strict morphisms from \(\ell\;{}_{j}\!\dashv r\) to \(f_{T}\;{}_{j}\!\dashv u_{T}\) necessarily correspond via transposition to the identity morphism on \(T\), hence are unique.

Theorem 6.37 justifies our study of left-morphisms of relative adjunctions: in particular, the well-known universal property of algebra-objects as terminal resolutions (Corollary 6.39) is a consequence of a more general universal property that is functorial in the relative monad. As is to be expected from the non-relative setting, algebra-objects for trivial relative monads are trivial.

**Proposition 6.40**.: _Let \(j\colon A\to E\) be a tight-cell. Then \((1_{E},1_{E(j,1)})\) exhibits an algebra-object for the trivial \(j\)-monad._

Proof.: Let \((e\colon D\to E,\rtimes)\) be a \(j\)-algebra. \(e\) trivially exhibits a unique mediating tight-cell \(\langle\rangle_{\rtimes}\colon D\to E\). Trivially, every graded algebra morphism factors uniquely through the identity on \(E\).

### Opalgebra-objects

The definition of opalgebra morphism in Definition 6.18 is given only between opalgebras with the same codomain.
We now give a more general definition of (graded) morphism between any opalgebras of a relative monad, which is necessary to express the universal property of opalgebra-objects for relative monads.

**Definition 6.41**.: Let \((a\colon A\to B,\ltimes)\) and \((a^{\prime}\colon A\to B^{\prime},\ltimes^{\prime})\) be \(T\)-opalgebras. A _\((p_{1},\ldots,p_{n})\)-graded \(T\)-opalgebra morphism_ from \((a,\ltimes)\) to \((a^{\prime},\ltimes^{\prime})\) is a \(2\)-cell
\[\alpha\colon p_{1},\ldots,p_{n},B(1,a)\Rightarrow B^{\prime}(1,a^{\prime})\]
satisfying the equation, dual to that of Definition 6.28, asserting that \(\alpha\) commutes with the actions \(\rho\) and \(\rho^{\prime}\) (defined as in Theorem 6.22). When \(n=0\), we call such a morphism _ungraded_. In particular, ungraded opalgebra morphisms are precisely those given in Definition 6.18.

**Definition 6.42**.: Let \(T\) be a relative monad. A \(T\)-opalgebra \((k_{T}\colon A\to\mathbf{Opalg}(T),\ltimes_{T})\) is called an _opalgebra-object_ for \(T\) when
1. for every \(T\)-opalgebra \((a\colon A\to B,\ltimes)\), there is a unique tight-cell \([]_{\ltimes}\colon\mathbf{Opalg}(T)\to B\) such that \(k_{T}\,;[]_{\ltimes}=a\) and \(\ltimes_{T}\,;[]_{\ltimes}=\ltimes\);
2. for every graded \(T\)-opalgebra morphism \(\alpha\colon p_{1},\ldots,p_{n},B(1,a)\Rightarrow B^{\prime}(1,a^{\prime})\), there is a unique \(2\)-cell \([]_{\alpha}\colon p_{1},\ldots,p_{n},B(1,[]_{\ltimes})\Rightarrow B^{\prime}(1,[]_{\ltimes^{\prime}})\) such that \(\alpha\) is the composite of \([]_{\alpha}\) with \(\mathbf{Opalg}(T)(1,k_{T})\).

As with algebras, being an opalgebra-object for a (non-relative) monad in the sense of Definition 6.42 is a stronger condition than being an opalgebra-object in the classical sense.

**Definition 6.43**.: Let \(T\) be a relative monad admitting an opalgebra-object. Denote by \(v_{T}\colon\mathbf{Opalg}(T)\to E\) the mediating tight-cell \([]_{\dagger}\) induced by the \(T\)-opalgebra \((t,\dagger)\) (Example 6.20).

**Lemma 6.44**.: _Let \(T\) be a relative monad. If \(T\) admits an opalgebra-object, then \(T\) admits a resolution._

Proof.: The unit of \(T\) provides a \(2\)-cell \(\eta\colon j\Rightarrow t=k_{T}\,;v_{T}\). By Theorem 6.22, we may express \(\ltimes_{T}\) as a \(2\)-cell \(\rho_{T}\colon\mathbf{Opalg}(T)(1,k_{T}),E(j,t)\Rightarrow\mathbf{Opalg}(T)(1,k_{T})\). The compatibility law between \(\ltimes_{T}\) and \(\dagger=\ltimes_{T}\,;v_{T}\) is then precisely the condition that \(\rho_{T}\) be an \((\mathbf{Opalg}(T)(1,k_{T}),E(j,1))\)-graded opalgebra morphism from \((t,\dagger)\) to \((k_{T},\ltimes_{T})\). Hence, the universal property of the opalgebra-object induces a \(2\)-cell \([]_{\ltimes_{T}}\colon\mathbf{Opalg}(T)(1,k_{T}),E(j,v_{T})\Rightarrow\mathbf{Opalg}(T)(1,1)\). We prove that \(\eta\) together with \([]_{\ltimes_{T}}\) forms a \(j\)-adjunction (in unit-counit form). The first zig-zag identity follows using the definition of \(\ltimes_{T}\), bending \(v_{T}\), the definition of \(\dagger\), and the left unit law for \(T\); the second follows by bending \(k_{T}\), the definition of \(\ltimes_{T}\), and the unit law for the opalgebra \((k_{T},\ltimes_{T})\). Hence the zig-zag laws are satisfied, and so \(k_{T}\;{}_{j}\!\dashv v_{T}\). The induced extension operator is \(\dagger\), using the definition of \(\ltimes_{T}\) and that \(\ltimes_{T}\,;v_{T}=\dagger\). Therefore \(k_{T}\;{}_{j}\!\dashv v_{T}\) is a resolution of \(T\).

**Corollary 6.45**.: _Let \(T\) be a relative monad admitting an opalgebra-object \((k_{T},\ltimes_{T})\)._
Then \(\ltimes_{T}\) is necessarily invertible._

Proof.: From Lemma 6.44, we have that \(k_{T}\;{}_{j}\!\dashv v_{T}\), and hence that \(\mathbf{Opalg}(T)(k_{T},k_{T})\cong E(j,v_{T}k_{T})=E(j,t)\). By construction of the relative adjunction, this invertible 2-cell is precisely \(\ltimes_{T}\).

**Theorem 6.46**.: _\(\oslash_{j}\colon\mathbf{RAdj}_{r}(j)\to\mathbf{RMnd}(j)\) admits a partial left-adjoint section, defined on those \(j\)-monads admitting opalgebra-objects._
\[\mathbf{RAdj}_{r}(j)\underset{k_{(-)}\;{}_{j}\dashv\,v_{(-)}}{\overset{\oslash_{j}}{\rightleftarrows}}\mathbf{RMnd}(j)\]
_Moreover, a right-morphism is strict if and only if its transpose is the identity \(j\)-monad morphism._

Proof.: We shall use (the dual of) Lemma 6.36, which permits us to elide details of functoriality and naturality. Lemma 6.44 gives a partial assignment \(\mathbf{RMnd}(j)\to\mathbf{RAdj}_{r}(j)\) on objects. Let \(T\) and \(T^{\prime}\) be \(j\)-monads, and denote by \(\ell^{\prime}\;{}_{j}\!\dashv r^{\prime}\) a resolution of \(T^{\prime}\). Assume that \(T\) admits an opalgebra-object. We aim to define an inverse to the function
\[(\oslash_{j})_{(k_{T}\;{}_{j}\dashv v_{T}),(\ell^{\prime}\;{}_{j}\dashv r^{\prime})}\colon\mathbf{RAdj}_{r}(j)(k_{T}\;{}_{j}\!\dashv v_{T},\ell^{\prime}\;{}_{j}\!\dashv r^{\prime})\to\mathbf{RMnd}(j)(T,T^{\prime})\]
Recall that \(\ell^{\prime}\) and \(t\) form \(T\)-opalgebras by Proposition 6.25 and Example 6.20 respectively, so that a \(j\)-monad morphism \(\tau\colon T\to T^{\prime}\) induces \(T\)-opalgebra structures on each by functoriality of \((-)\)-\(\mathbf{Opalg}\). The universal property of \(\mathbf{Opalg}(T)\) thus induces a unique tight-cell \([]_{\ell^{\prime}\;{}_{j}\dashv r^{\prime}}\colon\mathbf{Opalg}(T)\to C\) such that \(\ell^{\prime}=k_{T}\,;[]_{\ell^{\prime}\;{}_{j}\dashv r^{\prime}}\) and such that \([]_{\ell^{\prime}\;{}_{j}\dashv r^{\prime}}\) commutes with the \(T\)-opalgebra structures on \(\ell^{\prime}\) and on \(k_{T}\). The remainder of the argument, establishing that this assignment is inverse to \(\oslash_{j}\) and characterising the strict right-morphisms, is dual to the proof of Theorem 6.37.

**Remark 6.50**.: Our definition of opalgebra-object rectifies an inadequacy in the definition of the _relative Kleisli objects_ of [10, Definition 6.4], which do not appear to form relatively opmonadic resolutions, in contrast to the _relative EM objects_ ibid., which do form relatively monadic resolutions (cf. [10, Remark 6.7]).
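In \(\mathbf{Cat}\), the opalgebra-object of a \(j\)-monad is its relative Kleisli category, as Remark 6.50 indicates. Continuing the illustrative Haskell sketch from the earlier sections (our names and conventions throughout), the Kleisli category, and the relative monad laws that witness its category axioms, look as follows.

```haskell
-- Hom-sets of the Kleisli category: maps  j a -> t b.
newtype RKleisli j t a b = RKleisli { runRKleisli :: j a -> t b }

-- Identities are given by the unit; the unit law of the relative monad
-- makes them neutral for composition.
idK :: RelativeMonad j t => RKleisli j t a a
idK = RKleisli runit

-- Composition extends the second map along the first; associativity is
-- the associativity law for rext.
compK :: RelativeMonad j t
      => RKleisli j t b c -> RKleisli j t a b -> RKleisli j t a c
compK (RKleisli g) (RKleisli f) = RKleisli (rext g . f)
```

Here the tight-cell \(k_{T}\) corresponds to the identity-on-objects functor into this category, and \(v_{T}\) to the functor sending an object \(a\) to \(t\,a\).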
**Remark 6.51**.: As a consequence of Corollary 6.48 together with Corollary 6.39, for any relative monad admitting both an opalgebra- and an algebra-object, there is a unique _comparison_ tight-cell \(i_{T}\colon\mathbf{Opalg}(T)\to\mathbf{Alg}(T)\) (given equivalently by \([]_{f_{T}\;{}_{j}\dashv u_{T}}\) or by \(\langle\rangle_{k_{T}\;{}_{j}\dashv v_{T}}\)) commuting with the resolutions \(k_{T}\;{}_{j}\!\dashv v_{T}\) and \(f_{T}\;{}_{j}\!\dashv u_{T}\) of \(T\).
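Under the same illustrative Haskell sketch, the comparison of Remark 6.51 sends an object \(a\) of the Kleisli category to the free algebra on \(a\), and a Kleisli map to its extension; the rendering below is ours.

```haskell
-- The comparison i_T on hom-sets: a Kleisli map  j a -> t b  is sent to
-- the map  t a -> t b  between the underlying free algebras.
comparison :: RelativeMonad j t => RKleisli j t a b -> t a -> t b
comparison (RKleisli f) = rext f
```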
**Proposition 6.53**.: _Let \(T\) be a \(j\)-monad and let \(\ell\colon A\to C\) be a tight-cell such that the loose-monads \(E(j,T)\) and \(C(\ell,\ell)\) are isomorphic. If the trivial \(\ell\)-monad admits an opalgebra-object, then so does \(T\), and there is an isomorphism \(\mathbf{Opalg}(T)\cong\mathbf{Opalg}(\ell)\) under \(A\) rendering the evident diagram commutative. In particular, this holds when \(T\) admits a resolution \(\ell\;{}_{j}\!\dashv r\)._

Proof.: From Remark 6.19, \(T\)-opalgebras are precisely tight-cells \(a\colon A\to B\) equipped with loose-monad morphisms \(E(j,T)\Rightarrow B(a,a)\), while \(\ell\)-opalgebras are tight-cells \(a\colon A\to B\) equipped with loose-monad morphisms \(C(\ell,\ell)\Rightarrow B(a,a)\). Hence, since \(E(j,T)\) and \(C(\ell,\ell)\) are isomorphic as loose-monads, the two opalgebra-objects satisfy the same universal property, exhibiting them as isomorphic. In particular, when \(T\) admits a resolution \(\ell\;{}_{j}\!\dashv r\), Corollary 5.27 implies that \(\ell\) is such a tight-cell.

In practice, this means that demonstrating the existence of opalgebra-objects in general may often be reduced to demonstrating the existence of opalgebra-objects for trivial relative monads.

**Corollary 6.54**.: _If every loose-monad in \(\mathbb{X}\) is induced by a tight-cell, then every relative monad admits an opalgebra-object if and only if every trivial relative monad admits an opalgebra-object._

Proof.: Assume that every loose-monad in \(\mathbb{X}\) is induced by a tight-cell.
Then in particular, for any \(j\)-monad \(T\), the assumptions of Proposition 6.53 are satisfied, so that if trivial relative monads admit opalgebra-objects, then \(T\) admits an opalgebra-object. The converse is trivial.

The assumption that every loose-monad is induced by a tight-cell is verified, for instance, in \(\mathbf{Cat}\) [10, p. 6.22; 11, Proposition 39], and more generally in any equipment that is _exact_ in the sense of Schultz [10, Definition 5.1].

### (Op)algebra-objects and composition of relative adjunctions

Suppose we have the following situation, as in Proposition 5.30. Let \(T\) be a \(j\)-monad. Suppose that \(T\) and \(\ell^{\prime}\,;T\,;r^{\prime}\) admit opalgebra-objects. Then \((k_{T},\ltimes_{T})\) induces an \((\ell^{\prime}\,;T\,;r^{\prime})\)-opalgebra structure on \(\ell^{\prime}\,;k_{T}\) by Proposition 6.26, and consequently the universal property of \(\mathbf{Opalg}(\ell^{\prime}\,;T\,;r^{\prime})\) induces a tight-cell \([]_{T}\colon\mathbf{Opalg}(\ell^{\prime}\,;T\,;r^{\prime})\to\mathbf{Opalg}(T)\) under \(A\). Similarly, suppose that \(T\) and \(\ell^{\prime}\,;T\,;r^{\prime}\) admit algebra-objects. Then \((u_{T},\rtimes_{T})\) induces an \((\ell^{\prime}\,;T\,;r^{\prime})\)-algebra structure on \(u_{T}\,;r^{\prime}\) by Proposition 6.26, and consequently the universal property of \(\mathbf{Alg}(\ell^{\prime}\,;T\,;r^{\prime})\) induces a tight-cell \(\langle\rangle_{T}\colon\mathbf{Alg}(T)\to\mathbf{Alg}(\ell^{\prime}\,;T\,;r^{\prime})\) over \(E\). When both opalgebra- and algebra-objects exist, we have a commutative diagram as follows. Furthermore, in this situation, the opalgebra-object and algebra-object for \(\ell^{\prime}\,;T\,;r^{\prime}\) satisfy a universal property with respect to the opalgebra-object and algebra-object for \(T\), as follows.

**Proposition 6.55**.: _Let \(\ell^{\prime}\,;j\;{}_{j^{\prime}}\!\dashv r^{\prime}\) be a relative adjunction, and let \(T\) be a \(j\)-monad, as in Proposition 5.30._
1. _Suppose \(j^{\prime}\) is dense. If \(T\) and \(\ell^{\prime}\,;T\,;r^{\prime}\) admit opalgebra-objects, then, for every \(j^{\prime}\)-monad \(T^{\prime}\) admitting an opalgebra-object and for every tight-cell \(\mathbf{Opalg}(T^{\prime})\to\mathbf{Opalg}(T)\) under \(A\), there is a unique tight-cell \(\mathbf{Opalg}(T^{\prime})\to\mathbf{Opalg}(\ell^{\prime}\,;T\,;r^{\prime})\) rendering the following diagram commutative._
2. _If \(T\) and \(\ell^{\prime}\,;T\,;r^{\prime}\) admit algebra-objects, then, for every \(j^{\prime}\)-monad \(T^{\prime}\) admitting an algebra-object and for every tight-cell \(\mathbf{Alg}(T)\to\mathbf{Alg}(T^{\prime})\) over \(E\), there is a unique tight-cell \(\mathbf{Alg}(\ell^{\prime}\,;T\,;r^{\prime})\to\mathbf{Alg}(T^{\prime})\) rendering the following diagram commutative._

Proof.: The proofs for (1) and (2) proceed similarly. For (1), one exhibits a chain of natural isomorphisms beginning \(A/\mathbb{X}(k_{T^{\prime}},\ell^{\prime}\,;k_{T})\cong\mathbf{RAdj}_{r}(j^{\prime})(k_{T^{\prime}}\;{}_{j^{\prime}}\!\dashv v_{T^{\prime}},(\ell^{\prime}\,;k_{T})\;{}_{j^{\prime}}\!\dashv(v_{T}\,;r^{\prime}))\): by Theorem 6.46 and density of \(j^{\prime}\), both kinds of tight-cell under \(A\) are classified by \(j^{\prime}\)-monad morphisms out of \(T^{\prime}\), from which the unique factorisation follows. The proof of (2) is analogous, using Theorem 6.37.
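For the precomposed relative monad of Example 5.31(2), the induced tight-cell between opalgebra-objects described in this section acts on Kleisli hom-sets by reindexing. In the illustrative Haskell sketch (with `Precomp` as defined earlier, and all names ours):

```haskell
-- The functor Opalg(l';T) -> Opalg(T) on hom-sets: an object a is sent
-- to l' a, and a Kleisli map for the precomposed monad is reindexed.
reindexK :: RelativeMonad j t
         => RKleisli (Compose j l) (Precomp t l) a b
         -> RKleisli j t (l a) (l b)
reindexK (RKleisli f) = RKleisli (getPrecomp . f . Compose)
```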
## 7. Relative comonads and relative coadjunctions

The theory developed in the preceding sections dualises: relative adjunctions in \(\mathbb{X}^{\mathrm{co}}\) are precisely relative coadjunctions in \(\mathbb{X}\), and, while relative monads in \(\mathbb{X}^{\mathrm{co}}\) are precisely relative comonads in \(\mathbb{X}\), the concept of (co)algebra is formally distinct from that of (co)opalgebra. While this bifurcation in the relative setting may appear surprising, the reason is clear from the perspective of skew-multicategories. For a relative monad qua monoid in the skew-multicategory \(\mathbb{X}[j]\), we have the notions of action both in a left-\(\mathbb{X}[j]\)-multiactegory (Definition 6.4) and in a right-\(\mathbb{X}[j]\)-multiactegory (Definition 6.14). When \(j=1\), these notions are formally dual, so that actions in a left-multiactegory may be defined in terms of actions in a right-multiactegory, and conversely. However, in general the two notions are not dual: for instance, the definition of action in a left-skew-multiactegory involves the left-unitor \(\lambda\), while the definition of action in a right-skew-multiactegory involves the right-unitor \(\rho\).

We shall briefly review the theory of relative comonads and relative coadjunctions. However, since the theory is entirely dual to the theory of relative monads and relative adjunctions, we shall give only definitions, and leave the reader to dualise the theorems as desired. We omit the string-diagram presentations of the axioms, which are obtained simply by horizontally reflecting those for relative monads and relative adjunctions.

**Remark 7.1**.: While the study of relative coadjunctions and comonads may be reduced to the study of relative adjunctions and monads via duality, it appears likely that it is worthwhile to study the interaction between relative adjunctions and coadjunctions, and between relative monads and comonads, which cannot be thus reduced, though we shall not do so here (cf. [11, §2.2 & §2.4]).

**Definition 7.2**.: Let \(\mathbb{X}\) be a virtual equipment. A _relative comonad_ in \(\mathbb{X}\) is a relative monad in \(\mathbb{X}^{\mathrm{co}}\). Explicitly, this comprises
1. a tight-cell \(i\colon Z\to V\), the _coroot_;
2. a tight-cell \(d\colon Z\to V\), the _underlying tight-cell_;
3. a 2-cell \(\downarrow\colon V(d,i)\Rightarrow V(d,d)\), the _coextension operator_;
4. a 2-cell \(\varepsilon\colon d\Rightarrow i\), the _counit_,
satisfying three equations dual to those for a relative monad. An _\(i\)-relative comonad_ (alternatively _comonad on \(i\)_, _comonad relative to \(i\)_, or simply _\(i\)-comonad_) is a relative comonad with coroot \(i\). A _morphism_ of \(i\)-comonads from \((d,\downarrow,\varepsilon)\) to \((d^{\prime},\downarrow^{\prime},\varepsilon^{\prime})\) is a morphism of the corresponding relative monads in \(\mathbb{X}^{\mathrm{co}}\). Explicitly, this is a 2-cell \(\delta\colon d^{\prime}\Rightarrow d\) rendering commutative the evident diagrams dual to those for relative monad morphisms. \(i\)-comonads and their morphisms form a category \(\mathbf{RCmnd}(i)\).

One may view relative comonads as monoids in a _right_-skew-multicategory of tight-cells \(Z\to V\) in \(\mathbb{X}\), dually to the theory of Section 4. In this case, the multimorphisms are given by 2-cells of the dual shape, after which we restrict to the corepresentable loose-cells.
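As with relative monads, the coextension-operator presentation of Definition 7.2 admits a direct, purely illustrative Haskell rendering (assuming the language extensions enabled in the earlier sketch; the class, instance, and all names are ours).

```haskell
import Data.Functor.Identity (Identity (..))
import Data.List.NonEmpty (NonEmpty (..), toList)

-- A comonad relative to (with coroot) i: a counit  d a -> i a  and a
-- coextension operator sending maps  d a -> i b  to maps  d a -> d b.
class RelativeComonad i d | d -> i where
  rcounit :: d a -> i a
  rcoext  :: (d a -> i b) -> d a -> d b

-- At i = Identity, relative comonads recover ordinary comonads; for
-- instance, the nonempty-list comonad, whose coextension applies the
-- given map to every suffix.
instance RelativeComonad Identity NonEmpty where
  rcounit (a :| _) = Identity a
  rcoext k = go
    where
      go l@(_ :| as) = runIdentity (k l) :| rest as
      rest []       = []
      rest (b : bs) = toList (go (b :| bs))
```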
Note that relative comonads, like relative monads, are _monoids_ in skew-multicategories, rather than comonoids. Consequently, an \(i\)-comonad \(D\) induces a loose-monad \(V(D,i)\).

**Definition 7.3**.: A _relative coadjunction_6 is a relative adjunction in \(\mathbb{X}^{\mathrm{co}}\). Footnote 6: In older works on category theory, the terms _adjoint_ and _coadjoint_ are occasionally encountered, typically meaning _left adjoint_ and _right adjoint_ respectively (e.g. [11, 12]). The terms _adjunction_ and _coadjunction_ are also occasionally found to refer to the unit and counit of an adjunction (e.g. [11]). Since this terminology has fallen out of usage, and is particularly convenient in the context of relative adjoints, we feel there is no danger of confusion in repurposing the terminology.

Explicitly, a relative coadjunction comprises
1. a tight-cell \(i\colon Z\to V\), the _coroot_;
2. a tight-cell \(\ell\colon Y\to V\), the _left (relative) coadjoint_;
3. a tight-cell \(r\colon Z\to Y\), the _right (relative) coadjoint_;
4. an isomorphism \(\sharp\colon V(\ell,i)\cong Y(1,r)\,\colon\!\flat\), the _(left- and right-) transposition operators_.

We denote by \(r\;{}_{i}\!\vdash\ell\) such data (by convention leaving the transposition operators implicit), and call \(Y\) the _nadir_. An _\(i\)-relative coadjunction_ (or simply _\(i\)-coadjunction_) is a relative coadjunction with coroot \(i\). We leave the reader to dualise the definitions of left- and right-morphisms of relative adjunctions (Definitions 5.14 and 5.18).

The distinction between relative adjunctions and relative coadjunctions disappears when the (co)root is the identity: that is, \(\ell\;{}_{1}\!\dashv r\) if and only if \(r\;{}_{1}\!\vdash\ell\). For this reason, relative adjunctions and relative coadjunctions have not been adequately distinguished in the literature, and authors have often used the term _relative adjunction_ to refer to either concept, disambiguating only via context7. However, it is helpful to distinguish between the two concepts: for instance, while a relative adjunction induces a relative monad, a relative coadjunction induces a relative comonad. Footnote 7: The naming convention of Ulmer [13] suggests _\(j\)-left adjunction_ for our \(j\)-adjunction, and _\(i\)-right adjunction_ for our \(i\)-coadjunction. In the terminology of Ulmer, \(\ell\) is _\(j\)-left adjoint_ to \(r\) when \(\ell\;{}_{j}\!\dashv r\); and \(r\) is _\(i\)-right adjoint_ to \(\ell\) when \(r\;{}_{i}\!\vdash\ell\). This has the significant shortcoming that it leaves no convenient terminology for the right \(j\)-adjoint or left \(i\)-coadjoint.

**Remark 7.4**.: A tight-cell \(\ell\) is left-adjoint to \(r\) in \(\mathbb{X}\) if and only if \(\ell\) is right-coadjoint to \(r\) in \(\mathbb{X}^{\mathrm{co}}\). Furthermore, if \(\mathbb{X}\) is equipped with a _symmetry_, that is, a bijective-on-objects equivalence \((-)^{\bullet}\colon\mathbb{X}\simeq\mathbb{X}^{\mathrm{co}}\), this is the case if and only if \(\ell^{\bullet}\) is right-adjoint to \(r^{\bullet}\) in \(\mathbb{X}\).

## 8. Enriched relative monads

A motivating setting for this paper is that of enriched category theory: while the theory of relative monads in ordinary category theory has been developed to some extent [1, 2], the theory of relative monads in enriched category theory remains largely undeveloped.
The formal theory we have developed herein allows us to deduce the theorems of interest for enriched categories by specialising to equipments of enriched categories. For simplicity, we work with enrichment in monoidal categories [1, 10, 11], though we shall not need to impose symmetry, closure, or cocompleteness assumptions. In future work, we shall show that the results of interest hold for much more general bases of enrichment. Throughout this section, we assume a fixed monoidal category \((\mathbb{V},\otimes,I)\). To simplify the notation, we work as though \(\mathbb{V}\) is strict, but occasionally make explicit the unitors \(\lambda_{v}\colon I\otimes v\to v\) and \(\rho_{v}\colon v\to v\otimes I\) for clarity.

**Definition 8.1**.: The virtual double category \(\mathbb{V}\)-\(\mathbf{Cat}\) of _categories enriched in \(\mathbb{V}\)_ (or simply \(\mathbb{V}\)-_categories_) is defined as follows.
1. An object is a _\(\mathbb{V}\)-category_ [1, §1.2], comprising a class \(|C|\) of objects, an object \(C(x,y)\) of \(\mathbb{V}\) for each \(x,y\in|C|\), a morphism \(\mathfrak{l}_{x}\colon I\to C(x,x)\) in \(\mathbb{V}\) for each \(x\in|C|\), and a morphism \(\circ_{x,y,z}\colon C(x,y)\otimes C(y,z)\to C(x,z)\) in \(\mathbb{V}\) for each \(x,y,z\in|C|\), subject to unitality and associativity.
2. A tight-cell \(f\colon C\to D\) is a _\(\mathbb{V}\)-functor_ [1, §1.2], comprising a function \(|f|\colon|C|\to|D|\), together with a morphism \(f_{x,y}\colon C(x,y)\to D(|f|x,|f|y)\) in \(\mathbb{V}\) for each \(x,y\in|C|\), preserving identities and composites.
3. A loose-cell \(p\colon D\twoheadrightarrow C\) is a _\(\mathbb{V}\)-distributor_8 [1, §3.1.c], comprising an object \(p(x,y)\) of \(\mathbb{V}\) for each \(x\in|C|\) and \(y\in|D|\), and morphisms \(\circ_{x^{\prime},x,y}\colon C(x^{\prime},x)\otimes p(x,y)\to p(x^{\prime},y)\) and \(\circ_{x,y,y^{\prime}}\colon p(x,y)\otimes D(y,y^{\prime})\to p(x,y^{\prime})\) in \(\mathbb{V}\) compatible with each other, and with composition and identities in \(C\) and \(D\). Footnote 8: \(\mathbb{V}\)-distributors are alternatively called _\(\mathbb{V}\)-profunctors_ or _\(\mathbb{V}\)-(bi)modules_.
4. A 2-cell with loose-domain a chain \(p_{1},\ldots,p_{n}\) from \(C_{n}\) to \(C_{0}\), loose-codomain \(q\colon D^{\prime}\twoheadrightarrow D\), and tight-boundaries \(f\colon C_{0}\to D\) and \(g\colon C_{n}\to D^{\prime}\) is a _\(\mathbb{V}\)-form_9, comprising a morphism
\[\phi_{x_{0},\ldots,x_{n}}\colon p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n})\to q(|f|x_{0},|g|x_{n})\]
in \(\mathbb{V}\) for each \(x_{0}\in|C_{0}|,\ldots,x_{n}\in|C_{n}|\), rendering the following _\(\mathbb{V}\)-naturality_ diagrams commutative. Footnote 9: \(\mathbb{V}\)-forms were introduced by Street and Day [13, p. 134] in the special case where \(f\) and \(g\) are taken to be identities. Note that the definition loc. cit. is incomplete, as it omits the coherence condition for nullary \(\mathbb{V}\)-forms.
For \(n=0\), the nullary component \(\phi_{x}\colon I\to q(|f|x,|g|x)\) must satisfy, for each \(x,x^{\prime}\in|C_{0}|\),
\[\lambda^{-1}_{C_{0}(x,x^{\prime})}\,;(\phi_{x}\otimes g_{x,x^{\prime}})\,;\circ_{|f|x,|g|x,|g|x^{\prime}}\;=\;\rho_{C_{0}(x,x^{\prime})}\,;(f_{x,x^{\prime}}\otimes\phi_{x^{\prime}})\,;\circ_{|f|x,|f|x^{\prime},|g|x^{\prime}}\]
as morphisms \(C_{0}(x,x^{\prime})\to q(|f|x,|g|x^{\prime})\). For \(n\geq 1\), the family must be compatible with the actions of \(C_{0}\) and \(C_{n}\): for each \(c\in|C_{0}|\),
\[(\circ_{c,x_{0},x_{1}}\otimes p_{2}(x_{1},x_{2})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n}))\,;\phi_{c,x_{1},\ldots,x_{n}}\;=\;(f_{c,x_{0}}\otimes\phi_{x_{0},\ldots,x_{n}})\,;\circ_{|f|c,|f|x_{0},|g|x_{n}}\]
as morphisms \(C_{0}(c,x_{0})\otimes p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n})\to q(|f|c,|g|x_{n})\); and, for each \(c\in|C_{n}|\),
\[(p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n-1}(x_{n-2},x_{n-1})\otimes\circ_{x_{n-1},x_{n},c})\,;\phi_{x_{0},\ldots,x_{n-1},c}\;=\;(\phi_{x_{0},\ldots,x_{n}}\otimes g_{x_{n},c})\,;\circ_{|f|x_{0},|g|x_{n},|g|c}\]
as morphisms \(p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n})\otimes C_{n}(x_{n},c)\to q(|f|x_{0},|g|c)\).
A _\(\mathbb{V}\)-presheaf_ \(p\) on a \(\mathbb{V}\)-category \(Z\) comprises an object \(p(z)\) of \(\mathbb{V}\) for each \(z\in|Z|\), together with an action \(\circ_{z^{\prime},z}\colon Z(z^{\prime},z)\otimes p(z)\to p(z^{\prime})\) compatible with identities and composition in \(Z\). Let \(p\) and \(q\) be \(\mathbb{V}\)-presheaves on \(Z\), and let \(v\) be an object of \(\mathbb{V}\). A family of morphisms \(\{\phi_{z}\colon p(z)\otimes v\to q(z)\}_{z\in|Z|}\) is \(\mathbb{V}\)_-natural_ in \(z\in|Z|\) when, for each pair of objects \(z,z^{\prime}\in|Z|\),
\[(\circ_{z^{\prime},z}\otimes v)\,;\phi_{z^{\prime}}\;=\;(Z(z^{\prime},z)\otimes\phi_{z})\,;\circ_{z^{\prime},z}\]
as morphisms \(Z(z^{\prime},z)\otimes p(z)\otimes v\to q(z^{\prime})\). If there exists an object \(\mathcal{P}Z(p,q)\) of \(\mathbb{V}\), equipped with a universal \(\mathbb{V}\)-natural family
\[\{\varpi_{z}\colon p(z)\otimes\mathcal{P}Z(p,q)\to q(z)\}_{z\in|Z|}\]
then we call \(\mathcal{P}Z(p,q)\) the _object of \(\mathbb{V}\)-natural transformations from \(p\) to \(q\)_.
Explicitly, the universal property of \(\mathcal{P}Z(p,q)\) states that, for every \(\mathbb{V}\)-natural family \(\phi\) as above, there is a unique morphism \(\tilde{\phi}\colon v\to\mathcal{P}Z(p,q)\) in \(\mathbb{V}\) such that, for each object \(z\in|Z|\), the morphism \(\phi_{z}\) is equal to
\[p(z)\otimes v\xrightarrow{p(z)\otimes\tilde{\phi}}p(z)\otimes\mathcal{P}Z(p,q)\xrightarrow{\varpi_{z}}q(z)\]
This universal property of \(\mathcal{P}Z(p,q)\) is the same as the (second form of the) universal property given by [1, §3]. When \(\mathbb{V}\) is symmetric and closed, a presheaf \(p\) on \(Z\) is the same as a \(\mathbb{V}\)-functor \(p\colon Z^{\mathrm{op}}\to\mathbb{V}\), in which case \(\mathcal{P}Z(p,q)\) is exhibited by the hom-object \([Z^{\mathrm{op}},\mathbb{V}](p,q)\) of the \(\mathbb{V}\)-functor category \([Z^{\mathrm{op}},\mathbb{V}]\) when it exists (cf. [1, §2]). In general, when \(\mathcal{P}Z(p,q)\) exists for all presheaves \(p\) and \(q\), these objects form the hom-objects of a \(\mathbb{V}\)-category \(\mathcal{P}Z\). As is the usual convention, however, we use the notation \(\mathcal{P}Z(p,q)\) even when the \(\mathbb{V}\)-category \(\mathcal{P}Z\) does not exist.

**Example 8.5**.: The _Yoneda embedding_ of an object \(x\in|Z|\) is the \(\mathbb{V}\)-presheaf \(Z(-,x)\), with the action of \(Z\) given by composition. For each presheaf \(q\) on \(Z\), the object \(\mathcal{P}Z(Z(-,x),q)\) is isomorphic to \(q(x)\), since, for a fixed object \(v\) of \(\mathbb{V}\), \(\mathbb{V}\)-natural families \(\{Z(z,x)\otimes v\to q(z)\}_{z\in|Z|}\) are in bijection with morphisms \(v\to q(x)\).

For every \(\mathbb{V}\)-functor \(j\colon A\to E\) and object \(x\in|E|\), there is a \(\mathbb{V}\)-presheaf \(E(|j|-,x)\) on \(A\), the _nerve of \(j\) at \(x\)_. The action of \(A\) on \(E(|j|-,x)\) is given by
\[A(z^{\prime},z)\otimes E(|j|z,x)\xrightarrow{j_{z^{\prime},z}\otimes E(|j|z,x)}E(|j|z^{\prime},|j|z)\otimes E(|j|z,x)\xrightarrow{\circ_{|j|z^{\prime},|j|z,x}}E(|j|z^{\prime},x)\]
The object \(\mathcal{P}A(E(|j|-,x),q)\) exists in particular for small \(\mathbb{V}\)-categories \(A\) when \(\mathbb{V}\) is complete and (left- and right-) closed [1, §3].

Denote by \(\star\) the \(\mathbb{V}\)-category with a single object \(\star\), and hom-object \(\star(\star,\star):=I\). An object \(z\) of a \(\mathbb{V}\)-category \(Z\) is then equivalently a \(\mathbb{V}\)-functor \(z\colon\star\to Z\), while a \(\mathbb{V}\)-presheaf \(p\) on \(Z\) is equivalently a \(\mathbb{V}\)-distributor \(p\colon\star\twoheadrightarrow Z\). An object \(v\) of \(\mathbb{V}\) is equivalently a \(\mathbb{V}\)-distributor \(v\colon\star\twoheadrightarrow\star\), and a \(\mathbb{V}\)-form \(\phi\colon p,v_{1},\ldots,v_{n}\Rightarrow q\) is equivalently a \(\mathbb{V}\)-natural family of morphisms. It follows that the object \(\mathcal{P}Z(p,q)\) is then exactly the right lift \(q\mathbin{\triangleleft}p\colon\star\twoheadrightarrow\star\). If \(r\colon X\twoheadrightarrow Y\) is a \(\mathbb{V}\)-distributor, then the component \(r(y,x)\), viewed as a \(\mathbb{V}\)-distributor \(r(y,x)\colon\star\twoheadrightarrow\star\), is the restriction of \(r\) along the \(\mathbb{V}\)-functors \(x\colon\star\to X\) and \(y\colon\star\to Y\).
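As an illustrative special case (a sketch, not needed for what follows), take \(\mathbb{V}=\mathbf{Set}\). Then \(\mathcal{P}Z(p,q)\) is the usual set of natural transformations,
\[\mathcal{P}Z(p,q)\;=\;\{\,\alpha=(\alpha_{z}\colon p(z)\to q(z))_{z\in|Z|}\mid\alpha\ \text{commutes with the }Z\text{-actions}\,\},\qquad\varpi_{z}(s,\alpha)=\alpha_{z}(s),\]
and the universal property says that a \(\mathbf{Set}\)-natural family \(\phi_{z}\colon p(z)\times v\to q(z)\) corresponds to the function \(\tilde{\phi}\colon v\to\mathcal{P}Z(p,q)\) sending \(w\in v\) to the transformation \((\phi_{z}(-,w))_{z\in|Z|}\).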
**Lemma 8.6**.: _Let \(p\colon Y\twoheadrightarrow Z\) and \(q\colon X\twoheadrightarrow Z\) be \(\mathbb{V}\)-distributors. If the objects \(\mathcal{P}Z(p(-,y),q(-,x))\) exist for every \(x\in|X|\) and \(y\in|Y|\) then they form the right lift \(q\mathbin{\triangleleft}p\colon X\twoheadrightarrow Y\) in \(\mathbb{V}\)-\(\mathbf{Cat}\)._
\[(q\mathbin{\triangleleft}p)(y,x):=\mathcal{P}Z(p(-,y),q(-,x))\]
_The actions of \(X\) and \(Y\) on \(q\mathbin{\triangleleft}p\) are unique such that the universal \(\mathbb{V}\)-natural families \(\varpi_{z}\colon p(z,y)\otimes\mathcal{P}Z(p(-,y),q(-,x))\to q(z,x)\) constitute a \(\mathbb{V}\)-form \(p,q\mathbin{\triangleleft}p\Rightarrow q\)._

_The converse also holds: if \(q\mathbin{\triangleleft}p\) exists, then \((q\mathbin{\triangleleft}p)(y,x)\) satisfies the universal property of \(\mathcal{P}Z(p(-,y),q(-,x))\)._

Proof.: Suppose that the objects \(\mathcal{P}Z(p(-,y),q(-,x))\) exist, and define \((q\mathbin{\triangleleft}p)(y,x)\) as above. We first show that \(q\mathbin{\triangleleft}p\) canonically forms a \(\mathbb{V}\)-distributor. The universal property of \(\mathcal{P}Z(p(-,y),q(-,x))\) defines unique morphisms, for each \(x,x^{\prime}\in|X|\) and \(y,y^{\prime}\in|Y|\),
\[\circ_{y,x,x^{\prime}}\colon(q\mathbin{\triangleleft}p)(y,x)\otimes X(x,x^{\prime})\to(q\mathbin{\triangleleft}p)(y,x^{\prime})\]
\[\circ_{y^{\prime},y,x}\colon Y(y^{\prime},y)\otimes(q\mathbin{\triangleleft}p)(y,x)\to(q\mathbin{\triangleleft}p)(y^{\prime},x)\]
rendering the evident compatibility diagrams, natural in \(z\in Z\), commutative. These are compatible with identities and composition because \(p\) and \(q\) are; that they are compatible with each other is immediate from the definitions. Hence \(q\mathbin{\triangleleft}p\) canonically forms a \(\mathbb{V}\)-distributor.
A _\(j\)-relative adjunction_ in the sense of McDermott and Uustalu [14, Definition 17], for \(\mathbb{V}\) a small monoidal category and \(j\) a functor between locally \(\mathbb{V}\)-graded categories, is precisely a \([\mathbb{V}^{\mathrm{op}},\mathbf{Set}]\)-enriched \(j\)-adjunction.

To show that Definition 4.13 specialises to notions of enriched relative monad in the literature requires a little more work. To do so, we show that the definition of relative monad may be simplified in the setting of \(\mathbb{V}\)-\(\mathbf{Cat}\). In particular, it is only necessary to specify the action of the underlying \(\mathbb{V}\)-functor on objects: functoriality then follows automatically from the relative monad laws.

**Theorem 8.8**.: _A relative monad in \(\mathbb{V}\)-\(\mathbf{Cat}\) is equivalently specified by_

1. _a_ \(\mathbb{V}\)_-functor_ \(j\colon A\to E\)_;_
2. _a function_ \(|t|\colon|A|\to|E|\)_;_
3. _a morphism_ \(\eta_{x}\colon I\to E(|j|x,|t|x)\) _in_ \(\mathbb{V}\) _for each_ \(x\in|A|\)_;_
4. _a morphism_ \(\dagger_{x,y}\colon E(|j|x,|t|y)\to E(|t|x,|t|y)\) _in_ \(\mathbb{V}\) _for each_ \(x,y\in|A|\)_,_

_satisfying the following equations (the two unit laws and the associativity law, stated with the unitors of \(\mathbb{V}\) left implicit) for each \(x,y,z\in|A|\)._
\[(\eta_{x}\otimes\dagger_{x,y})\,;\circ_{|j|x,|t|x,|t|y}=1_{E(|j|x,|t|y)}\qquad\qquad\eta_{x}\,;\dagger_{x,x}=\mathfrak{l}_{|t|x}\]
\[(E(|j|x,|t|y)\otimes\dagger_{y,z})\,;\circ_{|j|x,|t|y,|t|z}\,;\dagger_{x,z}=(\dagger_{x,y}\otimes\dagger_{y,z})\,;\circ_{|t|x,|t|y,|t|z}\]

_A \(j\)-monad morphism in \(\mathbb{V}\)-\(\mathbf{Cat}\) is equivalently specified by a morphism \(\tau_{x}\colon I\to E(|t|x,|t^{\prime}|x)\) in \(\mathbb{V}\) for each \(x\in|A|\), satisfying the following equations for each \(x,y\in|A|\): the unit is preserved,_
\[(\eta_{x}\otimes\tau_{x})\,;\circ_{|j|x,|t|x,|t^{\prime}|x}=\eta^{\prime}_{x}\]
_as is the extension operator,_
\[(\dagger_{x,y}\otimes\tau_{y})\,;\circ_{|t|x,|t|y,|t^{\prime}|y}=(E(|j|x,|t|y)\otimes\tau_{y})\,;\circ_{|j|x,|t|y,|t^{\prime}|y}\,;\dagger^{\prime}_{x,y}\,;(\tau_{x}\otimes E(|t^{\prime}|x,|t^{\prime}|y))\,;\circ_{|t|x,|t^{\prime}|x,|t^{\prime}|y}\]

Before proceeding with the proof, we first note that \(\mathbb{V}\)-naturality of \(\eta\) expresses the equation
\[\rho_{A(x,y)}\,;(j_{x,y}\otimes\eta_{y})\,;\circ_{|j|x,|j|y,|t|y}\;=\;\lambda^{-1}_{A(x,y)}\,;(\eta_{x}\otimes t_{x,y})\,;\circ_{|j|x,|t|x,|t|y}\]
in \(\mathbb{V}\) for each \(x,y\in|A|\), while \(\mathbb{V}\)-naturality of \(\dagger\) expresses the two equations
\[(t_{x,y}\otimes\dagger_{y,z})\,;\circ_{|t|x,|t|y,|t|z}\;=\;(j_{x,y}\otimes E(|j|y,|t|z))\,;\circ_{|j|x,|j|y,|t|z}\,;\dagger_{x,z}\]
as morphisms \(A(x,y)\otimes E(|j|y,|t|z)\to E(|t|x,|t|z)\), and
\[(\dagger_{x,y}\otimes t_{y,z})\,;\circ_{|t|x,|t|y,|t|z}\;=\;(E(|j|x,|t|y)\otimes t_{y,z})\,;\circ_{|j|x,|t|y,|t|z}\,;\dagger_{x,z}\]
as morphisms \(E(|j|x,|t|y)\otimes A(y,z)\to E(|t|x,|t|z)\), for each \(x,y,z\in|A|\).

Proof.: Since it is trivial that every relative monad in \(\mathbb{V}\)-**Cat** specifies the given data, it is enough to show that, given the specified data, \(|t|\colon|A|\to|E|\) extends to a \(\mathbb{V}\)-functor \(t\colon A\to E\), for which \(\{\eta_{x}\}_{x\in|A|}\) and \(\{\dagger_{x,y}\}_{x,y\in|A|}\) are \(\mathbb{V}\)-forms, and furthermore that this extension is the unique such that the relative monad laws are satisfied. Supposing \(|t|\) thus extends, \(\mathbb{V}\)-naturality of \(\dagger\), the second unit law, and the right unit law of composition in \(E\) together force
\[t_{x,y}\;=\;j_{x,y}\,;E(|j|x,\eta_{y})\,;\dagger_{x,y}\]
for each \(x,y\in|A|\); thus any such extension is necessarily unique. Conversely, given the specified data, define \(t_{x,y}:=j_{x,y}\,;E(|j|x,\eta_{y})\,;\dagger_{x,y}\) for each \(x,y\in|A|\).
(Observe that this is precisely the composite forced in the uniqueness argument above.) This is a \(\mathbb{V}\)-functor: preservation of identities follows from preservation of identities by \(j\) and the second unit law; while preservation of composites follows from preservation of composites by \(j\), the first unit law, and the associativity law. \(\eta\) is then \(\mathbb{V}\)-natural, by the first unit law. \(\dagger\) is also \(\mathbb{V}\)-natural: the left-compatibility law follows from the first unit law and the associativity law; and the right-compatibility law follows from the associativity law.

Given \(j\)-monads \(T=(t,\dagger,\eta)\) and \(T^{\prime}=(t^{\prime},\dagger^{\prime},\eta^{\prime})\) in \(\mathbb{V}\)-**Cat**, it is trivial that every relative monad morphism \(T\to T^{\prime}\) specifies the given data. Thus it is enough to show that, given the specified data, \(\{\tau_{x}\}_{x\in|A|}\) forms a \(\mathbb{V}\)-form. This follows from the preservation of the unit and extension operators by \(\tau\).

**Remark 8.9**.: Theorem 8.8 is asserted without proof for **Set**-enriched relative monads in [1, p. 300; 1, p. 4], and for \(\mathbb{V}\)-enriched relative monads with fully faithful roots in [12, Remark 8.2]. Walters [20, Theorems 1.4.1 & 1.5.2] gives a proof for **Set**-enriched monads relative to the identity (there called _full devices_ [20, Definition 1.1.1]).

**Example 8.10**.: The explicit definition of \(\mathbb{V}\)-enriched relative monad stated in Theorem 8.8 does not appear to have been given in complete generality in the literature, and subsumes various prior definitions.

1. A _device_ in the sense of Walters [20, §1] is precisely a **Set**-enriched relative monad whose root is injective-on-objects and has discrete domain. A _device_ in the sense of [20, Definition 1.1.1], which is equivalent to a _Kleisli structure_ in the sense of Altenkirch and Reus [1, Definition 4], is precisely a **Set**-enriched relative monad whose root has discrete domain.
2. An _algebraic theory in extension form_ in the sense of Manes [20, Exercise 1.3.12] (called a _full device_ in [20, Definition 1.1.1], a _Kleisli triple_ in [20, Definition 1.2], and a _monad in extension form_ in [20, Definition 2.13]) is precisely a **Set**-enriched relative monad whose root is the identity.
3. A _(Manes-style) relative monad_ in the sense of Altenkirch, Chapman and Uustalu [1, Definition 1; 1, Definition 2.1] is precisely a **Set**-enriched relative monad.
4. A _\(\mathbb{V}\)-enriched clone_ in the sense of Staton [21, Definition 4] is precisely a \(\mathbb{V}\)-enriched relative monad whose root is fully faithful.
5. A _\(j\)-abstract \(\mathbb{V}\)-clone_ in the sense of Fiore [15, Definition 1.1], for \(j\) having codomain \(\mathbb{V}\) a monoidal category with powers of objects in the image of \(j\), is equivalent when \(\mathbb{V}\) is left-closed to a \(\mathbb{V}\)-enriched \(j\)-monad whose root has discrete domain (cf. [15, Remark 1.2]).
6. An _enriched relative monad_ in the sense of Staton and Rennela [20, §2.1] is (the underlying functor of) an enriched relative monad in our sense (technically, the definition ibid. requires the relative monad to admit a resolution, but this follows from Theorem 8.17).
7.
A _relative 2-monad_ in the sense of Fiore, Gambino, Hyland and Winskel [15, Definition 3.1] is precisely a **Cat**-enriched relative monad.
8. An _\(A\)-relative \(\mathbb{V}\)-monad (on \(E\))_ in the sense of Lucyshyn-Wright and Parker [14, Definition 8.1] is precisely a \(\mathbb{V}\)-enriched relative monad whose root is fully faithful.
9. A _\(j\)-relative monad_ in the sense of McDermott and Uustalu [13, Definition 14], for \(\mathbb{V}\) a small monoidal category and \(j\) a functor between locally \(\mathbb{V}\)-graded categories, is precisely a \([\mathbb{V}^{\mathrm{op}},\mathbf{Set}]\)-enriched \(j\)-monad.

Theorem 4.22 also recovers several independent definitions of relative monad in the literature.

1. A _\(j\)-monad_ in the sense of Diers [14, Definition 1.0] is precisely a **Set**-enriched relative monad whose root is dense and fully faithful. Since loose-monads in **Cat** are equivalently cocontinuous monads on presheaf categories (as the bicategory of distributors is the Kleisli bicategory for the presheaf construction [15]), this characterisation also justifies the approach of Lee, who represents relative monads by cocontinuous monads on presheaf categories [13, Chapter 2]. Precisely, a _monad associated to a relative adjointness situation_ in the sense of [13, Chapter 2] is a **Set**-enriched relative monad whose root is dense and fully faithful.
2. A _copresheaf-representable monad_ in the sense of Lucyshyn-Wright [13, Theorem 10.5] is precisely a \(\mathbb{V}\)-enriched relative monad whose root is a cocompletion (_copresheaf-representable_ \(\mathbb{V}\)-distributors [13, Definition 9.2] are precisely \(j\)-representable \(\mathbb{V}\)-distributors in the sense of Definition 2.9).

**Example 8.11**.: The more general notion of relative monad enriched in a bicategory was defined in [1, p. 88]. While we shall not treat this case in detail, we note that Definition 8.1 may be generalised to a virtual double category of categories enriched in a bicategory, relative monads in which coincide with those loc. cit. By virtue of the virtual double categorical setting in which we work, the admissibility condition on the roots required ibid. may be dropped in our setting.

### Existence of algebra-objects

We show that enriched relative monads admit algebra-objects, assuming the existence of enough structure in \(\mathbb{V}\). As preparation for this, we define a notion of _Eilenberg-Moore algebra_ (after Eilenberg and Moore [10]) for a relative monad \(T\) in \(\mathbb{V}\)**-Cat**. These will be the objects of the _Eilenberg-Moore \(\mathbb{V}\)-category_ \(\mathbf{EM}(T)\), which forms the algebra-object for \(T\).

**Definition 8.12**.: Let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor, and let \(T=(t,\dagger,\eta)\) be a \(j\)-monad in \(\mathbb{V}\)**-Cat**. An _Eilenberg-Moore \(T\)-algebra_ comprises

1. an object \(e\in|E|\), the _carrier_;
2. a family of morphisms in \(\mathbb{V}\), the _extension operator_
\[\{\rtimes_{x}\colon E(|j|x,e)\to E(|t|x,e)\}_{x\in|A|}\]

satisfying, for all \(x,y\in|A|\), the unit and compatibility laws (with the unitors of \(\mathbb{V}\) left implicit)
\[(\eta_{x}\otimes\rtimes_{x})\,;\circ_{|j|x,|t|x,e}=1_{E(|j|x,e)}\]
\[(E(|j|x,|t|y)\otimes\rtimes_{y})\,;\circ_{|j|x,|t|y,e}\,;\rtimes_{x}=(\dagger_{x,y}\otimes\rtimes_{y})\,;\circ_{|t|x,|t|y,e}\]

Let \((e,\rtimes)\) and \((e^{\prime},\rtimes^{\prime})\) be Eilenberg-Moore \(T\)-algebras, and let \(v\) be an object of \(\mathbb{V}\). A _\(v\)-graded homomorphism_ from \((e,\rtimes)\) to \((e^{\prime},\rtimes^{\prime})\) is a morphism \(h\colon v\to E(e,e^{\prime})\) in \(\mathbb{V}\) such that, for each \(x\in|A|\),
\[(\rtimes_{x}\otimes h)\,;\circ_{|t|x,e,e^{\prime}}=(E(|j|x,e)\otimes h)\,;\circ_{|j|x,e,e^{\prime}}\,;\rtimes^{\prime}_{x}\]
as morphisms \(E(|j|x,e)\otimes v\to E(|t|x,e^{\prime})\).
The morphism \(\mathfrak{l}_{e}\colon I\to E(e,e)\) is an \(I\)-graded algebra homomorphism, as it is the identity for composition. Two graded algebra homomorphisms \(h\colon v\to E(e,e^{\prime})\) and \(h^{\prime}\colon v^{\prime}\to E(e^{\prime},e^{\prime\prime})\) compose to give a graded algebra homomorphism
\[v\otimes v^{\prime}\xrightarrow{h\otimes h^{\prime}}E(e,e^{\prime})\otimes E(e^{\prime},e^{\prime\prime})\xrightarrow{\circ_{e,e^{\prime},e^{\prime\prime}}}E(e,e^{\prime\prime})\]
using associativity of composition in \(E\). Eilenberg-Moore \(T\)-algebras hence form a locally \(\mathbb{V}\)-graded category.

**Lemma 8.13**.: _Let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor, and let \(T\) be a \(j\)-monad. A \(T\)-algebra \((e,\rtimes)\) is equivalently specified by a \(\mathbb{V}\)-functor \(e\colon D\to E\), together with a family of morphisms_
\[\{\rtimes_{x,z}\colon E(|j|x,|e|z)\to E(|t|x,|e|z)\}_{x\in A,z\in D}\]
_such that_

1. \((|e|z,\rtimes_{-,z})\) _is an Eilenberg-Moore_ \(T\)_-algebra for all_ \(z\in|D|\)_;_
2. \(e_{y,z}\) _is a_ \(D(y,z)\)_-graded homomorphism from_ \((|e|y,\rtimes_{-,y})\) _to_ \((|e|z,\rtimes_{-,z})\) _for all_ \(y,z\in|D|\)_._

_Moreover, a \((p_{1},\ldots,p_{n})\)-graded \(T\)-algebra morphism from \((e,\rtimes)\) to \((e^{\prime},\rtimes^{\prime})\) is equivalently a \(\mathbb{V}\)-form \(\phi\colon p_{1},\ldots,p_{n}\Rightarrow E(e,e^{\prime})\) such that each morphism_
\[\phi_{z_{0},\ldots,z_{n}}\colon p_{1}(z_{0},z_{1})\otimes\cdots\otimes p_{n}(z_{n-1},z_{n})\to E(|e|z_{0},|e^{\prime}|z_{n})\]
_is a graded homomorphism from \((|e|z_{0},\rtimes_{-,z_{0}})\) to \((|e^{\prime}|z_{n},\rtimes^{\prime}_{-,z_{n}})\)._

Proof.: The two \(T\)-algebra laws are precisely the two laws required for (1), while one of the two laws required for \(\rtimes\) to be a \(\mathbb{V}\)-form, namely \(\mathbb{V}\)-naturality in \(z\), is (2). Hence for the characterisation of \(T\)-algebras it remains to show that (1) and (2) together imply the other \(\mathbb{V}\)-form law, namely \(\mathbb{V}\)-naturality of \(\rtimes\) in \(x\). This proof is analogous to that of \(\mathbb{V}\)-naturality of the extension operator \(\dagger\) of \(T\) in its first component (Theorem 8.8). The characterisation of graded \(T\)-algebra morphisms is trivial from Remark 6.29.

**Remark 8.14**.: Consequently, Eilenberg-Moore \(T\)-algebras and their \(I\)-graded morphisms subsume several notions in the literature.

1. A _\(T\)-algebra_ in the sense of Walters [26, §1; 26, Definition 1.1.3] is precisely an Eilenberg-Moore \(T\)-algebra, for \(T\) as in Example 8.10(1).
2. An _EM-algebra of \(T\)_ in the sense of Altenkirch, Chapman and Uustalu [1, Definition 3; ACU15, Definition 2.11] is precisely an Eilenberg-Moore \(T\)-algebra, for \(T\) as in Example 8.10(3).
3. A _\(T\)-algebra_ in the sense of Lucyshyn-Wright and Parker [24, Definition 8.4] is precisely an Eilenberg-Moore \(T\)-algebra, for \(T\) as in Example 8.10(8).
4. A _\(T\)-algebra_ in the sense of McDermott and Uustalu [26, Definition 16] is precisely an Eilenberg-Moore \(T\)-algebra, for \(T\) as in Example 8.10(9).

**Theorem 8.15**.: _Let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor, and let \(T\) be a \(j\)-monad._
_\(T\) admits an algebra-object exactly when, for all Eilenberg-Moore \(T\)-algebras \((e,\rtimes)\) and \((e^{\prime},\rtimes^{\prime})\), there is a graded homomorphism_
\[(u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\colon\mathbf{EM}(T)((e,\rtimes),(e^{\prime},\rtimes^{\prime}))\to E(e,e^{\prime})\]
_universal in the sense that every graded homomorphism \(v\to E(e,e^{\prime})\) factors uniquely through \((u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\) as a morphism \(v\to\mathbf{EM}(T)((e,\rtimes),(e^{\prime},\rtimes^{\prime}))\)._

Proof.: We first show that the algebra-object exists assuming that \((u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\) does. The _Eilenberg-Moore \(\mathbb{V}\)-category_ \(\mathbf{EM}(T)\) of \(T\) has as objects Eilenberg-Moore \(T\)-algebras, and as hom-objects the domains \(\mathbf{EM}(T)((e,\rtimes),(e^{\prime},\rtimes^{\prime}))\) of the universal homomorphisms. Since graded homomorphisms compose, identities and composition in \(\mathbf{EM}(T)\) are inherited from identities and composition in \(E\) via the universal property of the hom-objects: the identity \(\mathfrak{l}_{(e,\rtimes)}\colon I\to\mathbf{EM}(T)((e,\rtimes),(e,\rtimes))\) is the unique factorisation of \(\mathfrak{l}_{e}\colon I\to E(e,e)\) through \((u_{T})_{(e,\rtimes),(e,\rtimes)}\), and the composition
\[\circ\colon\mathbf{EM}(T)((e,\rtimes),(e^{\prime},\rtimes^{\prime}))\otimes\mathbf{EM}(T)((e^{\prime},\rtimes^{\prime}),(e^{\prime\prime},\rtimes^{\prime\prime}))\to\mathbf{EM}(T)((e,\rtimes),(e^{\prime\prime},\rtimes^{\prime\prime}))\]
is the unique factorisation of
\[(u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\otimes(u_{T})_{(e^{\prime},\rtimes^{\prime}),(e^{\prime\prime},\rtimes^{\prime\prime})}\,;\circ_{e,e^{\prime},e^{\prime\prime}}\]
through \((u_{T})_{(e,\rtimes),(e^{\prime\prime},\rtimes^{\prime\prime})}\). Unitality and associativity of composition in \(E\) clearly imply the corresponding properties for \(\mathbf{EM}(T)\), so that \(\mathbf{EM}(T)\) is a \(\mathbb{V}\)-category. The morphisms \((u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\) form a \(\mathbb{V}\)-functor \(u_{T}\colon\mathbf{EM}(T)\to E\), given on objects by \(|u_{T}|(e,\rtimes)=e\). The extension operators of Eilenberg-Moore \(T\)-algebras make \(u_{T}\) into a \(T\)-algebra by Lemma 8.13.

To show that this \(T\)-algebra satisfies the universal property of the algebra-object, consider an arbitrary \(T\)-algebra \((e\colon D\to E,\rtimes)\). Using the characterisation of \(T\)-algebras in Lemma 8.13, we obtain a \(\mathbb{V}\)-functor \(\langle\rangle_{\rtimes}\colon D\to\mathbf{EM}(T)\). This is given on objects by \(|\langle\rangle_{\rtimes}|z=(|e|z,\rtimes_{-,z})\), and
\[(\langle\rangle_{\rtimes})_{y,z}\colon D(y,z)\to\mathbf{EM}(T)(|\langle\rangle_{\rtimes}|y,|\langle\rangle_{\rtimes}|z)\]
is given by the unique factorisation of \(e_{y,z}\colon D(y,z)\to E(|e|y,|e|z)\) through \((u_{T})_{|\langle\rangle_{\rtimes}|y,|\langle\rangle_{\rtimes}|z}\). This preserves identities and composition because \(e\) does. Moreover, \(\langle\rangle_{\rtimes}\) is clearly unique such that composing with \(u_{T}\) recovers the \(T\)-algebra \((e,\rtimes)\). It remains to show that \(T\)-algebra morphisms factor uniquely through \(u_{T}\).
By Lemma 8.13, such a morphism is equivalently a \(\mathbb{V}\)-form
\[\epsilon\colon p_{1},\ldots,p_{n}\Rightarrow E(e,e^{\prime})\]
each component of which is a graded homomorphism. Since \(u_{T}\) consists of the universal graded homomorphisms, it is immediate that \(\epsilon\) factors uniquely through \(u_{T}\) as a \(\mathbb{V}\)-form
\[p_{1},\ldots,p_{n}\Rightarrow\mathbf{EM}(T)(\langle\rangle_{\rtimes},\langle\rangle_{\rtimes^{\prime}})\]

For the converse, assume that the algebra-object \((u_{T}\colon\mathbf{Alg}(T)\to E,\rtimes_{T})\) exists. Each Eilenberg-Moore \(T\)-algebra \((e,\rtimes)\) can be viewed as a \(T\)-algebra \((e\colon\star\to E,\rtimes)\), which, by the universal property of the algebra-object, induces an object \(\langle\rangle_{\rtimes}\) of \(\mathbf{Alg}(T)\). We show that
\[(u_{T})_{\langle\rangle_{\rtimes},\langle\rangle_{\rtimes^{\prime}}}\colon\mathbf{Alg}(T)(\langle\rangle_{\rtimes},\langle\rangle_{\rtimes^{\prime}})\to E(e,e^{\prime})\]
is the universal graded homomorphism. Each graded homomorphism \(h\colon v\to E(e,e^{\prime})\) can be viewed as a \((v)\)-graded \(T\)-algebra morphism from \((e,\rtimes)\) to \((e^{\prime},\rtimes^{\prime})\), by viewing \(v\) as a \(\mathbb{V}\)-distributor \(v\colon\star\twoheadrightarrow\star\). The universal property of the algebra-object implies that \(h\) then factors uniquely through \((u_{T})_{\langle\rangle_{\rtimes},\langle\rangle_{\rtimes^{\prime}}}\), as required.

**Corollary 8.16**.: _Let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor. If \(\mathbb{V}\) has equalisers, and the object \(\mathcal{P}A(E(|j|-,e),q)\) of \(\mathbb{V}\)-natural transformations exists for each \(e\in|E|\) and \(\mathbb{V}\)-presheaf \(q\) on \(A\), then every \(j\)-relative monad admits an algebra-object._

Proof.: By Theorem 8.15, it suffices to show that the universal graded homomorphisms
\[(u_{T})_{(e,\rtimes),(e^{\prime},\rtimes^{\prime})}\colon\mathbf{EM}(T)((e,\rtimes),(e^{\prime},\rtimes^{\prime}))\to E(e,e^{\prime})\]
exist. Observe that both families of morphisms
\[E(|j|x,e)\otimes E(e,e^{\prime})\xrightarrow{\rtimes_{x}\otimes E(e,e^{\prime})}E(|t|x,e)\otimes E(e,e^{\prime})\xrightarrow{\circ_{|t|x,e,e^{\prime}}}E(|t|x,e^{\prime})\]
\[E(|j|x,e)\otimes E(e,e^{\prime})\xrightarrow{\circ_{|j|x,e,e^{\prime}}}E(|j|x,e^{\prime})\xrightarrow{\rtimes^{\prime}_{x}}E(|t|x,e^{\prime})\]
are \(\mathbb{V}\)-natural in \(x\in|A|\) because \(\rtimes\) and \(\rtimes^{\prime}\) are (cf. Lemma 8.13). Hence there are corresponding morphisms
\[\zeta_{1},\zeta_{2}\colon E(e,e^{\prime})\to\mathcal{P}A(E(|j|-,e),E(|t|-,e^{\prime}))\]
and a morphism \(h\colon v\to E(e,e^{\prime})\) is a graded homomorphism exactly when \(h\,;\zeta_{1}=h\,;\zeta_{2}\). It follows that the equaliser of \(\zeta_{1}\) and \(\zeta_{2}\) is the universal graded homomorphism. In particular, the assumptions of Corollary 8.16 hold for \(\mathbb{V}\)-functors \(j\colon A\to E\) with small domain when \(\mathbb{V}\) is complete and closed.

### Existence of opalgebra-objects

The existence of opalgebra-objects in \(\mathbb{V}\)**-Cat** is simpler than that of algebra-objects, and in particular requires no conditions on \(\mathbb{V}\).

**Theorem 8.17**.: _Every relative monad in \(\mathbb{V}\)-\(\mathbf{Cat}\) admits an opalgebra-object._

Proof.: Let \(j\colon A\to E\) be a \(\mathbb{V}\)-functor and let \(T=(t,\eta,\dagger)\) be a \(j\)-monad. We define a \(\mathbb{V}\)-category \(\mathbf{Kl}(T)\), the _Kleisli \(\mathbb{V}\)-category of \(T\)_ after [10], as follows.
\[|\mathbf{Kl}(T)|:=|A|\qquad\qquad\mathbf{Kl}(T)(x,y):=E(|j|x,|t|y)\qquad\qquad\mathfrak{l}_{x}:=\eta_{x}\]
\[\circ_{x,y,z}:=E(|j|x,|t|y)\otimes E(|j|y,|t|z)\xrightarrow{E(|j|x,|t|y)\otimes\dagger_{y,z}}E(|j|x,|t|y)\otimes E(|t|y,|t|z)\xrightarrow{\circ_{|j|x,|t|y,|t|z}}E(|j|x,|t|z)\]
Unitality and associativity of composition follow from the unitality and associativity laws for \(T\). We define an identity-on-objects \(\mathbb{V}\)-functor \(k_{T}\colon A\to\mathbf{Kl}(T)\), the _Kleisli inclusion of \(T\)_, whose action on hom-objects is given by
\[(k_{T})_{x,y}:=A(x,y)\xrightarrow{j_{x,y}}E(|j|x,|j|y)\xrightarrow{E(|j|x,\eta_{y})}E(|j|x,|t|y)\]
with preservation of identities and of composites following from preservation of identities and composites by \(j\) and the first unit law for \(T\).

We shall show that \(k_{T}\colon A\to\mathbf{Kl}(T)\) together with the identity \(\mathbb{V}\)-natural transformation \(E(j,t)=\mathbf{Kl}(T)(k_{T},k_{T})\) forms an opalgebra-object for \(T\). That \((k_{T},1)\) forms a \(T\)-opalgebra follows from the definition of identities and composition in \(\mathbf{Kl}(T)\). Let \((a,\ltimes)\) be a \(T\)-opalgebra. The \(\mathbb{V}\)-form \(\ltimes\colon E(j,t)\Rightarrow B(a,a)\) defines the action on hom-objects of a \(\mathbb{V}\)-functor \([]_{\ltimes}\colon\mathbf{Kl}(T)\to B\) with object-function \(|[]_{\ltimes}|:=|a|\), preservation of identities and composites following from the unitality and extension laws of the opalgebra. Since \(k_{T}\) is identity-on-objects, we trivially have \(k_{T}\,;[]_{\ltimes}=a\).

Let \(\alpha\) be a \((p_{1},\ldots,p_{n})\)-graded \(T\)-opalgebra morphism from \((a,\ltimes)\) to \((a^{\prime},\ltimes^{\prime})\), hence a family of morphisms
\[\{\alpha_{x_{0},\ldots,x_{n},y}\colon p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n})\otimes B(x_{n},|a|y)\to B^{\prime}(x_{0},|a^{\prime}|y)\}_{x_{0},\ldots,x_{n},y}\]
in \(\mathbb{V}\). Since \(k_{T}\) is identity-on-objects, this is equivalently a family of morphisms
\[\{([]_{\alpha})_{x_{0},\ldots,x_{n},y}\colon p_{1}(x_{0},x_{1})\otimes\cdots\otimes p_{n}(x_{n-1},x_{n})\otimes B(x_{n},|[]_{\ltimes}|y)\to B^{\prime}(x_{0},|[]_{\ltimes^{\prime}}|y)\}_{x_{0},\ldots,x_{n},y}\]
That this family forms a \(\mathbb{V}\)-form follows from the fact that \(\alpha\) is a \(\mathbb{V}\)-form, together with the \(T\)-opalgebra morphism compatibility law. Hence \(\alpha\) factors uniquely through \([]_{\alpha}\).
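As a sanity check, it may be helpful to instantiate this construction at \(\mathbb{V}=\mathbf{Set}\) (an illustrative special case): the definitions above then recover the familiar Kleisli category of a relative monad, with
\[\mathbf{Kl}(T)(x,y)=E(jx,ty),\qquad\mathsf{id}_{x}=\eta_{x},\qquad g\circ_{\mathbf{Kl}(T)}f=g^{\dagger}\circ f\quad\text{for }f\colon jx\to ty,\ g\colon jy\to tz,\]
and with the Kleisli inclusion \(k_{T}\) sending a morphism \(u\colon x\to y\) of \(A\) to \(\eta_{y}\circ ju\).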
### Existence of coalgebra- and coopalgebra-objects

Given a monoidal category \(\mathbb{V}\), we denote by \(\mathbb{V}^{\mathrm{rev}}\) the monoidal category with the same objects and unit as \(\mathbb{V}\) and whose tensor product is defined by \(x\otimes_{\mathbb{V}^{\mathrm{rev}}}y:=y\otimes_{\mathbb{V}}x\). To deduce sufficient conditions for the existence of coopalgebra- and coalgebra-objects for relative comonads, the following observation is useful.

**Proposition 8.18**.: _There is an isomorphism of virtual double categories_
\[\mathbb{V}\text{-}\mathbf{Cat}^{\mathrm{co}}\cong\mathbb{V}^{\mathrm{rev}}\text{-}\mathbf{Cat}\]

Proof.: For each \(\mathbb{V}\)-category \(C\), we may define a \(\mathbb{V}^{\mathrm{rev}}\)-category \(C^{\mathrm{op}}\), its dual, given by
\[|C^{\mathrm{op}}|:=|C|\qquad\qquad C^{\mathrm{op}}(x,y):=C(y,x)\qquad\qquad\mathfrak{l}_{x}^{C^{\mathrm{op}}}:=\mathfrak{l}_{x}^{C}\qquad\qquad\circ_{x,y,z}^{C^{\mathrm{op}}}:=\circ_{z,y,x}^{C}\]
Unitality and associativity of composition in \(C^{\mathrm{op}}\) follows from that of \(C\). For each \(\mathbb{V}\)-functor \(f\colon C\to D\), we may define a \(\mathbb{V}^{\mathrm{rev}}\)-functor \(f^{\mathrm{op}}\colon C^{\mathrm{op}}\to D^{\mathrm{op}}\) given by
\[|f^{\mathrm{op}}|=|f|\qquad\qquad f^{\mathrm{op}}_{x,y}=f_{y,x}\]
Preservation of identities and composites follows from that of \(f\). For each \(\mathbb{V}\)-distributor \(p\colon C\twoheadrightarrow D\), we may similarly define a \(\mathbb{V}^{\mathrm{rev}}\)-distributor \(p^{\mathrm{op}}\colon D^{\mathrm{op}}\twoheadrightarrow C^{\mathrm{op}}\) by exchanging the two arguments and the two actions of \(p\), and likewise for \(\mathbb{V}\)-forms. These assignments are mutually inverse to the corresponding assignments for \(\mathbb{V}^{\mathrm{rev}}\), and so assemble into the claimed isomorphism.

Dually to Definition 8.12, an _Eilenberg-Moore coalgebra_ for an \(i\)-comonad \(D\) with coroot \(i\colon Z\to U\) and underlying \(\mathbb{V}\)-functor \(d\) comprises a _carrier_ \(u\in|U|\), together with a _coextension operator_, a family of morphisms \(\{\rtimes_{x}\colon U(u,|i|x)\to U(u,|d|x)\}_{x\in|Z|}\) satisfying counitality and compatibility laws. A _\(v\)-graded homomorphism_ from a coalgebra \((u,\rtimes)\) to a coalgebra \((u^{\prime},\rtimes^{\prime})\) is then a morphism \(h\colon v\to U(u,u^{\prime})\) preserving the coextension operators.

**Theorem 8.19**.: _Let \(i\colon Z\to U\) be a \(\mathbb{V}\)-functor, and let \(D\) be an \(i\)-comonad. \(D\) admits a coalgebra-object exactly when there is a universal graded homomorphism between any two Eilenberg-Moore \(D\)-coalgebras. In particular, every \(i\)-comonad admits a coalgebra-object when \(\mathbb{V}\) has equalisers and \(\mathcal{Q}Z(U(u,|i|-),q)\) exists for all objects \(u\in|U|\) and copresheaves \(q\)._

Proof.: By Proposition 8.18, \(D\) admits a coalgebra-object if and only if the corresponding relative monad in \(\mathbb{V}^{\mathrm{rev}}\)-\(\mathbf{Cat}\) admits an algebra-object, so the result follows from Theorem 8.15 and Corollary 8.16.

In particular, the assumptions of Theorem 8.19 hold for \(\mathbb{V}\)-functors \(i\colon Z\to U\) with small domain when \(\mathbb{V}\) is complete and closed.

**Theorem 8.20**.: _Every relative comonad in \(\mathbb{V}\)-\(\mathbf{Cat}\) admits a coopalgebra-object._

Proof.: By Proposition 8.18, a relative comonad admits a coopalgebra-object if and only if the corresponding relative monad in \(\mathbb{V}^{\mathrm{rev}}\)-\(\mathbf{Cat}\) admits an opalgebra-object, so the result follows from Theorem 8.17.
2310.14205
Machine-learning-assisted analysis of transition metal dichalcogenide thin-film growth
In situ reflective high-energy electron diffraction (RHEED) is widely used to monitor the surface crystalline state during thin-film growth by molecular beam epitaxy (MBE) and pulsed laser deposition. With the recent development of machine learning (ML), ML-assisted analysis of RHEED videos aids in interpreting the complete RHEED data of oxide thin films. The quantitative analysis of RHEED data allows us to characterize and categorize the growth modes step by step, and extract hidden knowledge of the epitaxial film growth process. In this study, we employed the ML-assisted RHEED analysis method to investigate the growth of 2D thin films of transition metal dichalcogenides (ReSe2) on graphene substrates by MBE. Principal component analysis (PCA) and K-means clustering were used to separate statistically important patterns and visualize the trend of pattern evolution without any notable loss of information. Using the modified PCA, we could monitor the diffraction intensity of solely the ReSe2 layers by filtering out the substrate contribution. These findings demonstrate that ML analysis can be successfully employed to examine and understand the film-growth dynamics of 2D materials. Further, the ML-based method can pave the way for the development of advanced real-time monitoring and autonomous material synthesis techniques.
Hyuk Jin Kim, Minsu Chong, Tae Gyu Rhee, Yeong Gwang Khim, Min-Hyoung Jung, Young-Min Kim, Hu Young Jeong, Byoung Ki Choi, Young Jun Chang
2023-10-22T06:52:39Z
http://arxiv.org/abs/2310.14205v1
## Machine-learning-assisted analysis of transition metal dichalcogenide thin-film growth

## Abstract

_In situ_ reflective high-energy electron diffraction (RHEED) is widely used to monitor the surface crystalline state during thin-film growth by molecular beam epitaxy (MBE) and pulsed laser deposition. With the recent development of machine learning (ML), ML-assisted analysis of RHEED videos aids in interpreting the complete RHEED data of oxide thin films. The quantitative analysis of RHEED data allows us to characterize and categorize the growth modes step by step, and extract hidden knowledge of the epitaxial film growth process. In this study, we employed the ML-assisted RHEED analysis method to investigate the growth of 2D thin films of transition metal dichalcogenides (ReSe\({}_{2}\)) on graphene substrates by MBE. Principal component analysis (PCA) and K-means clustering were used to separate statistically important patterns and visualize the trend of pattern evolution without any notable loss of information. Using the modified PCA, we could monitor the diffraction intensity of solely the ReSe\({}_{2}\) layers by filtering out the substrate contribution. These findings demonstrate that ML analysis can be successfully employed to examine and understand the film-growth dynamics of 2D materials. Further, the ML-based method can pave the way for the development of advanced real-time monitoring and autonomous material synthesis techniques.

Machine learning, RHEED, principal component analysis, K-means clustering, TMDC, ReSe\({}_{2}\)

## 1 Introduction

Advanced thin-film synthesis methods, such as molecular beam epitaxy (MBE), pulsed laser deposition (PLD), and atomic layer deposition (ALD), have allowed the formation of atomically sharp interfaces and precise surface engineering in transition metal oxides, III-V semiconductors, and two-dimensional (2D) transition metal dichalcogenides (TMDCs) [1, 2, 3, 4]. _In situ_ monitoring techniques, such as reflection high-energy electron diffraction (RHEED), spectroscopic ellipsometry, and Auger electron spectroscopy, enable us to monitor the physical properties during the film growth in real time [5, 6, 7]. Such _in situ_ monitoring techniques have drastically improved our understanding of the growth dynamics. Notably, _in situ_ RHEED, which involves the use of high-energy electrons along the grazing incident angle, is sensitive to the topmost surface. Its image data carry a wealth of physical information, such as surface crystallinity, surface morphology, growth rate, in-plane lattice spacing, strain effect, degree of disorder, and changes in surface reconstruction [8, 9, 10, 11]. Although the advanced RHEED technique is widely used for the growth of thin films as well as nanostructures, such as nanodots and nanorods [12], only a small fraction of the RHEED data is used. This minute fraction contains static diffraction patterns obtained at specific times or intensity profiles from several diffraction points during the thin-film growth. With the development of artificial intelligence technology, one should consider adopting machine learning (ML) methods for analyzing the complete RHEED data to advance the existing thin-film growth methods and design fully autonomous material synthesis techniques [13, 14, 15, 16]. Deep learning models, such as convolutional neural networks, classified the surface pattern and reconstruction of GaAs [17] and Fe\({}_{x}\)O\({}_{y}\) [18] with a high accuracy based on the RHEED data.
The surface evolution and transitions in an entire RHEED data sequence were also examined for various oxide materials using unsupervised ML methods such as principal component analysis (PCA) and K-means clustering [19, 20, 21]. They are advantageous for distinguishing the film-growth dynamics and investigating the time-dependent growth mechanisms and transitions of surface crystalline phases. PCA is an orthogonal linear transformation that defines new orthonormal basis vectors called principal components. Each principal component corresponds to an extracted pattern with a statistical significance (Fig. 1(b)). For the oxide film growth, PCA facilitates the identification of growth modes and reduction of data dimensionality [19, 20]. K-means clustering is a vector quantization method in which the RHEED image sequence is partitioned into \(K\) clusters based on statistical similarity (Fig. 1(c)). This method allows the identification of stoichiometric changes, strain relaxation, surface reconstruction, and growth mode transitions [19, 21]. The ML-assisted RHEED analysis has been applied to analyze the film growth of many oxide materials [18, 19, 20, 21], but not for 2D materials.

Figure 1: Overview of ML-assisted growth analysis. (a) Schematic of the growth of 2D layered thin films by MBE and acquisition of _in situ_ RHEED video. (b,c) Processes of PCA and K-means clustering.

Understanding the growth mechanisms of ultrathin 2D TMDCs is vital for investigating the unique physical properties arising from their 2D van der Waals layered structures. The film growth mechanism of 2D materials is significantly different from that of oxide materials, whose interlayer bonding at the interfaces is strong. Typically, 2D materials can grow epitaxially even for a large lattice mismatch between the film and the substrate, because of their weak van der Waals bonding at the interfaces [1]. The growth mechanism of 2D materials has been investigated using _ex situ_ characterizations, such as Raman spectroscopy, photoelectron spectroscopy, scanning tunneling microscopy, and transmission electron microscopy [22, 23, 24, 25]. These _ex situ_ approaches provide limited information on the real-time film growth dynamics, and thus, it is imperative to adopt a suitable method for investigating the entire RHEED video of the film growth of 2D materials.

In this study, we demonstrate the ML-assisted RHEED analysis of TMDC thin-film growth based on unsupervised ML approaches, including PCA and K-means clustering. Using these methods, we can isolate the RHEED patterns based on their statistical importance and then separately monitor the film contributions. The ML-assisted RHEED analysis was primarily conducted on 1T-ReSe\({}_{2}\) thin films grown on graphene substrates by MBE. We developed a modified version of the PCA to detect the thickness oscillation of the 2D thin films by eliminating the strong substrate contributions and by reconstructing the RHEED intensity profile of only the thin films. Furthermore, compression of the first thickness oscillation suggested an abrupt change in the film growth rate during the initial growth period. These findings reveal that implementing ML analysis is suitable for attaining a deeper understanding of the film-growth dynamics of 2D materials and for developing advanced real-time film monitoring techniques.
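As a minimal illustration of the PCA decomposition step used in this kind of pipeline, the sketch below applies scikit-learn's PCA to a RHEED video stored as a NumPy array. The array shape, number of components, and variable names are illustrative assumptions, not the settings used for the experiments reported here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical RHEED video: n_frames grayscale images of height x width pixels.
rng = np.random.default_rng(0)
frames = rng.random((600, 120, 160))  # stand-in for a real recording

# Flatten each frame into a pixel vector; PCA centers the data internally.
X = frames.reshape(len(frames), -1)

pca = PCA(n_components=6)
scores = pca.fit_transform(X)                 # (600, 6): score-vs-time curves
pcs = pca.components_.reshape(-1, 120, 160)   # PC1..PC6 viewed as images

# Cumulative fraction of variance captured by the leading components.
print(pca.explained_variance_ratio_.cumsum())
```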
## 2 Results

We prepared ReSe\({}_{2}\) thin films, with varied thicknesses, on graphene substrates. Figure 2(a) shows the atomic structure of the distorted 1T (1T') ReSe\({}_{2}\). Figure 2(b-d) show the schematic models of the graphene substrate and ReSe\({}_{2}\) thin films with 0.3 and 3 unit cells (UC), respectively. We monitored the growth of ReSe\({}_{2}\) with _in situ_ RHEED measurements and then compared the results with _ex situ_ atomic force microscopy (AFM) data, as shown in Fig. 2(e-j). Initially, the bilayer graphene substrate was prepared with a sharp RHEED pattern (Fig. 2(e)) and a very flat surface with wide terraces (Fig. 2(h)). After 4 min of film growth, additional streaks of the ReSe\({}_{2}\) lattice emerged in the RHEED pattern, indicated by red arrows in Fig. 2(f). Further, small ReSe\({}_{2}\) islands were nucleated in the topography (Fig. 2(i)). After 62 min of deposition, the RHEED pattern of graphene completely disappeared, leaving only the ReSe\({}_{2}\) streaks, as shown in Fig. 2(g). The vertically elongated ReSe\({}_{2}\) streaks indicated a flat surface topography of the ReSe\({}_{2}\) thin film [9]. The in-plane lattice parameter of the ReSe\({}_{2}\) layer was estimated by comparing the RHEED streaks of graphene and ReSe\({}_{2}\). The calculated in-plane lattice parameter was 6.58 Å, which was consistent with the bulk values (6.60 Å (\(a_{1}\)) and 6.71 Å (\(a_{2}\))) [26]. The corresponding ReSe\({}_{2}\) thin film showed a flat surface with a roughness of 0.23 nm (Fig. 2(j)), and its thickness was expected to be about 3UC. The 3UC-thick ReSe\({}_{2}\) was characterized by Raman spectroscopy, as shown in Fig. 2(k). ReSe\({}_{2}\) exhibited diverse vibration modes in the range of 100-300 cm\({}^{-1}\), because the inversion symmetry is broken in 1T' ReSe\({}_{2}\). The peak positions were consistent with those of the ReSe\({}_{2}\) bulk and thick films, and the peak positions showed only a slight thickness dependence [27, 28]. We also evaluated the layer thickness by high-angle annular dark field (HAADF) scanning transmission electron microscopy (STEM) analysis, as shown in Fig. 2(l). In this figure, three horizontal arrays of white dots are sandwiched between grey dots, as indicated by black arrows. Evidently, the top ReSe\({}_{2}\) layer shows a weaker signal, probably due to an incomplete coverage of the topmost layer. Additionally, we examined the stoichiometry of ReSe\({}_{2}\) by X-ray photoemission spectroscopy (XPS). We calculated the integrated peak areas of Re _4f_ and Se _3d_ and found that the Se/Re atomic ratio was approximately 2.01; this value was similar to the nominal stoichiometric ratio (see Supplementary Information, Fig. S1). These results confirm the successful growth of ReSe\({}_{2}\) thin films with controlled thicknesses, and indicate that the corresponding RHEED data can be analyzed by ML techniques.

Figure 2: Growth and characterization of ReSe\({}_{2}\) thin films. (a) Crystal structures of 1T' ReSe\({}_{2}\). (b-d) Schematic models of the graphene substrate and ReSe\({}_{2}\) thin films with 0.3UC and 3UC. (e-g) RHEED images and (h-j) AFM images of the ReSe\({}_{2}\) thin film for different growth times (0, 4, and 62 min). The black and red arrows in the RHEED images indicate the bilayer graphene substrate and ReSe\({}_{2}\) diffraction streak, respectively. (k) Raman spectrum and (l) HAADF STEM image of the 3UC ReSe\({}_{2}\) film. Scale bars in the AFM and STEM images are 500 nm and 3 nm, respectively.

First, we analyzed the RHEED video of the ReSe\({}_{2}\) film by PCA. Figures 3(a,b) show the first six principal components (PCs) and their corresponding score values, which are similar to the concepts of eigenvectors and eigenvalues, respectively. The six components add up to 98.95 % of statistical variance in the dataset (see Supplementary Information, Fig. S2), implying most of the dataset can be represented by a few components and scores. Especially, PC1 has the most
variation (91.98 %) in the RHEED video. The PC1 in Fig. 3(a) shows two major characteristics. First, the positive (red) area well matches the graphene pattern shown in Fig. 2(e). On the contrary, the negative (blue) area matches with the (2,0) and (-2,0) diffraction points of ReSe\({}_{2}\). The score 1, or the change in PC1 over time, decreases gradually and undergoes a sign change from positive to negative near the third dashed line in Fig. 3(b). This result implies that in the initial RHEED video, a gradually decreasing trend of the graphene signal is primarily observed. This signal trend is strikingly different from that of the oxide thin film, in which the in-plane lattice parameters are mostly nearly matched [19, 20, 21]. The second component, PC2, is dominated by the ReSe\({}_{2}\) streaks, together with minor diffraction points from the graphene and SiC substrates. The negative value of PC2 represents the epitaxial 2D growth of the ReSe\({}_{2}\) thin film, which is evidenced by the similar RHEED pattern of ReSe\({}_{2}\) in Fig. 2(g). The positive (red) region of PC2 includes the graphene diffraction streaks and several additional spots in the middle. Such spots are related to the buffer layer and SiC substrate beneath the graphene [29]. The initial decrease in score 2 (Fig. 3(b)) indicates that the substrate pattern disappears, and the ReSe\({}_{2}\) pattern begins to emerge, corresponding to the first dashed line. Conversely, PC3-6 contain the (2,0) and (-2,0) diffraction signals of the 3UC ReSe\({}_{2}\) layers. The corresponding scores 3-6 exhibit an oscillating behavior (Fig. 3(b)). In the MBE growth, the oscillating behaviors of specular or diffraction spots are used to estimate the film thickness and to analyze the growth modes [30]. In the layer-by-layer growth mode, the RHEED intensity is periodically modulated by the interference between the adjacent layers or the degree of diffuse scattering, depending on the surface coverage [8].

Figure 3: PCA results. (a) Six PCs of the RHEED video for the 3UC-thick ReSe\({}_{2}\) thin film and (b) the corresponding score plots. Component 1 (PC1) shows the diffraction signal of graphene, while component 2 (PC2) contains the signals of both the graphene and ReSe\({}_{2}\) layers. Components 3-6 (PC3-6) show the signal of only the 2D growth of the ReSe\({}_{2}\) layer. (c-e) The intensity plots of the (c) original RHEED video and (d,e) modified RHEED video. Blue and orange lines denote the (0,0) and (2,0) diffraction streaks of the ReSe\({}_{2}\) thin film (shown in the inset), respectively.

The PCA results of other thicknesses (2UC, 4UC, and 5UC) also revealed that the oscillating character is observed when the PCs include the (2,0) and (-2,0) diffraction signals (see Supplementary Information, Fig. S3). Although the contribution of PC3-6 to the entire RHEED signal is \(<\)2 % (see Supplementary Information, Fig. S2), they carry physical information about the film thickness and its growth mode. PCA is a versatile technique that allows us not only to decompose complex RHEED image sequences but also to selectively recombine the PCs and scores.
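The sketch below illustrates such a selective recombination in code, continuing the PCA example above: components assumed to be substrate-dominated are subtracted from the video, and streak intensities are then read from the residual. The component indices and pixel windows are hypothetical placeholders, not the regions of interest used in this work.

```python
# Remove substrate-dominated components (assumed here to be PC1 and PC2)
# from the flattened video X, reusing `pca` and `scores` from the sketch above.
drop = [0, 1]                                     # indices of PC1, PC2
substrate_part = scores[:, drop] @ pca.components_[drop, :]
X_film = X - substrate_part                       # film-only ("modified") video

film_video = X_film.reshape(-1, 120, 160)

# Mean intensity vs. time inside hypothetical pixel windows around two streaks.
box_00 = (slice(40, 60), slice(70, 90))           # window near the (0,0) streak
box_20 = (slice(40, 60), slice(110, 130))         # window near the (2,0) streak
I_00 = film_video[:, box_00[0], box_00[1]].mean(axis=(1, 2))
I_20 = film_video[:, box_20[0], box_20[1]].mean(axis=(1, 2))
```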
Such further reconstruction of the selected components to extract a buried signal of interest had, however, not been demonstrated previously. In the RHEED data of 3UC ReSe\({}_{2}\), we noticed that the strong signals of the graphene and substrate overshadowed the weak film intensities at the initial growth duration. In Fig. 3(c), the (0,0) peak intensity gradually declines and represents the graphene contribution, which is well correlated to score 1. To separate the weak ReSe\({}_{2}\) signal from the original video, we obtained the modified RHEED data (mPCA) by consecutively subtracting graphene-related components (PC1 or PC2) from the raw RHEED video, as described schematically in Fig. 1(b). Figures 3(d,e) show the intensity plots of the (0,0) (blue lines) and (2,0) (orange lines) streaks obtained from the mPCA video sets. In Fig. 3(d), the subtraction of PC1 mainly changes the intensity plot within the initial period up to the third dashed line (23 min). This change indicates the signal transition from graphene to ReSe\({}_{2}\), consistent with the sign change in score 1 (indicated with an arrow in Fig. 3(b)). In Fig. 3(e), further subtractions of PC1 and PC2 result in stable oscillations for both blue and orange curves. Such oscillatory behaviors of the (0,0) and (2,0) streaks are likely linked to the layer-by-layer film growth, as mentioned before [8]. Interestingly, the orange curves show an additional period compared to the blue ones. This discrepancy occurs in the initial duration, when the strong graphene signal overlaps with the ReSe\({}_{2}\) signal. In this duration, the blue curves show a dip and slow recovery up to 23 min, while the orange curves show a peak-dip-peak shape. The consistent oscillating behaviors of the blue and orange curves in Fig. 3(e) provide accurate information about the film thickness, such that the resulting film thickness of 3UC is consistent with the STEM data presented in Fig. 2(l). Accordingly, we added the vertical dashed lines in Figs. 3(b-e) and 4(a).

For comparison with the PCA results, we analyzed an identical RHEED dataset by K-means clustering. The K-means clustering method categorizes the sequence of the RHEED images into several clusters based on similarity without the need for complex mathematical transformations, and thus, determines the transition moments between distinct phases during the thin-film growth. It is worth noting that the PCA and K-means algorithms are closely related, as established previously [19, 31, 32]. We employed different numbers of clusters (\(K=2\)-\(6\)). Figures 4(a,b) show the time-dependent clustering for each \(K\) value and the corresponding centroids. As \(K\) is increased from 2 to 6, more divided sections appear for the initial growth time (i.e., \(<35\) min), implying that the major pattern change mostly occurs at the initial duration. The boundaries between the clusters show good alignment with the vertical dashed lines for \(K=5\) and 6 (Fig. 4(a)). As shown in Fig. 4(c), the cost function (i.e., the accumulated differences between the clusters and the original data) is used to determine the valid number of clusters, and the appropriate \(K\) is near the saturation point of the curve [21]. The cost function is saturated when \(K\!>\!4\). To investigate the evolution of the centroids in detail, we plotted the difference between the adjacent centroids (\(\varDelta C_{i(i+1)}\)) as shown in Fig. 4(d) by subtracting a former centroid (\(C_{i}\)) from a latter one (\(C_{i+1}\)) for \(K=6\).
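A minimal sketch of this clustering analysis is given below, again on the flattened video `X` from the PCA example; the cluster counts, the time-ordering heuristic, and the array shapes are illustrative assumptions rather than the exact procedure used in this work.

```python
import numpy as np
from sklearn.cluster import KMeans

inertias, labels, centroids = {}, {}, {}
for K in range(2, 7):
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
    inertias[K] = km.inertia_            # cost function vs. K (cf. Fig. 4(c))
    labels[K] = km.labels_               # cluster index of every frame
    centroids[K] = km.cluster_centers_   # one mean pattern per cluster

# Order the K = 6 centroids by the mean time of their frames, then subtract
# neighbours: positive/negative pixels indicate emerging/disappearing features.
t = np.arange(len(X))
order = np.argsort([t[labels[6] == c].mean() for c in range(6)])
C = centroids[6][order].reshape(6, 120, 160)
dC = np.diff(C, axis=0)                  # dC[i] = C_{i+2} - C_{i+1}, i.e. ΔC_12 ... ΔC_56
```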
In these difference maps, the positive (red) and negative (blue) regions represent the emerging and disappearing features in the RHEED patterns, respectively. A distinct feature of \(\Delta C_{12}\) is the emerging ReSe\({}_{2}\) streak signal (indicated with red arrows), which corresponds to the emerging ReSe\({}_{2}\) signal in the PCA. The graphene signal (black arrows) shows a gradually disappearing trend up to \(\Delta C_{45}\) (23 min). This boundary corresponds to the third dashed line, at which the graphene signal nearly disappears as score 1 becomes negative in the PCA (Fig. 3(b)). After the graphene signal disappears, \(\Delta C_{56}\) mostly shows intensity variations in the ReSe\({}_{2}\) streaks, implying a homoepitaxial growth regime. Therefore, the results obtained by K-means clustering with \(K>4\) are consistent with those of the PCA.

Figure 4: K-means clustering analysis of the RHEED video of the 3UC ReSe\({}_{2}\). (a) Clusters for different numbers of clusters (\(K=2\)-\(6\)) and (b) their corresponding centroids. (c) Cost function as a function of \(K\). (d) Differences between adjacent centroids for \(K=6\).

## 3 Discussion

The stable oscillations of the RHEED diffraction streaks in Fig. 3(e) indicate that the ReSe\({}_{2}\) film growth nearly follows the layer-by-layer growth mode. The two oscillation peaks are observed until the RHEED signal of graphene disappears, as shown by an orange arrow (\(\sim\)23 min) in Fig. 3(e). This observation corresponds to the moment of the sign reversal of score 1 (Fig. 3(b)). The two oscillation peaks imply that a small fraction of bilayer ReSe\({}_{2}\) domains forms before the graphene surface is completely covered under the given growth conditions. Such a phenomenon was observed in our previous scanning tunneling microscopy study, in which we observed the partial formation of bilayer ReSe\({}_{2}\) islands while the graphene surface was incompletely covered [23]. These observations suggest some deviation from the layer-by-layer growth mode toward the Stranski-Krastanov growth mode. Moreover, the first oscillation period of the (2,0) streak is approximately half of the following oscillation periods (black arrows in Fig. 3(e)). The shortening of the first oscillation indicates that either the growth of the first layer was accelerated or that of the following layers was decelerated. Abrupt changes in the RHEED oscillation also occur for SrRuO\({}_{3}\) growth on SrTiO\({}_{3}\) (001) surfaces [33], where the first oscillation period is two times longer than the following ones. Koster et al. concluded that RuO\({}_{x}\) re-evaporates, and the growth rate of the first SrRuO\({}_{3}\) layer drops to nearly half of its initial value [34]. This decrease in the growth rate implies that the growth dynamics of the first layer largely depend on the surface energy of the substrate in the case of complex oxides and chalcogenides [33, 34, 35, 36]. In our case, the film growth process can be divided into two situations: a ReSe\({}_{2}\) layer on the graphene surface (heteroepitaxy) and a ReSe\({}_{2}\) layer on a ReSe\({}_{2}\) surface (homoepitaxy). Assuming that the deposition rate is kept constant during the film growth, the different surface energies of graphene and ReSe\({}_{2}\) are expected to lead to faster growth of the first ReSe\({}_{2}\) layer when it is grown on graphene.
The shortening of the first RHEED oscillation is consistently observed when the growth of ReSe\({}_{2}\) films is repeated (see Supplementary Information, Fig. S4). Since different substrate surface states have also been shown to alter the growth modes of TMDC thin films [35, 36], further analysis of the initial RHEED data for different substrates and thin-film materials would be beneficial to investigate the correlation between surface energy and growth mode [37]. We applied comprehensive ML analyses, namely PCA and K-means clustering, to understand the growth mechanism of a ReSe\({}_{2}\) thin film on graphene, which is a model van der Waals heteroepitaxial system. In the case of oxide film growth, previous ML analyses of RHEED have reported on the growth modes and the interpretation of the PCs, because the RHEED patterns maintain similar shapes and sizes from substrate to film. However, TMDC thin films have been successfully grown on substrates with largely mismatched lattices, such as graphene and sapphire, because of the weak van der Waals bonding at the interfaces [1]. The low-dimensional character of the TMDCs also gives rise to unique layer-dependent quantum phenomena. Thus, precise determination of the film thickness during the initial growth is crucial. The dominant substrate signal in the RHEED pattern hinders the analysis of the initial growth mechanism of a thin film. Our ML analysis focused on separating the PCs corresponding to the substrate and the film by utilizing PCA on the basis of statistical significance. This ML analysis is beneficial for analyzing the growth dynamics and layer thicknesses of ultrathin van der Waals films, and the corresponding results are consistent with those of the K-means clustering method. Our results suggest that ML-assisted RHEED analysis could be developed into an automatic validation method for investigating ultrathin films of 2D materials, and it is complementary to other surface analysis tools [7, 38, 39]. Furthermore, this method can be applied to analyze the thin-film growth of other 2D materials, such as 2D chalcogenides, 2D MXenes, 2D oxides, and hexagonal boron nitride [40, 41, 42, 43].

## 4 Conclusions

In summary, we conducted an ML-assisted _in situ_ RHEED analysis to understand the epitaxial growth of ReSe\({}_{2}\) thin films with different thicknesses on graphene. Using PCA, we separated the _in situ_ RHEED dataset into newly defined PCs and their scores based on their statistical significance. We observed the growth dynamics of the ReSe\({}_{2}\) thin film by subtracting the graphene substrate contribution. We confirmed that the time evolution of the K-means clusters for \(K>4\) was consistent with the PCA result. Therefore, these results indicate the feasibility of applying ML techniques to analyze the epitaxial growth of 2D layered materials and suggest that such techniques can accelerate the development of automated film growth processes.

## 5 Experimental section

### 5.1 Film growth

ReSe\({}_{2}\) thin films were grown on an epitaxial graphene bilayer, which was fabricated on a (0001) 6H-SiC substrate, using a home-built MBE system in ultrahigh vacuum (base pressure: 1.0 \(\times\) 10\({}^{-9}\) Torr). For the growth of the bilayer graphene on the SiC substrate, the substrate was outgassed at 650 \({}^{\circ}\)C for a few hours and subsequently annealed at 1300 \({}^{\circ}\)C for 6 min; the resulting graphene surface was verified by the RHEED image shown in Fig. 2(e).
High-purity Re (99.8 %) and Se (99.999 %) were used for the ReSe\({}_{2}\) thin-film growth. We synthesized the ReSe\({}_{2}\) thin film by co-evaporating Re and Se using an electron-beam evaporator and a Knudsen cell, respectively, while monitoring the film surface by _in situ_ RHEED, as shown in Fig. 1(a). The substrate was maintained at 300 \({}^{\circ}\)C during the deposition [44].

### 5.2 Characterization

The Raman spectroscopic measurements were performed using a 532 nm excitation laser source with a fixed power (30 mW) and a fixed acquisition time (60 s) at room temperature. Scattered light from the samples was analyzed using a single-grating monochromator with a focal length of 50 cm and was detected by a liquid-nitrogen-cooled charge-coupled-device detector (LabRAM HR Evolution, HORIBA). AFM was performed to investigate the surface morphology under atmospheric conditions after the deposition (XE-100, Park Systems), and the samples were scanned in the non-contact mode using an NSC18/Pt tip. The XPS measurements were carried out to examine the stoichiometry of the films (NEXSA, Thermo Fisher Scientific). For the STEM analysis, cross-sectional specimens were fabricated using the focused ion beam technique (Helios Nanolab 450, Thermo Fisher Scientific). The HAADF STEM images were obtained using a double Cs-corrected FEI Titan G2 60-300 microscope with an accelerating voltage of 200 kV.

### 5.3 ML method

All the ML analyses were carried out using Python version 3.8.12 (the code and model are publicly available [45]). For the PCA, we first converted the RHEED video into a 2D array \(X\), an \(M\times N\) matrix, where \(M\) and \(N\) represent the number of frames and the number of pixels per frame, respectively. The RHEED video was captured at a rate of one frame per second, so that each row of the matrix represents the RHEED image at a particular time, as shown in Fig. 1(b) (blue-shaded boxes). In the PCA, the dataset was decomposed into a linear superposition of orthogonal basis vectors with corresponding component weights. The basis matrix (red-shaded boxes) is an \(N\times N\) matrix whose column vectors are the individual PCs. In the newly defined matrix (green-shaded boxes in the PC space), the components were determined by the product of \(X\) and the basis matrix. Its row vectors represent the RHEED images arranged in descending order of eigenvalues ('Score'), whereas its column vectors represent the time-dependent behavior of each score. We proposed a reconstruction process, the mPCA, in which the frames were rebuilt from the PC space while eliminating selected PCs, i.e., "Original RHEED" \(-\sum_{i=1}^{n}\mathrm{PC}_{i}\) (each weighted by its score), in order to eliminate the substrate contributions. Then, we extracted the time dependence of the selected diffraction peak intensities, as shown in the bottom left of Fig. 1(b). Next, we carried out the K-means clustering analysis using 20 PCs to reduce the dimension of the original dataset for faster computing. We split the RHEED image series into \(K\) clusters, in which each image was assigned to the cluster with the nearest mean ("centroid"). First, we randomly selected \(K\) images from the whole dataset as the initial centroids. Then, we assigned each RHEED image to its nearest centroid, and each old centroid was replaced by the mean of the images constituting the corresponding cluster. These steps were iterated until the centroids stopped changing.
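As an illustration of the reconstruction step, the mPCA subtraction described above can be sketched as follows. This is illustrative code in the spirit of Fig. 1(b), not the published implementation [45]; it reuses `X`, `pca`, `H`, and `W` from the PCA sketch above, and the ROI pixel windows are hypothetical placeholders.

```python
# Sketch of the mPCA reconstruction: rebuild the frame matrix while removing
# selected (substrate-related) PCs, then track streak intensities over time.
import numpy as np

def mpca(X, pca, drop):
    """Return "Original RHEED" minus the score-weighted PCs listed in `drop`."""
    S = pca.transform(X)                            # per-frame scores
    return X - S[:, drop] @ pca.components_[drop, :]

video_d = mpca(X, pca, drop=[0]).reshape(-1, H, W)     # subtract PC1 (Fig. 3(d))
video_e = mpca(X, pca, drop=[0, 1]).reshape(-1, H, W)  # subtract PC1+PC2 (Fig. 3(e))

def streak_intensity(video, rows, cols):
    """Mean intensity inside a rectangular ROI around a diffraction streak."""
    return video[:, rows[0]:rows[1], cols[0]:cols[1]].mean(axis=(1, 2))

# Hypothetical ROIs for the (0,0) and (2,0) streaks (blue/orange curves).
I_00 = streak_intensity(video_e, (100, 120), (240, 260))
I_20 = streak_intensity(video_e, (100, 120), (300, 320))
```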
## Abbreviations

RHEED: Reflection high-energy electron diffraction, MBE: Molecular beam epitaxy, ML: Machine learning, PC: Principal component, PCA: Principal component analysis, mPCA: Modified principal component analysis, 2D: Two-dimensional, TMDC: Transition metal dichalcogenide, 1T': Distorted 1T, UC: Unit cell, AFM: Atomic force microscopy, HAADF: High-angle annular dark field, STEM: Scanning transmission electron microscopy, XPS: X-ray photoemission spectroscopy.

## Declarations

### Acknowledgments

The authors thank Yea-Lee Lee and Seunghun Jang for constructive discussions.

#### Availability of data and materials

The machine learning code is available on GitHub at [https://github.com/youngjunching/RHEED_2D_ML](https://github.com/youngjunching/RHEED_2D_ML), Ref. [45], and the RHEED videos are available in our web-based platform, _2D Materials_, at [http://2dmat.chemdx.org/data_uos](http://2dmat.chemdx.org/data_uos), Ref. [46], with permission from the corresponding authors upon reasonable request.

#### Competing interests

The authors declare no competing financial interest.

#### Authors' information

\({}^{1}\)Department of Physics, University of Seoul, Seoul, 02504, Republic of Korea. \({}^{2}\)Department of Smart Cities, University of Seoul, Seoul, 02504, Republic of Korea. \({}^{3}\)Department of Energy Science, Sungkyunkwan University (SKKU), Suwon, 16419, Republic of Korea. \({}^{4}\)Graduate School of Semiconductor Materials and Devices Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea. \({}^{5}\)Advanced Light Source (ALS), E. O. Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA.

#### Funding

This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea Government (NRF-2020R1A2C200373211, 2021R1A6A3A14040322, and 2022R1A2C2011109) and by the [Innovative Talent Education Program for Smart City] of MOLIT. This research has been performed as a cooperation project of "Basic project (referring to projects performed with the budget directly contributed by the government to achieve the purposes of establishment of government-funded research institutes)" and supported by the Korea Research Institute of Chemical Technology (KRICT) (S12151-10-06).

## Authors' contributions

HJK and YJC designed the experiments and performed the analyses. HJK, YGK, TGR, and BKC performed the sample preparation and characterizations. M-HJ, Y-MK and HYJ carried out the scanning transmission electron microscopy analysis. HJK and MC carried out the machine-learning analysis. HJK, MC and YJC analyzed the results and prepared the manuscript. All authors read and approved the final manuscript.
2302.04207
Free duals and a new universal property for stable equivariant homotopy theory
We study the left adjoint $\mathbb{D}$ to the forgetful functor from the $\infty$-category of symmetric monoidal $\infty$-categories with duals and finite colimits to the $\infty$-category of symmetric monoidal $\infty$-categories with finite colimits, and related free constructions. The main result is that $\mathbb{D} \mathcal C$ always splits as the product of 3 factors, each characterized by a certain universal property. As an application, we show that, for any compact Lie group $G$, the $\infty$-category of genuine $G$-spectra is obtained from the $\infty$-category of Bredon (\emph{a.k.a} ``naive") $G$-spectra by freely adjoining duals for compact objects, while respecting colimits.
Tim Campion
2023-02-08T17:26:17Z
http://arxiv.org/abs/2302.04207v1
# Free duals and a new universal property for stable equivariant homotopy theory

###### Abstract.

We study the left adjoint \(\mathbb{D}\) to the forgetful functor from the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals and finite colimits to the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with finite colimits, and related free constructions. The main result is that \(\mathbb{D}\mathcal{C}\) always splits as the product of \(3\) factors, each characterized by a certain universal property. As an application, we show that, for any compact Lie group \(G\), the \(\infty\)-category of genuine \(G\)-spectra is obtained from the \(\infty\)-category of Bredon (_a.k.a_ "naive") \(G\)-spectra by freely adjoining duals for compact objects, while respecting colimits.

###### Contents

* 0. Introduction
* 0.1. Motivation: equivariant homotopy theory
* 0.2. Twisted-trivial braiding
* 0.3. Freely adjoining duals: finding the appropriate setting
* 0.4. Variations
* 0.5. Overview
* 0.6. Acknowledgements
* 1. 1-Categorical Preliminaries
* 1.1. Monoidal categories
* 1.2. Duals
* 1.2.1. Generalizing duals
* 1.3. Pointed categories
* 1.4. Semiadditive categories
* 1.5. (Co)group Objects
* 2. Structures in 1-Category Theory
* 2.1. Objects with (twisted) trivial braiding and the symmetry condition
* 2.2. Idempotent objects and smashing localizations
* 2.4. Complementary smashing localizations
* 3. \(\infty\)-Categorical Preliminaries
* 3.1. \(\infty\)-Categories with certain colimits
* 3.2. \(\infty\)-Categories with duals and certain colimits
* 4. Splittings from duals and colimits
* 5. Application to Equivariant Homotopy Theory
* 5.1. Equivariant homotopy theory
* 5.2. The proof of Theorem 5.1.4

## 0. Introduction

This thesis1 is a meditation on the implications that dualizable objects have for the global structure of a symmetric monoidal \(\infty\)-category \((\mathcal{C},\wedge,S)\). We expose a number of such implications.2 My ultimate goal is to apply such observations to understand what happens when one _freely_ adjoins duals to a symmetric monoidal \(\infty\)-category.

Footnote 1: This is a pre-publication version of my 2021 PhD thesis at the University of Notre Dame, written under the supervision of Chris Schommer-Pries.

Footnote 2: See Definition 1.2.6 for a recollection of the notion of a _dual_ for an object \(X\) in a symmetric monoidal category \(\mathcal{C}\), familiar to homotopy theorists from the study of Spanier-Whitehead duality. This is the notion often called _strong dualizability_ in the homotopy theory literature. We follow the convention of the category-theoretic literature in simply saying _dualizable_, because we do not use any other notion of "dualizability" in this thesis.

### Motivation: equivariant homotopy theory

Motivating questions in this direction were asked by Charles Rezk ([17]). Let \(G\) be a compact Lie group (for example, \(G\) might be finite). Roughly, Rezk asked:

1. What happens when duals are freely adjoined to the \(\infty\)-category \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) of finite pointed \(G\)-spaces?
2. In particular, is the resulting category related to the category \(G\mathsf{Spt}\) of genuine \(G\)-spectra?

As a preliminary note, Rezk is asking about certain _homotopical universal properties_, which immediately suggests that the appropriate framework to answer his question is the language of _\(\infty\)-categories_ (rather than e.g. the language of _model categories_). For this reason, although much of the "action" in this thesis takes place in the relevant homotopy categories, the main results are formulated and proven in the language of \(\infty\)-categories. But perhaps Rezk's motivating questions themselves require some motivation.
To this end, let us recall a couple of facts about equivariant homotopy theory with respect to a compact Lie group \(G\). Unstably, there are, to a first approximation, two forms of \(G\)-equivariant homotopy theory. On the one hand there is _Borel \(G\)-equivariant homotopy theory_, which at the point-set level studies spaces with \(G\)-action up to _underlying_ weak homotopy equivalences. That is, in the Borel setting, a \(G\)-equivariant map is considered to be an "equivalence" if it is a weak equivalence of underlying spaces. On the other hand there is _\(G\)-equivariant homotopy theory_, where a \(G\)-equivariant map is only considered to be a weak equivalence if it restricts to a weak homotopy equivalence on \(H\)-fixed-point sets for every closed subgroup \(H\subseteq G\). Many natural equivariant questions originate in the Borel setting, but the \(G\)-equivariant setting provides more refined information and has better formal properties. Rezk's question concerns \(G\)-equivariant homotopy theory. Having agreed to study equivariant homotopy theory rather than Borel equivariant homotopy theory, there is a further distinction to make when passing to the _stable_ arena of equivariant homotopy theory. On the one hand we have _Bredon_ equivariant stable homotopy theory,3 where the "\(G\)-spectra" involved admit deloopings with respect to \(S^{1}\). On the other hand there is _genuine_ equivariant stable homotopy theory, where the "\(G\)-spectra" involved admit deloopings not just with respect to \(S^{1}\), but with respect to every representation sphere \(S^{V}\).4 Similar remarks are applicable here: Bredon equivariant homotopy theory is in some sense easier to define, but genuine equivariant homotopy theory has better formal properties, and it is essential to use this somewhat more refined setting for the important applications of equivariant homotopy theory. Footnote 4: Here \(V\) is a real \(G\)-representation, and the representation sphere \(S^{V}\) is its one-point compactification. Among the pleasant formal properties enjoyed by genuine equivariant homotopy theory is a good theory of _Spanier-Whitehead duality_. That is, just as in nonequivariant homotopy theory, it is the case that every finite genuine \(G\)-spectrum \(X\) admits a dual \(X^{\vee}\) (cf. Lemma 5.1.3), called its "Spanier-Whitehead dual". Precisely, the homotopy category of finite \(G\)-spectra, which is symmetric monoidal under smash product, has duals for all objects. By contrast, finite Bredon spectra are not dualizable in general. Although not directly relevant to the present thesis, it is worth noting that this pattern is repeated elsewhere in homotopy theory. For example, if \(S\) is a scheme, then the unstable motivic category \(H(S)\) may be stabilized with respect to \(S^{1}\) to obtain a stable category \(SH^{S^{1}}(S)\), but better formal properties are obtained when one additionally stabilizes with respect to \(\mathbb{P}^{1}\) and not just \(S^{1}\), to obtain the stable motivic category \(SH(S)\). Analogously to before, one of the pleasant formal properties of \(SH(S)\) is that every smooth projective scheme over \(S\) is dualizable in \(SH(S)\), but not in \(SH^{S^{1}}(S)\). The general pattern appears to be something like the following. Given a "geometric" \(\infty\)-category \(\mathcal{C}\), if one wishes to obtain a satisfactory "stable version" of \(\mathcal{C}\), it is not enough to look for deloopings with respect to \(S^{1}\). 
Instead, one should also look for any other "sphere-like" objects in \(\mathcal{C}\), and ask for deloopings with respect to those objects as well. The story so far suggests that determining which objects are relevantly "sphere-like", and deserve to be delooped, is some kind of art form, which encodes in an essential way certain information about the geometric nature of the "sphere-like" objects of \(\mathcal{C}\). An attempt to formalize this general procedure which recurs both equivariantly and motivically might start by axiomatizing suitable properties for an object \(C\in\mathcal{C}\) to be regarded as a "sphere", and then investigate what happens when such an object is inverted under the monoidal product. This paradigm has been studied in some generality by Robalo [10], following Voevodsky [29], who attributes it to Jeff Smith's unpublished study of Adams' construction of the smash product of spectra [1]. There, an object \(T\) is "sphere-like" if it is _symmetric_, in the sense that the cyclic permutation \(T^{\wedge 3}\to T^{\wedge 3}\) (which, if the monoidal product \(\wedge\) were cartesian, would be denoted \((x,y,z)\mapsto(z,x,y)\)), should be homotopic to the identity ([10, Definition 2.16]). Robalo / Voevodsky / Smith show that if \(T\) is symmetric, then the category \(\mathcal{C}[T^{-1}]\), by definition obtained from \(\mathcal{C}\) by universally inverting \(T\) under the monoidal product, may be calculated in the familiar way as the sequential colimit \(\mathcal{C}_{T}\) of symmetric monoidal \(\infty\)-categories \(\mathcal{C}_{T}=\varinjlim(\mathcal{C}\xrightarrow{T\wedge(-)}\mathcal{C} \xrightarrow{T\wedge(-)}\cdots)\). The converse holds: if \(\mathcal{C}_{T}\) correctly computes \(\mathcal{C}[T^{-1}]\), then \(T\) is symmetric (cf. [14, Proposition 6]). There is also an infinitary version: if \(\mathcal{C}\) is presentably symmetric monoidal, then let \(\mathcal{C}[T^{-1},L]\) be the presentably symmetric monoidal \(\infty\)-category obtained from \(\mathcal{C}\) by universally inverting \(T\) under \(\wedge\) while respecting colimits. Then, if \(T\) is symmetric, we have the inverse limit formula \(\mathcal{C}[T^{-1},L]=\varprojlim(\mathcal{C}\stackrel{{(-)^{T}}}{{ \leftarrow}}\mathcal{C}\stackrel{{(-)^{T}}}{{\leftarrow}}\cdots)\); conversely, if this formula is correct, then \(T\) is symmetric. These sorts of theorems should be compared to "group-completion" theorems in algebraic \(K\)-theory, such as that of McDuff-Segal [10]. At this point, one might conclude the following: 1. A "sphere-like" object should simply be defined to be a symmetric object. 2. A good theory of "genuine stabilization" should seek to identify such sphere-like objects, and monoidally invert them. 3. We should consider ourselves lucky that there is a particularly simple formula for inverting an object when it is sphere-like. However, these conclusions are a bit question-begging: if we believe that this is the end of the story, then in some sense the ultimate reason for inverting sphere-like objects is simply the fact that we happen to know a convenient formula for doing so. We have not, for instance, identified a universal property associated with the inversion of sphere-like objects. We have not connected the inversion of sphere-like objects to the desideratum of good duality properties. And we have little to say about the significance of the nice formulas for these constructions other than that they are straightforward and familiar. In this thesis, the logic is reversed. 
We propose understanding the phenomenon of "genuine stabilization" not as a process of inverting carefully-chosen objects, but rather as a process of imposing a theory of Spanier-Whitehead duality for rather generically-chosen objects. That is, the use of genuine equivariant stable homotopy theory is often justified by pointing to its good theory of Spanier-Whitehead duality, and we propose to take this justification seriously. We show (Corollary 3.2.2) that there is a well-defined way to _freely adjoin duals_ to the compact objects of a symmetric monoidal compactly-generated \(\infty\)-category. We show, moreover, that when one does this to the \(\infty\)-category \(G\mathsf{Top}_{*}\) of pointed \(G\)-spaces, the resulting category \(\mathbb{D}_{\omega}G\mathsf{Top}_{*}\) is _almost_ the \(\infty\)-category \(G\mathsf{Spt}\) of genuine \(G\)-spectra; in fact \(\mathbb{D}_{\omega}G\mathsf{Top}_{*}\) splits as a product of several subcategories, and one factor (characterized as the unique stable factor) is none other than \(G\mathsf{Spt}\) (Corollary 5.1.5). Thus the answers to Rezk's questions are that one can indeed freely adjoin duals to \(G\)-spaces in an appropriate sense, and that the resulting category is closely related to the category of \(G\)-spectra, but has some additional unstable "cruft" attached to it. This "cruft" is shaken off if we additionally stabilize our category: for instance, when the functor \(\mathbb{D}_{\omega}\) is applied to the symmetric monoidal \(\infty\)-category \(G\mathsf{Top}_{*}[(S^{1})^{-1}]\) of Bredon \(G\)-spectra, the result is precisely the symmetric monoidal \(\infty\)-category \(G\mathsf{Spt}\) of genuine \(G\)-spectra.

### Twisted-trivial braiding

The proof of Corollary 5.1.5 does shed some light on the nature of "sphere-like objects", insofar as the result hinges on the behavior of symmetric objects. In fact, for most of the thesis we work with the slightly stronger (Lemma 2.1.10) condition that an object \(T\) have _twisted-trivial braiding_ (Definition 2.1.2), meaning that the braiding, or "swap" map \(\beta_{T,T}:T\wedge T\to T\wedge T\) is (homotopic to) a map of the form \(\operatorname{id}_{T}\wedge t:T\wedge T\to T\wedge T\), for some \(t:T\to T\).5 For instance (Example 2.1.4), when \(T=S^{n}\) is a sphere in the pointed homotopy category \(\mathsf{hoTop}_{*}\), this condition is satisfied with \(t=(-1)^{n}:S^{n}\to S^{n}\). Similarly, representation spheres \(S^{V}\) in the homotopy category \(G\mathsf{Top}_{*}\) of pointed \(G\)-spaces have twisted-trivial braiding (Example 2.1.6), as does projective space \(\mathbb{P}^{1}_{S}\) in the unstable pointed motivic category \(H(S)\) (Example 2.1.7). The crucial observation is that if \(T\) has twisted-trivial braiding and is forced to become dualizable, then under mild conditions, the resulting category splits as a product of the subcategories of "\(T\)-stable" objects and "\(T\)-torsion" objects (Theorem 3.2.10). In the former factor, \(T\) becomes invertible, while in the latter factor, \(T\) becomes zero. Thus, if we take the property "having twisted-trivial braiding" as a crude, approximate explication of what it means to be "sphere-like", then Theorem 3.2.10 may be stated loosely as follows: _sphere-like objects, when dualizable, are either invertible or nil_.
In the case of equivariant homotopy theory, the representation spheres \(S^{V}\) provide an ample supply of objects of \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) with twisted-trivial braiding, each of which induces a splitting of the \(\infty\)-category \(\mathbb{D}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}\) obtained by freely adjoining duals to \(G\mathsf{Top}_{*}^{\mathrm{fin}}\). In the splitting induced by \(S^{V}\), one factor (the "\(S^{V}\)-stable" factor) has \(S^{V}\) becoming invertible, while the other (the "\(S^{V}\)-torsion" factor) has it becoming trivial. In order to arrive at the main result (Corollary 5.1.5, Corollary 5.1.6), the real input from equivariant homotopy theory consists in showing that, at least if we restrict attention to the stable factor of \(\mathbb{D}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}\), all of the \(S^{V}\)-torsion factors actually vanish, so that the stable factor of \(\mathbb{D}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}\) has all representation spheres \(S^{V}\) invertible. As \(G\mathsf{Spt}\) is standardly defined ([15], [16]) to be obtained from \(G\mathsf{Top}\) by universally inverting the representation spheres \(S^{V}\), we may now conclude that this stable factor of \(\mathbb{D}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}\) is precisely the \(\infty\)-category \(G\mathsf{Spt}^{\mathrm{fin}}\) of finite \(G\)-spectra. In other words, genuine equivariant homotopy theory is obtained from Bredon equivariant homotopy theory by universally adjoining duals for compact objects.

### Freely adjoining duals: finding the appropriate setting

We shall now explain what we mean by "universally adjoining duals", previewing a bit from Section 3.2.1. Recall that by the \(1\)-dimensional cobordism hypothesis [10, 11], there is a free symmetric monoidal \(\infty\)-category on an object, which is given by the symmetric monoidal \(\infty\)-category \(\mathsf{Bord}_{1}^{\mathrm{fr}}\) of oriented \(0\)-manifolds and oriented bordisms between them. Thus if \(X\) is an object of a symmetric monoidal \(\infty\)-category \(\mathcal{C}\), then a dual may be universally adjoined to the object \(X\) by passing to the symmetric monoidal \(\infty\)-category \(\mathcal{C}\amalg_{\mathsf{Fin}^{\mathrm{iso}}}\mathsf{Bord}_{1}^{\mathrm{fr}}\), where \(\mathsf{Fin}^{\mathrm{iso}}\) is the symmetric monoidal groupoid of finite sets and the pushout is taken in the \(\infty\)-category \(\mathsf{SMC}\) of symmetric monoidal \(\infty\)-categories. Moreover, because \(\mathsf{SMC}\) is presentable, and because the full subcategory \(\mathsf{SMD}\subset\mathsf{SMC}\) of symmetric monoidal \(\infty\)-categories with duals for objects is precisely the right orthogonal complement of the canonical symmetric monoidal functor \(\mathsf{Fin}^{\mathrm{iso}}\to\mathsf{Bord}_{1}^{\mathrm{fr}}\), it follows by the adjoint functor theorem that there is also a left adjoint \(\mathbb{D}:\mathsf{SMC}\to\mathsf{SMD}\) to the inclusion functor. However, the functor \(\mathbb{D}\) is a bit too crude for the present purposes. Instead we work with variant categories, such as the inclusion \(\mathsf{SMD}^{\mathrm{rex}}\to\mathsf{SMC}^{\mathrm{rex}}\) of symmetric monoidal \(\infty\)-categories with duals and finite colimits into symmetric monoidal \(\infty\)-categories with finite colimits (preserved by the tensor product in each variable), and the reflection \(\mathbb{D}^{\mathrm{rex}}\) onto this full subcategory.
After all, one is rarely interested in studying _arbitrary_ symmetric monoidal functors out of symmetric monoidal \(\infty\)-categories such as \(G\mathsf{Top}^{\mathrm{fin}}\), which has interesting finite colimits. The category \(G\mathsf{Spt}^{\text{fin}}\), which is supposed to be related to the output of the construction, likewise has interesting finite colimits. Thus it is natural to take account of these colimits on both ends of the construction. Moreover, in order to invoke the full strength of the splitting theorem alluded to before (Theorem 3.2.10), it is necessary to know that the universal functor \(G\mathsf{Top}_{*}^{\text{fin}}\to\mathbb{D}^{\text{rex}}G\mathsf{Top}_{*}^{\text{fin}}\) preserves cogroup objects, which are defined using coproducts. Furthermore, in order to have some reasonable interpretation of the object \(S^{1}\in G\mathsf{Top}_{*}^{\text{fin}}\) and its crucial twisted-trivial braiding, it is desirable for all functors in sight to preserve suspensions. For these reasons, we generally consider the functor \(\mathbb{D}^{\text{rex}}\) and variants respecting other classes of colimits, rather than the plain functor \(\mathbb{D}\).

### Variations

The main result comes in several versions, both finitary and infinitary (see Corollary 5.1.6). For instance, \(G\mathsf{Spt}\) is the free compactly-generated symmetric monoidal \(\infty\)-category on \(G\mathsf{Top}_{*}\) which has duals for compact objects. This results from the finitary version by applying the \(\operatorname{Ind}\) construction to everything. A word on basepoints: we believe that their use is inessential, because any symmetric monoidal \(\infty\)-category with duals for objects and an initial object is in fact pointed (Proposition 1.3.3). So, modulo showing that \(G\mathsf{Top}_{*}^{\text{fin}}\) is the free symmetric monoidal pointed \(\infty\)-category with finite colimits on \(G\mathsf{Top}^{\text{fin}}\), viewed as a symmetric monoidal \(\infty\)-category with finite colimits, it will result that \(\mathbb{D}^{\text{rex}}G\mathsf{Top}^{\text{fin}}=\mathbb{D}^{\text{rex}}G\mathsf{Top}_{*}^{\text{fin}}\), so that it should be possible to drop the basepoints from the above discussion. However, we have not at this point proven that \(G\mathsf{Top}_{*}^{\text{fin}}\) has the required universal property. In fact, modulo similar considerations, it should be possible to derive a universal property for equivariant stable homotopy theory whose starting data is even simpler - just the orbit category \(\mathcal{O}_{G}\) itself. We hope to apply this methodology to also obtain a universal property for the stable motivic category \(SH(S)\) as well. However, there the story is a bit more complicated: not every compact object is dualizable in general (though the smooth projective schemes provide a good supply of dualizable objects). Moreover, in this case, it appears that simply freely adjoining a dual to the object \(\mathbb{P}^{1}\) and stabilizing does not cut down precisely to \(SH(S)\) - there seem to be other stable factors. We hope to study these factors in future work.

### Overview

Section 1 and Section 2, unlike the rest of this work, take place entirely in the 1-categorical setting. Section 1 contains some standard background on (symmetric) (monoidal) 1-categories, none of which is new. The least familiar fact mentioned here may be Houston's theorem, Proposition 1.4.6, which says that any braided monoidal category with finite coproducts and duals is semiadditive.
Section 2 also takes place purely in the 1-categorical setting, but contains the most crucial results about split monoidal localizations, which are 1-categorical in nature. In Definition 2.1.2, we introduce the notion of an _object with twisted-trivial braiding_, which is closely related to the _symmetry_ condition (Definition 2.1.8), used by Voevodsky, [14, Definition 2.16], [15, Definition 9.2], [16], and others to control monoidal localizations. We show (Proposition 2.3.1) that any dualizable object with twisted-trivial braiding gives rise to a smashing-cosmashing localization. We review in Section 2.4 how this often implies that the entire category splits along lines defined by this object. Next, these \(1\)-categorical results are applied in the \(\infty\)-categorical setting. Section 3 develops the \(\infty\)-categorical infrastructure necessary to apply these results, primarily recalling material from [17]. Section 4 explores certain splittings which occur in great generality when duals and colimits interact. Section 5 applies the theory to the example of equivariant homotopy theory, leading up to the main theorem, Corollary 5.1.6.

### Acknowledgements

This document is a pre-publication version of my PhD thesis, completed at the University of Notre Dame between 2016 and 2021. Minor additions and corrections have been made since then. I would like to thank my advisor Chris Schommer-Pries, for helping me to grow and mature as a mathematician, and for supporting me in challenging times. Thanks also to the members of my dissertation committee: Chris Schommer-Pries, Mark Behrens, Stephan Stolz, and Pavel Mnev - each of whom has been generous with their time and support during my time at Notre Dame. On that note, I would like to thank all the wonderful people I have worked with at Notre Dame during my time there, particularly the members of the Geometry and Topology group. I would like to thank my family for supporting me through challenging times. I am grateful for the support of NSF grant DMS-1547292, which supported some of this work, as well as the support of the ARO under MURI Grant W911NF-20-1-0082.

## 1. 1-Categorical Preliminaries

This chapter discusses several elementary structures encountered in 1-categories and monoidal 1-categories. In contrast to the rest of the text, no \(\infty\)-categories appear in this chapter. Most of the material comprises a review of some basic concepts. However, the central idea of this thesis also appears in Section 2.1 and Section 2.3, namely the attention given to objects with _twisted-trivial braiding_ (Definition 2.1.2) in the context of dualizability. The main theorem on these objects is Proposition 2.3.1, which in combination with Proposition 2.4.2 shows that dualizable objects with twisted-trivial braidings often lead to splittings of the entire category. The implications of these observations, when applied to various "sphere-like" examples of such objects (see Section 2.1), are the main topic of this thesis, and will be explored further in Section 4 and Section 5. One perhaps well-known but under-publicized fact is discussed in Section 1.4, where in Proposition 1.4.6 a result of Robin Houston is recalled [10]: any symmetric monoidal category with duals and finite coproducts is semiadditive. Throughout this chapter, we usually work with braided monoidal categories.
This is not because we have particular examples in mind which are braided but not symmetric, but rather because we find the concomitant string diagrams easier to understand in the braided setting than in the symmetric setting.

### Monoidal categories

We assume that the reader is familiar with basic concepts of category theory, as well as monoidal categories, braided monoidal categories, and symmetric monoidal categories. We will often take advantage of the Mac Lane coherence theorem to assume that monoidal categories are _strict_, i.e. that the associators and unitors are identities. We shall also freely use string diagrams to reason in monoidal categories and braided or symmetric monoidal categories. We assume the reader is familiar with their usage. My conventions are that string diagrams are read from top to bottom, and that the braiding isomorphism is depicted as a crossing of strands. As part of my use of string diagrams, we assume the reader is familiar with the use of "cups" and "caps" associated with dual objects (the definition of a dual object is recalled in Definition 1.2.6 below). When considering morphisms involving both an object and its dual in a string diagram, we include an arrow indicating the orientation of the string. If the orientation is downward for an object \(X\), then it is upward for \(X^{\vee}\), etc.

**Notation 1.1.1**.: _There is an equivalence between monoidal categories and strong monoidal functors on the one hand, and bicategories with a unique object on the other hand. If \(\mathcal{C}\) is a monoidal category, denote by \(\mathbb{B}\mathcal{C}\) the corresponding 1-object bicategory._

**Definition 1.1.2**.: Let \(\mathcal{K}\) be a class of small categories. A _monoidal category with \(\mathcal{K}\)-colimits_ is a monoidal category \((\mathcal{C},\wedge,S)\) such that \(\mathcal{C}\) has \(\mathcal{K}\)-colimits and \(\wedge\) preserves them separately in each variable. A braided (resp. symmetric) monoidal category with \(\mathcal{K}\)-colimits is a braided (resp. symmetric) monoidal category whose underlying monoidal category is a monoidal category with \(\mathcal{K}\)-colimits.

**Remark 1.1.3**.: See Section 3.3 for a more systematic account of the interaction of monoidal structure and colimits in the \(\infty\)-categorical setting.

**Definition 1.1.4**.: A _localization_ is an adjunction \(L:\mathcal{C}\rightleftarrows\mathcal{D}:i\) whose counit \(Li\Rightarrow\mathrm{id}_{\mathcal{D}}\) is an isomorphism. A _monoidal localization_ is a localization \(L\dashv i\) where \(L\) is strong monoidal. Braided (resp. symmetric) monoidal localizations are defined in the obvious way. A _smashing localization_ is a braided monoidal localization \(L:\mathcal{C}\rightleftarrows\mathcal{D}:i\) such that the map \(i(S)\wedge X\to iL(X)\) is an isomorphism for all \(X\in\mathcal{C}\) (here \(S\) is the unit of \(\mathcal{D}\) and \(\wedge\) is the tensor in \(\mathcal{C}\)). We will have more to say about smashing localizations in Section 2.2 below.

### Duals

In this section we review the concept of dualizability in monoidal and particularly in braided monoidal categories.

**Definition 1.2.1**.: Let \((\mathcal{C},S,\wedge)\) be a monoidal category. An object \(C\in\mathcal{C}\) is said to be _invertible_ if \(C\) is an equivalence in \(\mathbb{B}\mathcal{C}\). Explicitly, this means that there exists an object \(C^{-1}\in\mathcal{C}\) such that \(C^{-1}\wedge C\cong S\) and \(C\wedge C^{-1}\cong S\). We call \(C^{-1}\) the _inverse_ of \(C\).
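As a simple illustration of Definition 1.2.1 (a standard example, recorded here for convenience; it does not appear in the surrounding text): in the symmetric monoidal category of \(\mathbb{Z}\)-graded vector spaces over a field \(k\), the one-dimensional space \(k(n)\) concentrated in degree \(n\) is invertible, since
\[k(n)\otimes k(-n)\cong k(0)=S.\]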
**Remark 1.2.2**.: If \(\mathcal{C}\) is braided, then the two isomorphisms \(C^{-1}\wedge C\cong S\) and \(C\wedge C^{-1}\cong S\) are equivalent, so only one of them need be checked.

**Remark 1.2.3**.: Let \((\mathcal{C},S,\wedge)\) be a monoidal category. An inverse to \(C\in\mathcal{C}\), if it exists, is unique up to isomorphism, justifying the notation \(C^{-1}\) used in Definition 1.2.1.

**Remark 1.2.4**.: Strong monoidal functors preserve invertible objects and inverses to invertible objects.

**Example 1.2.5**.: The unit object of a monoidal category is inverse to itself.

**Definition 1.2.6**.: Let \((\mathcal{C},S,\wedge)\) be a monoidal category. A _duality datum_ in \(\mathcal{C}\) consists of an adjunction in the associated bicategory \(\mathbb{B}\mathcal{C}\). Explicitly, this means we have a pair of objects \(R,L\) and morphisms \(\eta:S\to R\wedge L\) (the _unit_), \(\varepsilon:L\wedge R\to S\) (the _counit_) satisfying the _triangle equations_: \((\varepsilon\wedge L)(L\wedge\eta)=\mathrm{id}_{L}\) and \(\mathrm{id}_{R}=(R\wedge\varepsilon)(\eta\wedge R)\). We say that \(L\) is the _left dual_ and \(R\) is the _right dual_, or that \(L\) _is left dual to_ \(R\) and \(R\) _is right dual to_ \(L\). If \(\mathcal{C}\) is braided then left and right duals coincide, so we drop the handedness from the terminology and write \(L=R^{\vee}\), \(R=L^{\vee}\).

**Remark 1.2.7**.: Let \(L\) be an object of a monoidal category \(\mathcal{C}\). Then the groupoid of extensions of \(L\) to a duality datum where \(L\) is the left dual is either empty or contractible. Thus admitting a right dual, like admitting a right adjoint, is a property of an object rather than a structure. This fact justifies the notation \(X^{\vee}\) introduced in Definition 1.2.6.

**Remark 1.2.8**.: Strong monoidal functors preserve duals, in the sense that if \((L,R,\eta,\varepsilon)\) is a duality datum and \(F\) is a strong monoidal functor, then \((F(L),F(R),F(\eta),F(\varepsilon))\) is also a duality datum.

**Remark 1.2.9**.: In some of the algebraic topology literature, the term "strongly dualizable" is used for what we have called "dualizable" in Definition 1.2.6. There is some justification for this terminology: if \((\mathcal{C},\wedge,S)\) is monoidal with internal homs, then the dual of \(X\) must, by uniqueness of adjoints, coincide with the internal hom \([X,S]\), so there is some justification for referring to \([X,S]\) as a "dual" which always exists in the presence of internal homs, and reserving "strong dual" for the case when a unit map exists to complete the duality data. However, as we do not consider here the internal hom \([X,S]\) in cases where \(X\) is not (strongly) dualizable, we have opted to use the plain term "dualizable" for the situation of Definition 1.2.6.

**Example 1.2.10**.: Any invertible object is dualizable. This is a special case of the fact that any equivalence in a \(2\)-category may be upgraded to an adjoint equivalence.

### Pointed categories

In this section we review basic concepts about categories with a zero object.

**Definition 1.3.1**.: A category \(\mathcal{C}\) is _pointed_ if it has an object \(0=0_{\mathcal{C}}\in\mathcal{C}\) which is both initial and terminal. A _pointed functor_ between pointed categories is a functor preserving the zero object. A _monoidal pointed category_ is a monoidal category with initial object whose underlying category is pointed. A braided (resp. symmetric) monoidal pointed category is a braided (resp.
symmetric) monoidal category whose underlying monoidal category is a monoidal pointed category.

**Remark 1.3.2**.: Let \(\mathcal{C}\) be a category with an initial object \(\emptyset\) and a terminal object \(1\). Then \(\mathcal{C}\) is pointed if and only if the unique map \(\emptyset\to 1\) is invertible.

A folklore result is that

**Proposition 1.3.3**.: _Let \((\mathcal{C},\wedge,S)\) be a monoidal category with an initial object \(\emptyset\) preserved by \(X\wedge(-)\) for any object \(X\). Then \(\emptyset\) has a right dual \(\emptyset^{\vee}\) iff \(\emptyset=\emptyset^{\vee}\) is a zero object._

Proof.: In one direction, it is straightforward to check that a zero object is canonically self-dual. Conversely, suppose that \(\emptyset\) has a dual \(\emptyset^{\vee}\) and \(\wedge\) preserves the initial object. One of the triangle equations says that \(\emptyset^{\vee}\) is a retract of \(\emptyset\wedge\emptyset^{\vee}\wedge\emptyset=\emptyset\), and hence \(\emptyset^{\vee}\cong\emptyset\). Then maps \(X\to\emptyset\cong\emptyset^{\vee}\) correspond by adjunction to maps \(X\wedge\emptyset=\emptyset\to S\), of which there is exactly one, so that \(\emptyset\) is terminal as well as initial, as desired.

### Semiadditive categories

In this section we review basic concepts about categories enriched in commutative monoids.

**Definition 1.4.1**.: Let \(X,Y,Z\) be objects of a category \(\mathcal{C}\). We say that \(Z\) is the (binary) _biproduct_ or _direct sum_ of \(X\) and \(Y\), and write \(Z=X\oplus Y\), if there is a diagram \(X\rightleftarrows Z\rightleftarrows Y\) which exhibits \(X,Y\) as splittings of commuting idempotents, while simultaneously exhibiting \(Z\) as both the product and coproduct of \(X\) and \(Y\). We say that \(X,Y\) are _complementary_ retracts of \(Z\). If every \(X,Y\) fit into a biproduct diagram, we say that \(\mathcal{C}\) _has binary biproducts_, and if in addition \(\mathcal{C}\) has a zero object, we say that \(\mathcal{C}\) _has finite biproducts_ or \(\mathcal{C}\) _is semiadditive_. If \(\mathcal{C}\) is monoidal, then a biproduct is _monoidal_ if it is preserved by \(X\wedge(-)\) and \((-)\wedge X\) for each \(X\in\mathcal{C}\). A _monoidal semiadditive category_ is a monoidal category with finite coproducts whose underlying category is semiadditive. Braided (resp. symmetric) monoidal semiadditive categories are defined in the obvious way.

**Remark 1.4.2**.: Suppose that \(Z=X\oplus Y\) in a category \(\mathcal{C}\). Let \(e,f\) be the idempotents on \(Z\) split by \(X,Y\) respectively. Then \(ef\) is also an idempotent, and its splitting \(W\) (which exists in the idempotent completion of \(\mathcal{C}\)) is both subterminal (in the sense that any object admits at most one map to \(W\)) and co-subterminal (any object admits at most one map from \(W\)). Thus if \(\mathcal{C}\) has a zero object, then \(W\) is the zero object.

**Remark 1.4.3**.: Suppose that \(X\) is a retract of \(Z\), and that \(Y,Y^{\prime}\) are retracts of \(Z\) which are both complementary to \(X\). Then \(Y,Y^{\prime}\) are equal as subobjects (or quotient objects) of \(Z\). Thus we may speak of _the_ complement of a retract of \(Z\), if it exists.

**Remark 1.4.4**.: Let \(\mathcal{C}\) be a category with finite biproducts. Then \(\mathcal{C}\) is uniquely enriched in commutative monoids. Conversely, if \(\mathcal{C}\) is enriched in commutative monoids and has finite products or finite coproducts, then \(\mathcal{C}\) is semiadditive.
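To make the enrichment of Remark 1.4.4 concrete (a standard construction, spelled out here for convenience): the sum of two maps \(f,g:X\to Y\) is the composite
\[f+g\;:\;X\xrightarrow{\ \Delta\ }X\oplus X\xrightarrow{\ f\oplus g\ }Y\oplus Y\xrightarrow{\ \nabla\ }Y,\]
where \(\Delta\) is the diagonal afforded by \(X\oplus X=X\times X\), \(\nabla\) is the codiagonal afforded by \(Y\oplus Y=Y\amalg Y\), and the zero map \(X\to 0\to Y\) serves as the additive unit.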
**Remark 1.4.5**.: If \(\mathcal{C}\) is pointed and \(X,Y\in\mathcal{C}\), then the biproduct of \(X\) and \(Y\) exists if and only if the coproduct \(X\amalg Y\) and the product \(X\times Y\) exist and the canonical map \(X\amalg Y\to X\times Y\) is an isomorphism.

In analogy to Proposition 1.3.3, we have another well-known result:

**Proposition 1.4.6** ([10], see also [1]).: _Let \((\mathcal{C},\wedge,S)\) be a braided monoidal category with zero object. Suppose that the product \(S\times S\) exists and let \(X,Y\in\mathcal{C}\) be objects such that the coproduct \(X\amalg Y\) exists and moreover \(X\wedge(-)\), \(Y\wedge(-)\), and \((X\amalg Y)\wedge(-)\) all preserve the product \(S\times S\), while \((-)\wedge(S\times S)\) preserves the coproduct \(X\amalg Y\). Then \(X\amalg Y\) is a biproduct \(X\oplus Y\)._

**Remark 1.4.7**.: In particular, this result and its dual imply that if \((\mathcal{C},\wedge,S)\) is a braided monoidal category with finite coproducts preserved by \(\wedge\) in each variable, then the biproduct \(X\oplus Y\) exists in any of the following cases:

1. \(S\times S\) exists and \(X,Y\) have duals.
2. \(S\amalg S\) has a dual.
3. Every object has a dual.

Proof of Proposition 1.4.6.: By the hypotheses, we have

\[(X\times X)\amalg(Y\times Y) =((X\wedge S)\times(X\wedge S))\amalg((Y\wedge S)\times(Y\wedge S))\]
\[=(X\wedge(S\times S))\amalg(Y\wedge(S\times S))\]
\[=(X\amalg Y)\wedge(S\times S)\]
\[=((X\amalg Y)\wedge S)\times((X\amalg Y)\wedge S)\]
\[=(X\amalg Y)\times(X\amalg Y)\]

(To be careful, we have that \((X\amalg Y)\wedge(S\times S)\) exists, and we derive the existence of the other objects from the hypotheses.) The canonical map \(X\amalg Y\to X\times Y\) whose invertibility defines semiadditivity is a retract of this canonical isomorphism, and is thus also an isomorphism.

### (Co)group Objects

In this section we review basic concepts about group objects and cogroup objects.

**Definition 1.5.1**.: A _group object_ in a category \(\mathcal{C}\) with finite products is an object \(X\) with maps \(1\xrightarrow{u}X\xleftarrow{m}X\times X\) (unit and multiplication) and a _negation_ map \((-1):X\to X\) satisfying the group equations. Dually, we define a _cogroup object_ in a category with finite coproducts. An _additive_ category is a semiadditive category where every object is a group object, or (equivalently - see Remark 1.5.2) a cogroup object. A _monoidal additive category_ is a monoidal semiadditive category whose underlying category is additive. Braided (resp. symmetric) monoidal additive categories are defined in the obvious way.

**Remark 1.5.2**.: If \(\mathcal{C}\) is semiadditive, then group objects and cogroup objects are the same thing, and an object can be a (co)group in at most one way. An object \(X\) is a (co)group object if and only if the commutative monoid \(\operatorname{Hom}(X,Y)\) is an abelian group for all \(Y\), if and only if the commutative monoid \(\operatorname{Hom}(Y,X)\) is an abelian group for all \(Y\), if and only if the commutative monoid \(\operatorname{Hom}(X,X)\) is an abelian group. Another way to say this is that the "shear" map \(\begin{pmatrix}\operatorname{id}_{X}&\operatorname{id}_{X}\\ 0&\operatorname{id}_{X}\end{pmatrix}:X\oplus X\to X\oplus X\) is an isomorphism.

**Remark 1.5.3**.: (Co)group objects are preserved by any functor which preserves finite (co)products.
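To spell out the last characterization in Remark 1.5.2 (a standard observation, recorded here for convenience): if \(X\) is a group object with negation \((-1)\), the shear map has inverse
\[\begin{pmatrix}\operatorname{id}_{X}&(-1)\\ 0&\operatorname{id}_{X}\end{pmatrix}:X\oplus X\to X\oplus X,\]
and conversely, if the shear map has an inverse, then its \((1,2)\)-entry \(b\) satisfies \(\operatorname{id}_{X}+b=0\) in \(\operatorname{Hom}(X,X)\), so it provides a negation for \(X\).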
**Remark 1.5.4**.: Suppose that \(X\) is a cogroup object, and moreover that the cofiber \(C\) of the comultiplication \(X\to X\amalg X\) exists. Then \(C\) corepresents invertible elements of \(\operatorname{Hom}(X,-)\).

**Example 1.5.5**.: The sphere \(S^{1}\) is a cogroup object in \(\mathsf{hoTop}_{*}\), the homotopy category of pointed spaces.

**Example 1.5.6**.: Let \(\mathcal{C}\) be a pointed \(\infty\)-category with suspensions and finite coproducts. Then it follows from Example 1.5.5 that the suspension \(\Sigma X\) of any object \(X\) is a cogroup object in \(\mathsf{ho}\mathcal{C}\).

Cogroup structure provides one way to ensure that complements exist:

**Lemma 1.5.7**.: _Let \(\mathcal{C}\) be a pointed category, and suppose that \(X\overset{i}{\underset{p}{\rightleftarrows}}Z\) is a retract (so \(pi=\operatorname{id}_{X}\)). If a complement \(Y\overset{j}{\underset{q}{\rightleftarrows}}Z\) to \(X\) exists, then \(q\) exhibits \(Y\) as the cokernel of \(i\) (and dually, \(j\) exhibits \(Y\) as the kernel of \(p\)). Conversely, if \(\mathcal{C}\) is semiadditive and \(X\) is a group object, then any cokernel of \(i\) (or kernel of \(p\)) is a complement to \(X\). Alternatively, an idempotent splitting for \(1+i(-1)p\) yields a complementary retract of \(Z\)._

Proof.: Only the "Conversely" part is not entirely standard. For this, use the negation on \(X\) to construct the idempotent \(1+i(-1)p\) on \(Z\) (it is indeed idempotent: writing \(i(-1)p=-ip\) in the abelian group \(\operatorname{Hom}(Z,Z)\) and using \(pi=\operatorname{id}_{X}\), we have \((1-ip)^{2}=1-2ip+ip=1-ip\)) and then proceed as usual.

## 2. Structures in 1-Category Theory

### Objects with (twisted) trivial braiding and the symmetry condition

In this section we discuss objects with _twisted trivial braiding_ (Definition 2.1.2) and their relationship to the _symmetry condition_ (Definition 2.1.8). The latter is well known from the study of monoidal localization; the former is stronger in general (Lemma 2.1.10), but sometimes easier to check. Moreover, the two conditions become equivalent as soon as the object in question is dualizable (Lemma 2.1.11). We shall see later, for instance in Section 2.3, that these conditions continue to be relevant when one is adjoining a \(\wedge\)-dual to an object rather than a \(\wedge\)-inverse.

**Proposition 2.1.1**.: _Let \((\mathcal{C},\wedge,S)\) be a braided monoidal category. Let \(T\) be an object of \(\mathcal{C}\), and let \(s,t:T\rightrightarrows T\) be endomorphisms of \(T\). Consider the braiding morphism \(\beta_{T,T}:T\wedge T\to T\wedge T\). The following are equivalent:_

1. \(\beta_{T,T}=s\wedge t\)
2. \(\beta_{T,T}=t\wedge s\)
3. \(\beta_{T,T}=ts\wedge\mathrm{id}_{T}\)
4. \(\beta_{T,T}=st\wedge\mathrm{id}_{T}\)
5. \(\beta_{T,T}=\mathrm{id}_{T}\wedge ts\)
6. \(\beta_{T,T}=\mathrm{id}_{T}\wedge st\)

Proof.: First we show that \((1)\Rightarrow(2)\). Assuming \((1)\), we calculate via a string-diagram manipulation: in the first equation we have isotoped, so that in the second equation we may apply the identity \(\beta_{T,T}=s\wedge t\) from \((1)\). In the third equation we have isotoped. This shows that \(\mathrm{id}_{T\wedge T}=(t\wedge s)\beta_{T,T}^{-1}\). Because the braiding is invertible, this implies that \(t\wedge s=\beta_{T,T}\), i.e. that \((2)\) holds. Of course, \((2)\Rightarrow(1)\) follows by symmetry. This also implies that \((3)\Leftrightarrow(5)\) and \((4)\Leftrightarrow(6)\). We now show that \((1)\Rightarrow(5)\). Assuming (1), we calculate via a string-diagram manipulation: in the first equation we have isotoped, so that in the second equation we may apply the identity \(\beta_{T,T}=s\wedge t\) of (1).
In the third equation we have isotoped so that in the fourth equation we may apply the identity \(\beta_{T,T}=s\wedge t\) again. In the fifth equation we have isotoped. We are left with \(\mathrm{id}_{T}\wedge ts=s\wedge t\), which by hypothesis is the same as \(\beta_{T,T}\), so that (5) holds as claimed. We now show that \((5)\Rightarrow(1)\). Assuming (5), we calculate via a string-diagram manipulation: in the first equation, we have isotoped so that in the second equation we can use the equation \(\beta_{T,T}=\mathrm{id}_{T}\wedge ts\) from (5). In the third equation, we have isotoped so that in the fourth equation we may use the equation \(\beta_{T,T}=\mathrm{id}_{T}\wedge ts\) from (5) again, as well as the equation \(\beta_{T,T}=ts\wedge\mathrm{id}_{T}\) from (3), which we have seen is equivalent to (5). The resulting equation \(\beta_{T,T}(s\wedge t)=\beta_{T,T}^{2}\) implies that \(s\wedge t=\beta_{T,T}\) because \(\beta_{T,T}\) is invertible. Thus \((5)\Rightarrow(1)\) as desired. Of course, \((1)\Leftrightarrow(5)\) implies that \((2)\Leftrightarrow(6)\) by symmetry. Combining with the earlier equivalences, we are done.

**Definition 2.1.2**.: Let \(T\) be an object in a braided monoidal category and let \(t\) be an endomorphism of \(T\). We say that \(T\) has _\(t\)-twisted trivial braiding_ if the braiding morphism \(\beta_{T,T}:T\wedge T\to T\wedge T\) is equal to \(\mathrm{id}_{T}\wedge t\) (or equivalently, by Proposition 2.1.1, to \(t\wedge\mathrm{id}_{T}\)). If \(t\) is the identity, we say that \(T\) has a _trivial braiding_. We say that \(T\) has a _twisted-trivial braiding_ to mean that it has a \(t\)-twisted trivial braiding for some \(t\).

**Remark 2.1.3**.: Twisted trivial braidings are preserved by braided monoidal functors: if \(X\in\mathcal{C}\) has twisted-trivial braiding and \(F:\mathcal{C}\to\mathcal{D}\) is a braided monoidal functor, then \(F(X)\) also has twisted-trivial braiding.

Objects with twisted-trivial braiding show up in a variety of contexts as "pre-\(\wedge\)-invertible" objects.

**Example 2.1.4**.: The sphere \(S^{1}\) has \((-1)\)-twisted trivial braiding as an object of \(\mathsf{hoTop}_{*}\), pointed at \(1=e^{0}\). Here \((-1)\) is the automorphism of \(S^{1}\) sending \(e^{i\theta}\) to \(e^{-i\theta}\). This can be seen because the matrix \(B=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) is homotopic through invertible matrices to the matrix \(1\oplus(-1)=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\), for instance by taking the homotopy \(\begin{pmatrix}\sin t&\cos t\\ \cos t&-\sin t\end{pmatrix}\), which has constant determinant \(-\sin^{2}t-\cos^{2}t=-1\) and runs from \(B\) at \(t=0\) to \(1\oplus(-1)\) at \(t=\pi/2\).

**Example 2.1.5**.: Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal pointed \(\infty\)-category (\(n\geq 2\)) with suspensions (the suspension of \(X\) is denoted \(\Sigma X\)) over which \(\wedge\) distributes. Then \(\Sigma S\) has \((-1)_{\mathcal{C}}\)-twisted trivial braiding, where \((-1)_{\mathcal{C}}:\Sigma S\to\Sigma S\) is induced by the automorphism \((-1)\) of \(S^{1}\) from Example 2.1.4.

**Example 2.1.6**.: Let \(G\) be a compact Lie group, and \(V\) a representation of dimension \(n\). Let \(S^{V}\) be the corresponding representation sphere, i.e. the one-point compactification of \(V\). Then \(S^{V}\) is naturally an object of the homotopy category \(\mathsf{ho}G\mathsf{Top}_{*}\) of based \(G\)-spaces, with basepoint either at \(0\) or \(\infty\). Either basepoint is fixed by the \(G\)-equivariant automorphism \((-1)^{n}:S^{V}\to S^{V}\) induced by \(-\mathrm{id}_{V}\).
The braiding on \(S^{V}\) is \((-1)^{n}\)-twisted trivial. This can be seen by tensoring the homotopy of invertible matrices in Example 2.1.4 with \(V\), to get an equivariant homotopy through automorphisms of \(V\oplus V\), inducing an equivariant homotopy on the spheres \(S^{V}\wedge S^{V}\). Example 2.1.4 is the case where \(G\) is the trivial group. This example leads to the main result of this thesis, Corollary 5.1.5.

**Example 2.1.7**.: Let \(S\) be a scheme. Then \(\mathbb{P}^{1}_{S}\), the Tate sphere over \(S\), admits a basepoint at either \(0\) or \(\infty\); either way it is preserved by the automorphism \((-1)\) induced by the \((-1)\) map on \(\mathbb{A}^{1}_{S}\), the trivial bundle of rank \(1\). The braiding on \(\mathbb{P}^{1}_{S}\) is \((-1)\)-twisted trivial in the homotopy category \(\mathsf{hoSm}_{S,*}\) of pointed smooth schemes over \(S\) localized at \(\mathbb{A}^{1}_{S}\). To see this, we use the Thom space structure \(\mathbb{P}^{1}_{S}=\operatorname{Th}(\mathbb{A}^{1}_{S})\), so that \(\mathbb{P}^{1}_{S}\wedge_{S}\mathbb{P}^{1}_{S}=\operatorname{Th}(\mathbb{A}^{1}_{S})\wedge_{S}\operatorname{Th}(\mathbb{A}^{1}_{S})=\operatorname{Th}(\mathbb{A}^{1}_{S}\times_{S}\mathbb{A}^{1}_{S})=\operatorname{Th}(\mathbb{A}^{2}_{S})\). Hence an \(\mathbb{A}^{1}\)-homotopy through linear automorphisms of \(\mathbb{A}^{2}_{S}\) gives rise to an \(\mathbb{A}^{1}\)-homotopy between the induced self-maps of \(\mathbb{P}^{1}_{S}\wedge_{S}\mathbb{P}^{1}_{S}\), and so it suffices to show that the matrices \(B\) and \(1\oplus(-1)\) from Example 2.1.4 are \(\mathbb{A}^{1}\)-homotopic in \(\operatorname{GL}_{2}\). It is not clear that this can be done through a single \(\mathbb{A}^{1}\)-homotopy, but it may be done in two steps, as the linear homotopies \(\begin{pmatrix}t&1-t\\ 1+t&-t\end{pmatrix}\) and \(\begin{pmatrix}1&0\\ 2t&-1\end{pmatrix}\) (note that both have constant determinant \(-1\)) exhibit \(B\) and \(1\oplus(-1)\) respectively as \(\mathbb{A}^{1}\)-homotopic to \(\begin{pmatrix}1&0\\ 2&-1\end{pmatrix}\). We plan to study this example further in future work.

In the literature, a slightly weaker condition is often used instead:

**Definition 2.1.8** (Smith, [10]).: Let \(T\) be an object in a braided monoidal category. We say that \(T\) is _symmetric_ if the map \((\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})(\beta_{T,T}\wedge\mathrm{id}_{T})\) is equal to \(\mathrm{id}_{T^{\wedge 3}}\).

**Remark 2.1.9**.: We do not know whether Definition 2.1.8 has appeared in the literature in the generality of a braided monoidal category (rather than just a symmetric monoidal category). The choice of handedness for the crossings is made to ensure that Lemma 2.1.10 below is true. In a symmetric monoidal category, if \(T\) is symmetric then for any \(n\in\mathbb{N}\), the homomorphism \(\Sigma_{n}\to\operatorname{Aut}(T^{\wedge n})\) factors through the sign homomorphism \(\operatorname{sgn}:\Sigma_{n}\to C_{2}\). Similarly, in the braided setting, if \(T\) is symmetric then for every \(n\in\mathbb{N}\), the canonical map \(B_{n}\to\operatorname{Aut}(T^{\wedge n})\) factors through the abelianization homomorphism \(B_{n}\to\mathbb{Z}\) (where \(B_{n}\) is the braid group on \(n\) strands). This is easily seen from the usual generators-and-relations description of \(B_{n}\), where the generator \(b_{i}\) is the positive crossing of the \(i\)th strand over the \((i+1)\)st strand, since the symmetry condition forces \(b_{i}b_{i+1}^{-1}\equiv 1\).
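To spell the last sentence out (this unwinding is ours): under the canonical map \(B_{n}\to\operatorname{Aut}(T^{\wedge n})\), the generator \(b_{i}\) acts by the braiding of the \(i\)th and \((i+1)\)st smash factors, so for \(n=3\) the symmetry condition reads

\[(\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})(\beta_{T,T}\wedge\mathrm{id}_{T})\;=\;b_{2}^{-1}b_{1}\;\longmapsto\;\mathrm{id}_{T^{\wedge 3}},\]

i.e. \(b_{1}\) and \(b_{2}\) have the same image in \(\operatorname{Aut}(T^{\wedge 3})\). Since \(b_{i+1}\) is conjugate to \(b_{i}\) in \(B_{n}\), all the generators then have a common image, so the map factors through \(B_{n}^{\mathrm{ab}}\cong\mathbb{Z}\).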
**Lemma 2.1.10**.: _Let \(T\) be an object in a braided monoidal category with twisted-trivial braiding. Then \(T\) is symmetric._

Proof.: Let \(t\) be the twist, so that \(\beta_{T,T}=\mathrm{id}_{T}\wedge t=t\wedge\mathrm{id}_{T}\) (using Proposition 2.1.1). We then have

\[(\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})(\beta_{T,T}\wedge\mathrm{id}_{T})=(\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})\big((\mathrm{id}_{T}\wedge t)\wedge\mathrm{id}_{T}\big)=(\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})\big(\mathrm{id}_{T}\wedge(t\wedge\mathrm{id}_{T})\big)=(\mathrm{id}_{T}\wedge\beta_{T,T}^{-1})(\mathrm{id}_{T}\wedge\beta_{T,T})=\mathrm{id}_{T^{\wedge 3}},\]

as required. 

The converse holds if \(T\) is dualizable - a condition we are interested in forcing anyway.

**Lemma 2.1.11**.: _Let \(T\) be an object in a braided monoidal category. Assume that \(T\) is symmetric and dualizable. Then \(T\) has \(t\)-twisted trivial braiding, where \(t=\mathrm{id}_{T}\wedge(\varepsilon\beta_{T,T^{\vee}}^{-1}\eta)\) is multiplication by the Euler characteristic of \(T\)._

Proof.: First, we introduce some cups, caps and crossings. Now we introduce a few more crossings, then apply the symmetry condition, and finally simplify. 

### Idempotent objects and smashing localizations

In this section we review some basic concepts of smashing and co-smashing localizations. Much of the material of this section may be found in the preprint [1], which was previously available on Drinfeld's website.

**Definition 2.2.1** ([1]).: Let \((\mathcal{C},S,\wedge)\) be a braided monoidal category.

* A _closed idempotent_ in \(\mathcal{C}\) is an object \(E\) equipped with a morphism \(r:S\to E\) such that the induced morphism \(r\wedge\mathrm{id}_{E}:S\wedge E\to E\wedge E\) is an isomorphism (equivalently, \(\mathrm{id}_{E}\wedge r:E\wedge S\to E\wedge E\) is an isomorphism).
* Dually, an _open idempotent_ is a closed idempotent in \(\mathcal{C}^{\mathrm{op}}\), i.e. an object \(E\) equipped with a map \(i:E\to S\) such that \(i\wedge\mathrm{id}_{E}\) is an isomorphism.
* A _clopen idempotent_ is an object \(E\) with maps \(r:S\to E\), \(i:E\to S\) satisfying the equations
\[ri=\mathrm{id}_{E}\quad(\text{``splitting''})\qquad\text{and}\qquad ir\wedge\mathrm{id}_{E}=\mathrm{id}_{E\wedge E}\quad(\text{``stability''}).\]
* If \(r:S\to E\) (resp. \(i:E\to S\)) is a map and \(X\in\mathcal{C}\), say that \(X\) is _\(r\)-stable_ (resp. _\(i\)-stable_) if \(r\wedge\mathrm{id}_{X}:X\to E\wedge X\) (resp. \(i\wedge\mathrm{id}_{X}:E\wedge X\to X\)) is an isomorphism. We may say _\(E\)-stable_ if \(r\) (resp. \(i\)) is understood. We write \(\mathcal{C}_{E}\subseteq\mathcal{C}\) for the full subcategory of \(E\)-stable objects.
* If \(E\) is an object and \(\mathcal{C}\) is pointed, say that an object \(X\) is _\(E\)-torsion_ if \(E\wedge X=0\).

**Remark 2.2.2**.: Any braided monoidal functor \(F:\mathcal{C}\to\mathcal{D}\) preserves open, closed, and clopen idempotents, along with stability, in the sense that if \(X\) is \(E\)-stable, then \(F(X)\) is \(F(E)\)-stable. If moreover \(\mathcal{C}\) and \(\mathcal{D}\) are pointed and \(F\) preserves the zero object, then if \(X\) is \(E\)-torsion, \(F(X)\) is \(F(E)\)-torsion.
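For intuition, here is a familiar \(1\)-categorical instance (our illustration, not taken from [1]): let \(R\) be a commutative ring with an idempotent element \(e=e^{2}\), and consider \((\mathsf{Mod}_{R},\otimes_{R},R)\), so \(\wedge=\otimes_{R}\). Then \(E=eR\), with \(r:R\to eR\), \(x\mapsto ex\), and \(i:eR\hookrightarrow R\) the inclusion, is a clopen idempotent: \(ri=\mathrm{id}_{eR}\) because \(e\) acts as the identity on \(eR\), and the stability equation holds because

\[ir\otimes\mathrm{id}_{E}\;:\;R\otimes_{R}eR\cong eR\longrightarrow R\otimes_{R}eR\cong eR\]

is again multiplication by \(e\), i.e. the identity. A module \(M\) is \(E\)-stable exactly when \((1-e)M=0\), and \(E\)-torsion exactly when \(eM=0\).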
**Proposition 2.2.3**.: _Let \((\mathcal{C},\wedge,S)\) be a braided monoidal category and \((E,r:S\to E)\) a closed idempotent. Then the functor \(E\wedge(-):\mathcal{C}\to\mathcal{C}\) is a localization, with essential image \(\mathcal{C}_{E}\). The unit at \(X\in\mathcal{C}\) is given by \(r\wedge\mathrm{id}_{X}:X\to E\wedge X\), and the equation \(r\wedge\mathrm{id}_{E}=\mathrm{id}_{E}\wedge r\) holds._

_Moreover, \(\mathcal{C}_{E}\) is braided monoidal under \(\wedge\) with unit \(E\), and the localization \(\mathcal{C}\to\mathcal{C}_{E}\) is braided monoidal._

Proof.: As a first step, we check that \(E\wedge(-)\) is faithful when restricted to its image. Let \(f,g:E\wedge X\to E\wedge Y\), and suppose that \(\mathrm{id}_{E}\wedge f=\mathrm{id}_{E}\wedge g\). To this equation postcompose \((r\wedge\mathrm{id}_{E})^{-1}\wedge\mathrm{id}_{Y}\) and precompose \(r\wedge\mathrm{id}_{E}\wedge\mathrm{id}_{X}\); slide the \(r\) down and cancel the inverses to obtain \(f=g\).

The associator and braiding for \(\mathcal{C}_{E}\) are as in \(\mathcal{C}\). For the unit, note that if \(X\in\mathcal{C}_{E}\), then \(r\wedge\mathrm{id}_{X}:X\to E\wedge X\) is invertible; the left unit is defined to be the inverse of this morphism, and similarly the right unit. Only the unit equation must be checked, and it is immediate upon precomposition with \(r\). To make the localization functor monoidal, use copies of \(r\) for the constraints. It is straightforward to check that this makes \(E\wedge(-)\) into a monoidal functor; the fact that it is braided follows from the equation \(r\wedge\mathrm{id}_{E}=\mathrm{id}_{E}\wedge r\). 

### Twisted trivial braiding, duality, and idempotents

In this section, we show that any dualizable object with twisted-trivial braiding gives rise to a smashing-cosmashing localization (Proposition 2.3.1). As we shall review in Section 2.4, this often implies that the entire category splits along lines defined by this object.

**Proposition 2.3.1**.: _Let \((\mathcal{C},\wedge,S)\) be a braided monoidal category and \(E\in\mathcal{C}\) an object._

1. _If \(E\) is a closed or open idempotent, then \(E\) has trivial braiding._
2. _The following are equivalent: (a) \(E\) admits the structure of a clopen idempotent; (b) \(E\) admits the structure of a closed idempotent and the structure of an open idempotent; (c) \(E\) is dualizable and admits the structure of a closed idempotent. Moreover, in this case \(E\) is self-dual. For any closed structure on \(E\), there is at most one clopen structure compatible with it._
3. _If \(T\in\mathcal{C}\) has twisted-trivial braiding and is dualizable, then \(T^{\vee}\wedge T\) is a clopen idempotent (and in particular has trivial braiding by (1))._
4. _If \(E\) is a closed idempotent, then an object \(X\) is \(E\)-stable iff \(X\) is of the form \(X=E\wedge Y\). If \(T\in\mathcal{C}\) has twisted-trivial braiding, then \(X\) is \(T^{\vee}\wedge T\)-stable iff \(X\) is of the form \(X=T\wedge Z\)._

Proof.: (1): Let \(r:S\to E\) be a closed idempotent. We want to show that \(\beta_{E,E}=\mathrm{id}_{E\wedge E}\). Composing with the isomorphism \(r\wedge\mathrm{id}_{E}\), it suffices to show that \(r\wedge\mathrm{id}_{E}=\mathrm{id}_{E}\wedge r\). This is part of Proposition 2.2.3. The open case is dual.

(2): \((a)\Rightarrow(b)\): (This implication is superfluous in light of the others we will prove, but we point it out because it is the most straightforward.) If \((E,r,i)\) is a clopen idempotent, then the splitting and stability equations yield explicit inverses for \(r\wedge\mathrm{id}_{E}\) and \(i\wedge\mathrm{id}_{E}\), so that \((E,r)\) is a closed idempotent and \((E,i)\) is an open idempotent.
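Explicitly (this unwinding is ours; unit isomorphisms are suppressed, as elsewhere in this section):

\[(r\wedge\mathrm{id}_{E})(i\wedge\mathrm{id}_{E})=ri\wedge\mathrm{id}_{E}=\mathrm{id}_{E\wedge E},\qquad(i\wedge\mathrm{id}_{E})(r\wedge\mathrm{id}_{E})=ir\wedge\mathrm{id}_{E},\]

where the first composite is the identity by the splitting equation, and the second is the identity (of \(S\wedge E\cong E\)) by the stability equation. Thus \(r\wedge\mathrm{id}_{E}\) and \(i\wedge\mathrm{id}_{E}\) are mutually inverse, giving both the closed and the open structure at once.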
\((b)\Rightarrow(a)\): Let \((E,r)\) be a closed idempotent and \((E,i)\) be an open idempotent. Set \(i^{\prime}=i(\mathrm{id}_{E}\wedge r)^{-1}(\mathrm{id}_{E}\wedge i)^{-1}\). Then we show that \((E,r,i^{\prime})\) is a clopen idempotent, as follows. First we verify the stability equation \(i^{\prime}r\wedge\mathrm{id}_{E}=\mathrm{id}_{E\wedge E}\).
The diagrammatic verification unwinds the definition of \(i^{\prime}\); in the course of it, we in fact prove that the splitting equation follows from the stability equation plus the triviality of the braiding.

\((a)\Rightarrow(c)\): As remarked, \(E\) is self-dual, with unit \(\eta=r\wedge r\) and counit \(\varepsilon=i\wedge i\). We verify one triangle equation: Here in the first equation we have expanded the definitions of \(\eta\) and \(\varepsilon\). In the second equation, we have isotoped, preparing for the third equation where we apply the stability equation. In the fourth equation we have isotoped again, preparing for the fifth equation where we apply the splitting equation. We are left with \(\mathrm{id}_{E}\) as desired. The proof of the other triangle identity is similar.

\((c)\Rightarrow(b)\): Let \(\eta:S\to E\wedge E^{\vee}\) and \(\varepsilon:E^{\vee}\wedge E\to S\) be the unit and counit of the duality between \(E\) and \(E^{\vee}\). Define \(i:E\to S\) to be \((\mathrm{id}_{E}\wedge\varepsilon\beta_{E,E^{\vee}})((r\wedge\mathrm{id}_{E})^{-1}\wedge\mathrm{id}_{E^{\vee}})(\mathrm{id}_{E}\wedge\eta)\). We claim that \((E,i)\) is an open idempotent, i.e. that \(\mathrm{id}_{E}\wedge i\) is an isomorphism. Precomposing with the isomorphism \(\mathrm{id}_{E}\wedge r\), it will suffice to show that \(\mathrm{id}_{E}\wedge ir\) is an isomorphism: Here in the first equation we have expanded the definition of \(i\).
In the second equation we have isotoped, setting up the third equation where we cancel inverse morphisms. We continue: Here in the first equation, we have used the fact that \(E\) has trivial braiding (because it is a closed idempotent, applying (1)). In the second equation, we have isotoped, leaving us with \(\mathrm{id}_{E}\), which is an isomorphism as desired.

This leaves the last two statements of (2). We have already shown that \(E\) is self-dual. For the last statement, suppose that \((E,r,i)\) and \((E,r,i^{\prime})\) are both clopen. Then we calculate: Here in the first equation we have used the splitting equation \(ri^{\prime}=\mathrm{id}_{E}\). In the second equation, we have isotoped, so that in the third equation we may use the stability equation \(ir\wedge\mathrm{id}_{E}=\mathrm{id}_{E\wedge E}\). The equation \(i=i^{\prime}\) results.

(3): Let \(\eta:S\to T^{\vee}\wedge T\) be the unit and \(\varepsilon:T\wedge T^{\vee}\to S\) the counit of the duality between \(T\) and \(T^{\vee}\); let \(t:T\to T\) be the twist. We define \(r:S\to T^{\vee}\wedge T\) to be \((\mathrm{id}_{T^{\vee}}\wedge t)\eta\) and \(i:T^{\vee}\wedge T\to S\) to be \(\varepsilon\beta_{T^{\vee},T}\). We first verify the splitting equation \(ri=\mathrm{id}_{T^{\vee}\wedge T}\).

### Complementary idempotents

**Definition 2.4.1**.: Let \((\mathcal{C},\wedge,S)\) be a braided monoidal category with finite biproducts preserved by \(\wedge\) in each variable. If \(S=X\oplus Y\), where \(X\) and \(Y\) are clopen idempotents, we say that \(X,Y\) are _complementary_ clopen idempotents.

**Proposition 2.4.2**.: _Let \((\mathcal{C},\wedge,S)\) be a braided monoidal semiadditive category, and let \(E\) be a clopen idempotent. Then \(E\) is a retract of \(S\). If a complement \(S/E\) to \(E\) exists, then \(S/E\) is also a clopen idempotent, and thus \(E,S/E\) are complementary clopen idempotents. In this case, for all objects \(X,Y\), the natural map \(\mathcal{C}(X,Y)\to\mathcal{C}(E\wedge X,E\wedge Y)\times\mathcal{C}(S/E\wedge X,S/E\wedge Y)\) is a bijection, and thus \(\mathcal{C}\) splits as a product of its full subcategories of \(E\)-stable and \(E\)-torsion objects. In particular, an object is \(E\)-stable iff it is \(S/E\)-torsion and vice versa._

Proof.: The first statement is immediate, since the splitting equation \(ri=\mathrm{id}_{E}\) exhibits \(E\) as a retract of \(S\). For the second statement, let \(r,i\) be the maps exhibiting \(E\) as a clopen idempotent, and let \(s,j\) be maps exhibiting \(S/E\) as a complement to \(E\). Then the splitting equation is satisfied by \(s,j\), so we just need to verify the stability equation, which says that \(\mathrm{id}_{S/E}\wedge js=\mathrm{id}_{S/E}\). It suffices to verify that \(js\wedge js=js\). We may write \(js=1-ir\) (since \(\mathrm{id}_{S}=ir+js\) in the biproduct decomposition \(S=E\oplus S/E\)), and then we must verify that \(1\wedge 1-ir\wedge 1-1\wedge ir+ir\wedge ir=1-ir\wedge 1\). We know that \(ir\wedge 1\), \(1\wedge ir\), and \(ir\wedge ir\) are all equal, so this is true.
We have \(\mathcal{C}(X,Y)=\mathcal{C}(E\wedge X\oplus S/E\wedge X,\,E\wedge Y\oplus S/E\wedge Y)=\mathcal{C}(E\wedge X,E\wedge Y)\times\mathcal{C}(E\wedge X,S/E\wedge Y)\times\mathcal{C}(S/E\wedge X,E\wedge Y)\times\mathcal{C}(S/E\wedge X,S/E\wedge Y)\), so we want to show that the cross-terms vanish. Now, since \(E\) is self-dual by Proposition 2.3.1(2), we may calculate that \(\mathcal{C}(E\wedge X,S/E\wedge Y)=\mathcal{C}(X,E\wedge S/E\wedge Y)\), and similarly since \(S/E\) is self-dual we have \(\mathcal{C}(S/E\wedge X,E\wedge Y)=\mathcal{C}(X,S/E\wedge E\wedge Y)\), so it suffices to show that \(E\wedge S/E=0\). Now, \(E\wedge S/E\) splits the idempotent \(ir\wedge js\), which coincides with \(irjs\wedge 1=0\wedge 1=0\); so indeed \(E\wedge S/E=0\).
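Continuing the module-theoretic illustration from Section 2.2 (ours): for a commutative ring \(R\) with idempotent \(e\), the clopen idempotents \(eR\) and \((1-e)R\) are complementary, since \(R\cong eR\oplus(1-e)R\), and Proposition 2.4.2 recovers the classical splitting

\[\mathsf{Mod}_{R}\;\simeq\;\mathsf{Mod}_{eR}\times\mathsf{Mod}_{(1-e)R},\]

with the \(eR\)-stable modules being those killed by \(1-e\), and the \(eR\)-torsion modules those killed by \(e\).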
## 3. \(\infty\)-Categorical Preliminaries

This chapter is technical in nature. It mostly consists of assembling some tools from [10] which will be used in Section 4 and Section 5 to investigate the structure of \(\infty\)-categories with certain duality and cocompleteness properties. Section 3.1 reviews some basics of the theory of operads from [10], which we shall use to formalize statements about symmetric monoidal \(\infty\)-categories in Section 3.3 and the later chapters. Section 3.2 discusses several ways to leverage the \(1\)-categorical results of Sections 1 and 2 in the \(\infty\)-categorical context. Of particular interest is Section 3.2.1, where the \(1\)-dimensional cobordism hypothesis is used to justify defining a dualizable object in a symmetric monoidal \(\infty\)-category in terms of the homotopy category, and to show that there exists a universal way to adjoin duals to a symmetric monoidal \(\infty\)-category in a variety of contexts. Section 3.3 discusses \(\infty\)-categories with certain colimits, with a particular eye toward their monoidal properties, as discussed in [10]. These properties are used in Section 4 to construct universal examples of symmetric monoidal \(\infty\)-categories with various duality, cocompleteness, and exactness properties.

### A quick review of Lurie's theory of operads

In this section, we review some of the basics of Lurie's theory of operads from [10]. This provides a language to speak rigorously about symmetric monoidal \(\infty\)-categories.

**Definition 3.1.1** ([10]).: Let \(\mathsf{Fin}\) denote the \(1\)-category of finite sets, and let \(\mathsf{Fin}_{*}\) denote the \(1\)-category of finite pointed sets. For \(n\in\mathbb{N}\), let \(n_{+}=\{0,1,\ldots,n\}\) denote a set of cardinality \(n+1\) with basepoint \(0\). A morphism \(f:m_{+}\to n_{+}\) of \(\mathsf{Fin}_{*}\) is said to be _active_ if \(f^{-1}(0)=\{0\}\), or equivalently if \(f\) is in the image of the functor \((-)_{+}:\mathsf{Fin}\to\mathsf{Fin}_{*}\) which adds a disjoint basepoint. A morphism \(f:m_{+}\to n_{+}\) of \(\mathsf{Fin}_{*}\) is said to be _inert_ if \(f^{-1}(i)\) is a singleton for \(i\neq 0\), i.e. if \(f\) "only collapses to the basepoint". For \(1\leq i\leq n\), we denote by \(\rho_{i}:n_{+}\to 1_{+}\) the inert morphism which collapses every element to the basepoint except for \(i\).

**Definition 3.1.2** ([10]).: An _\(\infty\)-operad_ \(\mathcal{O}\) comprises an \(\infty\)-category \(\mathcal{O}^{\otimes}\) and a functor \(p:\mathcal{O}^{\otimes}\to\mathsf{Fin}_{*}\). For \(n_{+}\in\mathsf{Fin}_{*}\), we denote by \(\mathcal{O}^{\otimes}_{n_{+}}\) the fiber of \(p\) at \(n_{+}\), and we sometimes abusively write \(\mathcal{O}\) for the _underlying \(\infty\)-category_ \(\mathcal{O}^{\otimes}_{1_{+}}\). This data is required to satisfy the following two conditions:

1. For every inert morphism \(f:m_{+}\to n_{+}\) of \(\mathsf{Fin}_{*}\) and every object \(X\) of \(\mathcal{O}^{\otimes}\) over \(m_{+}\), there exists a cocartesian lift of \(f\) emanating from \(X\). In particular, reindexing along an inert morphism \(f:m_{+}\to n_{+}\) induces a functor \(f_{*}:\mathcal{O}^{\otimes}_{m_{+}}\to\mathcal{O}^{\otimes}_{n_{+}}\).
2. (Segal condition) For each \(m_{+}\in\mathsf{Fin}_{*}\), the functors \((\rho_{1})_{*},\ldots,(\rho_{m})_{*}\) induce an equivalence of categories \(\mathcal{O}^{\otimes}_{m_{+}}\to\prod_{i=1}^{m}\mathcal{O}^{\otimes}_{1_{+}}\).

A _morphism of \(\infty\)-operads_ \(\mathcal{O}\to\mathcal{P}\) is a functor \(\mathcal{O}^{\otimes}\to\mathcal{P}^{\otimes}\) over \(\mathsf{Fin}_{*}\) which preserves cocartesian lifts of inert morphisms.

**Remark 3.1.3**.: Let \(\mathcal{O}\) be an \(\infty\)-operad. Then the Segal condition allows us to view any object \(X\in\mathcal{O}^{\otimes}_{n_{+}}\) as an \(n\)-tuple of objects of \(\mathcal{O}=\mathcal{O}_{1_{+}}\).

**Remark 3.1.4**.: Let \(\mathcal{O}\) be an \(\infty\)-operad such that \(\mathcal{O}^{\otimes}\) is a \(1\)-category. Then \(\mathcal{O}\) corresponds to a _colored symmetric operad_ in the usual sense, also known as a _symmetric multicategory_. The correspondence is as follows. Given a symmetric multicategory \(\mathcal{C}\), define \(\mathcal{O}^{\otimes}\to\mathsf{Fin}_{*}\) to be the category such that \(\mathcal{O}^{\otimes}_{n_{+}}=\mathcal{C}^{n}\), and with hom-sets defined by \(\mathcal{O}^{\otimes}((C_{1},\ldots,C_{m}),(D_{1},\ldots,D_{n})):=\coprod_{f:m_{+}\to n_{+}}\prod_{j=1}^{n}\mathcal{C}((C_{i})_{i\in f^{-1}(j)};D_{j})\); composition and the functor to \(\mathsf{Fin}_{*}\) are defined in the obvious way. \(\mathcal{O}^{\otimes}\) is called the _category of operators_ of \(\mathcal{C}\), and is an \(\infty\)-operad. This construction is part of an equivalence of categories.

**Remark 3.1.5**.: Let \(\mathcal{O}\) be an \(\infty\)-operad. We see from the construction of the category of operators that reindexing along an inert morphism should be viewed as the operation of _forgetting_ certain objects from a tuple of objects (namely those objects whose index is mapped to the basepoint).

**Definition 3.1.6** ([17]).: A _symmetric monoidal \(\infty\)-category_ \(\mathcal{C}\) comprises an \(\infty\)-category \(\mathcal{C}^{\otimes}\) and a functor \(p:\mathcal{C}^{\otimes}\to\mathsf{Fin}_{*}\). For \(n_{+}\in\mathsf{Fin}_{*}\), we denote by \(\mathcal{C}^{\otimes}_{n_{+}}\) the fiber of \(p\) at \(n_{+}\), and we sometimes abusively write \(\mathcal{C}\) for the _underlying \(\infty\)-category_ \(\mathcal{C}^{\otimes}_{1_{+}}\). This data is required to satisfy the following two conditions:

1. \(p\) is a cocartesian fibration. In particular, reindexing along any morphism \(f:m_{+}\to n_{+}\) induces a functor \(f_{*}:\mathcal{C}^{\otimes}_{m_{+}}\to\mathcal{C}^{\otimes}_{n_{+}}\).
2. (Segal condition) For each \(m_{+}\in\mathsf{Fin}_{*}\), the functors \((\rho_{1})_{*},\ldots,(\rho_{m})_{*}\) induce an equivalence of categories \(\mathcal{C}^{\otimes}_{m_{+}}\to\prod_{i=1}^{m}\mathcal{C}^{\otimes}_{1_{+}}\).

A _lax symmetric monoidal functor_ between symmetric monoidal \(\infty\)-categories is a morphism of the underlying operads.
A _strong symmetric monoidal functor_ between symmetric monoidal \(\infty\)-categories is a functor over \(\mathsf{Fin}_{*}\) which preserves cocartesian lifts of all morphisms. We let \(\mathsf{SMC}\) denote the \(\infty\)-category of symmetric monoidal \(\infty\)-categories and strong symmetric monoidal functors.

**Remark 3.1.7**.: Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category. Then in particular, \(\mathcal{C}\) is an \(\infty\)-operad. In addition to reindexing along inert morphisms (corresponding to forgetting objects from a tuple), \(\mathcal{C}^{\otimes}\) permits reindexing along _active_ morphisms. The interpretation of these reindexing functors is as follows. Reindexing along the unique active morphism \(2_{+}\to 1_{+}\) corresponds to the tensor product functor \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\). Reindexing along the unique morphism \(0_{+}\to 1_{+}\) corresponds to the unit object, viewed as a functor \(*=\mathcal{C}^{0}\to\mathcal{C}\). Reindexing along a general surjective active morphism \(f:m_{+}\to n_{+}\) corresponds to tensoring together everything in each fiber of \(f\), and reindexing along a general injective active morphism \(f:m_{+}\to n_{+}\) corresponds to inserting copies of the unit at all tuple indices not in the image of \(f\). Reindexing along a general active morphism \(f:m_{+}\to n_{+}\) is a composite of these two operations.

**Theorem 3.1.8** ([17]).: _There is an equivalence of \(\infty\)-categories between symmetric monoidal \(\infty\)-categories and strong symmetric monoidal functors on the one hand, and algebras in \(\mathsf{Cat}_{\infty}\) for the \(E_{\infty}\) operad on the other._

**Definition 3.1.9**.: Let \(\mathcal{D}\) be an \(\infty\)-operad. We say that \(\mathcal{D}\) is _unital_ if the nullary operation is representable, i.e. if \(\mathcal{D}^{\otimes}(*,-):\mathcal{D}^{\otimes}_{1_{+}}\to\mathsf{Top}\) is representable, where \(*\in\mathcal{D}^{\otimes}_{0_{+}}\) is the unique nullary object. We say that \(\mathcal{D}\) is _exponentially closed_ if, for every \(D\in\mathcal{D}\), the functor \(D\otimes(-):\mathcal{D}\to\mathcal{D}\) has a right adjoint \(F(D,-):\mathcal{D}\to\mathcal{D}\). Then \(F(D,D^{\prime})\) is called the _internal hom_ of \(D,D^{\prime}\in\mathcal{D}\).

Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category and \(\mathcal{D}\subseteq\mathcal{C}\) a full suboperad. We say that \(\mathcal{D}\) is a _\(\otimes\)-ideal_ in \(\mathcal{C}\) if \(C\in\mathcal{C},D\in\mathcal{D}\Rightarrow C\otimes D\in\mathcal{D}\), where \(\otimes\) is taken in \(\mathcal{C}\). More generally, we say that \(\mathcal{D}\) is _closed under \(\otimes\)_ in \(\mathcal{C}\) if \(D,D^{\prime}\in\mathcal{D}\Rightarrow D\otimes D^{\prime}\in\mathcal{D}\). If \(\mathcal{C}\) is exponentially closed with internal hom \(F\), then we say that \(\mathcal{D}\subseteq\mathcal{C}\) is an _exponential ideal_ if \(F(C,D)\in\mathcal{D}\) for all \(C\in\mathcal{C},D\in\mathcal{D}\), and _closed under exponentiation_ if \(F(D,D^{\prime})\in\mathcal{D}\) for all \(D,D^{\prime}\in\mathcal{D}\).

**Lemma 3.1.10**.: _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category which is exponentially closed and let \(\mathcal{D}\subseteq\mathcal{C}\) be a full suboperad. Suppose that \(\mathcal{D}\) is closed under exponentiation and closed under \(\otimes\) in \(\mathcal{C}\), and that \(\mathcal{D}\) is unital._
_Then \(\mathcal{D}\) is a symmetric monoidal \(\infty\)-category, and the inclusion \(\mathcal{D}\to\mathcal{C}\) is a lax monoidal functor preserving \(\otimes\)._

Proof.: Because \(\mathcal{D}\subseteq\mathcal{C}\) is a full suboperad, the functor \(\mathcal{D}^{\otimes}\to\mathsf{Fin}_{*}\) is an isofibration. It will suffice to verify that cocartesian lifts exist for all morphisms of \(\mathsf{Fin}_{*}\). Because cocartesian morphisms are closed under composition, it will suffice to show that \(\mathcal{D}\) has cocartesian lifts over inert morphisms, active surjections, and injections in \(\mathsf{Fin}_{*}\) separately. Cocartesian lifts over inert morphisms exist by virtue of \(\mathcal{D}\) being an operad (and they are computed as in \(\mathcal{C}^{\otimes}\) by virtue of \(\mathcal{D}\subseteq\mathcal{C}\) being a full suboperad inclusion). Because \(\mathcal{D}\subseteq\mathcal{C}\) is closed under \(\otimes\), it follows that cocartesian lifts over active surjective morphisms of \(\mathsf{Fin}_{*}\) also exist, computed as in \(\mathcal{C}^{\otimes}\). Because cocartesian morphisms are closed under composition, it will now suffice to exhibit cocartesian lifts over the injective map \(i_{m+1}:m_{+}\to(m+1)_{+}\) which misses \(m+1\), for each \(m\in\mathbb{N}\).

Let \(I\in\mathcal{C},J\in\mathcal{D}\) be the respective units and let \(F\) be the exponential in \(\mathcal{C}\). We claim that a cocartesian lift of \(D_{1},\dots,D_{m}\) over \(i_{m+1}\) is given by \(D_{1},\dots,D_{m},J\). That is, for \(f:(m+1)_{+}\to n_{+}\), we claim that the map \(\mathcal{D}^{\otimes}_{f}(D_{1},\dots,D_{m},J;\vec{D}^{\prime})\to\mathcal{D}^{\otimes}_{fi_{m+1}}(D_{1},\dots,D_{m};\vec{D}^{\prime})\) is an isomorphism. By the Segal condition, it will suffice to consider the case where \(n=1\) and \(f\) is active. In this case, we have

\[\begin{aligned}\mathcal{D}^{\otimes}_{f}(D_{1},\dots,D_{m},J;D^{\prime})&=\mathcal{C}^{\otimes}_{f}(D_{1},\dots,D_{m},J;D^{\prime})\\ &=\mathcal{C}(D_{1}\otimes\dots\otimes D_{m}\otimes J,D^{\prime})\\ &=\mathcal{C}(J,F(D_{1}\otimes\dots\otimes D_{m},D^{\prime}))\\ &=\mathcal{D}(J,F(D_{1}\otimes\dots\otimes D_{m},D^{\prime}))\\ &=\mathcal{D}^{\otimes}_{0}(*;F(D_{1}\otimes\dots\otimes D_{m},D^{\prime}))\\ &=\mathcal{C}^{\otimes}_{0}(*;F(D_{1}\otimes\dots\otimes D_{m},D^{\prime}))\\ &=\mathcal{C}(I,F(D_{1}\otimes\dots\otimes D_{m},D^{\prime}))\\ &=\mathcal{C}(D_{1}\otimes\dots\otimes D_{m}\otimes I,D^{\prime})\\ &=\mathcal{C}(D_{1}\otimes\dots\otimes D_{m},D^{\prime})\\ &=\mathcal{C}^{\otimes}_{fi_{m+1}}(D_{1},\dots,D_{m};D^{\prime})\\ &=\mathcal{D}^{\otimes}_{fi_{m+1}}(D_{1},\dots,D_{m};D^{\prime})\end{aligned}\]

as desired. Here \(*\) denotes the unique object of \(\mathcal{C}^{\otimes}_{0_{+}}=\mathcal{D}^{\otimes}_{0_{+}}\) and \(0\) is the unique morphism \(0_{+}\to 1_{+}\) in \(\mathsf{Fin}_{*}\).
In this argument, we have used that \(\mathcal{D}^{\otimes}\subseteq\mathcal{C}^{\otimes}\) is a full subcategory, the existence of cocartesian lifts for \(\mathcal{C}^{\otimes}\to\mathsf{Fin}_{*}\), the exponential closedness of \(\mathcal{C}\), the full faithfulness of \(\mathcal{D}\to\mathcal{C}\) together with the fact that \(\mathcal{D}\) is closed under \(\otimes\) and exponentiation, the universal property of \(J\), the full faithfulness of \(\mathcal{D}^{\otimes}\to\mathcal{C}^{\otimes}\), the unitality of \(\mathcal{C}\), the exponential closedness of \(\mathcal{C}\), the unitality of \(\mathcal{C}\), the definition of \(D_{1}\otimes\dots\otimes D_{m}\), and the full faithfulness of \(\mathcal{D}^{\otimes}\to\mathcal{C}^{\otimes}\) again.

**Definition 3.1.11**.: Let \(\mathcal{C}\) be an \(\infty\)-category and \(\mathcal{D}\) a full subcategory. A functor \(L:\mathcal{C}\to\mathcal{D}\) is said to be a _localization functor_ if it is left adjoint to the inclusion \(i:\mathcal{D}\to\mathcal{C}\) and moreover \(Li\) is naturally equivalent to the identity on \(\mathcal{D}\). The composite \(iL:\mathcal{C}\to\mathcal{C}\) may also be referred to as a _localization functor_. An _\(L\)-local morphism_ is a morphism in \(\mathcal{C}\) whose image under \(L\) is an equivalence. An _\(L\)-local functor_ out of \(\mathcal{C}\) is a functor taking \(L\)-local morphisms to equivalences; these span a full subcategory \(\operatorname{Fun}_{L\text{-loc}}(\mathcal{C},\mathcal{E})\) of \(\operatorname{Fun}(\mathcal{C},\mathcal{E})\).

Let \(\mathcal{O}\) be an \(\infty\)-category, \(\mathcal{C}\to\mathcal{O}\) a category over \(\mathcal{O}\), and \(\mathcal{D}\) a full subcategory of \(\mathcal{C}\) over \(\mathcal{O}\). An adjunction \(L:\mathcal{C}\rightleftarrows\mathcal{D}:i\) is said to be a _localization over \(\mathcal{O}\)_ if it is a localization and \(L,i\) are over \(\mathcal{O}\). We do not require that the unit and counit of the adjunction \(L\dashv i\) be vertical. We may call the endofunctor \(iL\) a _localization functor over \(\mathcal{O}\)_, and an _\(L\)-local functor over \(\mathcal{O}\)_ out of \(\mathcal{C}\) is a functor over \(\mathcal{O}\) which is \(L\)-local; these span a full subcategory \(\operatorname{Fun}_{L\text{-loc},\mathcal{O}}(\mathcal{C},\mathcal{E})\) of \(\operatorname{Fun}_{\mathcal{O}}(\mathcal{C},\mathcal{E})\).

Let \(\mathcal{O}\) be an \(\infty\)-category, \(\mathcal{C}\to\mathcal{O}\) a cocartesian fibration, and \(\mathcal{D}\) a full subcategory of \(\mathcal{C}\) over \(\mathcal{O}\) which is cocartesian over \(\mathcal{O}\). We do not require that the inclusion \(\mathcal{D}\to\mathcal{C}\) preserve cocartesian edges. A functor \(L:\mathcal{C}\to\mathcal{D}\) is said to be a _cocartesian localization functor over \(\mathcal{O}\)_ if it is a localization functor, it is over \(\mathcal{O}\), and it preserves cocartesian edges. We may refer to \(iL\) as a _cocartesian localization functor over \(\mathcal{O}\)_. An _\(L\)-local cocartesian functor over \(\mathcal{O}\)_ is an \(L\)-local functor which is cocartesian over \(\mathcal{O}\); these span a full subcategory \(\operatorname{Fun}_{L\text{-loc},\mathcal{O}}^{\mathrm{cocart}}(\mathcal{C},\mathcal{E})\) of \(\operatorname{Fun}_{\mathcal{O}}^{\mathrm{cocart}}(\mathcal{C},\mathcal{E})\).

In the case where \(\mathcal{O}\) is an \(\infty\)-operad, recall that the cocartesian fibrations \(\mathcal{C},\mathcal{D}\) over \(\mathcal{O}\) are called _\(\mathcal{O}\)-monoidal \(\infty\)-categories_.
In this case, we correspondingly refer to a cocartesian localization functor over \(\mathcal{O}\) as an _\(\mathcal{O}\)-monoidal localization_, and we refer to an \(L\)-local cocartesian functor over \(\mathcal{O}\) as an _\(L\)-local \(\mathcal{O}\)-monoidal functor_; these span a full subcategory \(\operatorname{Fun}_{L\text{-loc},\mathcal{O}}^{\mathrm{cocart}}(\mathcal{C},\mathcal{E})\) of \(\operatorname{Fun}_{\mathcal{O}}^{\mathrm{cocart}}(\mathcal{C},\mathcal{E})\).

**Remark 3.1.12**.: The notion of a symmetric monoidal \(\infty\)-category is recovered as an \(\mathcal{O}\)-monoidal \(\infty\)-category when \(\mathcal{O}^{\otimes}=\mathsf{Fin}_{*}\) is the \(E_{\infty}\)-operad.

**Proposition 3.1.13**.:

1. _If \(L:\mathcal{C}\to\mathcal{D}\) is a localization functor, it has the following universal property. For any \(\infty\)-category \(\mathcal{E}\), precomposing \(L\) induces an equivalence between \(\operatorname{Fun}(\mathcal{D},\mathcal{E})\) and \(\operatorname{Fun}_{L\text{-loc}}(\mathcal{C},\mathcal{E})\), with inverse given by restriction to \(\mathcal{D}\)._
2. _If \(L:\mathcal{C}\rightleftarrows\mathcal{D}:i\) is a localization over \(\mathcal{O}\), it has the following universal property. For any \(\infty\)-category \(\mathcal{E}\) over \(\mathcal{O}\), precomposing \(L\) induces an equivalence between \(\operatorname{Fun}_{\mathcal{O}}(\mathcal{D},\mathcal{E})\) and \(\operatorname{Fun}_{L\text{-loc},\mathcal{O}}(\mathcal{C},\mathcal{E})\), with inverse given by restriction to \(\mathcal{D}\)._
3. _If \(L:\mathcal{C}\to\mathcal{D}\) is a cocartesian localization functor over \(\mathcal{O}\), it has the following universal property. For any \(\infty\)-category \(\mathcal{E}\) cocartesian over \(\mathcal{O}\), precomposing \(L\) induces an equivalence between \(\operatorname{Fun}^{\mathrm{cocart}}_{\mathcal{O}}(\mathcal{D},\mathcal{E})\) and \(\operatorname{Fun}^{\mathrm{cocart}}_{L\text{-loc},\mathcal{O}}(\mathcal{C},\mathcal{E})\), with inverse given by restriction to \(\mathcal{D}\)._

Proof.: (1) is [11, Proposition 5.2.7.12]. (2) follows from (1) as soon as we note that since \(L\) and the inclusion \(i:\mathcal{D}\to\mathcal{C}\) are over \(\mathcal{O}\), precomposing them preserves the property of being over \(\mathcal{O}\). (3) follows from (2) as soon as we check the following. Let \(F:\mathcal{D}\to\mathcal{E}\) be a functor over \(\mathcal{O}\). We must show that if \(FL:\mathcal{C}\to\mathcal{E}\) preserves cocartesian morphisms, then \(F\) preserves cocartesian morphisms. (Because we do not assume that \(i\) preserves cocartesian morphisms, this does not immediately follow from the fact that \(F\simeq FLi\).) To see this, it suffices to show that any cocartesian morphism in \(\mathcal{D}\) is isomorphic to the image under \(L\) of a cocartesian morphism in \(\mathcal{C}\). This is straightforward: let \(f\) be a morphism in \(\mathcal{O}\), let \(D\in\mathcal{D}\) lie over the domain of \(f\), and let \(f_{*}^{D}\) be a cocartesian lift of \(f\) out of \(D\). Let \(f_{*}^{iD}\) be a cocartesian lift of \(f\) out of \(iD\in\mathcal{C}\). We have a factorization \(if_{*}^{D}=\overline{if_{*}^{D}}f_{*}^{iD}\) where \(\overline{if_{*}^{D}}\) is vertical. Applying \(L\), we have \(Lif_{*}^{D}=\overline{Lif_{*}^{D}}Lf_{*}^{iD}\), with \(\overline{Lif_{*}^{D}}\) vertical. Now \(Lif_{*}^{D}\) is isomorphic to \(f_{*}^{D}\) and hence cocartesian, while \(Lf_{*}^{iD}\) is cocartesian because \(L\) preserves cocartesian morphisms.
Hence the vertical factorization \(\overline{Lif_{*}^{D}}\) must be an isomorphism. So \(f_{*}^{D}\) is isomorphic to \(Lif_{*}^{D}\), which is in turn isomorphic to \(Lf_{*}^{iD}\), which is the image under \(L\) of a cocartesian morphism, completing the proof.

**Lemma 3.1.14**.: _Let \((\mathcal{C},\wedge,S)\) be a symmetric monoidal \(\infty\)-category. Then \(S\) is naturally (in fact, uniquely) an \(E_{\infty}\)-algebra in \(\mathcal{C}\), and as such is the initial object of the \(\infty\)-category of \(E_{\infty}\)-algebras in \(\mathcal{C}\)._

Proof.: See [17, Corollary 3.2.1.9].

### Structures detected in the homotopy category

In this section, we discuss certain properties of \(\infty\)-categories which may be detected by passing to the homotopy category. Notably, in Section 3.2.1, the \(1\)-dimensional cobordism hypothesis is used to show that duals may be freely adjoined to the objects of a symmetric monoidal \(\infty\)-category in a wide variety of contexts. Moreover, in Section 3.2.3 the category-wide splittings arising from dualizable objects with twisted-trivial braiding (Section 2.3 and Section 2.4) are lifted to the \(\infty\)-categorical setting.

#### 3.2.1. The Cobordism Hypothesis

In this subsection, we use the \(1\)-dimensional cobordism hypothesis to justify defining a dualizable object in a symmetric monoidal \(\infty\)-category to be an object whose image in the homotopy category is dualizable. Moreover, the \(1\)-dimensional cobordism hypothesis is used to verify the hypotheses of the Adjoint Functor Theorem and conclude that in many contexts, duals may be freely adjoined to all objects of a symmetric monoidal \(\infty\)-category (Corollary 3.2.2).

Let \(\mathsf{SMD}\) denote the full sub-\(\infty\)-category of \(\mathsf{SMC}\) spanned by the small symmetric monoidal \(\infty\)-categories which have duals for objects. We would like to show that the inclusion \(\mathsf{SMD}\to\mathsf{SMC}\), and variants where the categories involved have various colimits, have left adjoints. This is not difficult once we avail ourselves of a form of the \(1\)-dimensional cobordism hypothesis.

**Theorem 3.2.1** ([17, 18]).: _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category. Then the homotopy category of \(\mathcal{C}\) has duals if and only if \(\mathcal{C}\) is right orthogonal to the inclusion \(\mathsf{Fin}^{\mathrm{iso}}\to\mathsf{Bord}_{1}^{\mathrm{fr}}\) of the symmetric monoidal category of finite sets and bijections (under disjoint union) into the symmetric monoidal \(\infty\)-category of framed \(1\)-dimensional bordisms. Moreover, an object \(C\in\mathcal{C}\) is dualizable if and only if the symmetric monoidal functor \(\mathsf{Fin}^{\mathrm{iso}}\to\mathcal{C}\) classifying \(C\) extends to a functor \(\mathsf{Bord}_{1}^{\mathrm{fr}}\to\mathcal{C}\), in which case the extension is unique up to a contractible space of choices._
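Informally (our paraphrase of how the theorem is used): a symmetric monoidal functor \(Z:\mathsf{Bord}_{1}^{\mathrm{fr}}\to\mathcal{C}\) amounts to a dualizable object together with a choice of duality data,

\[Z(\mathrm{pt}_{+})=C,\qquad Z(\mathrm{pt}_{-})=C^{\vee},\qquad Z(\text{coevaluation bordism})=\eta:S\to C\wedge C^{\vee},\qquad Z(\text{evaluation bordism})=\varepsilon:C^{\vee}\wedge C\to S,\]

with the two snake bordisms witnessing the triangle identities; right orthogonality to \(\mathsf{Fin}^{\mathrm{iso}}\to\mathsf{Bord}_{1}^{\mathrm{fr}}\) then says precisely that every object admits such data, uniquely up to contractible choice.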
**Corollary 3.2.2**.: _The inclusion \(\mathsf{SMD}\to\mathsf{SMC}\) has a left adjoint \(\mathbb{D}\) exhibiting \(\mathsf{SMD}\) as a localization of \(\mathsf{SMC}\). If \(\mathcal{K}\to\mathsf{SMC}\) is a right-adjoint functor from a presentable \(\infty\)-category, then \(\mathsf{SMD}\times_{\mathsf{SMC}}\mathcal{K}\to\mathcal{K}\) has a left adjoint exhibiting \(\mathsf{SMD}\times_{\mathsf{SMC}}\mathcal{K}\) as a localization of \(\mathcal{K}\)._

Proof.: The second statement follows from the first because limits of presentable \(\infty\)-categories and right adjoint functors are computed as for the underlying \(\infty\)-categories. The first statement holds because, by Theorem 3.2.1, \(\mathsf{SMD}\) is characterized as the full subcategory right orthogonal to a small (in fact a singleton) set of morphisms \(\{\mathsf{Fin}^{\mathrm{iso}}\to\mathsf{Bord}_{1}^{\mathrm{fr}}\}\), and therefore is an accessible localization of the presentable \(\infty\)-category \(\mathsf{SMC}\).

**Corollary 3.2.3**.: _If \(\mathcal{C}\) is a symmetric monoidal \(\infty\)-category and \(S\subseteq\mathcal{C}\) is a set of objects, then there is a universal symmetric monoidal \(\infty\)-category \(\mathbb{D}_{S}\mathcal{C}\) receiving a symmetric monoidal functor from \(\mathcal{C}\) carrying the objects of \(S\) to dualizable objects. If \(\mathcal{K}\) is a cocomplete \(\infty\)-category admitting a functor \(U:\mathcal{K}\to\mathsf{SMC}\), and if \(P\mathsf{Fin}^{\mathrm{iso}},P\mathsf{Bord}_{1}^{\mathrm{fr}}\in\mathcal{K}\) are objects such that \(\mathcal{K}(P\mathsf{Fin}^{\mathrm{iso}},-)\cong\mathsf{SMC}(\mathsf{Fin}^{\mathrm{iso}},U-)\) and \(\mathcal{K}(P\mathsf{Bord}_{1}^{\mathrm{fr}},-)\cong\mathsf{SMC}(\mathsf{Bord}_{1}^{\mathrm{fr}},U-)\), then for any \(K\in\mathcal{K}\) and \(S\subseteq UK\), there is likewise a universal object \(\mathbb{D}_{S}^{\mathcal{K}}K\) of \(\mathcal{K}\) admitting a morphism \(f:K\to\mathbb{D}_{S}^{\mathcal{K}}K\) such that \(Uf:UK\to U(\mathbb{D}_{S}^{\mathcal{K}}K)\) carries each \(C\in S\) to a dualizable object._

Proof.: For the first statement, \(\mathbb{D}_{S}\mathcal{C}\) is computed by a pushout in \(\mathsf{SMC}\): \(\mathbb{D}_{S}\mathcal{C}=\mathcal{C}\cup_{\coprod_{s\in S}\mathsf{Fin}^{\mathrm{iso}}}\coprod_{s\in S}\mathsf{Bord}_{1}^{\mathrm{fr}}\), where \(\coprod_{s\in S}\) denotes a coproduct in \(\mathsf{SMC}\). (Note that pushouts in \(\mathsf{SMC}\) are computed via a bar construction in \(\mathsf{Cat}_{\infty}\).) Likewise, for the second statement we have \(\mathbb{D}_{S}^{\mathcal{K}}K=K\cup_{\coprod_{s\in S}P\mathsf{Fin}^{\mathrm{iso}}}\coprod_{s\in S}P\mathsf{Bord}_{1}^{\mathrm{fr}}\). (Again, if \(\mathcal{K}\) is an \(\infty\)-category of commutative algebra objects in some symmetric monoidal \(\infty\)-category \(\mathcal{L}\), then pushouts in \(\mathcal{K}\) are computed via a bar construction in \(\mathcal{L}\).)

#### 3.2.2. Certain limits and colimits

In this subsection, we discuss certain very basic limit and colimit constructions which may be detected in the homotopy category.

**Lemma 3.2.4**.: _Let \(\mathcal{C}\) be an \(\infty\)-category with \(K\)-colimits, where \(K\) is discrete. Then the homotopy category \(\mathsf{ho}\mathcal{C}\) has \(K\)-colimits, and the canonical functor \(\mathcal{C}\to\mathsf{ho}\mathcal{C}\) preserves them._

Proof.: This follows from the fact that \(\pi_{0}:\mathsf{Top}\to\mathsf{Set}\) commutes with products.
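In more detail (our unwinding): for a set-indexed (i.e. discrete) diagram \((X_{i})_{i}\) in \(\mathcal{C}\) with coproduct \(\coprod_{i}X_{i}\), and any \(Y\in\mathcal{C}\), we have

\[\mathsf{ho}\mathcal{C}\Big(\coprod_{i}X_{i},Y\Big)=\pi_{0}\,\mathcal{C}\Big(\coprod_{i}X_{i},Y\Big)\cong\pi_{0}\prod_{i}\mathcal{C}(X_{i},Y)\cong\prod_{i}\pi_{0}\,\mathcal{C}(X_{i},Y)=\prod_{i}\mathsf{ho}\mathcal{C}(X_{i},Y),\]

so \(\coprod_{i}X_{i}\) retains its universal property in \(\mathsf{ho}\mathcal{C}\); the dual argument handles products.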
**Corollary 3.2.5**.: _Let \(\mathcal{C}\) be an \(\infty\)-category with an initial object and a terminal object. Then \(\mathcal{C}\) is pointed if and only if \(\mathsf{ho}\mathcal{C}\) is pointed._

_Let \(\mathcal{C}\) be a pointed \(\infty\)-category with finite products and finite coproducts. Then \(\mathcal{C}\) is semiadditive if and only if \(\mathsf{ho}\mathcal{C}\) is semiadditive._

_Let \(\mathcal{C}\) be a semiadditive \(\infty\)-category. Then \(\mathcal{C}\) is additive if and only if \(\mathsf{ho}\mathcal{C}\) is additive._

Proof.: For the first statement, \(\mathsf{ho}\mathcal{C}\) has an initial object and a terminal object by Lemma 3.2.4 and its dual, and \(\mathcal{C}\to\mathsf{ho}\mathcal{C}\) preserves initial and terminal objects. The map from the former to the latter is an isomorphism in \(\mathcal{C}\) iff it is an isomorphism in \(\mathsf{ho}\mathcal{C}\), verifying the claim. The proof of the second statement is similar, using the canonical map \(A\amalg B\to A\times B\) coming from the pointed structure. The proof of the third statement is also similar, using the map \(\begin{pmatrix}1&1\\ 0&1\end{pmatrix}:A\oplus A\to A\oplus A\).

**Lemma 3.2.6**.: _Let \(\mathcal{C}\) be a pointed \(\infty\)-category, and let \(i:E\to S\) be a split monomorphism in \(\mathcal{C}\). If \(i\) has a cofiber \(q:S\to S/E\), then \(q\) is the cokernel of \(i\) in the homotopy category \(\mathsf{ho}\mathcal{C}\)._

Proof.: The cofiber sequence \(E\to S\to S/E\) induces a fiber sequence \(\mathcal{C}(E,D)\leftarrow\mathcal{C}(S,D)\leftarrow\mathcal{C}(S/E,D)\) for any \(D\in\mathcal{C}\). Because \(E\to S\) splits, the induced long exact sequence of homotopy groups includes in particular a short exact sequence \(\pi_{0}\mathcal{C}(E,D)\leftarrow\pi_{0}\mathcal{C}(S,D)\leftarrow\pi_{0}\mathcal{C}(S/E,D)\), which is to say that \(E\to S\to S/E\) is a cofiber sequence in \(\mathsf{ho}\mathcal{C}\) as desired.

**Corollary 3.2.7**.: _Let \(\mathcal{C}\) be a pointed \(\infty\)-category, and suppose that \(X\xrightarrow{i}Z\xrightarrow{p}X\) is a retract. If \(\mathcal{C}\) is semiadditive and \(X\) is a group object, then any cofiber of \(i\) (or fiber of \(p\)) is a complement to \(X\)._

Proof.: This follows from Lemma 3.2.6 and Lemma 1.5.7.

**Lemma 3.2.8**.:

1. _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category. Then the unit object is self-dual._
2. _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category. Then any \(\otimes\)-invertible object is dualizable (with dual given by its inverse)._
3. _Let \(\mathcal{C}\) be a symmetric monoidal pointed \(\infty\)-category. Then the zero object is dualizable._
4. _Let \(\mathcal{C}\) be a symmetric monoidal semiadditive \(\infty\)-category. Then the dualizable objects are closed under direct sum._
5. _Let \(\mathcal{C}\) be a symmetric monoidal stable \(\infty\)-category. Then the dualizable objects in \(\mathcal{C}\) are closed under finite limits and finite colimits._
6. _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category with split idempotents. Then the dualizable objects in \(\mathcal{C}\) are closed under retracts._

Proof.: (1) is trivial. (2) is an instance of the fact that any equivalence of categories may be upgraded to an adjoint equivalence. (3) holds because every endomorphism of a zero object is the identity, so the zero maps \(\eta:S\to 0\wedge 0\) and \(\varepsilon:0\wedge 0\to S\) satisfy the triangle identities. For (4), if \((X,X^{\vee},\eta,\varepsilon)\) and \((X^{\prime},{X^{\prime}}^{\vee},\eta^{\prime},\varepsilon^{\prime})\) are duality data, then writing \((X\oplus X^{\prime})\wedge(X^{\vee}\oplus{X^{\prime}}^{\vee})=(X\wedge X^{\vee})\oplus(X\wedge{X^{\prime}}^{\vee})\oplus(X^{\prime}\wedge X^{\vee})\oplus(X^{\prime}\wedge{X^{\prime}}^{\vee})\) and \((X^{\vee}\oplus{X^{\prime}}^{\vee})\wedge(X\oplus X^{\prime})=(X^{\vee}\wedge X)\oplus(X^{\vee}\wedge X^{\prime})\oplus({X^{\prime}}^{\vee}\wedge X)\oplus({X^{\prime}}^{\vee}\wedge X^{\prime})\), we see that \((X\oplus X^{\prime},\,X^{\vee}\oplus{X^{\prime}}^{\vee},\,[\eta,0,0,\eta^{\prime}],\,[\varepsilon,0,0,\varepsilon^{\prime}])\) is a duality datum.
For (5), it suffices in light of (4) to show that dualizable objects are closed under cofibers and desuspension. The latter follows from closure under tensor and the fact that \(\Sigma^{-1}S=(\Sigma S)^{\vee}\) is dualizable. For the former, from a cofiber sequence \(X\to Y\to Z\) with \(X,Y\) dualizable, we obtain a fiber sequence \(X^{\vee}\gets Y^{\vee}\gets F\), and we claim that \(F\) is dual to \(Z\). For any \(A,B\in\mathcal{C}\), we have \(\mathcal{C}(Z\otimes A,B)=\mathcal{C}(Y\otimes A,B)\times_{\mathcal{C}(X\otimes A,B)}\{0\}=\mathcal{C}(A,Y^{\vee}\otimes B)\times_{\mathcal{C}(A,X^{\vee}\otimes B)}\{0\}=\mathcal{C}(A,F\otimes B)\). That is, we have an adjunction \(Z\otimes(-):\mathcal{C}\rightleftarrows\mathcal{C}:F\otimes(-)\), which by the Yoneda lemma implies that \(F\) is right adjoint to \(Z\) in \(\mathbb{B}\mathcal{C}\), i.e. that \(F=Z^{\vee}\) as desired.

For (6), suppose that \(A\) has a dual \(A^{\vee}\), and suppose that \(B\) is a retract of \(A\), splitting the idempotent \(e:A\to A\). Then there is a dual idempotent \(e^{\vee}:A^{\vee}\to A^{\vee}\), and a splitting \(B^{\vee}\) of this idempotent provides a dual for \(B\).

#### 3.2.3. Functorial Splitting

In this subsection we discuss how to use the observations of Section 2 and Section 3.2.2 to obtain splittings of entire \(\infty\)-categories from the existence of dualizable objects with twisted-trivial braiding.

**Proposition 3.2.9**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category where \(n\geq 2\). Suppose that \(E\) is a clopen idempotent. Then_

1. _The full subcategory \(\mathcal{C}_{E}\) of \(E\)-stable objects is canonically \(E_{n}\)-monoidal, with the localization functor \(\mathcal{C}\to\mathcal{C}_{E}\) being an \(E_{n}\)-monoidal localization._
2. _If \(\mathcal{C}\) has (co)limits of shape \(I\) preserved by \(\wedge\) in each variable, then so does \(\mathcal{C}_{E}\). Moreover, the functors \(\mathcal{C}\rightleftarrows\mathcal{C}_{E}\) preserve these (co)limits, and an \(E_{n}\)-monoidal functor \(\mathcal{C}_{E}\to\mathcal{D}\) preserves \(I\)-(co)limits iff \(\mathcal{C}\to\mathcal{C}_{E}\to\mathcal{D}\) does._
3. _If \(E\) has a complement \(S/E\), then the localization functor \(\mathcal{C}\to\mathcal{C}_{E}\times\mathcal{C}_{S/E}\) is an \(E_{n}\)-monoidal equivalence._

Proof.: For (1), by [11, Proposition 2.2.1.9], it suffices to show that \(E\)-local equivalences are stable under tensoring. This is clear: if \(E\wedge f\) and \(E\wedge g\) are equivalences, then \(E\wedge(f\wedge g)\simeq(E\wedge f)\wedge(E\wedge g)\) (via \(E\simeq E\wedge E\)) is an equivalence.

For (2), note that \(\mathcal{C}_{E}\) is closed in \(\mathcal{C}\) under \(I\)-(co)limits. For if we have an \(I\)-diagram in \(\mathcal{C}_{E}\) with (co)limit in \(\mathcal{C}\), we may smash the (co)limit diagram with \(E\); by hypothesis this results in a new (co)limit diagram. The base of the original diagram is naturally isomorphic to the new diagram, so the old and new (co)limits are also isomorphic via the natural map, i.e. the (co)limit lies in \(\mathcal{C}_{E}\). So \(i\) preserves and reflects \(I\)-(co)limits. Moreover \(iL\) (which is given by smashing with \(E\)) also preserves \(I\)-(co)limits. It follows that \(L\) preserves \(I\)-(co)limits. Since \(i\) and \(L\) preserve \(I\)-(co)limits, (2) follows.

For (3), the functor is a product of \(E_{n}\)-monoidal functors and hence also \(E_{n}\)-monoidal.
That it is an equivalence can be checked at the level of homotopy categories, and follows from Proposition 2.4.2.

**Theorem 3.2.10**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category, where \(n\geq 2\)._

1. _If \(E\) is a clopen idempotent, then the \(E_{n}\)-monoidal localization \(\mathcal{C}\to\mathcal{C}_{E}\) is the universal \(E_{n}\)-monoidal functor inverting \(E\) under \(\wedge\)._
2. _If \(T\) has twisted-trivial braiding and a dual \(T^{\vee}\), then the \(E_{n}\)-monoidal localization \(\mathcal{C}\to\mathcal{C}_{T^{\vee}\wedge T}\) is the universal \(E_{n}\)-monoidal functor inverting \(T\) under \(\wedge\)._
3. _Suppose that \(\mathcal{C}\) has finite biproducts preserved by \(\wedge\) in each variable. If \(E\) is a clopen idempotent with complement \(S/E\), then \(\mathcal{C}\to\mathcal{C}_{S/E}\) is the universal \(E_{n}\)-monoidal functor preserving finite biproducts and sending \(E\) to \(0\)._

Proof.: For (1), \(E\) is certainly \(\wedge\)-invertible in \(\mathcal{C}_{E}\) - in fact it is the monoidal unit. By the universal property of \(E_{n}\)-monoidal localization, it suffices to verify that an \(E_{n}\)-monoidal functor \(F:\mathcal{C}\to\mathcal{D}\) inverts \(E\) under \(\wedge\) iff it takes \(E\)-local morphisms to equivalences. Now, \(F\) inverts \(E\) under \(\wedge\) iff it takes the unit and counit exhibiting \(E\) as self-dual to isomorphisms. This unit and counit are none other than the canonical maps \(E\to S\) and \(S\to E\), so they are inverted iff \(F\) takes \(E\to S\) to an isomorphism. But because \(F\) is \(E_{n}\)-monoidal, this happens iff \(F\) takes \(E\wedge X\to X\) to an isomorphism for every \(X\), i.e. takes \(E\)-local morphisms to equivalences as claimed.

For (2), \(T\) is certainly \(\wedge\)-invertible in \(\mathcal{C}_{T^{\vee}\wedge T}\), with inverse \(T^{\vee}\) - that is to say, \(T^{\vee}\wedge T\) is the monoidal unit. A functor \(F:\mathcal{C}\to\mathcal{D}\) inverts \(T\) under \(\wedge\) if and only if it sends the duality data for \(T\) to equivalences, if and only if it sends the clopenness data for \(T^{\vee}\wedge T\) to equivalences, if and only if (by (1)) it factors through \(\mathcal{C}\to\mathcal{C}_{T^{\vee}\wedge T}\).

For (3), it suffices to verify that \(F:\mathcal{C}\to\mathcal{D}\) inverts \(S/E\) iff it takes \(E\) to \(0\). This follows from the fact that in a biproduct \(Z=X\oplus Y\), we have \(X=0\) iff \(Y\to Z\) is an isomorphism.

### \(\infty\)-Categories with certain colimits

In this section, we review some material from [10] on \(\infty\)-categories with certain colimits, with an eye toward monoidal properties. We also begin considering certain subcategories of such categories, starting with the pointed case. This study will continue in Section 4.

**Lemma 3.3.1**.: _Let \(\mathcal{K}\) be a class of small \(\infty\)-categories and let \(\mathcal{C}\) be a small \(\infty\)-category. Let \(\mathcal{R}\) be a set of cocones on diagrams with shape in \(\mathcal{K}\). Then there exists an \(\infty\)-category \(\mathcal{P}^{\mathcal{K}}_{\mathcal{R}}(\mathcal{C})\) with all colimits of shape in \(\mathcal{K}\), and a functor \(\mathcal{C}\to\mathcal{P}^{\mathcal{K}}_{\mathcal{R}}(\mathcal{C})\) carrying the cocones of \(\mathcal{R}\) to colimiting cocones, which is universal with these properties._

_Moreover, \(\mathcal{P}^{\mathcal{K}}_{\mathcal{R}}(\mathcal{C})\) may be constructed as follows._
Let \(\mathsf{Psh}(\mathcal{C})\) be the \(\infty\)-category of presheaves on \(\mathcal{C}\), let \(S\subseteq\operatorname{Mor}\mathsf{Psh}(\mathcal{C})\) be the collection of morphisms \(\varinjlim_{k\in K}R(k)\to R(\infty)\) for each \(R:K^{\triangleright}\to\mathcal{C}\) in \(\mathcal{R}\), and let \(L:\mathsf{Psh}(\mathcal{C})\to S^{-1}\mathsf{Psh}(\mathcal{C})\) be the localization functor. Then \(\mathcal{P}^{\mathcal{K}}_{\mathcal{R}}(\mathcal{C})\) is the closure of \(L(\mathcal{C})\subseteq S^{-1}\mathsf{Psh}(\mathcal{C})\) under \(\mathcal{K}\)-shaped colimits._ Proof.: This is [10, Proposition 5.3.6.2]. The explicit description is given in the proof. **Notation 3.3.2**.: _We will denote \(\mathcal{P}_{\mathcal{R}}(\mathcal{C}):=\mathcal{P}^{\mathcal{K}}_{\mathcal{R }}(\mathcal{C})\) for \(\mathcal{K}\) the class of all small \(\infty\)-categories. We will denote \(\mathcal{P}^{\mathcal{K}}(\mathcal{C}):=\mathcal{P}^{\mathcal{K}}_{\mathcal{R }}(\mathcal{C})\) for \(\mathcal{R}\) the empty class of cocones. Thus \(\mathcal{P}(\mathcal{C})=\mathsf{Psh}(\mathcal{C})\) denotes \(\mathcal{P}^{\mathcal{K}}_{\mathcal{R}}(\mathcal{C})\) when \(\mathcal{K}\) is the class of all small \(\infty\)-categories and \(\mathcal{R}\) is the empty class of cocones._ **Definition 3.3.3**.: Let \(\mathcal{K}\) be a class of small categories. Let \(\mathsf{Cat}_{\mathcal{K}}\) be the symmetric monoidal \(\infty\)-category of \(\infty\)-categories with \(\mathcal{K}\)-indexed colimits, where the symmetric monoidal structure is that of [10, Corollary 4.8.1.4]. **Remark 3.3.4**.: According to [10, Notation 4.8.1.2], a morphism in \(\mathsf{Cat}^{\otimes}_{\mathcal{K}}\) from \(\mathcal{C}_{1},\ldots,\mathcal{C}_{n}\) to \(\mathcal{D}\) is a functor \(\mathcal{C}_{1}\times\cdots\times\mathcal{C}_{n}\to\mathcal{D}\) which preserves \(\mathcal{K}\)-indexed colimits separately in each variable. In particular, when \(n=0\), a nullary morphism \(*\to\mathcal{D}\) is simply an object of \(\mathcal{D}\). According to [10, Proposition 4.8.1.3], the tensor product in \(\mathsf{Cat}_{\mathcal{K}}\) may be constructed by setting \(\mathcal{C}\otimes_{\mathcal{K}}\mathcal{D}=\mathcal{P}^{\mathcal{K}}_{ \mathcal{K}\boxtimes\mathcal{K}}(\mathcal{C}\times\mathcal{D})\), where \(\mathcal{K}\boxtimes\mathcal{K}\) is the collection of diagrams described in [10, Notation 4.8.1.7]. That is, \(\mathcal{K}\boxtimes\mathcal{K}\) comprises those diagrams of the form \((C_{k},D)_{k\in K}\) for \(K\in\mathcal{K}\), with vertex \((\varinjlim_{k\in K}C_{k},D)\), as well as those diagrams of the form \((C,D_{k})_{k\in K}\) for \(K\in\mathcal{K}\) with vertex \((C,\varinjlim_{k\in K}D_{k})\). As recalled in Lemma 3.3.1, the proof of [10, Proposition 5.3.6.2] shows this means that \(\mathcal{C}\otimes_{\mathcal{K}}\mathcal{D}\) is the smallest full subcategory of \((\mathcal{K}\boxtimes\mathcal{K})^{-1}\mathsf{Psh}(\mathcal{C}\times\mathcal{D})\) containing the image \(L(\mathcal{C}\times\mathcal{D})\) (where \(L:\mathsf{Psh}(\mathcal{C}\times\mathcal{D})\to(\mathcal{K}\boxtimes\mathcal{K}) ^{-1}\mathsf{Psh}(\mathcal{C}\times\mathcal{D})\) is the localization functor) and closed under \(\mathcal{K}\)-shaped colimits.
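Concretely, and as a reformulation of the above rather than a new claim: writing \(L(C,D)\) for the image of the representable at \((C,D)\), the relations imposed by \(\mathcal{K}\boxtimes\mathcal{K}\) say exactly that \[L\Big(\varinjlim_{k\in K}C_{k},\,D\Big)\simeq\varinjlim_{k\in K}L(C_{k},D)\qquad\text{and}\qquad L\Big(C,\,\varinjlim_{k\in K}D_{k}\Big)\simeq\varinjlim_{k\in K}L(C,D_{k})\] for all \(K\in\mathcal{K}\), i.e. that the canonical functor \(\mathcal{C}\times\mathcal{D}\to\mathcal{C}\otimes_{\mathcal{K}}\mathcal{D}\) preserves \(\mathcal{K}\)-indexed colimits separately in each variable.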
**Notation 3.3.5**.: _For \(\mathcal{C}_{1},\ldots,\mathcal{C}_{n},\mathcal{D}\in\mathsf{Cat}_{\mathcal{K}}\), we let \(\operatorname{Fun}_{\mathcal{K}}(\mathcal{C}_{1},\ldots,\mathcal{C}_{n}; \mathcal{D})\) be the \(\infty\)-category of functors \(\mathcal{C}_{1}\times\cdots\times\mathcal{C}_{n}\to\mathcal{D}\) preserving \(\mathcal{K}\)-colimits separately in each variable, so that the multi-hom-space \(\mathsf{Cat}_{\mathcal{K}}(\mathcal{C}_{1},\ldots,\mathcal{C}_{n};\mathcal{D})\) is the underlying \(\infty\)-groupoid of \(\operatorname{Fun}_{\mathcal{K}}(\mathcal{C}_{1},\ldots,\mathcal{C}_{n}; \mathcal{D})\)._ **Remark 3.3.6**.: As noted in [10, 4.8.1.6], the construction \(\operatorname{Fun}_{\mathcal{K}}\) of Notation 3.3.5 exhibits the symmetric monoidal \(\infty\)-category \(\mathsf{Cat}_{\mathcal{K}}\) as an exponentially closed symmetric monoidal \(\infty\)-category. **Remark 3.3.7**.: There is an equivalence between \(E_{\infty}\)-algebra objects in \(\mathsf{Cat}_{\mathcal{K}}\) and symmetric monoidal \(\infty\)-categories with \(\mathcal{K}\)-indexed colimits over which \(\otimes\) distributes. More generally, there is an equivalence between \(E_{n}\)-monoidal \(\infty\)-categories with \(\mathcal{K}\)-colimits over which \(\otimes\) distributes, and \(E_{n}\)-algebra objects in \(\mathsf{Cat}_{\mathcal{K}}\). See [12, 4.8.1.9]. **Definition 3.3.8**.: Let \(\mathcal{K}\) be a class of small \(\infty\)-categories containing the empty category, so that any \(\mathcal{K}\)-cocomplete category \(\mathcal{C}\in\mathsf{Cat}_{\mathcal{K}}\) has an initial object. Let \(\mathsf{Cat}_{*,\mathcal{K}}\subset\mathsf{Cat}_{\mathcal{K}}\) denote the full sub-operad of \(\mathsf{Cat}_{\mathcal{K}}\) comprising those \(\mathcal{K}\)-cocomplete categories which are pointed (i.e. where the initial object is a zero object). **Remark 3.3.9**.: As in Remark 3.3.7, there is an equivalence between \(E_{\infty}\)-algebra objects in \(\mathsf{Cat}_{*,\mathcal{K}}\) and pointed symmetric monoidal \(\infty\)-categories with \(\mathcal{K}\)-indexed colimits over which \(\otimes\) distributes. **Lemma 3.3.10**.: _Let \(\mathcal{K}\) be a class of small \(\infty\)-categories containing the empty category. Then the free pointed \(\mathcal{K}\)-cocomplete \(\infty\)-category on an object is given by \(\mathsf{Top}_{*}^{\mathcal{K}}\), the smallest full subcategory of \(\mathsf{Top}_{*}\) which contains \(S^{0}\) and is closed under \(\mathcal{K}\)-colimits. That is, for any \(\mathcal{D}\in\mathsf{Cat}_{*,\mathcal{K}}\), evaluation at \(S^{0}\) determines an equivalence of categories \(\mathsf{Cat}_{*,\mathcal{K}}(\mathsf{Top}_{*}^{\mathcal{K}},\mathcal{D}) \rightarrow\mathcal{D}^{\sim}\)._ **Remark 3.3.11**.: In the following proof, we heavily use [12, Section 4.4.5], which Lurie has rewritten since the book was published. References in the following proof are to theorem numbers from the current version of [12] freely available from Lurie's website at [https://www.math.ias.edu/~lurie/](https://www.math.ias.edu/~lurie/). Proof of Lemma 3.3.10.: Following [12, Definition 4.4.5.2], let \(\mathsf{Idem}^{+}\) denote the walking split idempotent, which is a \(1\)-category, with an initial object \(Y\) and another object \(X\) of which \(Y\) is a retract. We will use [12, Proposition 4.4.5.6], which says that as an \(\infty\)-category, \(\mathsf{Idem}^{+}\) is freely generated by the objects \(Y,X\), the two morphisms \(i:Y\to X\) and \(r:X\to Y\), and the homotopy \(ri\simeq\mathrm{id}_{Y}\).
Let us contemplate the \(\infty\)-category \(\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}(\mathsf{Idem}^{+})\) where \(\mathcal{R}\) comprises the cocone on the empty diagram with vertex at \(Y\). Unraveling the definitions, \(\mathcal{P}(\mathsf{Idem}^{+})\) is the \(\infty\)-category of spaces \(\underline{X}\) equipped with a retract \(\underline{Y}\), and \(\mathcal{P}_{\mathcal{R}}(\mathsf{Idem}^{+})\subset\mathcal{P}(\mathsf{Idem }^{+})\) is the full subcategory where \(\underline{Y}\) is contractible. Thus we have a canonical equivalence \(\mathcal{P}_{\mathcal{R}}(\mathsf{Idem}^{+})\simeq\mathsf{Top}_{*}\). Under this identification, the localization \(\mathcal{P}(\mathsf{Idem}^{+})\rightarrow\mathsf{Top}_{*}\) carries the representable at \(X\) to \(S^{0}\) and \(Y\) to \(0\). So \(\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}(\mathsf{Idem}^{+})\subset\mathsf{Top}_ {*}\) is identified with \(\mathsf{Top}_{*}^{\mathcal{K}}\). By the universal property of \(\mathsf{Top}_{*}^{\mathcal{K}}=\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}( \mathsf{Idem}^{+})\), evaluation at \(S^{0}\) determines an equivalence of categories \(\mathsf{Cat}_{\mathcal{K}}(\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}(\mathsf{Idem }^{+}),\mathcal{D})\rightarrow\mathsf{Cat}_{\{\emptyset\}}(\mathsf{Idem}^{+}, \mathcal{D})\) for any \(\mathcal{D}\in\mathsf{Cat}_{\mathcal{K}}\). That is, \(\mathcal{K}\)-cocontinuous functors \(\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}(\mathsf{Idem}^{+})\rightarrow\mathcal{D}\) are identified with functors \(\mathsf{Idem}^{+}\rightarrow\mathcal{D}\) preserving the initial object. By [12, Proposition 4.4.5.6], this means that evaluation at \(X\) determines a functor \(\mathsf{Cat}_{\{\emptyset\}}(\mathsf{Idem}^{+},\mathcal{D})\rightarrow\mathcal{ D}^{\sim}\) whose fiber over \(D\in\mathcal{D}\) is the space whose points comprise the data of a morphism \(0\to D\), a morphism \(D\to 0\), and a homotopy between the composite \(0\to D\to 0\) and \(\mathrm{id}_{0}\) in \(\mathcal{D}(0,0)\) (here \(0\in\mathcal{D}\) is the initial object). If \(\mathcal{D}\) is pointed, then this space is contractible, so the functor \(\mathsf{Cat}_{\{\emptyset\}}(\mathsf{Idem}^{+},\mathcal{D})\rightarrow\mathcal{D}^{\sim}\) is an equivalence. It follows that the functor \(\mathsf{Cat}_{\mathcal{K}}(\mathcal{P}_{\mathcal{R}}^{\mathcal{K}}(\mathsf{Idem }^{+}),\mathcal{D})\rightarrow\mathcal{D}^{\sim}\) is an equivalence. The proof is completed as soon as we note that \(\mathsf{Top}_{*}^{\mathcal{K}}\) is itself pointed. **Lemma 3.3.12**.: _Let \(\mathcal{K}\) be a collection of small \(\infty\)-categories containing the empty category. Then \(\mathsf{Cat}_{*,\mathcal{K}}\) is a pointed \(\infty\)-category; its zero object is the terminal category \([0]\)._ Proof.: The unique functor \(\mathcal{C}\to[0]\) is the same as in \(\mathsf{Cat}_{\infty}\), and it preserves \(\mathcal{K}\)-colimits. The essentially unique \(\mathcal{K}\)-colimit-preserving functor \([0]\to\mathcal{C}\) picks out the initial object (it preserves \(\mathcal{K}\)-colimits because the colimit of a constant diagram at the initial object is again initial); the space of such functors is contractible because the space of initial objects of \(\mathcal{C}\) is contractible.
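Before continuing, it may help to have an example of \(\mathsf{Top}_{*}^{\mathcal{K}}\) in mind (an illustration on our part): when \(\mathcal{K}\) is the collection of finite \(\infty\)-categories, \(\mathsf{Top}_{*}^{\mathcal{K}}=\mathsf{Top}_{*}^{\mathrm{fin}}\) is the \(\infty\)-category of finite pointed spaces, i.e. those built from \(S^{0}\) by finitely many finite colimits; this is the instance that will appear in Section 4.2.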
**Lemma 3.3.13**.: _Let \(\mathcal{K}\) be a collection of small \(\infty\)-categories containing the empty category. The full suboperad \(\mathsf{Cat}_{*,\mathcal{K}}\subset\mathsf{Cat}_{\mathcal{K}}\) is a \(\otimes\)-ideal and an exponential ideal, and is unital. The unit is the \(\infty\)-category \(\mathsf{Top}_{*}^{\mathcal{K}}\) of Lemma 3.3.10._ Proof.: For any \(\mathcal{C}\in\mathsf{Cat}_{\mathcal{K}}\), the functors \(\mathcal{C}\otimes(-)\) and \(\operatorname{Fun}_{\mathcal{K}}(\mathcal{C},-)\) are \(2\)-functorial under the natural enrichment of \(\mathsf{Cat}_{\mathcal{K}}\) in itself via its exponentially closed structure. Moreover, these functors preserve the zero object of \(\mathsf{Cat}_{\mathcal{K}}\), i.e. the terminal category \([0]\). An \(\infty\)-category \(\mathcal{D}\) is pointed if and only if the unique functor \(\mathcal{D}\to[0]\) has left and right adjoints which are equivalent. Moreover, if \(\mathcal{D}\in\mathsf{Cat}_{\mathcal{K}}\), then all the functors involved preserve \(\mathcal{K}\)-colimits. Since adjunctions are preserved by any \(2\)-functor, it follows that \(\mathcal{C}\otimes(-)\) and \(\operatorname{Fun}_{\mathcal{K}}(\mathcal{C},-)\) preserve pointed \(\infty\)-categories, i.e. \(\mathsf{Cat}_{*,\mathcal{K}}\) is a \(\otimes\)-ideal and an exponential ideal. That \(\mathsf{Cat}_{*,\mathcal{K}}\) is unital follows from Lemma 3.3.10. **Corollary 3.3.14**.: _Let \(\mathcal{K}\) be a collection of small \(\infty\)-categories containing the empty category. The full suboperad \(\mathsf{Cat}_{*,\mathcal{K}}\subset\mathsf{Cat}_{\mathcal{K}}\) is symmetric monoidal, and the inclusion functor is lax symmetric monoidal and preserves \(\otimes\). The unit is the \(\infty\)-category \(\mathsf{Top}_{*}^{\mathcal{K}}\) of Lemma 3.3.10._ Proof.: This follows from Lemma 3.3.13 and Lemma 3.1.10. **Remark 3.3.15** (The infinitary perspective).: Let \(\mathcal{K}\subseteq\mathcal{K}^{\prime}\) be an inclusion of classes of small \(\infty\)-categories. By [17, Remark 4.8.1.8], the functor \(\mathcal{P}_{\mathcal{K}}^{\mathcal{K}^{\prime}}:\mathsf{Cat}_{\mathcal{K}} \to\mathsf{Cat}_{\mathcal{K}^{\prime}}\) is left adjoint to the forgetful functor, and moreover \(\mathcal{P}_{\mathcal{K}}^{\mathcal{K}^{\prime}}\) is strong symmetric monoidal. As noted in [17, Proposition 4.8.1.10 and Corollary 4.8.1.14], it is in particular the case that if \(\mathcal{C}\) is a symmetric monoidal \(\infty\)-category with compatible finite colimits, then \(\operatorname{Ind}(\mathcal{C})\) is also symmetric monoidal under Day convolution, and has the universal property that \(\operatorname{Fun}^{\otimes,L}(\operatorname{Ind}(\mathcal{C}),\mathcal{D})= \operatorname{Fun}^{\otimes}(\mathcal{C},\mathcal{D})\) when \(\mathcal{D}\) is a symmetric monoidal presentable \(\infty\)-category. Moreover, if \(\mathcal{C}\) is a presentably symmetric monoidal \(\infty\)-category and \(T\in\mathcal{C}\) is dualizable with twisted-trivial braiding, then in the functorial splitting of Theorem 3.2.10, the factors \(\mathcal{C}_{T\wedge T^{\vee}}\) and \(\mathcal{C}/T\) are presentably symmetric monoidal and the localization functors onto them are symmetric monoidal left adjoints.
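A standard instance of this remark, for orientation: \(\operatorname{Ind}(\mathsf{Spt}^{\mathrm{fin}})\simeq\mathsf{Spt}\), symmetric monoidal under the smash product obtained by Day convolution, and the universal property identifies \(\operatorname{Fun}^{\otimes,L}(\mathsf{Spt},\mathcal{D})\) with \(\operatorname{Fun}^{\otimes}(\mathsf{Spt}^{\mathrm{fin}},\mathcal{D})\) for any presentably symmetric monoidal \(\mathcal{D}\).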
## 4. Symmetric Monoidal \(\infty\)-Categories with Duals and Certain Colimits In this section, the kernel of ideas from Section 1 and the infrastructure from Section 3 are combined to study the structure of certain \(\infty\)-categories of symmetric monoidal \(\infty\)-categories with duals and certain colimits. In Section 4.1 we discuss the fact that any symmetric monoidal \(\infty\)-category with duals and certain colimits admits a splitting as a product of certain subcategories characterized by various properties. For example, one factor is stable, another is an additive \(1\)-category, etc. The splittings all arise from certain objects with twisted-trivial braidings guaranteed to exist by virtue of having the appropriate colimits. The splittings are also functorial in nature, so any suitably cocontinuous, symmetric monoidal functor between such \(\infty\)-categories respects them, and thus the \(\infty\)-category of all such \(\infty\)-categories itself splits according to these factors. In Section 4.2, we contribute toward a preliminary understanding of some of these factors, by computing the free symmetric monoidal \(\infty\)-category with duals and \(\mathcal{K}\)-colimits for various \(\mathcal{K}\). ### Canonical splittings In this subsection, we identify two canonical objects with twisted-trivial braiding which are guaranteed to exist in any symmetric monoidal \(\infty\)-category which is suitably cocomplete. The first (Section 4.1.1) corepresents grouplike elements in the semiadditive setting with sufficient cofibers. The second (Section 4.1.2) is the suspension of the unit, which exists in the pointed case with suspensions. In both cases, the yoga of Section 3.2.3 tells us that if in addition these objects are dualizable, we obtain a splitting of the entire \(\infty\)-category. These splittings are characterized in the respective sections, and their interaction is discussed in Section 4.1.3. #### 4.1.1. The grouplike / anti-grouplike splitting In this subsection, we describe the object corepresenting grouplike elements of hom-spaces in a semiadditive symmetric monoidal \(\infty\)-category with cofibers. We discuss the splitting which results from Section 3.2.3 if this object is dualizable. **Lemma 4.1.1**.: _Let \((X,\mu,\eta)\) be a homotopy commutative, homotopy associative \(H\)-space. Let \(i:X^{\operatorname{gp}}\to X\) denote the inclusion of the grouplike part of \(X\), i.e. the disjoint union of connected components of \(X\) which have inverses under the multiplication \(\mu\). Note that \(X^{\operatorname{gp}}\) is a grouplike \(H\)-space under the restriction of the map \(\mu\). Let \(-1:X^{\operatorname{gp}}\to X^{\operatorname{gp}}\) be a map sending each point to a homotopy inverse under \(\mu\). Then the following commutative square is a homotopy pullback:_ \[\begin{array}{ccc}X^{\operatorname{gp}}&\xrightarrow{(i,\,(-1)\circ i)}&X\times X\\ \downarrow&&\downarrow{\scriptstyle\mu}\\ \ast&\xrightarrow{\ \eta\ }&X\end{array}\] Proof.: We show this in several steps. **Step 1:** First note that by Remark 1.5.4, the lemma is true when \(X\) is discrete. Now let \(F\) be the fiber of \(\mu\) (taken over the unit \(\eta\)), so that we have a natural map \(X^{\operatorname{gp}}\to F\). **Step 2:** Because \(\mu\) is split by \(\operatorname{id}_{X}\times\eta\) (or by \(\eta\times\operatorname{id}_{X}\)), we have short exact sequences \(\pi_{n}(F)\to\pi_{n}(X\times X)\to\pi_{n}(X)\) for each \(n\in\mathbb{N}\). When \(n=0\), this tells us that \(\pi_{0}(F)=\pi_{0}(X)^{\operatorname{gp}}\) by Step 1. Since \(\pi_{0}(X^{\operatorname{gp}})=\pi_{0}(X)^{\operatorname{gp}}\), this shows that the comparison map \(\pi_{0}(X^{\operatorname{gp}})\to\pi_{0}(F)\) is a bijection. **Step 3:** For \(n\geq 1\), we identify the short exact sequence \(\pi_{n}(F)\to\pi_{n}(X\times X)\to\pi_{n}(X)\) with the short exact sequence \(\pi_{0}(\Omega^{n}F)\to\pi_{0}(\Omega^{n}X\times\Omega^{n}X)\to\pi_{0}(\Omega^{ n}X)\). By Step 2, the map \(\pi_{0}((\Omega^{n}X)^{\operatorname{gp}})\to\pi_{0}(\Omega^{n}F)\) is a bijection. Since \(\Omega^{n}(X^{\operatorname{gp}})=\Omega^{n}X=(\Omega^{n}X)^{\operatorname{gp}}\), we have that \(\pi_{n}(X^{\operatorname{gp}})=\pi_{0}(\Omega^{n}(X^{\operatorname{gp}}))\to\pi_{0}(\Omega^{n}F)=\pi_{n}(F)\) is a bijection. So by Whitehead's theorem, \(X^{\operatorname{gp}}\to F\) restricts to an equivalence on the connected component of the identity. **Step 4:** \(F\) inherits the structure of a commutative, associative \(H\)-space as the fiber of a map of such, and the map \(X^{\operatorname{gp}}\to F\) is a map of \(H\)-spaces. Because \(X^{\operatorname{gp}}\) is grouplike and \(X^{\operatorname{gp}}\to F\) is \(\pi_{0}\)-surjective, \(F\) is also grouplike. Therefore, because \(X^{\operatorname{gp}}\to F\) is an equivalence at the connected component of the identity (Step 3) and a \(\pi_{0}\)-bijection (Step 2), it follows that \(X^{\operatorname{gp}}\to F\) is an equivalence.
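For a completely elementary illustration of the discrete case invoked in Step 1 (our example): for the discrete monoid \(X=(\mathbb{N},+)\), the fiber of \(\mu\) over \(0\) is \(\{(x,y)\in\mathbb{N}^{2}:x+y=0\}=\{(0,0)\}\), which indeed agrees with \(X^{\operatorname{gp}}=\{0\}\).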
**Definition 4.1.2**.: Let \(\mathcal{C}\) be a semiadditive \(\infty\)-category. Denote by \(\mathcal{C}_{\mathrm{gp}}\subseteq\mathcal{C}\) the full subcategory of those objects \(C\) such that the \(E_{\infty}\)-space \(\mathcal{C}(C,D)\) is grouplike for all \(D\in\mathcal{C}\). If \(\mathcal{C}=\mathcal{C}_{\mathrm{gp}}\), we say that \(\mathcal{C}\) is _additive_. If \(\mathcal{C}\) has cofibers, then for \(C\in\mathcal{C}\), let \(C_{\mathrm{gp}}\) be the cofiber of the diagonal \(\Delta:C\to C\oplus C\). Denote by \(\mathcal{C}_{\neg\mathrm{gp}}\subseteq\mathcal{C}\) the full subcategory of objects \(C\) such that \(\mathcal{C}(C,D)\) has no nonzero grouplike elements, for all \(D\in\mathcal{C}\). If \(\mathcal{C}=\mathcal{C}_{\neg\mathrm{gp}}\), we say that \(\mathcal{C}\) is _antiadditive_. **Corollary 4.1.3**.: _Let \(\mathcal{C}\) be a semiadditive \(\infty\)-category with cofibers, and let \(C\in\mathcal{C}\). Then the composite map \(r_{C}:C\xrightarrow{(\operatorname{id}_{C},0)}C\oplus C\to C_{\mathrm{gp}}\) induces a map \(r_{C}^{*}:\mathcal{C}(C_{\mathrm{gp}},D)\to\mathcal{C}(C,D)\) which factors canonically through the inclusion \(\mathcal{C}(C,D)^{\mathrm{gp}}\to\mathcal{C}(C,D)\), and the induced map \(r_{C}^{*}:\mathcal{C}(C_{\mathrm{gp}},D)\to\mathcal{C}(C,D)^{\mathrm{gp}}\) is an equivalence._ Proof.: The cofiber sequence \(C\xrightarrow{\Delta}C\oplus C\to C_{\mathrm{gp}}\) induces a fiber sequence \(\mathcal{C}(C_{\mathrm{gp}},D)\to\mathcal{C}(C,D)\times\mathcal{C}(C,D)\xrightarrow{\mu}\mathcal{C}(C,D)\), where \(\mu\) is the addition on \(\mathcal{C}(C,D)\). So this follows from Lemma 4.1.1.
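For an elementary illustration in the semiadditive \(1\)-category \(\mathsf{CMon}\) of commutative monoids (our example): \(C_{\mathrm{gp}}=\operatorname{cofib}(\Delta)\) is the classical group completion of \(C\), and the equivalence \(\mathcal{C}(C_{\mathrm{gp}},D)\simeq\mathcal{C}(C,D)^{\mathrm{gp}}\) recovers the fact that homomorphisms out of the group completion correspond to homomorphisms \(C\to D\) taking values in the unit group of \(D\).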
**Theorem 4.1.4**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category with biproducts and cofibers over which \(\wedge\) distributes. Then the composite map \(r=r_{S}:S\xrightarrow{(\operatorname{id}_{S},0)}S\oplus S\to S_{\mathrm{gp}}\) of Corollary 4.1.3 is a closed idempotent. The \(S_{\mathrm{gp}}\)-stable objects are exactly the objects of \(\mathcal{C}_{\mathrm{gp}}\), and the \(S_{\mathrm{gp}}\)-torsion objects are exactly the objects of \(\mathcal{C}_{\neg\mathrm{gp}}\)._ Proof.: Note that for any \(C\in\mathcal{C}\), the cofiber sequence \(C\to C\oplus C\to C_{\mathrm{gp}}\) may be identified with the cofiber sequence \(C\wedge S\to C\wedge(S\oplus S)\to C\wedge S_{\mathrm{gp}}\). By Corollary 4.1.3, we have that \(\mathcal{C}(S_{\mathrm{gp}}\wedge C,D)=\mathcal{C}(C,D)^{\mathrm{gp}}\). In particular, \(\mathcal{C}(S_{\mathrm{gp}}\wedge S_{\mathrm{gp}},D)=\mathcal{C}(S_{\mathrm{gp}},D)^{\mathrm{gp}}=(\mathcal{C}(S,D)^{\mathrm{gp}})^{\mathrm{gp}}=\mathcal{C}(S,D)^{\mathrm{gp}}=\mathcal{C}(S_{\mathrm{gp}},D)\). So by the Yoneda lemma, \(S_{\mathrm{gp}}\to S_{\mathrm{gp}}\wedge S_{\mathrm{gp}}\) is an isomorphism, i.e. \(S\to S_{\mathrm{gp}}\) is a closed idempotent. From the equivalence of Corollary 4.1.3 also follows the description of the \(S_{\mathrm{gp}}\)-stable and \(S_{\mathrm{gp}}\)-torsion objects. **Corollary 4.1.5**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category with biproducts and cofibers over which \(\wedge\) distributes. Suppose that the cofiber \(S_{\mathrm{gp}}\) of the diagonal \(\Delta:S\to S\oplus S\) is dualizable. Then \(S_{\mathrm{gp}}\) is a clopen idempotent, with complement \(S_{\neg\mathrm{gp}}\). The induced functor \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp}}\times\mathcal{C}_{\neg\mathrm{gp}}\) is an equivalence._ Proof.: By Theorem 4.1.4 and Proposition 2.3.1(2), \(S_{\mathrm{gp}}\) is a clopen idempotent. Because \(S_{\mathrm{gp}}\) is a cogroup object (Theorem 4.1.4), it follows from Corollary 3.2.7 that the cofiber \(S_{\neg\mathrm{gp}}=S/S_{\mathrm{gp}}\) of \(S_{\mathrm{gp}}\to S\) is a complementary clopen idempotent. So the equivalence \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp}}\times\mathcal{C}_{\neg\mathrm{gp}}\) follows from Theorem 4.1.4 and Proposition 3.2.9(3). **Corollary 4.1.6**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with biproducts and cofibers over which \(\wedge\) distributes. Suppose that the cofiber \(S_{\mathrm{gp}}\) of the diagonal \(\Delta:S\to S\oplus S\) is dualizable. Then the localization \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp}}\) is the universal \(E_{n}\)-monoidal, biproduct-and-cofiber-preserving functor to an additive \(E_{n}\)-monoidal \(\infty\)-category with biproducts and cofibers over which \(\wedge\) distributes. The localization \(\mathcal{C}\to\mathcal{C}_{\neg\mathrm{gp}}\) is the universal \(E_{n}\)-monoidal, biproduct-and-cofiber-preserving functor to an anti-additive \(E_{n}\)-monoidal \(\infty\)-category with biproducts and cofibers over which \(\wedge\) distributes._ Proof.: This follows from Corollary 4.1.5 and Theorem 3.2.10. #### 4.1.2. The stable / trivial suspension splitting In this subsection, we discuss the suspension of the unit object. When this object is dualizable, it induces (via Section 3.2.3) a splitting of the \(\infty\)-category into a stable part and a part with a dual characterization: all objects have trivial suspension. **Definition 4.1.7**.: Let \(\mathcal{C}\) be a pointed \(\infty\)-category with suspension \(\Sigma\) (that is, for every \(C\in\mathcal{C}\) the object \(\Sigma C=0\cup_{C}0\) exists). We say that \(\mathcal{C}\) is _weakly stable_ if the suspension functor \(\Sigma:\mathcal{C}\to\mathcal{C}\) is an equivalence of categories, and _stable_ if in addition \(\mathcal{C}\) has finite colimits (equivalently, \(\mathcal{C}\) has finite limits). We say that \(\mathcal{C}\)_has trivial suspension_ if the suspension functor \(\Sigma\) is constant at \(0\). **Remark 4.1.8**.: Recall from Example 1.5.6 that because \(S^{1}\in\mathsf{Top}_{*}\) is a cogroup object, it follows that any suspension object, in an \(\infty\)-category with suspensions and finite coproducts, is a cogroup object. Therefore, if \(\mathcal{C}\) is a weakly stable semiadditive \(\infty\)-category, then \(\mathcal{C}\) is additive. **Remark 4.1.9**.: In an additive \(\infty\)-category, the coequalizer of two maps \(f,g\) may be computed as the cofiber of \(f-g\). It follows that an additive \(\infty\)-category has finite colimits iff it has cofibers.
A weakly stable \(\infty\)-category is stable if and only if it has finite coproducts and cofibers, if and only if it has finite colimits. **Proposition 4.1.10**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with finite coproducts and cofibers over which \(\wedge\) distributes. Suppose that the suspension of the unit \(\Sigma S\) has a dual \(\Sigma^{-1}S\). Then \(S_{\mathrm{stab}}:=\Sigma S\wedge\Sigma^{-1}S\) is a clopen idempotent, with a complement \(S_{\Sigma-\mathrm{triv}}\)._ Proof.: Because \(\Sigma S\) has twisted-trivial braiding (Example 2.1.5), \(S_{\mathrm{stab}}\) is a clopen idempotent by Proposition 2.3.1(3). Because \(S_{\mathrm{stab}}\) is a suspension object, it is a cogroup object (Remark 4.1.8). So by Corollary 3.2.7, \(S_{\Sigma-\mathrm{triv}}\) is a complementary clopen idempotent. **Definition 4.1.11**.: Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with finite coproducts and cofibers over which \(\wedge\) distributes. We write \(\mathcal{C}_{\mathrm{stab}}=\mathcal{C}_{S_{\mathrm{stab}}}\) and \(\mathcal{C}_{\Sigma-\mathrm{triv}}=\mathcal{C}_{S_{\Sigma-\mathrm{triv}}}\). **Theorem 4.1.12**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with finite coproducts and cofibers over which \(\wedge\) distributes. Suppose that the suspension of the unit \(\Sigma S\) has a dual \(\Sigma^{-1}S\). Then the canonical functor \(\mathcal{C}\to\mathcal{C}_{\mathrm{stab}}\times\mathcal{C}_{\Sigma-\mathrm{triv}}\) is an equivalence. The localization \(\mathcal{C}\to\mathcal{C}_{\mathrm{stab}}\) is the universal \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor to an \(E_{n}\)-monoidal stable \(\infty\)-category. The localization \(\mathcal{C}\to\mathcal{C}_{\Sigma-\mathrm{triv}}\) is the universal \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor to an \(E_{n}\)-monoidal \(\infty\)-category with trivial suspension and finite coproducts and cofibers over which \(\wedge\) distributes._ Proof.: The equivalence \(\mathcal{C}\to\mathcal{C}_{\mathrm{stab}}\times\mathcal{C}_{\Sigma-\mathrm{triv}}\) follows from Proposition 4.1.10 and Proposition 3.2.9(3). By Remark 4.1.9, \(\mathcal{C}_{\mathrm{stab}}\) is stable. Any \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor \(F\) to an \(E_{n}\)-monoidal stable \(\infty\)-category \((\mathcal{D},\wedge,S)\) carries \(\Sigma S\) to the suspension of the unit of \(\mathcal{D}\), which is \(\wedge\)-invertible (with monoidal inverse \(\Sigma^{-1}S\)). Therefore by Theorem 3.2.10(1), \(F\) factors uniquely through \(\mathcal{C}_{\mathrm{stab}}\). Because \(\mathcal{C}\to\mathcal{C}_{\mathrm{stab}}\) is a product projection, the induced functor \(\mathcal{C}_{\mathrm{stab}}\to\mathcal{D}\) preserves any colimits which \(F\) does. This establishes the universal property of \(\mathcal{C}_{\mathrm{stab}}\). Similarly, any \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor \(F\) to an \(E_{n}\)-monoidal \(\infty\)-category \((\mathcal{D},\wedge,S)\) with trivial suspension and finite coproducts and cofibers over which \(\wedge\) distributes, carries \(\Sigma S\) to \(0\). Therefore by Theorem 3.2.10(3), \(F\) factors uniquely through \(\mathcal{C}_{\Sigma-\mathrm{triv}}\), and as before the preservation of the appropriate colimits is automatic. This establishes the universal property of \(\mathcal{C}_{\Sigma-\mathrm{triv}}\).
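For orientation, two degenerate instances of this splitting, immediate from the definitions: if \(\mathcal{C}\) is already stable (e.g. \(\mathcal{C}=\mathsf{Spt}^{\mathrm{fin}}\)), then \(\Sigma S\) is \(\wedge\)-invertible, \(S_{\mathrm{stab}}\simeq S\), and \(\mathcal{C}_{\Sigma-\mathrm{triv}}=0\); if \(\mathcal{C}\) is an additive \(1\)-category with cofibers, then \(\Sigma S=0\), so \(S_{\mathrm{stab}}=0\) and \(\mathcal{C}_{\mathrm{stab}}=0\).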
#### 4.1.3. The 3-fold splitting In this subsection, we describe the interaction between the splittings of the previous two subsections, yielding a 3-fold splitting of any symmetric monoidal \(\infty\)-category with duals, finite coproducts, and cofibers. **Proposition 4.1.13**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with finite biproducts and cofibers over which \(\wedge\) distributes. Suppose that \(\Sigma S\) and the cofiber \(S_{\mathrm{gp}}\) of the diagonal \(S\to S\oplus S\) have duals. Then the \(E_{n}\)-monoidal localization functors of Corollary 4.1.5 and Proposition 4.1.10 assemble into an equivalence of categories_ \[\mathcal{C}\to\mathcal{C}_{\mathrm{stab}}\times\mathcal{C}_{\mathrm{gp},\Sigma -\mathrm{triv}}\times\mathcal{C}_{\neg\mathrm{gp}}\] _Here the \(E_{n}\)-monoidal localization functor \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}=\mathcal{C}_{ \mathrm{gp}}\cap\mathcal{C}_{\Sigma-\mathrm{triv}}\) is the universal \(E_{n}\)-monoidal functor to an \(E_{n}\)-monoidal additive \(\infty\)-category with trivial suspension and cofibers over which \(\wedge\) distributes._ Proof.: As in Remark 4.1.8, any weakly stable \(\infty\)-category is additive. Therefore, the clopen idempotent \(S_{\mathrm{stab}}\) refines the clopen idempotent \(S_{\mathrm{gp}}\), and dually the clopen idempotent \(S_{\neg\mathrm{gp}}\) refines the clopen idempotent \(S_{\Sigma-\mathrm{triv}}\). Moreover, because \(n\geq 2\), the two closed idempotent structures \(S\to S_{\mathrm{gp}}\to S_{\mathrm{gp}}\wedge S_{\Sigma-\mathrm{triv}}\) and \(S\to S_{\Sigma-\mathrm{triv}}\to S_{\Sigma-\mathrm{triv}}\wedge S_{\mathrm{gp}}\) agree, i.e. the localizations \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp}}\) and \(\mathcal{C}\to\mathcal{C}_{\Sigma-\mathrm{triv}}\) commute. The result follows. We have already explored some of the properties of \(\mathcal{C}_{\mathrm{stab}}\) and \(\mathcal{C}_{\neg\mathrm{gp}}\). The following property of \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) may come as a surprise: **Proposition 4.1.14**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal \(\infty\)-category \((n\geq 2)\) with finite biproducts and cofibers over which \(\wedge\) distributes. Suppose that \(\Sigma S\) and \(S_{\mathrm{gp}}\) have duals. Then the localization functor \(\mathcal{C}\to\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) is the universal \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor to an \(E_{n}\)-monoidal additive 1-category with cofibers over which \(\wedge\) distributes._ Proof.: We must first show that \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) is a 1-category, i.e. has discrete hom-spaces. For any \(C,D\in\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\), we have \(\pi_{n}\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}(C,D)=\pi_{0}\mathcal{C} _{\mathrm{gp},\Sigma-\mathrm{triv}}(\Sigma^{n}C,D)\), where the basepoint is the zero morphism. Because suspension is trivial, it follows that the component of \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}(C,D)\) at the zero morphism is contractible. Moreover, \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}(C,D)\) is a grouplike \(H\)-space because \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) is additive. So all of its connected components are homotopy equivalent, and so \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}(C,D)\) is discrete.
It is clear that \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) is additive and has cofibers over which \(\wedge\) distributes. Conversely, if \(\mathcal{D}\) is an \(E_{n}\)-monoidal additive 1-category with cofibers over which \(\wedge\) distributes, then \(\mathcal{D}\) has trivial suspension. So if \(F:\mathcal{C}\to\mathcal{D}\) is an \(E_{n}\)-monoidal, finite-coproduct-and-cofiber-preserving functor, it carries \(\Sigma S\) to 0 and carries \(S\to S_{\mathrm{gp}}\) to an isomorphism, and therefore factors uniquely through \(\mathcal{C}_{\mathrm{gp},\Sigma-\mathrm{triv}}\) by Theorem 3.2.10. It automatically preserves any colimits which \(F\) does. Additive 1-categories with appropriate duals admit some further splittings: **Proposition 4.1.15**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal semiadditive 1-category with cofibers over which \(\wedge\) distributes. Let \(m\in\mathbb{N}\) be a number. Then the cofiber \(S/m\) of the map \(m:S\to S\) (i.e. the \(m\)-fold sum of the identity with itself) is a closed idempotent. We have \(\mathcal{C}(C\wedge S/m,D)=\mathcal{C}(C,D)[m]\) (where \(A[m]\) denotes the \(m\)-torsion subgroup of \(A\)) for all \(C,D\in\mathcal{C}\), so that the \(S/m\)-local objects are those objects \(C\in\mathcal{C}\) such that \(m=0\) in \(\mathcal{C}(C,D)\) for all \(D\in\mathcal{C}\), and the \(S/m\)-torsion objects are those \(C\in\mathcal{C}\) such that \(\mathcal{C}(C,D)[m]=0\) for all \(D\in\mathcal{C}\)._ **Warning 4.1.16**.: Proposition 4.1.15 is false in a general \(\infty\)-category, even if \(m\) is prime. For instance, in spectra, \(S/m\) is the mod-\(m\) Moore spectrum, and \(S/m\not\simeq S/m\wedge S/m\). Proof of Proposition 4.1.15.: The cofiber sequence \(C\xrightarrow{m}C\to C\wedge S/m\) induces a fiber sequence \(\mathcal{C}(C\wedge S/m,D)\to\mathcal{C}(C,D)\xrightarrow{m}\mathcal{C}(C,D)\) for any \(D\in\mathcal{C}\). Because \(\mathcal{C}\) is a 1-category, this says exactly that \(\mathcal{C}(C\wedge S/m,D)=\mathcal{C}(C,D)[m]\). So \(\mathcal{C}(S/m\wedge S/m,D)=\mathcal{C}(S/m,D)[m]=(\mathcal{C}(S,D)[m])[m]= \mathcal{C}(S,D)[m]=\mathcal{C}(S/m,D)\); by Yoneda \(S/m\) is a closed idempotent, and the rest follows. **Definition 4.1.17**.: Let \(\mathcal{C}\) be an additive 1-category and \(m\in\mathbb{Z}\). We say that \(C\in\mathcal{C}\) is _of characteristic dividing \(m\)_ if the endomorphism \(m:C\to C\) (i.e. the \(m\)-fold sum of the identity) is the zero morphism. We write \(\mathcal{C}/m\subseteq\mathcal{C}\) for the full subcategory of objects of characteristic dividing \(m\). We say that \(\mathcal{C}\)_has characteristic dividing \(m\)_ if \(\mathcal{C}=\mathcal{C}/m\), i.e. if all hom-groups have characteristic dividing \(m\). We say that \(C\) is _\(m\)-torsion-free_ if \(m:C\to C\) is a monomorphism and _co-\(m\)-torsion-free_ if \(m:C\to C\) is an epimorphism. We write \(\mathcal{C}(m)\subseteq\mathcal{C}\) for the full subcategory of objects which are co-\(m\)-torsion-free. We say that \(\mathcal{C}\) is _\(m\)-torsion-free_ if \(\mathcal{C}=\mathcal{C}(m)\), i.e. if all hom-groups are \(m\)-torsion-free. **Remark 4.1.18**.: The notation \(\mathcal{C}(m)\) is perhaps misleading: \(m\) need not act invertibly on \(\mathcal{C}(m)\), but only without torsion. **Remark 4.1.19**.: The reader may have expected the co-\(m\)-torsion-free objects to be called "\(m\)-divisible" by analogy to the category of abelian groups. We have opted against such terminology, finding that it is unhelpful in the present setting. For us, the important fact about co-\(m\)-torsion-free objects \(C\) is that \(\mathrm{Hom}(C,D)\) is always an \(m\)-torsion-free abelian group, so it seems best to have the term "torsion-free" appear in the name. In fact, we will have little use for \(m\)-torsion-free objects, and have only introduced them to stand as foils for the co-\(m\)-torsion-free objects.
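A cautionary example of ours before the next corollary: in \((\mathsf{Ab},\otimes,\mathbb{Z})\) we have \(S/m=\mathbb{Z}/m\), and indeed \(\mathbb{Z}/m\otimes\mathbb{Z}/m\cong\mathbb{Z}/m\), as Proposition 4.1.15 predicts. However, \(\mathbb{Z}/m\) is not dualizable in \(\mathsf{Ab}\) (the dualizable objects are the finitely generated free abelian groups), and correspondingly \(\mathsf{Ab}\to\mathsf{Ab}/m\times\mathsf{Ab}(m)\) fails to be an equivalence: \(\mathbb{Z}\) lies in neither factor. So the dualizability hypothesis in Corollary 4.1.20 below is not redundant.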
**Corollary 4.1.20**.: _Let \((\mathcal{C},\wedge,S)\) be an \(E_{n}\)-monoidal semiadditive 1-category with cofibers over which \(\wedge\) distributes. Let \(m\in\mathbb{N}\) be a number, and suppose that \(S/m\) has a dual. Then \(S/m\) is a clopen idempotent, with complement \(S(m)\). The induced functor \(\mathcal{C}\to\mathcal{C}/m\times\mathcal{C}(m)\) is an equivalence. The localization \(\mathcal{C}\to\mathcal{C}/m\) is the universal \(E_{n}\)-monoidal, semiadditive, cofiber-preserving functor to an additive 1-category of characteristic dividing \(m\) with cofibers over which \(\wedge\) distributes. The localization \(\mathcal{C}\to\mathcal{C}(m)\) is the universal \(E_{n}\)-monoidal, semiadditive, cofiber-preserving functor to an additive 1-category with cofibers over which \(\wedge\) distributes which is \(m\)-torsion-free._ Proof of Corollary 4.1.20.: That \(S/m\) is clopen follows from Proposition 2.3.1(2). Because \(\mathcal{C}\) is additive, \(S/m\) is a cogroup object, so has a complement by Corollary 3.2.7 and Proposition 2.4.2. So the equivalence \(\mathcal{C}\to\mathcal{C}/m\times\mathcal{C}(m)\) follows by Proposition 3.2.9(3). We have seen that \(\mathcal{C}/m\) has characteristic dividing \(m\). If \(F:\mathcal{C}\to\mathcal{D}\) is an \(E_{n}\)-monoidal, semiadditive, cofiber-preserving functor to an additive 1-category of characteristic dividing \(m\) with cofibers over which \(\wedge\) distributes, then \(F\) carries \(S\to S/m\) to an isomorphism. So \(F\) factors uniquely through \(\mathcal{C}/m\) by Theorem 3.2.10, with colimit preservation being automatic. Likewise, we have seen that \(\mathcal{C}(m)\) is \(m\)-torsion-free. If \(F:\mathcal{C}\to\mathcal{D}\) is an \(E_{n}\)-monoidal, semiadditive, cofiber-preserving functor to an additive 1-category with cofibers over which \(\wedge\) distributes which is \(m\)-torsion-free, then \(F\) carries \(S/m\) to \(0\). So \(F\) factors through \(\mathcal{C}(m)\) by Theorem 3.2.10, with colimit preservation being automatic. **Remark 4.1.21**.: In the setting of Corollary 4.1.20, the localizations \(\mathcal{C}\to\mathcal{C}/m\) for varying \(m\) are compatible with one another whenever they exist. In particular, if \(m=p_{1}^{e_{1}}\cdots p_{r}^{e_{r}}\) is the prime factorization, then \(\mathcal{C}/m=\mathcal{C}/p_{1}\times\cdots\times\mathcal{C}/p_{r}\) so long as \(S/p_{1},\ldots,S/p_{r}\) all have duals. ### Initial objects In this section, we begin a preliminary study of symmetric monoidal \(\infty\)-categories with duals and various finite colimits suggested by the taxonomy of Section 4.1.3. The goal, achieved in Section 4.2.4, is to compute the initial object of the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals, finite coproducts, and cofibers. This is achieved by considering each of the factors from Section 4.1.3 in turn. **Definition 4.2.1**.: Let \(\mathsf{SMC}_{\mathrm{II,\mathsf{cof}}}\) denote the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with finite coproducts and cofibers over which \(\wedge\) distributes.
Let \[\mathsf{SMD}_{\mathrm{II,\mathsf{cof}}}=\mathsf{SMD}\times_{\mathsf{SMC}} \mathsf{SMC}_{\mathrm{II,\mathsf{cof}}}\] denote the full subcategory of symmetric monoidal \(\infty\)-categories with finite coproducts, cofibers, and duals. Let \(\mathsf{SMD}_{\mathrm{stab}}\subset\mathsf{SMD}_{\mathrm{II,\mathsf{cof}}}\) denote the full subcategory of symmetric monoidal stable \(\infty\)-categories with duals. Let \(\mathsf{SMD}_{\mathrm{gp},\Sigma-\mathrm{triv,\mathsf{cof}}}\subset\mathsf{ SMD}_{\mathrm{II,\mathsf{cof}}}\) denote the full subcategory of symmetric monoidal additive 1-categories with duals and cofibers. For each prime \(p\), let \(\mathsf{SMD}_{p,\Sigma-\mathrm{triv,\mathsf{cof}}}\subset\mathsf{SMD}_{\mathrm{ II,\mathsf{cof}}}\) denote the full subcategory of symmetric monoidal additive 1-categories with duals and cofibers of characteristic dividing \(p\). Let \(\mathsf{SMD}_{\neg\mathrm{gp},\mathrm{II},\mathsf{cof}}\subset\mathsf{SMD}_{\mathrm{ II,\mathsf{cof}}}\) denote the full subcategory of symmetric monoidal anti-additive \(\infty\)-categories with duals and cofibers. #### 4.2.1. The stable case In this subsection, we compute the initial object of the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals, finite coproducts, and cofibers which are moreover _stable_. Unsurprisingly, it turns out to be the \(\infty\)-category of finite spectra, symmetric monoidal under smash product. **Definition 4.2.2**.: Let \(\mathsf{Cat}_{\mathrm{rex}}\) be the symmetric monoidal \(\infty\)-category \(\mathsf{Cat}_{\mathcal{K}}\) (see Definition 3.3.3) when \(\mathcal{K}\) is the collection of finite categories. That is, \(\mathsf{Cat}_{\mathrm{rex}}\) is the symmetric monoidal \(\infty\)-category of \(\infty\)-categories with finite colimits. Let \(\mathsf{Cat}_{\mathrm{stab}}\subset\mathsf{Cat}_{\mathrm{rex}}\) be the full suboperad of \(\mathsf{Cat}_{\mathrm{rex}}\) whose objects are the stable \(\infty\)-categories. **Lemma 4.2.3**.: _Let \(\mathcal{D}\in\mathsf{Cat}_{\mathrm{stab}}\) be a stable \(\infty\)-category. Then evaluation at the sphere \(\mathbb{S}\in\mathsf{Spt}^{\mathrm{fin}}\) determines an equivalence of categories \(\mathsf{Cat}_{\mathrm{stab}}(\mathsf{Spt}^{\mathrm{fin}},\mathcal{D})\to\mathcal{ D}^{\sim}\)._ Proof.: By Lemma 3.3.10, \(\mathsf{Top}_{*}^{\mathrm{fin}}\) is the free pointed, right exact \(\infty\)-category on an object. The Spanier-Whitehead construction tells us that \(\mathsf{Spt}^{\mathrm{fin}}=\varinjlim(\mathsf{Top}_{*}^{\mathrm{fin}}\xrightarrow {\Sigma}\mathsf{Top}_{*}^{\mathrm{fin}}\xrightarrow{\Sigma}\dots)\). Let \(\mathcal{D}\) be a stable \(\infty\)-category, and \(F:\mathsf{Top}_{*}^{\mathrm{fin}}\to\mathcal{D}\) the right-exact functor classifying an object \(D\in\mathcal{D}\). Because \(F\) commutes with suspension, which is invertible on \(\mathcal{D}\), it easily follows that the space of extensions of \(F\) along \(\Sigma^{\infty}:\mathsf{Top}_{*}^{\mathrm{fin}}\to\mathsf{Spt}^{\mathrm{fin}}\) is contractible. As \(\mathsf{Spt}^{\mathrm{fin}}\) is stable, the result follows.
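For orientation (our gloss on the lemma): the functor classified by \(X\in\mathcal{D}\) is forced by exactness to send \(\Sigma^{n}\mathbb{S}\mapsto\Sigma^{n}X\) and, for example, \(\mathbb{S}/m=\operatorname{cofib}(m:\mathbb{S}\to\mathbb{S})\mapsto\operatorname{cofib}(m:X\to X)\); evaluating at \(\mathbb{S}\) recovers \(X\), whence the equivalence.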
**Theorem 4.2.4**.: _The full suboperad \(\mathsf{Cat}_{\mathrm{stab}}\subset\mathsf{Cat}_{\mathrm{rex}}\) is a \(\otimes\)-ideal and an exponential ideal, and unital with unit the \(\infty\)-category \(\mathsf{Spt}^{\mathrm{fin}}\) of finite spectra._ Proof.: The proof will follow the pattern of the proof of Lemma 3.3.13. We first show that \(\mathsf{Cat}_{\mathrm{stab}}\subset\mathsf{Cat}_{\mathrm{rex}}\) is a \(\otimes\)-ideal. Let \(\mathcal{C}\in\mathsf{Cat}_{\mathrm{stab}}\) and \(\mathcal{D}\in\mathsf{Cat}_{\mathrm{rex}}\); we will show that the tensor product \(\mathcal{C}\otimes\mathcal{D}\) in \(\mathsf{Cat}_{\mathrm{rex}}\) is stable. Because the suspension functor \(\Sigma:\mathcal{C}\to\mathcal{C}\) is an equivalence, it follows by functoriality of \((-)\otimes\mathcal{D}\) that \(\Sigma\otimes\mathcal{D}:\mathcal{C}\otimes\mathcal{D}\to\mathcal{C}\otimes \mathcal{D}\) is an equivalence. But this is nothing other than the suspension functor on \(\mathcal{C}\otimes\mathcal{D}\). This follows from the fact that \(0\otimes\mathcal{D}=0\), where \(0\) is a constant functor at the initial object, and the fact that \((F\cup_{G}H)\otimes\mathcal{D}=(F\otimes\mathcal{D})\cup_{G\otimes\mathcal{D}} (H\otimes\mathcal{D})\) for any functors \(F,G,H\). That \(\mathsf{Cat}_{\mathrm{stab}}\) is an exponential ideal follows similarly: colimits in \(\operatorname{Fun}_{\mathrm{rex}}(\mathcal{D},\mathcal{C})\) are computed pointwise, so its suspension functor is postcomposition with the equivalence \(\Sigma:\mathcal{C}\to\mathcal{C}\), hence an equivalence. Unitality follows from Lemma 4.2.3. **Corollary 4.2.5**.: _The full suboperad \(\mathsf{Cat}_{\mathrm{stab}}\subset\mathsf{Cat}_{\mathrm{rex}}\) is symmetric monoidal, with unit \(\mathsf{Spt}^{\mathrm{fin}}\). The inclusion functor is lax symmetric monoidal, preserving \(\otimes\)._ Proof.: This follows from Theorem 4.2.4 and Lemma 3.1.10. **Proposition 4.2.6**.: _The initial object of \(\mathsf{SMC}_{\mathrm{stab}}\) is the symmetric monoidal \(\infty\)-category \(\mathsf{Spt}^{\mathrm{fin}}\) of finite spectra, which is also the initial object of \(\mathsf{SMD}_{\mathrm{stab}}\)._ Proof.: By Lemma 3.1.14, the first statement follows from Theorem 4.2.4. So for the second statement, it suffices to observe that \(\mathsf{Spt}^{\mathrm{fin}}\) has duals for all objects. This is guaranteed by the theory of Spanier-Whitehead duality [10], which reduces to the observation that \(\mathbb{S}\in\mathsf{Spt}^{\mathrm{fin}}\) is (tautologically) dualizable, and that \(\mathsf{Spt}^{\mathrm{fin}}\) is stable and generated under finite colimits and desuspensions by \(\mathbb{S}\), so that this follows from Lemma 3.2.8. #### 4.2.2. The anti-additive case In this subsection, we compute the initial object in the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals, finite coproducts, and cofibers which are moreover _anti-additive_ in the sense of Section 4.1.1. It turns out to be the \((2,1)\)-category of spans of finite sets, symmetric monoidal under cartesian product.
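For orientation, recall the structure of \(\mathsf{Span}(\mathsf{Fin})\) (standard, and consistent with the computations below): objects are finite sets, a morphism \(A\to B\) is a span \(A\gets E\to B\) of finite sets, composition is by pullback, and the sum of two spans is their disjoint union, which is what makes the hom-spaces \(E_{\infty}\)-monoids.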
**Definition 4.2.7**.: Let \(\mathsf{Cat}_{\mathrm{II}}\) be the symmetric monoidal \(\infty\)-category \(\mathsf{Cat}_{\mathcal{K}}\) (see Definition 3.3.3) when \(\mathcal{K}\) is the collection of finite discrete categories. That is, \(\mathsf{Cat}_{\mathrm{II}}\) is the symmetric monoidal \(\infty\)-category of \(\infty\)-categories with finite coproducts. Let \(\mathsf{Cat}_{\oplus}\subset\mathsf{Cat}_{\mathrm{II}}\) be the full subcategory of \(\mathsf{Cat}_{\mathrm{II}}\) whose objects are the semiadditive \(\infty\)-categories. **Lemma 4.2.8**.: _Let \(\mathcal{K}\) be a collection of small \(\infty\)-categories containing the finite discrete categories. Then \(\mathsf{Cat}_{\mathcal{K}}\) is semiadditive; its biproducts are given by cartesian product._ Proof.: By Lemma 3.3.12, \(\mathsf{Cat}_{\mathcal{K}}\) is pointed by the terminal category \([0]\). Assume for the moment that \(\mathcal{K}\) is small. Then by abstract nonsense, \(\mathsf{Cat}_{\mathcal{K}}\) has finite coproducts. So by Corollary 3.2.5, it will suffice to verify that the homotopy category \(\mathsf{hoCat}_{\mathcal{K}}\) is semiadditive. But this is clear: every object \(\mathcal{C}\in\mathsf{hoCat}_{\mathcal{K}}\) has a commutative monoid structure given by finite coproducts, and every morphism is a morphism of monoids because the morphisms of \(\mathsf{Cat}_{\mathcal{K}}\) preserve finite coproducts. If \(\mathcal{K}\) is not small, then write it as a directed union \(\mathcal{K}=\cup_{i}\mathcal{K}_{i}\) where each \(\mathcal{K}_{i}\) is small. By the foregoing, each \(\mathsf{Cat}_{\mathcal{K}_{i}}\) is semiadditive, and the inclusions \(\mathsf{Cat}_{\mathcal{K}_{i}}\to\mathsf{Cat}_{\mathcal{K}_{j}}\) preserve the semiadditive structure. Thus \(\mathsf{Cat}_{\mathcal{K}}\) is also semiadditive. **Theorem 4.2.9**.: _The full suboperad \(\mathsf{Cat}_{\oplus}\subset\mathsf{Cat}_{\mathrm{II}}\) is a \(\otimes\)-ideal and an exponential ideal. It is also unital, with unit \(\mathsf{Span}(\mathsf{Fin})\)._ Proof.: The proof will follow the pattern of the proof of Lemma 3.3.13 and Theorem 4.2.4. If \(\mathcal{C}\in\mathsf{Cat}_{\mathrm{II}}\), then \(\mathcal{C}\otimes_{\mathrm{II}}(-)\) and \(\mathrm{Fun}_{\mathrm{II}}(\mathcal{C},-)\) are \(2\)-functorial and preserve zero objects and biproducts. If \(\mathcal{D}\) is semiadditive, then the diagonal \(\mathcal{D}\to\mathcal{D}\times\mathcal{D}\) has left and right adjoints which coincide, and these functors live in \(\mathsf{Cat}_{\mathrm{II}}\). The adjunctions are preserved by \(\mathcal{C}\otimes_{\mathrm{II}}(-)\) and \(\mathrm{Fun}_{\mathrm{II}}(\mathcal{C},-)\), and therefore \(\mathcal{C}\otimes_{\mathrm{II}}\mathcal{D}\) and \(\mathrm{Fun}_{\mathrm{II}}(\mathcal{C},\mathcal{D})\) are semiadditive. Thus \(\mathsf{Cat}_{\oplus}\) is a \(\otimes\)-ideal and an exponential ideal. Unitality follows from [15, Theorem A.1]. **Corollary 4.2.10**.: _The full suboperad \(\mathsf{Cat}_{\oplus}\subset\mathsf{Cat}_{\mathrm{II}}\) is symmetric monoidal with unit \(\mathsf{Span}(\mathsf{Fin})\). The inclusion functor is lax symmetric monoidal and preserves \(\otimes\)._ Proof.: This follows from Theorem 4.2.9 and Lemma 3.1.10. **Corollary 4.2.11**.: _The symmetric monoidal \((2,1)\)-category \(\mathsf{Span}(\mathsf{Fin})\) is the initial object in the \(\infty\)-category of symmetric monoidal semiadditive \(\infty\)-categories._ Proof.: By Lemma 3.1.14, this follows from Theorem 4.2.9. **Corollary 4.2.12**.: _The symmetric monoidal \((2,1)\)-category \(\mathsf{Span}(\mathsf{Fin})\) is the initial object in \(\mathsf{SMD}_{\neg\mathrm{gp},\mathrm{II},\mathsf{cof}}\)._ Proof.: By Corollary 4.2.11, it will suffice to verify that \(\mathsf{Span}(\mathsf{Fin})\) has duals and cofibers, is anti-additive, and that the cofibers are preserved by any symmetric monoidal functor to an anti-additive symmetric monoidal \(\infty\)-category with duals and cofibers. Indeed, coproducts in \(\mathsf{Span}(\mathsf{Fin})\) are disjoint union. Since every object is a finite disjoint union of copies of the unit, it follows from Lemma 3.2.8 that every object is dualizable (and indeed, self-dual). Since sums in hom-spaces are given by disjoint union, \(\mathsf{Span}(\mathsf{Fin})\) is clearly anti-additive, and in particular has trivial suspension. Let us compute cofibers in \(\mathsf{Span}(\mathsf{Fin})\). Any morphism in \(\mathsf{Span}(\mathsf{Fin})\) factors as a backwards morphism in \(\mathsf{Fin}\) followed by a forward morphism in \(\mathsf{Fin}\).
We may further decompose the backward arrow as a coproduct of a bijection with maps \(1\gets 0\) followed by a coproduct of a bijection with maps \(1\gets n\) with \(n\geq 2\). The cofibers of the former are suspensions and hence zero. The cofibers of the latter likewise vanish by antiadditivity (vanishing of the cofiber of \(1\gets 2\) is the _definition_ of antiadditivity). So the cofiber of a backward map is zero, and this cofiber is preserved by any semiadditive functor to an antiadditive \(\infty\)-category. A forward map may be factored as a coproduct of a bijection with maps \(n\to 1\) followed by a coproduct of a bijection with maps \(0\to 1\). The cofiber of a map \(0\to 1\) is \(1\), and is preserved by all functors. It follows that the cofiber of an inclusion map \(i_{0}:1\to 2\) is 1, preserved by all semiadditive functors (the relevant morphism \(2\to 1\) is the backward map \(i_{1}:2\gets 1\)). Since \(i_{0}\) is a section of the map \(2\to 1\), and the cofiber of the identity is 0, it follows by pasting properties of pushout squares that the pushout of \(i_{1}:2\gets 1\) along the forward map \(2\to 1\) is 0, and this pushout is preserved by all semiadditive functors. Then because the cofiber of \(1\to 0\) is 0 by the triviality of suspension, it follows by pasting pushout squares that the cofiber of \(2\to 1\) is 0, and this cofiber is preserved by any semiadditive functor to a semiadditive \(\infty\)-category with trivial suspension. Since we have already computed the cofiber of the map \(0\to 1\) to be \(1\) and seen that this cofiber is preserved by all functors, it follows that the cofiber of any morphism of \(\mathsf{Span}(\mathsf{Fin})\) exists and is preserved by any semiadditive functor to an antiadditive \(\infty\)-category, as desired. #### 4.2.3. The additive 1-category case In this subsection, we compute the initial object in the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals, finite coproducts, and cofibers which are moreover _additive_. It turns out to be a curious sort of restricted product of categories of finite-dimensional vector spaces over prime fields. **Definition 4.2.13**.: Let \(\mathsf{Cat}_{\mathrm{rex,\oplus}}\subset\mathsf{Cat}_{\mathrm{rex}}\) be the full sub-operad of semiadditive \(\infty\)-categories with finite colimits. Let \(\mathsf{Cat}_{\mathrm{rex,add}}\subset\mathsf{Cat}_{\mathrm{rex,\oplus}}\) be the full suboperad of additive \(\infty\)-categories with finite colimits. Let \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\subset\mathsf{Cat}_{\mathrm{rex,add}}\) be the full subcategory of additive \(\infty\)-categories with finite colimits and trivial suspension (i.e., by Proposition 4.1.14, additive 1-categories with finite colimits). **Proposition 4.2.14**.: _The sub-operad \(\mathsf{Cat}_{\mathrm{rex,\oplus}}\subset\mathsf{Cat}_{\mathrm{rex}}\) is a \(\otimes\)-ideal._ Proof.: The proof is the same as Theorem 4.2.9. **Proposition 4.2.15**.: _The sub-operad \(\mathsf{Cat}_{\mathrm{rex,add}}\subset\mathsf{Cat}_{\mathrm{rex,\oplus}}\) is a \(\otimes\)-ideal._ Proof.: Let \(\mathcal{C}\in\mathsf{Cat}_{\mathrm{rex,add}}\) and \(\mathcal{D}\in\mathsf{Cat}_{\mathrm{rex,\oplus}}\). Then \(\mathcal{C}\otimes\mathcal{D}=\mathcal{P}^{\mathrm{rex}}_{\mathrm{rex}\boxtimes\mathrm{rex}}(\mathcal{C}\times\mathcal{D})\). The localization of a representable \(L(C,D)\) is a cogroup object because \(C\) is a cogroup object and \(L(0,D)=0,L(C\oplus C,D)=L(C,D)\oplus L(C,D)\).
Cogroup objects are closed under direct sums, so it remains to check that cogroup objects are closed under cofibers. This is true because an object \(X\) in a semiadditive category is a cogroup object if and only if the map \(\begin{pmatrix}\mathrm{id}_{X}&\mathrm{id}_{X}\\ 0&\mathrm{id}_{X}\end{pmatrix}:X\oplus X\to X\oplus X\) is an isomorphism (Remark 1.5.2). This map is natural with respect to all maps, so if \(X\to Y\to Z\) is a cofiber sequence and this map is an isomorphism for \(X\) and for \(Y\), then it is also an isomorphism for \(Z\). **Proposition 4.2.16**.: _The sub-operad \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\subset\mathsf{Cat}_{\mathrm{rex,add}}\) is a \(\otimes\)-ideal._ Proof.: Let \(\mathcal{C}\in\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\) and \(\mathcal{D}\in\mathsf{Cat}_{\mathrm{rex,add}}\); we wish to show that \(\mathcal{C}\otimes\mathcal{D}\) has trivial suspension. But this follows from the fact that \((-)\otimes\mathcal{D}\) is a 2-functor, and locally commutes with zero objects and finite colimits (and hence with suspensions). **Definition 4.2.17**.: Let \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\) be the following \(\infty\)-operad. An object is an object \(\mathcal{C}\in\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\) equipped with, for each prime \(p\), a section of the map \(X\to X/p\) natural in \(X\in\mathcal{C}\). The hom-space \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}(\mathcal{C}_{1},\dots,\mathcal{C}_{n};\mathcal{D})\) is the subgroupoid of \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}(\mathcal{C}_{1},\dots,\mathcal{C}_{n};\mathcal{D})\) comprising those functors \(\mathcal{C}_{1}\times\cdots\times\mathcal{C}_{n}\to\mathcal{D}\) which commute with these natural splittings separately in each variable. Composition is as in \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\).
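For a concrete example of such an object (ours, anticipating Lemma 4.2.20): \(\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}}\) carries canonical splittings, since for \(X\in\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}}\) the map \(p:X\to X\) is zero, so that \(X\to X/p\) is an isomorphism (using triviality of suspension) and is split by its inverse, while for a prime \(\ell\neq p\) the map \(\ell:X\to X\) is an isomorphism, so that \(X/\ell=0\) and the zero map splits \(X\to X/\ell\).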
**Definition 4.2.18**.: The additive \(1\)-category \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) is defined as follows. An object \((V,F)\) comprises an infinite tuple \((V_{2},V_{3},V_{5},\dots)\) of finite-dimensional \(\mathbb{F}_{p}\)-vector spaces, one for each prime \(p\), a finitely-generated free abelian group \(F\), and isomorphisms \(F/p\cong V_{p}\) for all but finitely-many primes. A morphism \(f:(V,F)\to(W,G)\) comprises a tuple of \(\mathbb{F}_{p}\)-linear morphisms \(f_{p}:V_{p}\to W_{p}\) (one for each prime \(p\)) and a group homomorphism \(f_{0}:F\to G\) such that for all but finitely-many primes \(p\) we have \(f_{p}=f_{0}/p\) under the canonical identifications \(V_{p}=F/p\) and \(W_{p}=G/p\). Composition and \(\otimes\) are defined in the evident "componentwise" manner. **Notation 4.2.19**.: _The category \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) is equipped with a canonical object, which we denote \(S\), corresponding to the free abelian group \(\mathbb{Z}\) with its reduction mod \(p\) at each prime \(p\). For a prime \(p\), we denote by \(S/p\) the object of \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) corresponding to \(\mathbb{Z}/p\) at the prime \(p\) and \(0\) at all other primes. For \(m\in\mathbb{Z}\), we denote by \(S(m)\) the object of \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) whose free part is \(\mathbb{Z}\), with its standard reduction \(\mathbb{Z}/\ell\) at all primes \(\ell\) not dividing \(m\), but whose component at the prime \(p\) is \(0\) for all \(p\) dividing \(m\). Note that every object is canonically of the form \((S/p_{1})^{\oplus e_{1}}\oplus\cdots\oplus(S/p_{r})^{\oplus e_{r}}\oplus S(p_{1}\cdots p_{r})^{\oplus f}\), and that if \(p\) is coprime to \(m\), there is a canonical isomorphism \(S(m)\cong S/p\oplus S(mp)\)._ **Lemma 4.2.20**.: _The category \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) admits in a canonical way the structure of an object of \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\)._ Proof.: Clearly \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) is an additive \(1\)-category. Let us show that \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) has cofibers.
Observe that \(\mathrm{Hom}(S/p,S/\ell)=0\) for \(p\neq\ell\), \(\mathrm{Hom}(S/p,S/p)=\mathbb{Z}/p\), \(\mathrm{Hom}(S/p,S(p))=\mathrm{Hom}(S(p),S/p)=0\), and \(\mathrm{Hom}(S(m),S(m))\) contains a copy of \(\mathbb{Z}\) corresponding to those maps which are the reduction mod \(\ell\) of a fixed map for every \(\ell\nmid m\). So in light of the biproduct decompositions noted in Notation 4.2.19 and the triviality of suspension, it will suffice to construct cofibers of maps \(\phi:(S/p)^{\oplus e}\to(S/p)^{\oplus e^{\prime}}\) and \(\psi:S(m)^{\oplus f}\to S(m)^{\oplus f^{\prime}}\) where \(\psi\) is the reduction of an integral map at all primes not dividing \(m\). In the former case, cofibers are computed as in \(\mathsf{Vect}_{\mathbb{F}_{p}}\); this clearly works because if \((S/p)^{\oplus e^{\prime}}\to X\) is a morphism in \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\), it must factor through the inclusion \(X/p\to X\). In the latter case, let \(Q\) be the cokernel (in the category of abelian groups) of the corresponding map \(\psi:\mathbb{Z}^{f}\to\mathbb{Z}^{f^{\prime}}\). Let \(T\subseteq Q\) be the torsion subgroup and \(F=Q/T\) the torsion-free quotient. Let \(\overline{Q}\) be the object of \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) whose component at \(p\) is \(Q/p\) for \(p\nmid m\) and \(0\) for \(p\mid m\); the free part is \(F\). To see that \(\overline{Q}\) is the cokernel of \(\psi\), consider first a map \(\theta:S(m)^{\oplus f^{\prime}}\to S/p\). Then \(\theta\) must kill \(S(mp)^{\oplus f^{\prime}}\). If \(p\mid m\), this means that \(\theta=0\), and so \(\theta\psi=0\) and \(\theta\) factors uniquely through \(Q\) via the zero map. Otherwise, \(\theta\) factors through \(S(m)^{\oplus f^{\prime}}\to(S/p)^{\oplus f^{\prime}}\). Thus \(\theta\psi=0\) if and only if \(\theta(\psi/p)=0\), if and only if \(\theta\) factors uniquely through \(Q/p\). As any factorization through \(\overline{Q}\) must kill \(\overline{Q}(p)\), this implies that the factorization through \(\overline{Q}\) is unique. Next, consider a map \(\theta:S(m)^{\oplus f^{\prime}}\to S(mn)\), where \(n\) is the torsion exponent of \(Q\). Then \(\theta\psi=0\) if and only if \(\theta\) factors uniquely through \(F\). This gives a unique factorization through \(\overline{Q}\). Finally, let us construct our splittings. The splitting of \(S/\ell\to(S/\ell)/p\) is the identity when \(p=\ell\) and zero when \(p\neq\ell\). The splitting of \(S(m)\to S(m)/p\) is zero when \(p\mid m\) and the canonical inclusion \(S/p\to S(m)=S/p\oplus S(mp)\) when \(p\nmid m\). We extend these definitions by taking direct sums. In doing this, we must check that our definitions are consistent with the relation \(S(m)=S/p\oplus S(mp)\) when \(p\nmid m\). They are. Now we check that these splitting maps are natural. They are.
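As a quick sanity check on this cokernel recipe, we record a small worked instance (an illustration we add for concreteness; it is not part of the original argument). Take \(m=1\), \(f=f^{\prime}=1\), and \(\psi=p:S\to S\) for a prime \(p\). Then \(Q=\operatorname{coker}(p:\mathbb{Z}\to\mathbb{Z})=\mathbb{Z}/p\), so \(T=\mathbb{Z}/p\) and \(F=0\), and the recipe yields the object with free part \(0\) and components
\[\overline{Q}_{\ell}=Q/\ell=\begin{cases}\mathbb{Z}/p,&\ell=p,\\ 0,&\ell\neq p,\end{cases}\]
i.e. \(\operatorname{cof}(p:S\to S)=S/p\), as the notation suggests. Likewise \(\psi=0:S\to S\) gives \(Q=F=\mathbb{Z}\) and \(\operatorname{cof}(0)=S\oplus\Sigma S=S\), consistent with the triviality of suspension.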
**Proposition 4.2.21**.: _Evaluation at \(S\in(\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) determines an equivalence of categories \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}},\mathcal{D})\simeq\mathcal{D}^{\sim}\) for any \(\mathcal{D}\in\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\)._

Proof.: Let \(D\in\mathcal{D}\), and let \(F:(\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\to\mathcal{D}\) be a morphism of \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\) with \(F(S)=D\). Then \(\mathrm{id}_{S}\) must be carried to \(\mathrm{id}_{D}\), and integral multiples of \(\mathrm{id}_{S}\) must be carried to the corresponding integral multiples of \(\mathrm{id}_{D}\). \(F\) must carry \(S/p\) to \(D/p\) and \(S(p)\) to the cokernel of the canonical splitting \(D/p\to D\), which we denote \(D(p)\). Note that \((D/p)/\ell=0\) for \(\ell\neq p\). Thus the idempotents defining the various \(D(p)\)'s commute (with their product being \(0\) for \(p\neq\ell\)), and it follows that for any integer \(m\), \(S(m)\) is carried to the intersection of the \(D(p)\)'s for \(p\mid m\); we denote this object \(D(m)\). Because \(F\) commutes with direct sums, its behavior on objects is now entirely forced. A nonzero morphism \(S/p\to S/\ell\) exists only if \(p=\ell\), in which case it is the reduction mod \(p\) of a morphism \(S\to S\), so its image in \(\mathcal{D}\) is forced. There are no nonzero morphisms \(S/p\to S(m)\) or \(S(m)\to S/p\) for \(p\nmid m\). Any morphism \(S(m)\to S(m)\) which is the reduction of a morphism \(S\to S\) has image in \(\mathcal{D}\) forced as well. Now, any morphism in \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) is a sum of direct sums of the morphisms considered so far, and thus its image in \(\mathcal{D}\) is determined by additivity. So if such an \(F\) exists, it is unique. Let us verify that such an \(F\) in fact exists. We have seen that we must have \(F((S/p_{1})^{\oplus e_{1}}\oplus\cdots\oplus(S/p_{r})^{\oplus e_{r}}\oplus S(p_{1}\cdots p_{r})^{\oplus f})=(D/p_{1})^{\oplus e_{1}}\oplus\cdots\oplus(D/p_{r})^{\oplus e_{r}}\oplus D(p_{1}\cdots p_{r})^{\oplus f}\). This is well-defined on objects because \(D(m)=D(pm)\oplus D/p\) for \(p\nmid m\). To see that \(F\) is well-defined on morphisms, we must check first that \(\mathcal{D}(D/p,D/p)\) is \(p\)-torsion. This is indeed the case by the universal property of \(D/p\). We must also check, for \(p\nmid m\), that the identity on \(D(m)\) agrees with the sum of the identity on \(D/p\) and the identity on \(D(mp)\), which it does. It is clear that \(F\) commutes with addition on hom-sets, and it is straightforward to see that \(F\) is functorial. It is clear that \(F\) commutes with the canonical splittings of the maps \(X\to X/p\); for instance, when \(X=S\), this holds by definition. \(F\) commutes with finite coproducts, so it remains only to check that \(F\) commutes with cofibers. This is clear from the description given in Lemma 4.2.20.

**Theorem 4.2.22**.: _The \(\infty\)-operad \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\) is a symmetric monoidal \(\infty\)-category, and the inclusion \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\to\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\) preserves binary tensor products up to equivalence. The unit object is \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\)._

Proof.: We have seen in Proposition 4.2.21 that the unit is representable. So it will suffice to show that the tensor product is representable and that the associativity constraints are isomorphisms. Let \(\mathcal{C},\mathcal{D}\in\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\), and let \(U\mathcal{C},U\mathcal{D}\) be the underlying objects in \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\). We start by showing that \(U\mathcal{C}\otimes U\mathcal{D}\) admits splittings making it an object of \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\).
In fact, the data of such splittings on an object \(\mathcal{E}\in\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\) is equivalent to the data of splittings \(\mathcal{E}=\mathcal{E}/p\times\mathcal{E}(p)\) for each prime \(p\), where \(\mathcal{E}/p\) has the property that its hom-spaces are \(p\)-torsion and \(\mathcal{E}(p)\) has the property that the endomorphism \(p:E\to E\) is an epimorphism for any \(E\in\mathcal{E}(p)\). Moreover, the tensor product on \(\mathsf{Cat}_{\mathrm{rex}}\), and hence on \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\), preserves finite products (which are also finite coproducts in \(\mathsf{Cat}_{\mathrm{rex}}\) and hence also in \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\)) separately in each variable: one way to see this is that the category \(\mathsf{Cat}_{\mathrm{rex}}\) is semiadditive, with the addition operation on hom-spaces being \(\oplus\); since \(\otimes\) preserves this addition in each variable separately, it must preserve direct sums of objects of \(\mathsf{Cat}_{\mathrm{rex}}\) separately in each variable. So if \(\mathcal{C}=\mathcal{C}/p\oplus\mathcal{C}(p)\) and \(\mathcal{D}=\mathcal{D}/p\oplus\mathcal{D}(p)\), then \(\mathcal{C}\otimes\mathcal{D}=(\mathcal{C}/p\otimes\mathcal{D}/p)\oplus(\mathcal{C}/p\otimes\mathcal{D}(p))\oplus(\mathcal{C}(p)\otimes\mathcal{D}/p)\oplus(\mathcal{C}(p)\otimes\mathcal{D}(p))\). The middle two terms have the property that \(p=0\) and \(p\) is an epimorphism on each object, so they vanish. We are left with the first term, which has the property that \(p=0\) on each object, and the last term, which has the property that \(p\) is an epimorphism on each object. That is, we have the desired splitting. Moreover, from this description we see that if \(F:\mathcal{C}\times\mathcal{D}\to\mathcal{E}\) preserves these splittings separately in each variable, then the induced functor \(\overline{F}:\mathcal{C}\otimes\mathcal{D}\to\mathcal{E}\) preserves the designated splittings as well. Thus \(\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv,split}}\subset\mathsf{Cat}_{\mathrm{rex,add,\Sigma-triv}}\) is closed under binary tensor products.

**Corollary 4.2.23**.: _The initial object of \(\mathsf{SMC}_{\mathrm{rex,add,\Sigma-triv,split}}\) is \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\)._

Proof.: The unit of a symmetric monoidal category, with its unique \(E_{\infty}\) structure, is always the initial object of the category of \(E_{\infty}\)-algebras.

**Proposition 4.2.24**.: _The initial object of \(\mathsf{SMD}_{\mathrm{gp,\Sigma-triv,\mathsf{cof}}}\) is \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\)._

Proof.: There is a fully faithful forgetful functor \(\mathsf{SMD}_{\mathrm{gp,\Sigma-triv,\mathsf{cof}}}\to\mathsf{SMC}_{\mathrm{rex,add,\Sigma-triv,split}}\), where the splittings come from the clopen idempotent structure of \(S/p\). So in order to verify that \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) is the initial object of \(\mathsf{SMD}_{\mathrm{gp,\Sigma-triv,\mathsf{cof}}}\), it will suffice by Corollary 4.2.23 to verify that \((\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\) has duals for objects. But it is clear that every object is self-dual.

#### 4.2.4. All together

In this subsection, we take the product of the initial objects from the rest of Section 4.2 to describe the initial object of the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with duals, finite coproducts, and cofibers.
**Theorem 4.2.25**.: _The initial object of \(\mathsf{SMD}^{\mathrm{II,cof}}\) is \(\mathsf{Spt}^{\mathrm{fin}}\times(\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\times\mathsf{Span}(\mathsf{Fin})\)._

Proof.: By Proposition 4.1.13, the initial object \(\mathcal{I}\) of \(\mathsf{SMD}^{\mathrm{II,cof}}\) splits as \(\mathcal{I}=\mathcal{I}_{\mathrm{stab}}\times\mathcal{I}_{\mathrm{gp,\Sigma-triv}}\times\mathcal{I}_{\mathrm{\neg gp}}\). Moreover, the localization functors \(\mathcal{I}\to\mathcal{I}_{\mathrm{stab}}\), \(\mathcal{I}\to\mathcal{I}_{\mathrm{gp,\Sigma-triv}}\), \(\mathcal{I}\to\mathcal{I}_{\mathrm{\neg gp}}\) exhibit their codomains as the initial objects of \(\mathsf{SMD}_{\mathrm{stab}}\), \(\mathsf{SMD}_{\mathrm{gp,\Sigma-triv,\mathsf{cof}}}\), and \(\mathsf{SMD}_{\mathrm{\neg gp,cof}}\) respectively. So this follows from Proposition 4.2.6, Proposition 4.2.24, and Corollary 4.2.12.

**Theorem 4.2.26**.: _The initial object \(\mathcal{I}\) of \(\mathsf{SMD}^{\mathrm{rex}}\) is \(\mathsf{Spt}^{\mathrm{fin}}\times(\prod_{p}\mathsf{Vect}_{\mathbb{F}_{p}}^{\mathrm{f.d.}})_{\mathrm{ev.const.}}\times\mathcal{I}_{\mathrm{\neg gp}}\)._

Proof.: We have \(\mathsf{SMD}^{\mathrm{rex}}_{\mathrm{stab}}=\mathsf{SMD}^{\mathrm{II,cof}}_{\mathrm{stab}}\) and \(\mathsf{SMD}^{\mathrm{rex}}_{\mathrm{gp,\Sigma-triv}}=\mathsf{SMD}^{\mathrm{II,cof}}_{\mathrm{gp,\Sigma-triv}}\). So these two factors agree with the ones from Theorem 4.2.25. The final factor will not be exactly \(\mathsf{Span}(\mathsf{Fin})\); see Remark 4.2.27.

**Remark 4.2.27**.: We do not know precisely what the category \(\mathcal{I}_{\neg\mathrm{gp}}\) is. However, \(\mathcal{I}_{\neg\mathrm{gp}}\) receives a functor from \(\mathsf{Span}(\mathsf{Fin})\) by Theorem 4.2.25.

## 5. Application to Equivariant Homotopy Theory

In this chapter, we apply the foregoing results to prove Corollary 5.1.5, which gives a new universal property for equivariant stable homotopy theory. We give several further, essentially equivalent universal properties in Corollary 5.1.6. There are a number of ways to formulate this universal property, which was suggested in a preliminary form by Charles Rezk ([10]).

### Equivariant homotopy theory

Let \(G\) be a compact Lie group, and let \(G\mathsf{Top}\) be the \(\infty\)-category of \(G\)-spaces, considered as symmetric monoidal under the cartesian product. Let \(G\mathsf{Top}^{\mathrm{fin}}\subset G\mathsf{Top}\) be the full symmetric monoidal subcategory of \(G\)-spaces with finitely many cells. We also have pointed versions \(G\mathsf{Top}_{*},G\mathsf{Top}_{*}^{\mathrm{fin}}\), considered as symmetric monoidal under the smash product. These are to be contrasted with the \(\infty\)-categories \(\mathsf{Top}^{BG},\mathsf{Top}_{*}^{BG}\) of _Borel_ \(G\)-spaces (unpointed and pointed respectively). Consider also the symmetric monoidal \(\infty\)-category \(G\mathsf{Spt}\) of genuine \(G\)-spectra, and the full subcategory \(G\mathsf{Spt}^{\mathrm{fin}}\) of finite genuine \(G\)-spectra. Let \(\mathsf{SMC}_{\mathrm{rex}}\) denote the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with finite colimits. Let \(\mathsf{SMC}_{*,\mathrm{rex}}\) denote the \(\infty\)-category of pointed symmetric monoidal \(\infty\)-categories with finite colimits. Let \(\mathsf{SMD}_{\mathrm{rex}}\) denote the \(\infty\)-category of symmetric monoidal \(\infty\)-categories with finite colimits and duals for objects.
Let \(\mathbb{D}_{\mathrm{rex}}:\mathsf{SMC}_{\mathrm{rex}}\to\mathsf{SMD}_{\mathrm{rex}}\) denote the left adjoint to the inclusion, and let \(\mathbb{D}_{\mathrm{rex}}:\mathsf{SMC}_{*,\mathrm{rex}}\to\mathsf{SMD}_{\mathrm{rex}}\) also denote the left adjoint to that inclusion. Let also \(\mathsf{SMP}\) denote the \(\infty\)-category of presentably symmetric monoidal \(\infty\)-categories and left adjoint symmetric monoidal functors. Before going further, let us remind ourselves of some basic facts about equivariant homotopy theory:

**Lemma 5.1.1**.: _Let \(G\) be a compact Lie group and \(H\subseteq G\) a closed subgroup. Then induction from \(H\)-spaces to \(G\)-spaces carries finite \(H\)-CW-complexes to finite \(G\)-CW-complexes._

Proof.: Induction carries \(H\)-orbits to \(G\)-orbits, so this follows by induction on cells, since induction preserves homotopy colimits.

**Lemma 5.1.2**.: _Any compact \(G\)-manifold \(M\) is \(G\)-homotopy equivalent to a finite \(G\)-CW-complex. In particular, if \(S^{V}\) is a \(G\)-representation sphere, then it is \(G\)-homotopy equivalent to a finite \(G\)-CW-complex._

Proof.: We will prove the lemma by induction on the dimension \(d\) of \(M\). By [11, Corollary 4.11], there exists an equivariant Morse function on \(M\), which decomposes \(M\) via a finite filtration \(M_{0}\subset M_{1}\subset\dots\subset M_{n}\). We have that \(M_{i+1}=M_{i}\cup_{S(V)}D(V)\), where \(V\) is an equivariant bundle over a \(G\)-orbit \(G/H\), and \(S(V)\subset D(V)\) are its associated sphere and disc bundles. Of course, \(D(V)\) is \(G\)-homotopy equivalent to the orbit \(G/H\), which is finite cellular by definition. The dimension of \(D(V)\) is at most the dimension \(d\) of \(M\), so the dimension of \(S(V)\) is strictly less. By induction, \(S(V)\) is also finite cellular. Thus, inducting on \(i\), we obtain that \(M\) is finite cellular as desired.

**Lemma 5.1.3**.: _The symmetric monoidal \(\infty\)-category \(G\mathsf{Spt}^{\mathrm{fin}}\) of genuine finite \(G\)-spectra has all duals. Likewise, the symmetric monoidal \(\infty\)-category \(G\mathsf{Spt}^{\mathrm{f.d.}}\) of finitely-dominated \(G\)-spectra has all duals._

Proof.: The second statement follows from the first because dualizable objects are closed under retracts (Lemma 3.2.8). For the first statement, we may assume by induction that this is true for all proper closed subgroups \(H\subset G\). Following [10, Theorem 4.10], it follows from the Wirthmüller isomorphism that the dual of an orbit \(G/H_{+}\) is given by inducing up \(S^{-L(H)}\) from \(H\)-spectra, where \(L(H)\) is the tangent space at the identity coset of \(G/H\). By Lemma 5.1.2, \(S^{L(H)}\) is a finite \(H\)-spectrum. If \(H=G\), then \(L(H)=0\) and \(S^{L(H)}=S^{0}\) is trivially a finite \(H\)-spectrum. Otherwise, \(S^{-L(H)}=(S^{L(H)})^{\vee}\) is a finite \(H\)-spectrum by induction on the subgroups of \(G\). Therefore by Lemma 5.1.1, the dual of \((G/H)_{+}\) is a finite \(G\)-spectrum. Since \(G\mathsf{Spt}^{\mathrm{fin}}\) is by definition generated under finite colimits and desuspensions by the orbits, it now follows from Lemma 3.2.8 that \(G\mathsf{Spt}^{\mathrm{fin}}\) has all duals.

We noted in Example 2.1.6 that for any \(G\)-representation \(V\), the \(1\)-point compactification \(S^{V}\) has twisted-trivial braiding in \(G\mathsf{Top}_{*}\). Let \(S\subset G\mathsf{Top}_{*}\) denote the set of finite-dimensional \(G\)-representation spheres.
For each \(S^{V}\in S\), Theorem 3.2.10 tells us that the \(\infty\)-category \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}\) decomposes as \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}=\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[{S^{V}}^{-1}]\times\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}/S^{V}\), where in the first factor \(S^{V}\) is \(\wedge\)-invertible, while in the other it is trivial. Our present goal is to show that the second factor vanishes, i.e.

**Theorem 5.1.4**.: _Let \(G\) be a compact Lie group, let \(S\subset G\mathsf{Top}_{*}\) denote the set of finite-dimensional \(G\)-representation spheres, and let \(S^{V}\in S\). Then \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}=0\)._

The proof of Theorem 5.1.4 is deferred to the next section, Section 5.2. In the meantime, let us pause to deduce a major result of this thesis:

**Corollary 5.1.5**.: _Let \(G\) be a compact Lie group, and let \(S\subset G\mathsf{Top}_{*}\) denote the set of finite-dimensional representation spheres. Then \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]=G\mathsf{Spt}\). That is, \(G\mathsf{Spt}\) is the free presentably symmetric monoidal stable \(\infty\)-category on \(G\mathsf{Top}_{*}\) where the representation spheres become dualizable._

Proof.: Let \(V\) be a finite-dimensional \(G\)-representation. By Theorem 5.1.4,

\[\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}=0\]

By the product decomposition, this implies that

\[\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]=\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1},(S^{V})^{-1}]\]

i.e. \(S^{V}\) is invertible in \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]\). Now, \(G\mathsf{Spt}\) is freely obtained from \(G\mathsf{Top}_{*}\) by inverting all the representation spheres \(S^{V}\) (see [10, Theorem A.2] for the case where \(G\) is finite, and [10, Corollary C.7] in general; in both cases the result is deduced from results of [11]). By this universal property, we obtain a symmetric monoidal, left adjoint functor \(G\mathsf{Spt}\to\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]\). The presentably symmetric monoidal \(\infty\)-category \(G\mathsf{Spt}\) is stable, and by Lemma 5.1.3, the representation spheres are dualizable in \(G\mathsf{Spt}\), so we obtain a functor in the other direction as well by the universal property of \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}\). The composite of the two functors in either order looks like the identity functor after precomposing with the canonical functor from \(G\mathsf{Top}_{*}\), and so by the universal property this composite is in fact the identity. Thus we have a symmetric monoidal equivalence under \(G\mathsf{Top}_{*}\) between \(G\mathsf{Spt}\) and \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]\) as claimed.

We may deduce from Corollary 5.1.5 several related universal properties:

**Corollary 5.1.6**.: _Let \(G\) be a compact Lie group. Let \(\Sigma_{G}^{\infty}:G\mathsf{Top}_{*}\to G\mathsf{Spt}\) denote the equivariant suspension functor. Then_

1. _The functor_ \(\Sigma_{G}^{\infty}\) _is the universal presentably symmetric monoidal functor from_ \(G\mathsf{Top}_{*}\) _to a presentably symmetric monoidal stable_ \(\infty\)_-category carrying each representation sphere to a dualizable object._
2. _The functor_ \(\Sigma_{G}^{\infty}\) _is the universal presentably symmetric monoidal functor from_ \(G\mathsf{Top}_{*}\) _to a presentably symmetric monoidal stable_ \(\infty\)_-category carrying each object of_ \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) _to a dualizable object._
3. _The functor_ \(\Sigma_{G}^{\infty}\) _is the universal presentably symmetric monoidal functor from_ \(G\mathsf{Top}_{*}\) _to a presentably symmetric monoidal stable_ \(\infty\)_-category carrying each compact object of_ \(G\mathsf{Top}_{*}\) _to a dualizable object._
4. _The functor_ \(\Sigma_{G}^{\infty}\) _is the universal compactly-generated symmetric monoidal functor from_ \(G\mathsf{Top}_{*}\) _to a compactly-generated, stable symmetric monoidal_ \(\infty\)_-category with duals for all compact objects._

_Now let \(\Sigma_{G}^{\infty,\mathrm{fin}}:G\mathsf{Top}_{*}^{\mathrm{fin}}\to G\mathsf{Spt}^{\mathrm{fin}}\) denote the restriction / corestriction of \(\Sigma_{G}^{\infty}\) to finite \(G\)-spaces / finite \(G\)-spectra._

5. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{fin}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) _to a stable symmetric monoidal_ \(\infty\)_-category carrying each representation sphere to a dualizable object._
6. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{fin}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) _to a stable symmetric monoidal_ \(\infty\)_-category carrying each object to a dualizable object._
7. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{fin}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{fin}}\) _to a stable symmetric monoidal_ \(\infty\)_-category with all objects dualizable._

_Finally, let \(\Sigma_{G}^{\infty,\mathrm{f.d.}}:G\mathsf{Top}_{*}^{\mathrm{f.d.}}\to G\mathsf{Spt}^{\mathrm{f.d.}}\) denote the restriction / corestriction of \(\Sigma_{G}^{\infty}\) to finitely-dominated \(G\)-spaces / finitely-dominated \(G\)-spectra._

8. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{f.d.}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{f.d.}}\) _to a right exact, idempotent-complete, stable symmetric monoidal_ \(\infty\)_-category carrying each representation sphere to a dualizable object._
9. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{f.d.}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{f.d.}}\) _to a right exact, idempotent-complete, stable symmetric monoidal_ \(\infty\)_-category carrying each object to a dualizable object._
10. _The functor_ \(\Sigma_{G}^{\infty,\mathrm{f.d.}}\) _is the universal right exact symmetric monoidal functor from_ \(G\mathsf{Top}_{*}^{\mathrm{f.d.}}\) _to a right exact, idempotent-complete, stable symmetric monoidal_ \(\infty\)_-category with all objects dualizable._

Proof.: Item 1 is the statement of Corollary 5.1.5. Item 2 then follows because every finite \(G\)-spectrum is dualizable (Lemma 5.1.3), and likewise Item 3 follows because every finitely-dominated \(G\)-spectrum is dualizable (again Lemma 5.1.3). Recall now that a symmetric monoidal left adjoint between compactly-generated symmetric monoidal \(\infty\)-categories is said to be _compactly-generated_ if it preserves compact objects (or equivalently, if its right adjoint preserves filtered colimits).
So for Item 4, it suffices to verify that \(G\mathsf{Spt}\) has duals for all compact objects (which is Lemma 5.1.3), that the functor \(\Sigma_{G}^{\infty}\) preserves compact objects (which it does), and that if \(\mathcal{K}\) is any compactly-generated symmetric monoidal \(\infty\)-category with duals for compact objects, and if \(F:G\mathsf{Top}_{*}\to\mathcal{K}\) is a compactly-generated symmetric monoidal functor, then the functor \(\tilde{F}:G\mathsf{Spt}\to\mathcal{K}\) induced by Item 3 also preserves compact objects. This last point follows because the compact objects \(G\mathsf{Spt}^{\mathrm{f.d.}}\subset G\mathsf{Spt}\) are contained in (in fact, coincide with) the closure of the image of \(\Sigma_{G}^{\infty,\mathrm{f.d.}}\) under idempotent splitting; by hypothesis, \(\tilde{F}\) carries the objects in the image of \(\Sigma_{G}^{\infty,\mathrm{f.d.}}\) to dualizable objects, and dualizable objects are closed under retracts (Lemma 3.2.8), so the result follows.

For Item 5, let \(F:G\mathsf{Top}_{*}^{\mathrm{fin}}\to\mathcal{C}\) be a right exact symmetric monoidal functor to a stable symmetric monoidal \(\infty\)-category carrying each representation sphere to a dualizable object. We may assume without loss of generality that \(\mathcal{C}\) is small. There is an induced symmetric monoidal left adjoint \(\mathrm{Ind}(F):G\mathsf{Top}_{*}\to\mathrm{Ind}(\mathcal{C})\), which continues to carry each representation sphere to a dualizable object. Thus by Item 1 we obtain an essentially unique extension \(\widetilde{\mathrm{Ind}(F)}:G\mathsf{Spt}\to\mathrm{Ind}(\mathcal{C})\). The restriction \(\tilde{F}:G\mathsf{Spt}^{\mathrm{fin}}\to\mathrm{Ind}(\mathcal{C})\) is right exact symmetric monoidal. Moreover, every object of \(G\mathsf{Spt}^{\mathrm{fin}}\) is in the image of \(\Sigma_{G}^{\infty,\mathrm{fin}}\) and therefore is contained in \(\mathcal{C}\). Thus \(\tilde{F}\) corestricts to an extension \(\bar{F}:G\mathsf{Spt}^{\mathrm{fin}}\to\mathcal{C}\) of \(F\). We have \(\mathrm{Ind}(\bar{F})=\widetilde{\mathrm{Ind}(F)}\); because \(\mathrm{Ind}\) is a fully faithful functor, the essential uniqueness of \(\bar{F}\) follows from the essential uniqueness of \(\widetilde{\mathrm{Ind}(F)}\). For Item 6, the argument is similar, using Item 2 instead of Item 1. Item 7 follows because every object of \(G\mathsf{Spt}^{\mathrm{fin}}\) is in fact dualizable (Lemma 5.1.3). Item 8 follows from Item 5, using that \(G\mathsf{Spt}^{\mathrm{f.d.}}\) is the idempotent completion of \(G\mathsf{Spt}^{\mathrm{fin}}\). Similarly, Item 9 follows from Item 6, and Item 10 follows from Item 7.

**Remark 5.1.7**.: Via the equivalence provided by the \(\mathrm{Ind}\) functor (known as \(\infty\)-categorical Gabriel-Ulmer duality), Items 8 to 10 also have equivalent statements in terms of compactly-generated symmetric monoidal \(\infty\)-categories. For example, under Gabriel-Ulmer duality the statement equivalent to Item 10 is Item 4; we have not recorded the other two equivalent statements explicitly.

### The proof of Theorem 5.1.4

This section is devoted to the proof of Theorem 5.1.4. The proof is by induction on the structure of the orbit category \(\mathcal{O}_{G}\). More precisely, Definition 5.2.3 associates to each _interval_ \(I\) in the lattice of conjugacy classes of closed subgroups \(H\subseteq G\) a pointed \(G\)-space \(S^{I}\).
Starting from the assumption of a symmetric monoidal left adjoint \(F:G\mathsf{Top}_{*}\to\mathcal{C}\) with \(\mathcal{C}\) stable presentably symmetric monoidal with compact unit and \(F(S^{V})=0\) for some representation sphere \(S^{V}\), a geometric argument first shows (Lemma 5.2.9) that if \(I=\overline{\{H\}}\) is a "singleton" interval, then \(F(S^{\overline{\{H\}}})=0\). Then comes (Theorem 5.2.10) an induction on the size of the interval \(I\), showing that \(F(S^{I})=0\) for any interval \(I\) (it is in this step that we use the hypothesis that \(\mathcal{C}\) has a compact unit; this is one delicate point). Taking \(I\) to be the whole lattice of subgroups, so that \(S^{I}=S^{0}\), we have \(F(S^{0})=0\), which implies that \(\mathcal{C}=0\). In a final step, the hypothesis that \(\mathcal{C}\) has a compact unit is removed, and Theorem 5.1.4 follows by the defining universal property of \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}\). The argument involves threading the needle between the infinitary setting of \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}\) and the finitary setting of \(\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}[(S^{1})^{-1}]/S^{V}\). The main argument of Theorem 5.2.10 is carried out in the infinitary setting but with a compactness assumption. On the one hand, the infinitary setting is necessary in order to exploit a certain infinite colimit (Lemma 5.2.8), while on the other hand the compactness is necessary in the inductive argument of Theorem 5.2.10, as alluded to above. Luckily, the universal properties involved here are rather robust (cf. Corollary 5.1.6), and it is possible to pass back and forth between these settings rather easily.

Footnote 7: In fact, if \(G\) is finite the argument will go through without the compactness hypothesis.

**Lemma 5.2.1** (Untwisting Lemma).: _For any \(X\in G\mathsf{Top}_{*}\), there is a \(G\)-homeomorphism between \(G_{+}\wedge X\) with the diagonal action, and \(G_{+}\wedge X\) with the action on \(G_{+}\) (forgetting the original action on \(X\))._

Proof.: We define a map \(\phi:G\times X\to G\times X\) by \((g,x)\mapsto(g,gx)\). This is equivariant from the left action \(\cdot^{l}\) to the diagonal action \(\cdot^{d}\): \(\phi(h\cdot^{l}(g,x))=\phi((hg,x))=(hg,hgx)\), while \(h\cdot^{d}\phi(g,x)=h\cdot^{d}(g,gx)=(hg,hgx)\). In the other direction, define \(\psi(g,x)=(g,g^{-1}x)\). Then \(\phi\) and \(\psi\) are inverse to each other (and so \(\psi\) is also equivariant). Moreover, these maps descend: we have \(G_{+}\wedge X=(G\times X)/(G\times *)\), and \(\phi(g,*)=(g,g*)=(g,*)\) while \(\psi(g,*)=(g,g^{-1}*)=(g,*)\).

**Lemma 5.2.2**.: _For any \(H\subseteq G\), and any finite-dimensional \(G\)-representation \(V\), the underlying space of \((S^{V})^{H}\) is a sphere._

Proof.: The \(H\)-fixed points \(V^{H}\) in the representation \(V\) are a linear subspace. So \((S^{V})^{H}=S^{V^{H}}\) is a sphere.

**Definition 5.2.3**.: Recall that we have fixed a compact Lie group \(G\). An _upset_ is an upwards-closed set in the lattice of conjugacy classes of subgroups of \(G\). A _downset_ is a downwards-closed set in the lattice of conjugacy classes of subgroups of \(G\), i.e. the complement of an upset. An _interval_ is the intersection of an upset and a downset. If \(S\) is a subset of the poset of closed subgroups of \(G\), write \(\overline{S}\) for the closure of \(S\) under conjugacy in \(G\).
In particular, if \(H\subseteq G\) is a closed subgroup, then \(\overline{\{H\}}\) denotes the conjugacy closure of the singleton set \(\{H\}\), and \(\overline{\downarrow H}\) denotes the collection of subgroups conjugate to a subgroup of \(H\). For any interval \(I\), define \(S^{I}\) to be the following pointed \(G\)-space, viewed as a presheaf on the orbit category: we have \((S^{I})^{H}=S^{0}\) for \(H\in I\) and \((S^{I})^{H}=0\) for \(H\not\in I\). All transition maps between \(S^{0}\)'s are identities, and all other transition maps are zero as they must be.

**Lemma 5.2.4**.: _For any interval \(I\), we have \(S^{I}\wedge S^{I}=S^{I}\) canonically. More generally, for any intervals \(I,J\), we have \(S^{I}\wedge S^{J}=S^{I\cap J}\), canonically._

**Lemma 5.2.5**.: _If \(D\) is a downset and \(U\) is the complementary upset, then there is a natural cofiber sequence \(S^{D}\to S^{0}\to S^{U}\)._

**Lemma 5.2.6**.: _For any conjugacy closure of a singleton \(\overline{\{H\}}\), the full subcategory of objects of the form \(X=S^{\overline{\{H\}}}\wedge Y\) consists of those pointed presheaves \(X\) on \(\mathcal{O}_{G}\) which send all orbits other than \(G/H\) to \(0\) (equivalently, pointed \(G\)-spaces \(X\) such that \(X^{K}\) is contractible for \(K\) not conjugate to \(H\)). This is monoidally equivalent to the \(\infty\)-category \(\mathsf{Top}_{*}^{W}\) of pointed Borel \(W\)-spaces, where \(W=W_{G}(H)=N_{G}(H)/H\) is the Weyl group of \(H\) in \(G\)._

This category is referred to as the category of _pointed \(G\)-spaces concentrated at \(H\)_.

Proof.: By Elmendorf's theorem, \(G{\sf Top}_{*}\) is equivalent to the category of \({\sf Top}_{*}\)-valued presheaves on the orbit category \(\mathcal{O}_{G}\); the equivalence sends a \(G\)-space \(X\) to the functor \(G/H\mapsto X^{H}\). The smash product is computed levelwise in this presheaf category. Thus smashing with \(S^{\overline{\{H\}}}\) kills the \(K\)-fixed points for \(K\) not conjugate to \(H\), and leaves the \(H\)-fixed points unchanged, verifying the first statement. From this we see that the category of pointed \(G\)-spaces concentrated at \(H\) is equivalent to the category of pointed presheaves on the full subcategory of the orbit category on the orbit \(G/H\). Recall that as a \(G\)-space, every endomorphism of \(G/H\) is an automorphism, and its automorphism group is the Weyl group \(W=W_{G}(H)=N_{G}(H)/H\). Thus this full subcategory is a connected \(\infty\)-groupoid equivalent to the \(1\)-object \(\infty\)-groupoid \(BW\) whose automorphism group is \(W\). As presheaves on \(BW\) are naturally identified with Borel \(W\)-spaces, the second statement follows.

**Lemma 5.2.7**.: _Under the equivalence of Lemma 5.2.6, \(S^{\overline{\{H\}}}\wedge(G/H)_{+}\) corresponds to \(W_{+}\in{\sf Top}_{*}^{W}\). Moreover, \(S^{\overline{\{H\}}}\) corresponds to \(S^{0}\), and the equivalence of course respects suspension._

Proof.: The equivalence functor from \(H\)-concentrated spaces to \(W\)-spaces is given by taking \(H\)-fixed points. And indeed, the \(H\)-fixed points of \((G/H)_{+}\) are \(W_{+}\) and the \(H\)-fixed points of \(S^{\overline{\{H\}}}\) are \(S^{0}\). The final statement is true of any equivalence.
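To orient the reader, it may help to record the simplest instance of these constructions (an illustrative example we add for concreteness; it is standard, but not spelled out in the surrounding text). Take \(G=C_{p}\) cyclic of prime order, whose only closed subgroups are \(e\) and \(C_{p}\). The downset \(D=\{e\}\) and its complementary upset \(U=\{C_{p}\}\) give the pointed \(G\)-spaces with fixed points
\[(S^{D})^{e}=S^{0},\quad(S^{D})^{C_{p}}=0,\qquad(S^{U})^{e}=0,\quad(S^{U})^{C_{p}}=S^{0},\]
so \(S^{D}\) is a model for \(EC_{p+}\) and \(S^{U}\) is a model for \(\widetilde{E}C_{p}\), and the cofiber sequence of Lemma 5.2.5 recovers the classical isotropy separation sequence \(EC_{p+}\to S^{0}\to\widetilde{E}C_{p}\). Moreover, Lemma 5.2.6 applied to \(H=e\) (so that \(W=C_{p}\)) identifies the pointed \(G\)-spaces concentrated at \(e\) with pointed Borel \(C_{p}\)-spaces.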
**Lemma 5.2.8**.: _For any compact Lie group \(W\), the Borel space \(W\in{\sf Top}^{BW}\) admits a free action of \(W\), whose homotopy orbits are the terminal object \({\rm pt}\in{\sf Top}^{BW}\). Thus, \(W_{+}\in{\sf Top}_{*}^{W}\) admits a \(W\)-action whose pointed homotopy orbits are \(S^{0}\in{\sf Top}_{*}^{W}\)._

Proof.: The second statement follows from the first because the functor \((-)_{+}:{\sf Top}^{BW}\to{\sf Top}_{*}^{W}\), which adds a disjoint basepoint, preserves colimits. If our \(W\)-spaces are left \(W\)-spaces, then the action in question comes from \(W\) acting on itself on the right. Because \({\rm pt}\in{\sf Top}^{BW}\) is terminal, there is a map \(W_{hW}\to{\rm pt}\). Because we are in the Borel setting, to see this is an equivalence it suffices to check on underlying spaces. Moreover, the "underlying space" functor \({\sf Top}^{BW}\to{\sf Top}\) preserves colimits, so we may compute the map \(W_{hW}\to{\rm pt}\) at the level of underlying spaces. Since the \(W\)-action on \(W\) is free, its homotopy orbits are its orbits, namely \({\rm pt}\). Thus the map \(W_{hW}\to{\rm pt}\) is an equivalence as claimed.

**Lemma 5.2.9**.: _Let \(F:G{\sf Top}_{*}\to\mathcal{C}\) be a symmetric monoidal left adjoint to a stable presentably symmetric monoidal \(\infty\)-category \(\mathcal{C}\). Let \(V\) be a finite-dimensional \(G\)-representation, and assume that \(F(S^{V})=0\). Then for any conjugacy closure of a singleton \(\overline{\{H\}}\), \(F(S^{\overline{\{H\}}})=0\)._

Proof.: We have that \(F(S^{\overline{\{H\}}}\wedge S^{V}\wedge(G/H)_{+})=0\), and under the equivalence of Lemma 5.2.6, the space \(S^{\overline{\{H\}}}\wedge S^{V}\wedge(G/H)_{+}\) corresponds to a pointed Borel \(W\)-space of the form \(S^{V^{H}}\wedge W_{+}\) by Lemma 5.2.7. By the untwisting lemma (Lemma 5.2.1), we have in turn that \(S^{V^{H}}\wedge W_{+}\simeq S^{n}\wedge W_{+}\), where \(S^{n}\) has the trivial action (and \(n={\rm dim}(V^{H})\)). By Lemma 5.2.7 again, we have that \(F(S^{\overline{\{H\}}}\wedge S^{n}\wedge(G/H)_{+})=0\). Now, \(F\) commutes with suspension and \(\mathcal{C}\) is stable, so it follows that \(F(S^{\overline{\{H\}}}\wedge(G/H)_{+})=0\). Using Lemma 5.2.7 again, Lemma 5.2.8 tells us that there is a \(W\)-action on \(S^{\overline{\{H\}}}\wedge(G/H)_{+}\) whose homotopy orbits are \(S^{\overline{\{H\}}}\). Because \(F\) preserves homotopy orbits (in fact it preserves all colimits), it follows that \(F(S^{\overline{\{H\}}})=0\).

**Theorem 5.2.10**.: _Let \(G\) be a compact Lie group. Let \(F:G\mathsf{Top}_{*}\to\mathcal{C}\) be a symmetric monoidal left adjoint to a stable presentably symmetric monoidal \(\infty\)-category \(\mathcal{C}\) with a compact unit. Let \(V\) be a finite-dimensional \(G\)-representation, and assume that \(F(S^{V})=0\). Then \(\mathcal{C}=0\)._

Proof.: Say that a map in \(G\mathsf{Top}_{*}\) is _local_ if it is carried to an equivalence by \(F\). Let \(\mathcal{U}\) be the collection of all upsets \(U\) such that \(S^{0}\to S^{U}\) is local. Then \(\mathcal{U}\) is upward-closed. For if \(U\subseteq U^{\prime}\) and \(S^{0}\to S^{U}\) is local, then because this map factors through \(S^{0}\to S^{U^{\prime}}\) we have that \(F(S^{0})\) is a retract of \(F(S^{U^{\prime}})\). Therefore, we have that \(F(S^{(U^{\prime})^{c}})=0\), so that \(S^{0}\to S^{U^{\prime}}\) is local and so \(U^{\prime}\in\mathcal{U}\). Moreover, \(\mathcal{U}\) is closed under codirected intersections. For \(S^{\cap_{i}U_{i}}=\varinjlim_{i}S^{U_{i}}\), and if the maps \(S^{0}\to S^{U_{i}}\) are local, then so is the map from \(S^{0}\) to the colimit, as the universal functor \(F\) preserves colimits.
Since \(\mathcal{U}\) is nonempty (containing the top element, as \(S^{0}\to S^{0}\) is local), by the previous paragraph it satisfies the dual hypotheses of Zorn's lemma. So let \(U\) be a minimal element of \(\mathcal{U}\). Then \(U\) itself is closed under codirected intersections (note that the lattice of conjugacy classes of closed subgroups of \(G\) has codirected intersections for any topological group \(G\)). For suppose otherwise: then we have \(H=\cap_{i}H_{i}\) with \(H_{i}\in U\) and \(H\not\in U\), and then \(U=\cup_{i}(U\setminus\overline{\downarrow H_{i}})\), and so \(S^{U}=\varinjlim_{i}S^{U\setminus\overline{\downarrow H_{i}}}\). As \(S^{0}\to S^{U}\) is local and \(F(S^{0})\) (the unit of \(\mathcal{C}\)) is compact by hypothesis, it follows that \(F(S^{0})\) is a retract of some \(F(S^{U\setminus\overline{\downarrow H_{i}}})\). Then as before, we see that \(F(S^{(U\setminus\overline{\downarrow H_{i}})^{c}})=0\), so that \(S^{0}\to S^{U\setminus\overline{\downarrow H_{i}}}\) is local, contradicting the minimality of \(U\).

Suppose for contradiction that \(U\) is nonempty. Then by the previous paragraph the dual hypotheses of Zorn's lemma apply and \(U\) has a minimal element \(H\). Then \(S^{\overline{\downarrow H}}\to S^{\overline{\downarrow H}}\wedge S^{U}\) is local. But \(F(S^{\overline{\downarrow H}}\wedge S^{U})=F(S^{\overline{\{H\}}})=0\) by Lemma 5.2.9. Therefore \(F(S^{\overline{\downarrow H}})=0\), and so \(S^{0}\to S^{(G\setminus\overline{\downarrow H})}\) is local by Lemma 5.2.5. Therefore \(S^{0}=S^{0}\wedge S^{0}\to S^{U}\wedge S^{(G\setminus\overline{\downarrow H})}=S^{U\setminus\overline{\{H\}}}\) is local, contradicting the minimality of \(U\).

Therefore \(U\) must be empty, and so in fact \(S^{0}\to S^{\emptyset}=0\) is local, i.e. \(F(S^{0})=0\). But \(F\) is strong monoidal and in particular \(F(S^{0})\) is the unit object of \(\mathcal{C}\). Thus \(\mathcal{C}=0\).

**Remark 5.2.11**.: In the beginning of the section, the proof of Theorem 5.2.10 was characterized as an induction on the structure of intervals \(I\), but in the actual proof this induction is phrased in terms of Zorn's lemma, obfuscating the content a bit. In the case where \(G\) is finite, there are finitely many upsets in \(G\), and moreover it is immediate that any upset \(U\) in \(G\) has a minimal element \(H\). So in this case, the two uses of Zorn's lemma may be avoided and replaced with a straightforward induction on the poset of upsets \(U\) in \(G\).

Proof of Theorem 5.1.4.: In Theorem 5.2.10, we may take \(\mathcal{C}=\mathrm{Ind}(\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}[(S^{1})^{-1}]/S^{V})\) and \(F\) to be induced by the identity functor on \(G\mathsf{Top}_{*}\) to conclude that

\[\mathrm{Ind}(\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}[(S^{1})^{-1}]/S^{V})=0\]

It follows that \(\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}[(S^{1})^{-1}]/S^{V}=0\). Now let \(F^{\prime}:\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}[(S^{1})^{-1}]/S^{V}\to\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}\) be universally induced by the inclusion \(\mathbb{D}_{S}^{\mathrm{rex}}G\mathsf{Top}_{*}^{\mathrm{fin}}\subset G\mathsf{Top}_{*}\). Because this functor is symmetric monoidal and its domain is \(0\), it follows that the codomain \(\mathbb{D}_{S}^{\mathsf{SMP}}G\mathsf{Top}_{*}[(S^{1})^{-1}]/S^{V}=0\) as desired.
2308.11137
Towards Validating Long-Term User Feedbacks in Interactive Recommendation Systems
Interactive Recommender Systems (IRSs) have attracted a lot of attention, due to their ability to model interactive processes between users and recommender systems. Numerous approaches have adopted Reinforcement Learning (RL) algorithms, as these can directly maximize users' cumulative rewards. In IRS, researchers commonly utilize publicly available review datasets to compare and evaluate algorithms. However, user feedback provided in public datasets merely includes instant responses (e.g., a rating), with no inclusion of delayed responses (e.g., the dwell time and the lifetime value). Thus, the question remains whether these review datasets are an appropriate choice to evaluate the long-term effects of the IRS. In this work, we revisited experiments on IRS with review datasets and compared RL-based models with a simple reward model that greedily recommends the item with the highest one-step reward. Following extensive analysis, we reveal three main findings: First, a simple greedy reward model consistently outperforms RL-based models in maximizing cumulative rewards. Second, applying higher weighting to long-term rewards leads to a degradation of recommendation performance. Third, user feedback has only marginal long-term effects in the benchmark datasets. Based on our findings, we conclude that a dataset has to be carefully verified and that a simple greedy baseline should be included for a proper evaluation of RL-based IRS approaches.
Hojoon Lee, Dongyoon Hwang, Kyushik Min, Jaegul Choo
2023-08-22T02:34:47Z
http://arxiv.org/abs/2308.11137v1
# Towards Validating Long-Term User Feedbacks in Interactive Recommendation Systems

###### Abstract.

Interactive Recommender Systems (IRSs) have attracted a lot of attention, due to their ability to model interactive processes between users and recommender systems. Numerous approaches have adopted Reinforcement Learning (RL) algorithms, as these can directly maximize users' cumulative rewards. In IRS, researchers commonly utilize publicly available review datasets to compare and evaluate algorithms. However, user feedback provided in public datasets merely includes instant responses (e.g., a rating), with no inclusion of delayed responses (e.g., the dwell time and the lifetime value). Thus, the question remains whether these review datasets are an appropriate choice to evaluate the long-term effects in IRS. In this work, we revisited experiments on IRS with review datasets and compared RL-based models with a simple reward model that greedily recommends the item with the highest one-step reward. Following extensive analysis, we reveal three main findings: First, a simple greedy reward model consistently outperforms RL-based models in maximizing cumulative rewards. Second, applying higher weighting to long-term rewards leads to a degradation of recommendation performance. Third, user feedback has only marginal long-term effects in the benchmark datasets. Based on our findings, we conclude that a dataset has to be carefully verified and that a simple greedy baseline should be included for a proper evaluation of RL-based IRS approaches.

Interactive Recommender System, Reinforcement Learning

## 2. Related Work

An interactive recommender system (IRS) is designed to model the sequential interaction between a user and the recommender system. Traditionally, contextual bandits (Kang et al., 2015; Li et al., 2016; Li et al., 2017; Li et al., 2018) are used to learn the empirical utilities from the online interaction while handling the explore/exploit dilemma of the recommendations. However, learning directly from the online experience is expensive and may hurt the user experience. Therefore, the focus has shifted to learning the recommender system from the user's logged behavior data. Recent works on IRS have adopted RL algorithms to identify optimal recommendation strategies from the logged behavior data. These approaches have shown great success in real-world e-commerce and social media / networking platforms such as _Alibaba_ (Li et al., 2017), _TikTok_ (TikTok, 2017), and _YouTube_ (Li et al., 2018). However, such works utilize private datasets, making them inaccessible to the research community. As an alternative, researchers commonly rely on public review datasets to compare and develop RL-based recommendation algorithms. The RL-based IRS approaches can be mainly categorized into two groups, policy-based and value-based methods. Policy-based methods attempt to learn an optimal recommendation policy by applying policy gradients to a parameterized policy (Li et al., 2016; Li et al., 2017; Li et al., 2018). On the other hand, value-based methods aim to learn the Q-value for each state and perform the action with the highest Q-value (Li et al., 2017; Li et al., 2018). Actor-critic algorithms (Li et al., 2017; Li et al., 2018), which integrate policy- and value-based methods, have also been studied.
There, the recommendation policy serves as an "actor" and is trained to maximize the value from the trained "critic".

## 3. Problem Formulation

We first illustrate how the recommender system and a user jointly build an interactive process. As the user enters the service or platform, the recommender system constructs the user's profile based on the items they have interacted with and the corresponding feedback. By inferring the user's latent interests behind the interaction history, the system provides an item to the user and receives feedback on the recommended item. The feedback can be either explicitly provided, such as via ratings, or inferred from an implicit reaction such as views or clicks. After receiving the feedback, the system updates the user's profile and continues to recommend the next item. This interaction loop proceeds until the user leaves the platform. The goal of the recommender system is to maximize the cumulative "rewards" during the interaction, which are the numerical representation of the feedback. Following previous work (Li et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018), this interactive process is formulated as a Markov Decision Process (MDP). Formally, the MDP consists of a tuple \((\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P},\gamma)\) as follows:

* \(\mathcal{S}:\) a continuous state space to describe the user's profile. A state \(s_{t}^{u}\in\mathcal{S}\) is defined as the previous interaction history of the user \(u\) before time-step \(t\). For each time-step \(t\), the interaction consists of the recommended item \(i_{t}^{u}\) and its corresponding feedback \(f_{t}^{u}\). \[s_{t}^{u}=\{(i_{1}^{u},f_{1}^{u}),(i_{2}^{u},f_{2}^{u}),...,(i_{t-1}^{u},f_{t-1}^{u})\}\] (1) In this work, we consider the user's provided rating for the recommended item \(i_{t}^{u}\) as the feedback.
* \(\mathcal{A}:\) a discrete action space with candidate items to recommend. An action \(a_{t}^{u}\in\mathcal{A}\) denotes the item recommended by the recommender system at time-step \(t\) for the user \(u\).
* \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, where \(r(s_{t}^{u},a_{t}^{u})\) denotes the immediate reward from user \(u\) at state \(s_{t}^{u}\) by taking action \(a_{t}^{u}\). The flexibility of designing the reward function allows the integration and optimization of the user's diverse feedbacks. Following (Li et al., 2018; Li et al., 2018), we set the provided rating as the reward.
* \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the state transition probability.
* \(\gamma\) is a discount factor that controls the weights of future rewards.

Without loss of generality, we aim to learn the recommendation policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) which maximizes the cumulative rewards:

\[\pi^{*}=\operatorname*{argmax}_{\pi}\mathbb{E}[\sum_{u=1}^{U}\sum_{t=1}^{T}r(s_{t}^{u},a_{t}^{u})] \tag{2}\]

where \(T\) denotes the total number of steps of the interaction process and \(U\) denotes the number of all users.

## 4. Revisiting IRS Experiments

In this section, we revisit the experiments of the IRS papers (Li et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018) that applied RL algorithms to recommender systems. Although these RL-based recommendation models directly optimize the long-term satisfaction of the users, we show that a simple greedy recommendation model yields competitive or even better results.
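To make the interaction protocol concrete, the following minimal sketch (ours; the `policy` and `simulator` callables are hypothetical stand-ins for the components described above) rolls out one user episode and accumulates the rewards that Eq. (2) sums over:

```python
from typing import Callable, List, Tuple

def rollout(policy: Callable, simulator: Callable,
            history: List[Tuple[int, float]], T: int = 40) -> float:
    """Run the interactive loop for T steps and return the cumulative reward.

    `history` is the state s_t: a list of (item, feedback) pairs.
    `policy(state)` returns the next item to recommend; `simulator(state, item)`
    returns the user's (simulated) rating, which serves as the reward.
    """
    state = list(history)
    total_reward = 0.0
    for _ in range(T):
        item = policy(state)             # a_t: recommend an item
        reward = simulator(state, item)  # r(s_t, a_t): the user's rating
        total_reward += reward
        state.append((item, reward))     # s_{t+1}: update the user profile
    return total_reward
```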
### Experimental Setup

**Datasets** We evaluate our model using four standard recommendation review datasets in IRS: EachMovie, Movielens-1M, -20M, and Netflix, where explicit ratings are available for each interaction. For each dataset, interacted items are grouped by users and ordered by timestamps. Following (Li et al., 2018), we only kept users and items that have at least five interactions. Statistical details are outlined in Table 1.

**Models** We compared the following representative baselines:

* Random: A model that randomly recommends items.
* POP: A model that recommends the most popular item.
* SASRec (Li et al., 2017): A uni-directional Transformer model that is trained to predict the user's next interacted item.
* DQNR (Li et al., 2018): A DQN (Li et al., 2018)-based model that estimates the Q-value for each action and recommends the item with the highest Q-value.
* NICF (Li et al., 2018): A model similar to DQNR where the target Q-value is computed within the user's previously interacted items.
* SQN (Li et al., 2018): A model that jointly optimizes the Q-value and the probability of interaction for each item.
* SAC (Li et al., 2018): The actor-critic version of SQN where the interaction probability is weighted by the estimated Q-value.
* DDPGR [(20)]: A DDPG [(9)]-based model that addresses the large discrete action space by learning a ranking vector.
* GreedyRM: Our proposed simple baseline that estimates the one-step reward for each action and greedily recommends the item with the highest reward (i.e., DQNR with \(\gamma=0\)).

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline Dataset & \# Users & \# Items & \# Int & Avg.Int \\
\hline EachMovie & 56,071 & 1,613 & 2,798,088 & 49.90 \\
ML-1M & 6,040 & 3,416 & 999,611 & 165.50 \\
ML-20M & 138,493 & 18,345 & 19,984,024 & 144.30 \\
Netflix & 472,987 & 17,769 & 100,461,928 & 212.40 \\
\hline \hline
\end{tabular}
\end{table} Table 1. Statistics of the processed dataset. The Avg.Int denotes the average number of interacted items per user.

**Architecture** For a fair comparison between the learning algorithms, we unified the network architecture, using a uni-directional Transformer as in SASRec [(6)] to encode the states. Given a state \(s_{t}\), the input embedding is constructed by summing up the embedded items \([i_{1},\dots,i_{t}]\), the feedbacks \([f_{1},\dots,f_{t}]\), and a pre-defined sinusoidal positional embedding. Then, the input embedding is fed into the self-attention block to construct the hidden representations \([h_{1},\dots,h_{t}]\). The last hidden representation \(h_{t}\) is passed onto the prediction head with two fully-connected layers and projected onto the item embedding space. For each item, the projection score indicates the reward for GreedyRM, the Q-value for DQNR and NICF, and the un-normalized log probability for SASRec. SQN, SAC, and DDPGR have two separate prediction heads, one for the policy and another for the Q-value.

**Training** We re-implemented the aforementioned baselines by following the details of each paper. Following [(6; 24)], we set the number of layers to \(N=2\), the hidden dimension size to \(d=64\), the maximum sequence length to \(s=200\), and the discount factor \(\gamma\) to 0.95. For training, we used the Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\), applying gradient clipping when the gradient's \(l_{2}\) norm exceeds 5, and set the batch size to 256. For all models, we tuned the hyper-parameters based on performance, varying the learning rate \(lr\) over \(\{0.01,0.001,0.0001\}\) and the \(l_{2}\) regularization weight \(wd\) over \(\{0.001,0.0001,0.00001\}\).
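To make the relationship between GreedyRM and the value-based baselines concrete, the following minimal sketch (ours; the tensor names are hypothetical stand-ins for the outputs of the prediction heads described above) contrasts the two regression targets used to fit the item scores. Setting \(\gamma=0\) in the DQNR target recovers the GreedyRM target exactly:

```python
import torch

def dqnr_target(reward: torch.Tensor, next_q: torch.Tensor,
                gamma: float = 0.95) -> torch.Tensor:
    """One-step TD target r_t + gamma * max_a' Q(s_{t+1}, a') used by DQNR."""
    return reward + gamma * next_q.max(dim=-1).values

def greedy_rm_target(reward: torch.Tensor) -> torch.Tensor:
    """GreedyRM regresses the immediate reward only: the gamma = 0 special case."""
    return reward

# Either target is fitted with the same squared error on the taken action:
#   pred = scores.gather(-1, action[:, None]).squeeze(-1)
#   loss = ((pred - target) ** 2).mean()
```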
### Evaluation Protocol

Following [(23; 24; 1)], the datasets were split into three disjoint sets: 85% of users for training, 5% of users for validation, and 10% for testing. In the evaluation, the user profile was constructed based on each user's initial 40 steps of interaction history, and the interactive recommendation process was then run for 40 time-steps to measure the recommendation performance. However, unless one happens to own a service or a platform, obtaining true feedback from the test user is infeasible and is also prohibitively expensive. Therefore, a common evaluation protocol is to build an environment simulator that mimics the user's behavior and to measure the performance within the learned simulator. Often, matrix factorization is used as the simulator to predict the user's behavior [(23; 5; 10)]. However, matrix factorization is limited in modeling the dynamic changes of the user's preferences and hence produces predictions that deviate from the actual user's behavior. Therefore, to accurately model the user's feedback, we construct the simulator based on the self-attentive architecture shown in Figure 1. We report the RMSE value of our simulator for each dataset in Table 3 to assess the effectiveness of the constructed simulator.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline Methods & EachMovie & ML-1M & ML-20M & Netflix \\
\hline Matrix Factorization & 1.402 & 0.990 & 0.979 & 1.073 \\
Transformer & **1.184** & **0.826** & **0.949** & **0.852** \\
\hline \hline
\end{tabular}
\end{table} Table 3. RMSE value of the environment simulators.

Figure 1. Unified architecture for the recommendation models. Each model only differs in the Predictor Head.

**Metrics** We use three standard evaluation metrics to measure the recommendation performance during \(T\) time-steps of interactions: average reward (RW@T) (i.e., cumulative rewards divided by \(T\)), precision (PR@T), and recall (RC@T). Following [(23; 24)], the precision and recall were computed by setting the positive items as the ground truth labels. We define an item to be positive if the rating provided by the simulator exceeds 5 points in EachMovie and 4 points in the other datasets.
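As a minimal illustration of how these metrics are computed for a single test user (our sketch; `ratings` is a hypothetical mapping from item ids to simulated ratings, and the positivity threshold follows the rule above):

```python
def evaluate_user(recommended: list, ratings: dict, threshold: float = 4.0):
    """Return (RW@T, PR@T, RC@T) for one simulated interaction trajectory."""
    T = len(recommended)
    rw = sum(ratings[i] for i in recommended) / T               # average reward
    positives = {i for i, r in ratings.items() if r > threshold}
    hits = sum(1 for i in recommended if i in positives)        # recommended positives
    pr = hits / T                                               # precision
    rc = hits / max(len(positives), 1)                          # recall
    return rw, pr, rc
```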
Figure 1. Unified architecture for the recommendation models. Each model differs only in the prediction head.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Methods & EachMovie & ML-1M & ML-20M & Netflix \\ \hline Matrix Factorization & 1.402 & 0.990 & 0.979 & 1.073 \\ Transformer & **1.184** & **0.826** & **0.949** & **0.852** \\ \hline \hline \end{tabular} \end{table} Table 3. RMSE values of the environment simulators.

### Experimental Results

Table 2 summarizes the performance of all baseline models on each benchmark dataset. We observe the following. The RL-based models DQNR and NICF consistently outperformed the traditional recommendation models (i.e., Random, POP, SASRec), which do not consider the long-term utility of the users. The poor performance of SQN and SAC indicates that jointly optimizing the next-item prediction task is not beneficial. In addition, DDPGR also struggled to learn an effective recommendation policy due to the lack of access to online interactions. Surprisingly, our simple baseline, GreedyRM, achieved the best performance on all metrics for the EachMovie, ML-20M, and Netflix datasets. Although DQNR performed best on the ML-1M dataset, GreedyRM achieved results competitive with DQNR. GreedyRM recommends items that maximize the immediate one-step reward and is equivalent to DQNR with a discount factor of \(\gamma=0\). This naturally raises the question of whether putting more weight on future rewards harms recommendation quality.

### Influence of Future Rewards

To further examine the influence of accounting for future rewards, we conducted an ablation study over the discount factor \(\gamma\). If the long-term effects of user feedback are significant, assigning a higher weight to future rewards (i.e., a large discount factor) should be beneficial. If the long-term effects are insignificant, however, a greater weight on future rewards may act as extra noise rather than as a useful learning signal. For all experiments, we varied the discount factor from 0 to 0.99 and fixed the remaining hyper-parameters to each model's optimal configuration. Figure 2 displays the average reward (i.e., RW@40) for each discount factor. We observe that the RL-based models show a gradual decrease in cumulative reward as more weight is put on future rewards. We therefore suspect that long-term effects are largely absent from the user feedback in these benchmark review datasets. If this is the case, a simple greedy one-step recommendation model should be able to directly maximize the cumulative rewards. We thus further investigate whether a greedy algorithm can maximize the cumulative rewards in the review datasets.

### Analyzing Long-Term Effects in Datasets

Here, we investigate the significance of long-term effects in the public review datasets. By comparing the performance of the greedy and optimal recommendation policies, we can verify whether the greedy algorithm maximizes the cumulative rewards (i.e., whether long-term effects are absent from the dataset). The performance of the optimal policy can be measured by searching over all actions with the simulator until the last time step and selecting the action sequence that maximizes the cumulative rewards.
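A sketch of this search and its beam-search approximation, used in the following paragraph, is given below; the `simulator.predict_reward` interface is a hypothetical stand-in, and setting \(k=1\) recovers the greedy policy.

```python
def beam_search_rollout(simulator, history, items, T=40, k=10):
    """Approximate the optimal action sequence with beam search over a
    learned reward simulator (interface names are hypothetical)."""
    beams = [([], 0.0, history)]                   # (actions, total reward, state)
    for _ in range(T):
        candidates = []
        for actions, total, state in beams:
            for item in items:
                if item in actions:
                    continue                        # no repeated recommendations
                r = simulator.predict_reward(state, item)
                candidates.append((actions + [item], total + r,
                                   state + [(item, r)]))
        # keep the k highest-scoring partial trajectories
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:k]
    return beams[0]                                 # best full trajectory found
```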
However, this measurement is computationally infeasible, since it requires a search over all possible actions for 40 time steps (i.e., \(|A|^{40}\) candidate sequences). To approximate the optimal recommendation performance with the simulator, we therefore adopt beam search (Han et al., 2017), a commonly used decoding algorithm for generating sequences while accounting for long-term effects (Beng et al., 2017). Beam search maintains a beam of \(k\) candidate trajectories and updates them incrementally by ranking their extensions according to the objective score. Since the memory requirement of beam search is proportional to the action-space size, and the review datasets have large action spaces (e.g., 17,769 items for ML-20M), we use beam sizes \(k\in\{1,10\}\). Note that beam search with \(k=1\) is identical to the greedy policy.

Table 4 reports the relative recommendation performance of the greedy search (i.e., beam search with \(k=1\)) with respect to the beam-search performance at \(k=10\). The greedy search attains more than 99.5% of the beam-search performance on the EachMovie, ML-20M, and Netflix datasets. This implies that the benefit of considering long-term effects is marginal and explains the strong performance of GreedyRM on these datasets in Table 2. We also observe a slight performance gap between the greedy search and the beam search on ML-1M. This indicates that considering long-term effects can be slightly beneficial for ML-1M, which matches our experimental findings in Table 2, where DQNR achieved the best performance on ML-1M.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Beam size & EachMovie & ML-1M & ML-20M & Netflix \\ \hline \(k=1\) & 0.9960 & 0.9844 & 0.9981 & 0.9974 \\ \hline \hline \end{tabular} \end{table} Table 4. Relative performance of the greedy policy against the performance of the beam search at \(k=10\).

Figure 2. Performance comparison of RL-based recommendation models with varying discount factor \(\gamma\).

## Conclusion and Discussion

Recently, much community effort in IRS has been devoted to developing RL-based recommendation algorithms that model the long-term effects between recommendations. Our findings, however, imply that the benefits of accounting for long-term effects can be marginal on the public review datasets. To accurately benchmark RL-based recommendation algorithms, it is therefore crucial to validate the significance of long-term effects prior to evaluation. We suggest that an evaluation protocol should (i) perform a dataset validation procedure that compares beam-search and greedy-search performance to verify the existence of beneficial long-term rewards; and (ii) include a simple reward model that greedily selects items as a baseline. We will make our code publicly available to ensure the reproducibility of our work.

###### Acknowledgements. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), and No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques).
2301.05449
Doping control of magnetism and emergent electromagnetic induction in high-temperature helimagnets
Ac current-driven motions of spiral spin textures can give rise to emergent electric fields acting on conduction electrons. This in turn leads to the emergent electromagnetic induction effect which may realize quantum inductor elements of micrometer size. ${\rm YMn}_{6}{\rm Sn}_{6}$ is a helimagnet with a short helical period (2-3 nm) that shows this type of emergent inductance beyond room temperature. To identify the optimized materials conditions for ${\rm YMn}_{6}{\rm Sn}_{6}$-type room-temperature emergent inductors, we have investigated emergent electromagnetic inductance (EEMI) as the magnetism is modified through systematic partial substitution of Y by Tb. By small angle neutron scattering and inductance measurements, we have revealed that the pinning effect on the spin-helix translational mode by Tb doping selectively and largely suppresses the negative component of EEMI, while sustaining the positive inductance arising from the spin tilting mode. We also find that in addition to the spin helix, even the spin-collinear antiferromagnetic structure can host the positive EEMI due to thermally enhanced spin fluctuations. The present study highlights the facile control of both the magnitude and sign of EEMI beyond room temperature, and thus suggests a route to expand the range of emergent inductor candidate materials.
Aki Kitaori, Jonathan S. White, Naoya Kanazawa, Victor Ukleev, Deepak Singh, Yuki Furukawa, Taka-hisa Arima, Naoto Nagaosa, Yoshinori Tokura
2023-01-13T09:14:13Z
http://arxiv.org/abs/2301.05449v1
# Doping control of magnetism and emergent electromagnetic induction in high-temperature helimagnets

###### Abstract

Ac current-driven motions of spiral spin textures can give rise to emergent electric fields acting on conduction electrons. This in turn leads to the emergent electromagnetic induction effect which may realize quantum inductor elements of micrometer size. YMn\({}_{6}\)Sn\({}_{6}\) is a helimagnet with a short helical period (2-3 nm) that shows this type of emergent inductance beyond room temperature. To identify the optimized materials conditions for YMn\({}_{6}\)Sn\({}_{6}\)-type room-temperature emergent inductors, we have investigated emergent electromagnetic inductance (EEMI) as the magnetism is modified through systematic partial substitution of Y by Tb. By small angle neutron scattering and inductance measurements, we have revealed that the pinning effect on the spin-helix translational mode by Tb doping selectively and largely suppresses the negative component of EEMI, while sustaining the positive inductance arising from the spin tilting mode. We also find that in addition to the spin helix, even the spin-collinear antiferromagnetic structure can host the positive EEMI due to thermally enhanced spin fluctuations. The present study highlights the facile control of both the magnitude and sign of EEMI beyond room temperature, and thus suggests a route to expand the range of emergent inductor candidate materials.

## I. Introduction

The inductor, one of the most important elements of contemporary electric circuits, is characterized by the relation \(V=L\,dI/dt\), where \(V\), \(I\), and \(L\) are the voltage, current, and inductance, respectively. Since the \(L\) of a conventional inductor coil is proportional to \(n^{2}A\), with \(n\) the number of coil windings and \(A\) the coil cross-section, it is technically difficult to reduce the dimensions of a coil-form inductor. Recently, a new and simpler scheme of electromagnetic induction has been proposed that may dramatically miniaturize the inductor element, namely via the use of current-induced spin dynamics in a helical-spin system [1]. The idea is based on the time-dependent emergent electromagnetic field, or Berry-phase dynamics [2; 3], of the conduction electrons flowing through a helical spin texture [1]. Noncoplanar spin textures, as exemplified by magnetic skyrmions and spin hedgehogs [4], are endowed with a nonzero scalar spin chirality \(\mathbf{S}_{i}\cdot(\mathbf{S}_{j}\times\mathbf{S}_{k})\), with \(i\), \(j\), and \(k\) being neighboring spin sites, which exerts an emergent magnetic field on the conduction electrons [2; 3]. The effect of the scalar spin chirality has been investigated via topological or geometrical Hall effects stemming from the real-space emergent magnetic field \(\mathbf{b}\) [5; 6; 7]; here \(b_{i}=\frac{\hbar}{2e}\epsilon_{ijk}\,\mathbf{n}\cdot(\partial_{j}\mathbf{n}\times\partial_{k}\mathbf{n})\), where \(\epsilon_{ijk}\) is the Levi-Civita symbol and \(\mathbf{n}=\mathbf{S}/|\mathbf{S}|\). In recent years, the dynamics of emergent fields have also attracted great attention [1; 8]. The time (\(t\)) evolution of the emergent magnetic field \(\mathbf{b}\) can give rise to an emergent electric field \(\mathbf{e}\) according to the generalized Faraday's law [9], \(\mathbf{\nabla}\times\mathbf{e}=-d\mathbf{b}/dt\), or equivalently \(e_{i}=\frac{\hbar}{e}\mathbf{n}\cdot(\partial_{i}\mathbf{n}\times\partial_{t}\mathbf{n})\), giving rise to the emergent electromagnetic induction phenomenon [9; 10; 11; 12; 13; 14; 15].
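As a concreteness check of the formula for \(\mathbf{e}\) above, the following minimal NumPy sketch evaluates the geometric factor \(\mathbf{n}\cdot(\partial_{z}\mathbf{n}\times\partial_{t}\mathbf{n})\) by finite differences for a proper-screw helix with a small time-dependent tilt \(m(t)\) out of the spiral plane, and compares it with the analytic value \(q\,\partial m/\partial t\). All numerical parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Finite-difference check of the emergent electric field of a spin helix.
# Only the geometric factor n . (d_z n x d_t n) is evaluated; the hbar/e
# prefactor is omitted. Parameter values are illustrative.
q = 2 * np.pi / 3.0e-9           # helix wavevector for a ~3 nm pitch (1/m)

def m_of_t(t):
    # small time-dependent tilt out of the spiral plane, mimicking the
    # current-driven tilting mode at f = 500 Hz
    return 0.05 * np.sin(2 * np.pi * 500.0 * t)

def n(z, t):
    m = m_of_t(t)
    a = np.sqrt(1.0 - m**2)      # keep |n| = 1
    return np.array([a * np.cos(q * z), a * np.sin(q * z), m])

z0, t0, dz, dt = 1.0e-9, 1.0e-4, 1e-12, 1e-9
dn_dz = (n(z0 + dz, t0) - n(z0 - dz, t0)) / (2 * dz)
dn_dt = (n(z0, t0 + dt) - n(z0, t0 - dt)) / (2 * dt)
geometric = np.dot(n(z0, t0), np.cross(dn_dz, dn_dt))

dm_dt = (m_of_t(t0 + dt) - m_of_t(t0 - dt)) / (2 * dt)
print(geometric, q * dm_dt)      # the two numbers agree: e_q ~ q * dm/dt
```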
Among the ideas to detect or utilize the emergent electric field, the emergent electromagnetic inductance (EEMI) based on the dynamics of spin spiral structures stands out in light of its large signal magnitude [1]. Spiral spin textures are noncollinear, with a finite vector spin chirality \(\mathbf{S}_{i}\times\mathbf{S}_{j}\), but occasionally take a coplanar structure (e.g., a proper screw) with no static scalar spin chirality \(\mathbf{S}_{i}\cdot(\mathbf{S}_{j}\times\mathbf{S}_{k})\). Nevertheless, the dynamically swept noncoplanar spin structure under an ac current excitation along the helix propagation vector (\(\mathbf{q}\)) can generate an emergent electric field \(\mathbf{e}\) along \(\mathbf{q}\), expressed as \(e_{q}=\frac{\hbar q}{2e}\frac{\partial m}{\partial t}\), where \(m\) is the projection of \(\mathbf{n}\) along \(\mathbf{q}\), according to the above formula for \(\mathbf{e}\) [1]. This emergent electric field is \(90^{\circ}\) out of phase with the applied ac current and can be equated with the electric field caused by the imaginary part of the complex impedance. The magnitude of the emergent electric field increases with the ac current frequency, owing to the time derivative of the spin motion. Therefore, this imaginary impedance behaves as an inductance \(L\). Note that \(L\) is expected to be proportional to \(q\); namely, a shorter helix period \(\lambda\) is favorable for attaining a larger \(L\). This is the emergent electromagnetic inductance (EEMI) caused by the spin-tilting mode out of the spin-spiral plane, as shown in Fig. 1(a). To be precise, the ac current excitation of the spin helix can induce not only the tilting mode but also the spin-helix translational mode, or phason mode [Fig. 1(b)], in which the helical spins rotate uniformly within the plane so that the helix appears to propagate along the \(\mathbf{q}\) direction. This phason excitation is originally a gapless Nambu-Goldstone mode but acquires a gap at the extrinsic pinning frequency \(\omega_{\rm pin}\) when the helix is subject to commensurate or impurity pinning. Then, depending on whether the observation frequency \(\omega_{\rm obs}\) is lower (higher) than \(\omega_{\rm pin}\), the inductance \(L\) of the phason mode takes a negative (positive) value, whereas the tilting-mode contribution always gives \(L>0\) [16; 17]. In reality, the EEMI arising from the spin-helix dynamics may be positive or negative, depending on the commensurate/incommensurate modulation, magnetic field, temperature, and ac current amplitude [16; 18]. In this context, controlling the commensurate- or impurity-pinning effects through chemical doping of the target materials may be useful for identifying the microscopic origin of the emergent inductance. The EEMI of spin-helix states was demonstrated experimentally for the first time in the short-period helical magnet Gd\({}_{3}\)Ru\({}_{4}\)Al\({}_{12}\) at low temperatures below 20 K [17] and was found to show inductance values comparable with those of commercial coil inductors in spite of the much smaller, \(\mu\)m-sized device [17]. Recently, room-temperature emergent induction phenomena have been observed in the spiral magnet YMn\({}_{6}\)Sn\({}_{6}\), whose transition temperature to the helical spin order lies beyond room temperature [19].
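To build intuition for the sign change of the phason contribution discussed above, here is a toy damped-oscillator sketch of a pinned mode driven at frequency \(\omega_{\rm obs}\). The overall sign of the coupling is fixed by hand so that the out-of-phase response reproduces the phenomenology stated in the text (negative effective \(L\) below \(\omega_{\rm pin}\), positive above); the microscopic sign and magnitude depend on details treated in [16], and all parameter values are illustrative.

```python
import numpy as np

# Toy pinned-phason response: damped harmonic mode with pinning gap w_pin,
# driven at w_obs. The effective inductance is taken proportional to
# -Re[1 / (w_pin**2 - w**2 + 1j*gamma*w)], a sign convention put in by hand
# to reproduce the negative-below / positive-above-w_pin behavior.
w_pin = 2 * np.pi * 3e3          # pinning frequency ~ a few kHz (illustrative)
gamma = 2 * np.pi * 0.5e3        # damping rate (illustrative)

def L_eff(w):
    return -np.real(1.0 / (w_pin**2 - w**2 + 1j * gamma * w))

for f in (0.5e3, 2e3, 10e3):     # observation frequencies in Hz
    w = 2 * np.pi * f
    print(f"{f/1e3:5.1f} kHz  sign(L) = {np.sign(L_eff(w)):+.0f}")
# -> negative below w_pin, positive above, crossing zero near w_pin
```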
The magnetic phase diagram [e.g., Fig. 1(e)], with the progressive magnetic-field-induced variation from screw, through transverse conical and fan-like, to forced spin-collinear ferromagnetic phases, is analogous between the two compounds, YMn\({}_{6}\)Sn\({}_{6}\) and Gd\({}_{3}\)Ru\({}_{4}\)Al\({}_{12}\), which share a similar magnetic Kagome lattice. Notably, the magnetic transition temperatures differ by more than one order of magnitude (400 K vs. 20 K), and the magnetic modulation direction differs as well: along the normal to the Kagome plane in YMn\({}_{6}\)Sn\({}_{6}\) and within the Kagome plane in Gd\({}_{3}\)Ru\({}_{4}\)Al\({}_{12}\). That the EEMI phenomena are robustly observed in both helimagnetic systems in spite of such dramatic differences in spin-helix structure and magnetic energy scale is highly nontrivial; this should be taken as an important indication that the EEMI emerges as a consequence of the general physics of spin-helix dynamics. Therefore, in this study we have reinvestigated the detailed phase space occupied by the spin helix using small-angle neutron scattering (SANS), in order to reveal the connection with the EEMI characteristics. In the case of Gd\({}_{3}\)Ru\({}_{4}\)Al\({}_{12}\), the negative intrinsic inductance is observed to increase monotonically and nonlinearly in absolute magnitude with the excitation ac current density. In the case of YMn\({}_{6}\)Sn\({}_{6}\), by contrast, a sign change of the inductance to a positive value was observed upon changes of temperature, magnetic field, and exciting current density; these observations imply variations of the dominant dynamic helix mode (e.g., the phason versus the tilting mode) and/or of the phason pinning effect. However, control of the sign and magnitude of the emergent inductance, even for a single target material, remains elusive because of such enigmatic temperature- and magnetic-field-dependent behaviors of the EEMI. The purpose of the present study is to obtain key insights for enhancing the EEMI, as well as for controlling its sign around room temperature, by examining the interrelation between the various magnetic structures/characteristics and the EEMI. To this end, we systematically control the magnetism through chemical modification of the archetypal high-temperature helimagnet YMn\({}_{6}\)Sn\({}_{6}\). The helimagnetic structures of YMn\({}_{6}\)Sn\({}_{6}\) are formed by magnetic frustration among the exchange interactions along the \(c\)-axis [20; 21; 22; 23; 24; 25; 26; 27]. There are two types of nearest-neighbor inter-layer interactions between Mn spins: one is ferromagnetic, \(J_{1}(>0)\), and the other antiferromagnetic, \(J_{2}(<0)\) [Fig. 1(c)]. With \(J_{1}\) and \(J_{2}\) alone, the spin texture would be an up-up-down-down-like double-antiferromagnetic structure. Due to an additional ferromagnetic second-nearest-neighbor coupling \(J_{3}(>0)\), however, a spiral spin texture becomes stable [Fig. 1(d)], whereas the antiferromagnetic-type order is dominant just below the transition temperature [23; 24; 25; 26]. In this study, a moderate chemical modification of the magnetism was achieved by partially substituting Y with Tb in YMn\({}_{6}\)Sn\({}_{6}\). It is known that the other end compound, TbMn\({}_{6}\)Sn\({}_{6}\), hosts a collinear magnetic structure over the whole temperature range below its magnetic transition temperature, and that the magnetic phases change significantly in the solid solution with YMn\({}_{6}\)Sn\({}_{6}\) [28; 29; 30; 31; 32].
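The competition among \(J_{1}\), \(J_{2}\), and \(J_{3}\) described above can be illustrated with a minimal classical calculation: minimizing the energy per structural period along \(c\), \(E(\theta_{1},\theta_{2})=-J_{1}\cos\theta_{1}-J_{2}\cos\theta_{2}-2J_{3}\cos(\theta_{1}+\theta_{2})\), over the two interlayer rotation angles. The coupling values below are illustrative choices, not fitted to YMn\({}_{6}\)Sn\({}_{6}\).

```python
import numpy as np

# Classical energy per structural period for the alternating-bond chain:
# E(theta1, theta2) = -J1 cos(theta1) - J2 cos(theta2) - 2 J3 cos(theta1+theta2)
# Illustrative couplings: FM J1, AFM J2, and a weaker FM second-neighbor J3.
J1, J2, J3 = 1.0, -0.7, 0.25

th = np.linspace(0, 2 * np.pi, 1201)
th1, th2 = np.meshgrid(th, th, indexing="ij")
E = -J1 * np.cos(th1) - J2 * np.cos(th2) - 2 * J3 * np.cos(th1 + th2)

i, j = np.unravel_index(np.argmin(E), E.shape)
turn = (th1[i, j] + th2[i, j]) % (2 * np.pi)
turn = min(turn, 2 * np.pi - turn)        # fold to the principal rotation
print(f"theta1 = {np.degrees(th1[i, j]):.1f} deg, "
      f"theta2 = {np.degrees(th2[i, j]):.1f} deg")
print(f"net turn per period = {np.degrees(turn):.1f} deg "
      f"-> helix pitch ~ {360.0 / np.degrees(turn):.2f} c")
# With J3 = 0 the minimum is (0, 180) deg, the up-up-down-down state;
# the finite FM J3 frustrates it and stabilizes an incommensurate spiral.
```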
We investigate the change in magnetic structure with light Tb substitution (up to 10 %) by means of small-angle neutron scattering (SANS) and compare it with the variation of the EEMI. Here, the local effect of Tb substitution on the magnetic interactions should be noted. The original helical magnetism of YMn\({}_{6}\)Sn\({}_{6}\) (\(x=0\)) transforms into a collinear ferromagnetic state when the Tb content \(x\) exceeds 0.4 in (Y\({}_{1-x}\)Tb\({}_{x}\))Mn\({}_{6}\)Sn\({}_{6}\), owing to the strong antiferromagnetic coupling between Mn and Tb [29; 31]. Therein, upon lowering temperature, the Mn-plane ferromagnetic state with easy-plane anisotropy changes to one with easy-axis (\(\parallel c\)) anisotropy. Among the inter-plane (\(\parallel c\)) exchange interactions \(J_{1}\), \(J_{2}\), and \(J_{3}\), the coupling \(J_{2}\) is locally transformed to positive (ferromagnetic) around a doped Tb site, as mediated by the strong antiferromagnetic Mn(lower)-Tb and Tb-Mn(upper) interactions; this is contrary to the case of \(J_{2}<0\) with the nonmagnetic Y ion. Furthermore, as described above, the Tb moment shows easy-axis (\(\parallel c\)) magnetic anisotropy. Thus, while the global change of the spin-helix state remains modest in the lightly doped case, e.g., 7 %, the local effect around a doped Tb site arises through (a) the change of the local \(J_{2}\) exchange interaction from negative (antiferromagnetic) to positive (ferromagnetic), as well as (b) the local change of the magnetic anisotropy from the easy-plane to the easy-axis type. These features are important when considering the pinning characteristics of the current-driven phason mode relevant to the EEMI [16].

## II. Experiment

Single crystals of (Y,Tb)Mn\({}_{6}\)Sn\({}_{6}\) were synthesized by the Sn-flux method [27]. A mixture of the constituent elements with an atomic ratio of (Y,Tb):Mn:Sn = 1:6:30 was sealed in an evacuated quartz tube, heated to 1050 \({}^{\circ}\)C, subsequently cooled slowly to 600 \({}^{\circ}\)C, and then quenched to room temperature. The remaining flux was removed by centrifugation. Single crystallinity was indicated by well-developed facets and was also confirmed by Laue X-ray diffraction. The Tb concentration was determined by energy-dispersive X-ray spectroscopy (EDX). For electric transport measurements, we cut thin plates out of the single crystals using the focused-ion-beam (FIB) technique (NB-5000, Hitachi). The thin plates were mounted on silicon substrates with patterned electrodes. We fixed the thin plates to the substrates and electrically connected them to the electrodes using FIB-assisted tungsten deposition. The Au/Ti-bilayer electrode patterns were made by electron-beam deposition. All small-angle neutron scattering (SANS) experiments were carried out using the SANS-I instrument at the Swiss Spallation Neutron Source (SINQ), Paul Scherrer Institut, Switzerland, using neutrons with a wavelength of either 5 or 6 Å.

Figure 1: Magnetic phases and phase diagrams in the plane of magnetic field (\(H\)) and temperature (\(T\)) of (Y,Tb)Mn\({}_{6}\)Sn\({}_{6}\) in the field-decreasing process. (a)-(b) Schematic illustrations of (a) the spin-tilting mode and (b) the phason mode of the spin-helix state responsible for emergent electromagnetic induction during a half cycle of the ac current excitation (\(j=j_{0}\sin(2\pi ft)\)). (c)-(d) Interlayer exchange interactions with (c) the double antiferromagnetic structure and (d) the helical structure on the crystal lattice of YMn\({}_{6}\)Sn\({}_{6}\).
The illustration was drawn using VESTA [34]. (e)-(g) Schematic illustrations of the (e) proper-screw helical (H), (f) transverse conical (TC), and (g) fan (F) structures. (h)-(m) Overall magnetic phase diagrams of (h), (i) YMn\({}_{6}\)Sn\({}_{6}\), (j), (k) Y\({}_{0.93}\)Tb\({}_{0.07}\)Mn\({}_{6}\)Sn\({}_{6}\), and (l), (m) Y\({}_{0.90}\)Tb\({}_{0.10}\)Mn\({}_{6}\)Sn\({}_{6}\). The blue, yellow, green, and red regions represent the proper-screw helical (H), transverse conical (TC), fan (F), and antiferromagnetic (AF) phases, respectively. The phases below 100 K in Y\({}_{0.90}\)Tb\({}_{0.10}\)Mn\({}_{6}\)Sn\({}_{6}\) under \(H\parallel c\) are not identified in the present study.

The single-crystalline samples were attached onto an Al plate holder and loaded into a cryomagnet installed at the sample position of the beamline. For measurements with \(H\parallel a\) (\(H\parallel c\)), the magnetic field was applied nearly parallel (perpendicular) to the incident collimated neutron beam. The diffracted neutrons were collected by a two-dimensional multidetector placed 1.85 m behind the sample and translated 0.46 m horizontally in the plane perpendicular to the incoming beam direction in order to access an extended \(q\)-range along \(c^{*}\). The diffraction measurements were performed by recording SANS patterns over a range of sample-cryomagnet rotation angles sufficient to move the magnetic peaks through the Bragg condition at the detector. The magnetic-field dependence of the complex resistivity was measured with lock-in amplifiers (SR-830, Stanford Research Systems). We input a sine-wave current and recorded both the in-phase (Re \(V\)) and out-of-phase (Im \(V\)) voltages in a standard four-terminal configuration. Background signals were estimated by measuring a short circuit in which the terminal pads were connected by the Au/Ti-bilayer electrode patterns, and these background signals were subtracted from the measured data. Magnetization was measured using a Quantum Design PPMS-14 T with the ACMS option.
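A minimal sketch of the post-processing implied above is given below: converting background-subtracted lock-in voltages into a complex resistivity, and then into an inductance via the device formula \(L=\mathrm{Im}\,\rho\cdot d/(2\pi fS)\) used later in Sec. V. The device geometry and voltage readings are illustrative values within the ranges quoted in the text, not measured data.

```python
import numpy as np

# Convert lock-in readings to complex resistivity and emergent inductance.
# Geometry: illustrative values within the ranges quoted in the text.
d = 30e-6                      # electrode spacing (m), within 25-35 um
S = 2.5e-6 * 9e-6              # cross-section (m^2), (2-3) um x (8-10) um
f = 500.0                      # ac frequency (Hz)
I0 = 2.5e8 * S                 # current (A) for j0 = 2.5e4 A/cm^2 = 2.5e8 A/m^2

def complex_resistivity(re_V, im_V, re_V_bg, im_V_bg):
    """Background-subtracted complex resistivity from in-/out-of-phase voltages."""
    V = (re_V - re_V_bg) + 1j * (im_V - im_V_bg)
    return (V / I0) * (S / d)          # rho = Z * S / d

def emergent_inductance(im_rho):
    """L = Im(rho) * d / (2 pi f S), the device formula of Sec. V."""
    return im_rho * d / (2 * np.pi * f * S)

rho = complex_resistivity(re_V=1.0e-3, im_V=2.0e-6,
                          re_V_bg=0.0, im_V_bg=0.5e-6)   # made-up readings
print(f"Im rho = {rho.imag:.3e} Ohm m -> L = {emergent_inductance(rho.imag):.3e} H")
```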
## III. Overall magnetic phase diagrams of \(\mathbf{(Y,Tb)Mn_{6}Sn_{6}}\)

The magnetic phase diagrams of (Y,Tb)Mn\({}_{6}\)Sn\({}_{6}\) were determined by magnetization and small-angle neutron scattering (SANS) measurements. In the known case of YMn\({}_{6}\)Sn\({}_{6}\) [22; 23; 24; 25], an incommensurate helical state (H [Fig. 1(e)]) with the wavevector \(\mathbf{q}\) parallel to the \(c\)-axis is stable at zero magnetic field below 330 K. Upon increasing the magnetic field applied parallel to the \(a\)-axis (\(H\parallel a\)), the spin texture changes to a transverse conical state (TC [Fig. 1(f)]) and then, above 7 T and below 150 K, to a fan state (F [Fig. 1(g)]). With further increasing \(H\parallel a\), the system reaches a forced ferromagnetic state (FF). With the magnetic field parallel to the \(c\)-axis (\(H\parallel c\)), on the other hand, the helical state is continuously deformed into a longitudinal conical state and then into a forced ferromagnetic state at higher fields. The spiral states always propagate along the \(c\)-axis. Above 250 K, the magnetic periodicity becomes commensurate, with \(\mathbf{q}_{\mathrm{C}}=(0~{}0~{}0.5)\); namely, the up-up-down-down-like double antiferromagnetic state (AF) is stabilized near the magnetic phase boundary.

From previous studies on polycrystalline samples [28; 29; 30; 31; 32], no drastic change of the zero-field magnetic phase is expected when the Tb concentration is below 10 %. Here, we target compounds with 0 %, 7 %, and 10 % concentrations of the Tb dopant on the Y site in (Y,Tb)Mn\({}_{6}\)Sn\({}_{6}\). The samples are single crystals with fairly homogeneous (Y,Tb) compositions grown by the Sn-flux method. A Tb concentration of 7 % is the lower limit for high-quality samples that could be synthesized by the flux method; for smaller Tb concentrations, phase separation was observed to occur. Figures 1(h)-(m) show the magnetic phase diagrams of the respective compositions as deduced in the magnetic-field-descending process. The phase boundaries were determined on the basis of the magnetization curves (see also Figs. S1-S3 of the Supplemental Material [33]). As for the AF phase, the phase boundary cannot be clearly defined from the \(M\)-\(H\) curves and is instead determined by the SANS measurements described later. Previous neutron diffraction studies have revealed that the magnetic moment of Tb is antiferromagnetically coupled to that of Mn [31]. This assignment is consistent with the present magnetization measurements; the saturation magnetization at low temperature (10 K) accords with the value expected under the assumption that the Mn spins (2.1 \(\mu_{\mathrm{B}}\)/atom) and Tb moments (9.0 \(\mu_{\mathrm{B}}\)/atom) are antiferromagnetically coupled. This antiferromagnetic coupling is robustly sustained up to at least 14 T. When the Tb concentration is at or below 10 %, the magnetic ordering temperature does not change significantly, remaining in the range 330-335 K. For 10 % Tb doping, an additional faint peak in the \(M\)-\(T\) curves was observed around 100 K (see also Fig. S3 of the Supplemental Material [33]). The transition magnetic field decreases as the Tb content increases, regardless of the direction of the magnetic field. Since the behavior of the transitions under magnetic field is similar at each concentration, it can reasonably be inferred that the magnetic structure of the Mn spins shows a similar temperature-magnetic field dependence for the undoped and Tb-doped crystals, as shown in Figs. 1(h)-(m).

## IV. Magnetism of \(\mathbf{(Y,Tb)Mn_{6}Sn_{6}}\) revealed by small-angle neutron scattering

### A. \(\mathbf{YMn_{6}Sn_{6}}\)

Figure 2(a) shows the setup of the SANS measurement. The neutron beam propagates nearly parallel to the \(a\)-axis of the single-crystal sample and hence nearly perpendicular to the reciprocal \(a^{*}\)-\(c^{*}\) plane. The horizontal (vertical) direction of the two-dimensional detector corresponds to the \(c^{*}\) (\(a^{*}\)) direction. No diffraction spots were observed along the \(a^{*}\) direction under any temperature and magnetic field conditions. Figure 2(b) shows SANS patterns obtained at zero magnetic field and various temperatures. Diffraction peaks are observed at three different wavevectors: (i) the incommensurate wavevector \(\mathbf{q}_{\mathrm{IC}}\) corresponding to the helical spiral states, (ii) the commensurate \(\mathbf{q}_{\mathrm{C}}=(0~{}0~{}0.5)\) corresponding to the AF structure, and (iii) \((0~{}0~{}1)-\mathbf{q}_{\mathrm{IC}}\).
While the two kinds of inter-layer distances between neighboring Mn planes of YMn\({}_{6}\)Sn\({}_{6}\) are similar to each other, the interplane rotation angle of the ferromagnetically aligned in-plane moments must differ between the pair coupled via the ferromagnetic \(J_{1}\) and the pair coupled via the antiferromagnetic \(J_{2}\). The spirals of the second-neighbor Mn spins along the \(c\)-axis have a nearly constant rotation angle. This double-spiral state produces the \((0~{}0~{}1)-\mathbf{q}_{\rm IC}\) spots, and the nonmonotonicity of the rotation angle is reflected in the intensity ratio of \((0~{}0~{}1)-\mathbf{q}_{\rm IC}\) and \(\mathbf{q}_{\rm IC}\) (see also Fig. S4 of the Supplemental Material [33]). At 10 K the helical period is about 3.3 nm. As the temperature rises, it shortens, reaching 2.2 nm at 320 K, a temperature at which the helical and AF states are found to coexist. Figure 2(c) shows the diffraction profile at each temperature. The H state is dominant at low temperatures, and the intensities of the peaks at \(\mathbf{q}_{\rm IC}\) and \(\mathbf{q}_{\rm C}\) become comparable near the transition temperature (320 K). In detail, the peak at \(\mathbf{q}_{\rm IC}\) is known to be composed of multiple spirals with close periods [26; 24]. Multiple peaks are observed in the present SANS profiles as well; here, the position and intensity are fitted with a single Gaussian peak [Fig. 2(d)] and discussed as a whole hereafter. We note that such a discommensuration-like feature may contribute to the dynamics of the spin texture and enhance the emergent inductance discussed later. By tracking the temperature and magnetic-field dependence of \(\mathbf{q}_{\rm IC}\) and \(\mathbf{q}_{\rm C}\), the spiral and AF phases are delineated in the temperature-magnetic field plane. The development of the SANS intensities for \(H\parallel a\) is summarized as an intensity contour map in Fig. 3(a) for the helical spiral order described by \(\mathbf{q}_{\rm IC}\) and in Fig. 3(b) for the AF structure described by \(\mathbf{q}_{\rm C}\). It is known that a magnetic field along the \(a\)-axis stabilizes the AF state near the transition temperature (330-335 K) [26; 24]. The present study confirms that the AF state also persists in a magnetic field along the \(c\)-axis. Figures 3(c) and 3(d) show the distribution of each SANS spot intensity for \(H\parallel c\); the clear coexistence region of H+AF is rather narrow, and the dominant magnetic phase changes from spiral to AF around the high-temperature phase boundary. All magnetic-field-dependent data presented here were obtained in a field-descending process at fixed temperatures. Figures 3(e) and 3(f) show SANS profiles at 300 K for several magnetic fields. In general, the signal strength of the spin modulation decreases as the magnetic field increases because of the tilting of the spins toward the external field. From the above results, we confirm that the magnetic phase of YMn\({}_{6}\)Sn\({}_{6}\) falls into the valley of the AF phase when the field is ramped down from the FF phase at high temperature, independent of the magnetic field direction (\(H\parallel c\) or \(H\parallel a\)).
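As a consistency check of the periods quoted above, the pitch follows from the \((0~0~L)\) satellite position as \(\lambda=c/L\) (equivalently \(\lambda=2\pi/q\) with \(q=2\pi L/c\)). In the sketch below, the lattice constant \(c\approx 9.0\) Å is an approximate literature value, and the \(L\) values are chosen to reproduce the quoted pitches under that assumption.

```python
import math

# Helical pitch from the (0 0 L) satellite position: lambda = c / L.
# c ~ 9.0 Angstrom is an approximate literature value for YMn6Sn6, and the
# L values below are chosen to reproduce the pitches quoted in the text.
c = 9.0  # Angstrom

def pitch_nm(L_rlu):
    q = 2 * math.pi * L_rlu / c          # scattering vector (1/Angstrom)
    return (2 * math.pi / q) / 10.0      # real-space pitch in nm

for L_rlu in (0.27, 0.41):
    print(f"L = {L_rlu:.2f} r.l.u. -> pitch ~ {pitch_nm(L_rlu):.1f} nm")
# -> ~3.3 nm (10 K) and ~2.2 nm (320 K), consistent with the text
```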
### B. Y\({}_{\bf 0.93}\)Tb\({}_{\bf 0.07}\)Mn\({}_{\bf 6}\)Sn\({}_{\bf 6}\) (Tb 7 %) and Y\({}_{\bf 0.90}\)Tb\({}_{\bf 0.10}\)Mn\({}_{\bf 6}\)Sn\({}_{\bf 6}\) (Tb 10 %)

Using the same experimental setup as for YMn\({}_{6}\)Sn\({}_{6}\), the magnetic structures of the Tb-doped compounds, Y\({}_{0.93}\)Tb\({}_{0.07}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 7 %) and Y\({}_{0.90}\)Tb\({}_{0.10}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 10 %), were also examined by SANS measurements. Figures 4(a) and 4(b) show SANS patterns at zero magnetic field and various temperatures for the Tb 7 % and Tb 10 % compounds, respectively. It is clear that the AF peak located at \(\mathbf{q}=(0~{}0~{}0.5)\) is totally extinguished by the Tb doping. The temperature dependence of the observed wavevectors of the diffraction spots is summarized in Fig. 4(c), together with the corresponding result for the undoped (Tb 0 %) crystal. The magnetic period \(\lambda\) (\(=2\pi/q\)) in the H phase of the Tb 7 % crystal is comparable to, or slightly longer than, that in the H phase (2.2 nm \(\leq\lambda\leq\) 3.3 nm) of the undoped (Tb 0 %) crystal. For the Tb 10 % sample the spiral period is clearly longer than in the undoped compound, varying within 3.7 nm \(\leq\lambda\leq\) 4.1 nm and with a weaker temperature dependence. The change of \(\lambda\) suggests that the contribution from the ferromagnetic interaction is strengthened via the modification of \(J_{2}\) by Tb substitution. Even near the transition temperature, the position of \(\mathbf{q}_{\rm IC}\) remains far from the commensurate (0 0 0.5), and there is no diffraction spot corresponding to the AF (\(\mathbf{q}_{\rm C}\)) state in any temperature-field region. Incidentally, the peak at \((0~{}0~{}1)-\mathbf{q}_{\rm IC}\) is barely observed for Tb 7 % around (0 0 0.8), which corresponds to the upper limit of the detectable \(q\) range in the present SANS setup, and lies beyond the detectable \(q\) range for Tb 10 %. Another feature of the SANS data from the Tb 10 % sample that differs from YMn\({}_{6}\)Sn\({}_{6}\) is the presence of weaker broad peaks on the lower-\(q\) side of the main incommensurate peak, as seen in Fig. 4(b). This broad peak shifts to still lower \(q\) as the temperature increases. Such additional H states, composed of multiple peaks with close \(q\) values, may be induced by the additional magnetic frustration introduced by the Tb substitution. Using SANS in various magnetic fields, we have mapped the spiral magnetic phases of the lightly Tb-doped crystals. Figures 5(a) and 5(b) exemplify the SANS profiles of the \(\mathbf{q}_{\rm IC}\) diffraction for the Tb 7 % and Tb 10 % crystals, respectively, at 300 K under various magnetic fields applied along the \(a\)-axis. There is only a slight change in the magnetic period (\(q\) value) with applied magnetic field; see also Fig. 4(c). In Figs. 5(c) and 5(d), the \(\mathbf{q}_{\rm IC}\) diffraction intensity is plotted on the magnetic phase diagrams based on the magnetization measurements [Figs. 1(h)-(m)]. Incommensurate magnetic modulations are observed in the whole region below the ferromagnetic transition field. From these results, we conclude that low-concentration Tb substitution tends to slightly expand the magnetic period of the spiral structures while totally eliminating the AF phase around the phase boundary.
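The single-Gaussian coarse-grain fit of the \(\mathbf{q}_{\rm IC}\) peak mentioned in Sec. IV-A can be sketched as follows; synthetic data stand in for the measured SANS profile, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, q0, sigma, bg):
    return amp * np.exp(-(q - q0)**2 / (2 * sigma**2)) + bg

# synthetic stand-in for a measured (0 0 L) SANS profile around q_IC
rng = np.random.default_rng(0)
q = np.linspace(0.15, 0.45, 120)                       # r.l.u.
I = gaussian(q, amp=100.0, q0=0.27, sigma=0.012, bg=5.0) + rng.normal(0, 2, q.size)

p0 = (I.max(), q[np.argmax(I)], 0.02, I.min())         # crude initial guess
(amp, q0, sigma, bg), _ = curve_fit(gaussian, q, I, p0=p0)
print(f"peak position q0 = {q0:.3f} r.l.u., integrated intensity ~ "
      f"{amp * sigma * np.sqrt(2 * np.pi):.1f}")
```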
Thermal fluctuations, enhanced at elevated temperatures, are known to cause an incommensurate-to-commensurate (e.g., AF) crossover of the magnetic structure, due perhaps to the spin-lattice interaction or a magnetostriction effect, as reported for other magnetically frustrated systems [35]. In that case, the introduction of pinning centers, as by the present Tb doping, appears to lead to a proliferation of discommensurations and hence to destroy the commensurate order while nonetheless sustaining the originally stable incommensurate state.

## V. Emergent electromagnetic inductance (EEMI) of \(\mathbf{(Y,Tb)Mn_{6}Sn_{6}}\)

Keeping in mind the Tb-substitution-induced modification of the magnetic structure described above, we turn to the effect of Tb substitution on the emergent electromagnetic inductance (EEMI) of YMn\({}_{6}\)Sn\({}_{6}\). The emergent inductance \(L\) is directly related to the imaginary part of the ac electric resistivity, Im \(\rho\), via the relation

\[L=\frac{\mathrm{Im}\,\rho\;d}{2\pi fS}.\]

Here, \(d\) is the distance between the electrodes on the sample, \(S\) is the cross-sectional area of the sample, and \(f\) is the ac current frequency. We fabricated micro-scale devices with dimensions of \(d\) = 25-35 \(\mu\)m and \(S\) = (2-3) \(\mu\)m \(\times\) (8-10) \(\mu\)m by the focused-ion-beam (FIB) method. Figures 6(a)-(l) show the magnetic-field (\(H\parallel a\) and \(H\parallel c\)) dependence of Im \(\rho\) at various temperatures (200 K and 300 K) and Tb concentrations (0 %, 7 %, and 10 %), measured using an ac input current density \(j=j_{0}\sin(2\pi ft)\) (\(j_{0}\) = 2.5\(\times\)10\({}^{4}\) A/cm\({}^{2}\), \(f\) = 500 Hz, \(j\parallel c\)). All devices used in the experiment showed signals larger than the spurious background signals, which arise from parasitic impedance and signal delay between the current source and the detector.

Figure 2: SANS profiles of YMn\({}_{6}\)Sn\({}_{6}\) at zero magnetic field. (a) Setup and configuration of the small-angle neutron scattering (SANS). (b) SANS patterns and positions of the diffraction spots along the \(c^{*}\)-axis at 0 T. The upper horizontal scale gives the propagation vector \(|\mathbf{q}|\) in Å\({}^{-1}\) and the lower scale the reciprocal lattice units. The pink dashed line corresponds to the commensurate \(L\) = 0.5 position. Vertical dotted lines with asterisks indicate blind-spot positions of the present SANS detector; when a diffraction spot coincides with a blind spot, the diffraction intensity is damped as an artifact. (c) SANS profiles along \(c^{*}\). The inset shows a magnified view of the higher angles. (d) The SANS profile of the incommensurate \(\mathbf{q}_{\mathrm{IC}}\) at 300 K. The observed profile (orange) is not a sharp single peak but more likely composed of multiple peaks. The purple dashed line indicates the result of fitting to a single Gaussian for a coarse-grain analysis.

Figure 4: SANS profiles along (0 0 \(L\)) of the Tb-doped YMn\({}_{6}\)Sn\({}_{6}\) crystals at zero magnetic field, (a) for Y\({}_{0.93}\)Tb\({}_{0.07}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 7 %) and (b) for Y\({}_{0.90}\)Tb\({}_{0.10}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 10 %). The pink dashed lines correspond to the commensurate (0 0 0.5) position, at which no peak is discerned. Vertical dotted lines with asterisks indicate the blind-spot positions of the present SANS detector; when a diffraction spot coincides or overlaps with a blind spot, the diffraction intensity is damped as an artifact.
(c) Temperature dependence of the spiral wavevector \(q\) for the Tb-undoped, 7 %, and 10 % doped crystals at zero field and at a magnetic field of 3 T applied along the \(a\)-axis. The Tb-undoped crystal shows the transition from the incommensurate (\(\mathbf{q}_{\rm IC}\)) helical (H, at 0 T) or transverse-conical (TC, at 3 T) state to the commensurate (\(\mathbf{q}_{\rm C}\)) antiferromagnetic (AF) state, as shown by the vertical dashed (0 T) and solid (3 T) lines.

Figure 3: Magnetic-field dependence of the SANS magnetic satellites in YMn\({}_{6}\)Sn\({}_{6}\). (a)-(d) Color maps summarizing the intensity profiles of the (a), (c) incommensurate (\(\mathbf{q}_{\rm IC}\)) spiral and (b), (d) antiferromagnetic (\(\mathbf{q}_{\rm C}\)) states in magnetic fields along (a), (b) the \(a\)-axis and (c), (d) the \(c\)-axis. (e)-(f) SANS profiles along the \(c^{*}\)-axis at 300 K at various magnetic fields along (e) the \(a\)-axis and (f) the \(c\)-axis. The inset of (e) shows a magnified view of the profiles at magnetic fields of 3 T and 4 T.

Figure 5: SANS results for Y\({}_{0.93}\)Tb\({}_{0.07}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 7 %) and Y\({}_{0.90}\)Tb\({}_{0.10}\)Mn\({}_{6}\)Sn\({}_{6}\) (Tb 10 %) under various magnetic fields applied along the \(a\)-axis. (a)-(b) SANS profiles of the incommensurate spiral state (\(\mathbf{q}_{\rm IC}\)) under magnetic fields at 300 K, for the (a) Tb 7 % and (b) Tb 10 % doped crystals. Vertical dotted lines with asterisks indicate the blind-spot positions of the present SANS detector; when a diffraction spot coincides with a blind spot, the diffraction intensity is damped as an artifact. (c)-(d) The SANS intensity contour maps on the magnetic phase diagrams for the (c) Tb 7 % and (d) Tb 10 % doped crystals.

In the case of YMn\({}_{6}\)Sn\({}_{6}\) [Tb 0 %, Figs. 6(a)-(d)], a negative inductance appears in the H phase at zero magnetic field, in accord with the previous result [19]. At this current density, the origin of the negative inductance has been assigned mainly to the current-induced phason motion [Fig. 1(b)], with the extrinsic pinning frequency (a few kHz) above the observation frequency [19]. The large enhancement of the negative inductance around the H-TC coexistence region for \(H\parallel a\) [Fig. 6(a)] is due perhaps to H-TC domain-wall (DW) motion driven by the ac current, as a sort of phason-like motion, with the extrinsic pinning frequency still above the observation frequency for this DW state. At a higher temperature, e.g., 300 K, the inductance turns positive with increasing magnetic field. In a magnetic field along the \(a\)-axis, a sign change occurs upon entering the TC state from the H state, while in a magnetic field along the \(c\)-axis, a positive peak structure appears near the ferromagnetic transition, where the AF state coexists with the H (to be precise, longitudinal conical) state.

### A. Doping-induced pinning effect on negative EEMI

With 7 % Tb doping, the negative inductance is significantly suppressed in comparison with the pristine (Tb 0 %) case, while the positive inductance is quantitatively preserved [Figs. 6(e)-(h)]. Such a large impact of the Tb doping confirms that the positive and negative inductance components have different microscopic origins, respectively assigned to the tilting [Fig. 1(a)] and phason [Fig. 1(b)] motions of the spin spiral [16]. Namely, the extrinsic pinning of the phason motion, strengthened by Tb doping, likely leads to the critical suppression of the negative EEMI component that is dominant in the H phase of the pristine (Tb 0 %) compound.
By contrast, the tilting motion is less affected by the pinning, and hence the positive component of the emergent inductance is largely preserved. To gain insight into the pinning mechanism of the phason mode, an important point to take into account is the difference between the spin-spiral state (the present case) and the conventional collinear spin density wave (SDW). In the latter case, the phason is readily pinned by impurities, and its spectrum shows a finite extrinsic pinning frequency. In sharp contrast, the phason of an incommensurate spin spiral with constant spin-moment amplitude remains gapless even under disorder, as long as the spin-rotational symmetry of the Hamiltonian is kept intact. To gap the phason mode, a coupling is needed between the impurity (Tb) perturbation and a more or less elliptic, not perfectly circular, helical modulation of the spin-moment amplitude, i.e., an admixture of a sinusoidal (SDW-like) moment modulation into the helical spin moment. In contrast to the case of rare-earth \(4f\) moments, some ellipticity of the helix, or the resultant local charge-density modulation, is quite plausible for the \(3d\) Mn moments, which also contribute to the conduction-band formation as in conventional SDW metals (e.g., chromium [36]). On the one hand, even in the presence of local modulations of the exchange interaction (a) and the magnetic anisotropy (b), as long as the U(1) symmetry in spin space remains, as in the present Tb-doped case, the U(1) rotation of all the spins is still gapless. This corresponds to the usual magnon, together with the tilting mode (\(m_{z}\): the uniform magnetization perpendicular to the spin-rotation plane) as the canonically conjugate generator. On the other hand, in the presence of an elliptic helical modulation, the phason, which is then separated from the spin-rotation mode, is pinned by impurities. These characteristic features of the phason mode in the helix may explain why the EEMI measurements on nominally pure crystals of YMn\({}_{6}\)Sn\({}_{6}\) indicate an ultra-low extrinsic pinning frequency, \(f\leq\) 10 kHz [17; 19]. In contrast, the intentional Tb doping likely raises the extrinsic pinning frequency far above the observation frequency (\(f\leq\) 1 kHz) and suppresses the negative EEMI signal. In a future study, it is desirable to experimentally verify the correlation between this ellipticity of the spin helix and the frequency characteristics of the EEMI. The rather moderate effect of Tb doping on the positive EEMI at relatively high temperatures can also be explained by the fact that the tilting mode \(m_{z}\), which is responsible for the positive EEMI, is already subject to the easy-plane anisotropy energy \(Km_{z}^{2}\) and is hence rather insensitive to the presence of the magnetic (Tb) impurities. When the Tb concentration is further increased to 10 %, the negative inductance is almost completely eliminated, while the reduction of the positive inductance is less significant, at the level of about 20 % [Figs. 6(i)-(l)]. A possible mechanism for the reduced positive emergent inductance (tilting mode) is the static local canting of the Mn moments off the spiral plane induced by Tb doping, which, as magnetic disorder, may suppress the current-induced tilting-mode dynamics.
Another mechanism that could reduce the EEMI with Tb doping is the elongation of the magnetic period \(\lambda\) seen upon 10 % Tb doping; e.g., from 2.4 nm (IC) or 1.8 nm (AF) for Tb 0 % to 3.7 nm (IC) for Tb 10 % at 300 K; see Fig. 3(c). However, the reduction of the magnitude of the negative emergent inductance is by far larger than expected from the change in \(\lambda\) (\(=2\pi/q\)), since Im \(\rho\propto q\). Thus, the other important mechanism, namely the pinning-induced suppression of the phason mode, plays the dominant role in the drastic reduction of the negative EEMI. By contrast, the positive inductance component is relatively well preserved, and its qualitative magnetic-field dependence is common to all Tb concentrations (\(\leq\) 10 %). It should be noted here that there is no AF phase even at 300 K for Tb 7 % and 10 %, as opposed to the case of YMn\({}_{6}\)Sn\({}_{6}\) (see Fig. 5). Therefore, the origin of the positive inductance common to the pristine and Tb-doped compounds is not an exclusive characteristic of the AF phase itself. In other words, the current-induced tilting motion of the AF state at relatively high temperatures (e.g., 300 K) may be responsible for the positive emergent inductance, as in the other incommensurate TC state near the phase boundary [see Fig. 7(b)].

### B. Domain-wall EEMI

Common to the Tb 7 % and 10 % crystals, sharp positive peak anomalies of the inductance are observed at the phase boundary between the H and TC states for \(H\parallel a\), or between the H and FF states for \(H\parallel c\), as typically seen in Figs. 6(e), (g), (h), (i), (k), and (l). These may indicate current-driven free, or depinned, motion of DWs, in analogy to the gapless or depinned phason mode, which can give a positive EEMI [16; 19]. On the contrary, as seen in Fig. 6(a), in the field region between the H and AF phases for Tb 0 %, a rather broad negative EEMI peak is observed. In this case, the microscopic coexistence of TC and AF domains, viewed as accumulated DWs, is likely the origin of the enhanced negative EEMI response. The negative sign of the EEMI observed there indicates bound DW motion due to pinning at a relatively low ac current density, \(j_{0}\) = 2.5\(\times\)10\({}^{4}\) A/cm\({}^{2}\). Later (in Sec. V-D), we note the current-induced depinning transition of such a DW phason-like mode, accompanied by a change of the EEMI toward positive values. Incidentally, a discrepancy between the inductance-peak field and the phase-boundary field is sometimes discerned, for example in Fig. 6(g). This is due perhaps to a slight (within \(\pm\)0.7 %) deviation of the Tb stoichiometry of the micro-device samples from that of the corresponding bulk crystals, on which the magnetization measurements were performed to deduce the phase diagrams [Figs. 1(j), 1(k), 1(l), and 1(m)]. As seen in Figs. 1(k) and (m), for example, the H-FF phase-transition field decreases steeply with Tb doping, roughly at a rate of 0.4 T per 1 % Tb at 300 K. The discrepancy between the field values of the sharp EEMI signal and the phase boundary is of this order, and we speculate that the field position of the DW EEMI signal sensitively reflects the variation of the Tb content within the nominally Tb 7 % (averaged value) crystal.

### C. Doping effect on positive EEMI

Figures 7(a)-(f) show color contour maps of Im \(\rho\) overlaid on the magnetic phase diagrams.
It is clear that, with increasing Tb doping, the negative component (blue shading in the figures) in the H and TC states is rapidly and conspicuously suppressed. By contrast, the positive inductance (red shading) appears around the high-temperature phase boundaries and persists robustly against Tb doping, irrespective of the dominant magnetic modulation, i.e., the commensurate (AF) or incommensurate (H) state. This observation points to the importance of the local spin dynamics, rather than the static form of the magnetic order, as the origin of the positive inductance. Spin fluctuations are anticipated to be large around the high-temperature phase boundary, and these enhanced magnetic fluctuations, through their response to the ac current, likely play a part in the origin of the positive inductance. When the AF state is viewed in the spin projective space, only the two points at the north and south poles are occupied. Even when driven by a current, it dynamically sweeps only a line with zero solid angle and hence does not contribute to the generation of an emergent electric field. In the presence of thermally enhanced spin fluctuations, however, the AF state can cover a finite area around the poles in the spin projective space, which can result in emergent electric fields when driven by a current. This scenario is in accord with the mechanism of positive emergent inductance based on the tilting motion, while the phason-like motion, the origin of the negative inductance, is suppressed in such a thermally disordered state. In this context, the possible gapping of the phason in the commensurate AF state of the pristine (Tb 0 %) crystal may play some role in suppressing the phason excitations (which give the negative component of the emergent inductance), making the tilt excitations (which give the positive component) more dominant. Nevertheless, thermal fluctuations, irrespective of commensurate or incommensurate spin order, are likely more important, judging from the common behavior of the positive inductance signals at Tb-doping levels of 0 % and 7 %. It is also to be noted that the present commensurate (AF) phase does not show a simple up-down-up-down spin configuration but rather the up-up-down-down type along the \(c\)-axis; intuitively, the latter configuration appears more favorable for hosting a dynamically noncoplanar configuration with the assistance of thermal fluctuations. This scenario of thermal-fluctuation-induced emergent induction needs to be verified in spin-collinear magnets without any adjacent spiral phases and analyzed more quantitatively by elaborate model simulations.

### D. Current-nonlinear EEMI

Lastly, we investigate the nonlinear current-density dependence of the observed emergent inductance. Figures 8(a)-(f) show the magnetic-field dependence of Im \(\rho\) as the current density is varied up to \(j_{0}\) = 5.0\(\times\)10\({}^{4}\) A/cm\({}^{2}\) for each Tb composition with \(H\parallel a\). Figure 8(g) plots Im \(\rho\) as a function of current density at selected fixed magnetic fields.

Figure 6: (a)-(l) Magnetic field (\(H\parallel a\), \(H\parallel c\)) dependence of the emergent electromagnetic induction with variations of Tb concentration (0 %, 7 %, and 10 %) and temperature (200 K and 300 K) for \(\rm(Y,Tb)Mn_{6}Sn_{6}\).
The imaginary part of the complex resistivity, Im \(\rho\), as the materials quantity representing the inductance, was measured with an ac input current density \(j=j_{0}\sin(2\pi ft)\) (\(j_{0}=2.5\times 10^{4}\) A/cm\({}^{2}\), \(f=500\) Hz, \(j\parallel c\)). The blue, yellow, red, green, and gray shadings represent the proper-screw helical (H), transverse conical (TC), antiferromagnetic (AF), fan (F), and forced ferromagnetic (FF) phases, respectively.

Figure 7: (a)-(f) Color maps of Im \(\rho\) at \(j_{0}=2.5\times 10^{4}\) A/cm\({}^{2}\), \(f=500\) Hz in the \(T\)-\(H\) plane for each Tb concentration and magnetic-field orientation. Red and blue areas correspond to positive and negative Im \(\rho\), respectively. The color depth corresponds to the magnitude of the value; see the color scale bar.

The magnitude of Im \(\rho\) generally increases with the current density, showing strongly nonlinear behavior, in particular in the region of negative inductance, e.g., for the Tb 0 % compound at 200 K and 0 T, where the phason-mode contribution is dominant. Under most conditions, a higher current density leads to a monotonic increase of the absolute magnitude of Im \(\rho\). For the Tb 0 % compound at 300 K and 1.5 T, however, Im \(\rho\) shows a nonmonotonic behavior with respect to the current density, first becoming more negative and then changing toward positive values. The EEMI from the phason mode in YMn\({}_{6}\)Sn\({}_{6}\) is anticipated to show a resonance-type current-density dependence: it may stay negative while the pinning frequency (\(\omega_{\rm pin}\)) exceeds the ac current frequency (\(\omega_{\rm obs}\)), and change to positive once \(\omega_{\rm pin}\) is reduced below \(\omega_{\rm obs}\) by the increased current density through the phason depinning transition [16; 19]. The behavior of the inductance for the Tb 0 % compound at 300 K and 1.5 T may reflect such a crossover at \(\omega_{\rm pin}\sim\omega_{\rm obs}\) around \(j_{0}\sim\) 4\(\times\)10\({}^{4}\) A/cm\({}^{2}\), while the EEMI signal there may also include a contribution from the current-induced depinning transition of the H-AF domain-wall excitations described above. As opposed to the complex current-density dependence of the phason-mode inductance, a monotonic current-density dependence is always observed for the positive inductance of both the AF phase (Tb 0 %, 300 K, 3.5 T) and the high-temperature magnetic phase-boundary region (Tb 7 %). This accords with the scenario in which the spin-fluctuation-enhanced inductance follows the tilting-mode mechanism, which is insensitive to the depinning current density.

## VI. Conclusion

We have investigated the impact of chemical doping on both the collinear/noncollinear magnetic structures and the emergent electromagnetic induction near room temperature in YMn\({}_{6}\)Sn\({}_{6}\). The SANS experiments on the Tb-doped Y\({}_{1-x}\)Tb\({}_{x}\)Mn\({}_{6}\)Sn\({}_{6}\) crystals have clarified that the incommensurate spin-helix period increases from \(\lambda=2.4\) nm (300 K, 0 T) at \(x=0\) to \(\lambda=3.7\) nm (300 K, 0 T) at \(x=0.1\), and that the collinear double-antiferromagnetic (AF) state existing near the high-temperature magnetic phase boundary for \(x=0\) is eliminated for \(x\geq 0.07\). The latter effect is likely due to the introduction of pinning centers via Tb doping.
Our systematic investigation of the emergent electromagnetic inductance (EEMI) in the (Y,Tb)Mn\({}_{6}\)Sn\({}_{6}\) crystals has clarified the following features: (1) the negative EEMI observed in the low-temperature helix phase is greatly suppressed upon Tb doping, and (2) the positive inductance at relatively high temperatures near the magnetic phase boundary mostly survives Tb doping. As for feature (1), the negative inductance at 200 K, for example, is conspicuously reduced to 20 % and to less than 1 % upon Tb doping of 7 % and 10 %, respectively. This reduction of the EEMI upon Tb doping is too large to be ascribed simply to the change of the magnetic period \(\lambda\) of the incommensurate helix; the EEMI would scale as \(\lambda^{-1}\) if the other characteristics were unchanged. The negative EEMI is necessarily caused by the phason-mode dynamics driven by the ac current; therefore, the drastic change of the negative EEMI can be ascribed to an impurity (Tb)-induced pinning effect on the dynamics of the spin helix. As for feature (2), the positive inductance is anticipated to stem generally from the tilting mode of the helix [16; 19]. We find that, in the present case, it can appear around the high-temperature phase boundary regardless of whether the dominant magnetic order is the commensurate (collinear) AF or the incommensurate (noncollinear) helix. This fact leads us to conclude that this positive inductance also derives from the spin noncollinearity induced by magnetic fluctuations near the phase boundary; the mechanism can be regarded as an extension of the tilting-mode one. The present investigation of the chemically doped helimagnets allows us to propose useful materials-design principles for high-temperature EEMI, i.e., around or beyond room temperature. To promote the negative EEMI, in addition to a short-period (a few nm) helix and high conductivity, crystals of high purity, with low levels of imperfection and low magnetic anisotropy, will be favorable for enhancing the phason-mode dynamics exerted by the ac current. Such a negative EEMI can potentially be turned into a large positive EEMI by increasing the excitation current density through the current-induced phason depinning transition. On the other hand, to target the positive emergent inductance, the introduction of pinning centers, such as those due to impurity doping, and/or enhanced thermal spin fluctuations will suppress the phason mode while keeping the tilting mode relatively intact. This situation thus favors the tilting-mode-induced positive inductance component that is otherwise cancelled or overwhelmed by the competing negative component. The present observation of positive inductance under thermal agitation provides a hint for expanding the range of emergent-inductor candidate materials to collinear-spin magnets.

## VII. Acknowledgements

This work was supported by Core Research for Evolutional Science and Technology (CREST), Japan Science and Technology Agency (JST) (Grants No. JPMJCR1874 and No. JPMJCR16F1), Fusion Oriented Research for Disruptive Science and Technology (FOREST), Japan Science and Technology Agency (JST) (Grant No. JPMJFR2038), the Japan Society for the Promotion of Science (JSPS) KAKENHI (Grants No. JP20H01859, No. JP20H05155, and No. JP21J11830), the Swiss National Science Foundation (SNSF) Sinergia network "NanoSkyrmionics" (Grant No. CRSII5_171003), SNSF Project No. 200021_188707, and an ETH Zurich Research Partnership Grant (RPG_072021_07).
This work is based partly on experiments performed at the Swiss spallation neutron source SINQ, Paul Scherrer Institute, Villigen, Switzerland.
2308.09778
Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models
Large vision-and-language models (VLMs) trained to match images with text on large-scale datasets of image-text pairs have shown impressive generalization ability on several vision and language tasks. Several recent works, however, showed that these models lack fine-grained understanding, such as the ability to count and recognize verbs, attributes, or relationships. The focus of this work is to study the understanding of spatial relations. This has been tackled previously using image-text matching (e.g., Visual Spatial Reasoning benchmark) or visual question answering (e.g., GQA or VQAv2), both showing poor performance and a large gap compared to human performance. In this work, we show qualitatively (using explainability tools) and quantitatively (using object detectors) that the poor object localization "grounding" ability of the models is a contributing factor to the poor image-text matching performance. We propose an alternative fine-grained, compositional approach for recognizing and ranking spatial clauses that combines the evidence from grounding noun phrases corresponding to objects and their locations to compute the final rank of the spatial clause. We demonstrate the approach on representative VLMs (such as LXMERT, GPV, and MDETR) and compare and highlight their abilities to reason about spatial relationships.
Navid Rajabi, Jana Kosecka
2023-08-18T18:58:54Z
http://arxiv.org/abs/2308.09778v3
# Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models

###### Abstract

With the advances in large-scale vision-and-language models (VLMs), it is of interest to assess their performance on various visual reasoning tasks such as counting, referring expressions, and general visual question answering. The focus of this work is to study the ability of these models to understand spatial relations. Previously, this has been tackled using image-text matching [15] or the visual question answering task, both showing poor performance and a large gap compared to human performance. To better understand the gap, we present fine-grained compositional grounding of spatial relationships and propose a bottom-up approach for ranking spatial clauses and evaluating performance on the spatial relationship reasoning task. We propose to combine the evidence from grounding noun phrases corresponding to objects and their locations to compute the final rank of the spatial clause. We demonstrate the approach on representative vision-language models [14, 15, 16] and compare and highlight their abilities to reason about spatial relationships.

## Introduction

Visual reasoning is among the general goals of vision-and-language models (VLMs). The astonishing advances in multi-modal vision-language models were fueled by a variety of self-supervised pretraining objectives on web-scale datasets of image-text pairs. The evaluation methodologies of the final models usually resort to measuring performance on downstream tasks that include visual question answering (VQA), referring expression comprehension/generation, image-to-text/text-to-image retrieval, image-text matching, or generative tasks like image captioning. The majority of these tasks use as input the contextualized holistic representations \(\mathbf{h}_{[IMG]}\) of the image and \(\mathbf{h}_{[CLS]}\) of the text caption computed by the vision-and-language model, and build a classifier on top of these representations. Multiple recent studies have shown that vision-language models lack fine-grained understanding of verbs [1], spatial relationships [15], and word order [13], and lack general visio-linguistic compositionality [20] that is critical for compositional reasoning and generalization. We scrutinize the reasons for the observed poor performance and focus on the presence of grounding of linguistic concepts in images. Using the recently released Visual Spatial Reasoning (VSR) benchmark [15], we use explainability tools [1] to identify problems with the image-text matching methodology and propose an alternative ranking approach using the outputs of the General Purpose Vision (GPV) encoder-decoder transformer model [16]. Our contributions can be summarized as follows: 1. We analyze the top-performing LXMERT model [14] on the VSR benchmark [15] and identify problems with noun grounding and the image-text matching evaluation methodology. 2. We decompose spatial clauses into simpler primitives by grounding and localizing the _subject_ and _object_ using the encoder-decoder GPV model [16] and train a separate model for predicting their _relation_. This yields a more traditional structured approach for combining the evidence about objects and spatial relations and ranking the spatial clauses. 3.
The approach is evaluated on a subset of the VSR benchmark, demonstrating the effectiveness of our strategy for spatial clause understanding and the suitability of the ranking-based approach, which avoids the biases of the VQA/GQA setting and achieves better relative performance compared to random chance.

Figure 1: Although the ground-truth caption for this image is: "The **head** is **next to** the **bicycle**", multiple different **spatial clauses** from the language domain can be inferred from the visual domain (image) that fill out the spatial clause correctly, like _behind_, _left of_, _near_, _close to_, _touching_, etc. This type of intrinsic ambiguity on the language side makes formulating the spatial reasoning task more challenging.

## Related Works

Recent years witnessed a fast-paced emergence of different multi-modal vision-and-language architectures proposed to tackle different visual reasoning tasks. We first review the representative vision-language models and briefly discuss their strengths, weaknesses, and limitations pertaining to the study of the visual spatial reasoning task.

### Vision-Language Models

The model categories below differ in their architectures, the level of supervision used in pretraining, and their ability to provide quantitative evaluation of grounding and fine-grained understanding. Naturally, the models that use stronger supervision are trained on smaller datasets with ground-truth bounding box annotations.

#### Cross-Modal Encoder Transformers

These models follow the architecture of ViLBERT [10], LXMERT [20], and UNITER [21]. The visual input in these models is typically tokenized using object proposals and their associated region-of-interest features (and bounding box coordinates) obtained by state-of-the-art object detectors such as Faster-RCNN. While these models achieve impressive performance using smaller amounts of training data (\(\sim\)9.7 million image-text pairs), they are not end-to-end trainable and their performance on downstream tasks is affected by the quality of the detected regions. UNITER-based models like OSCAR [13] and VinVL [22] demonstrated improvements by incorporating word-level embeddings of detected object labels as an additional input modality or by fine-tuning their own object detector. The region proposal bottleneck continues to exist in models that freeze the visual token extraction stage. The pretraining typically includes Masked Image Modeling (MIM), Masked Language Modeling (MLM), Image-Text Matching (ITM), and Object Label Prediction (OLP). The noise propagation imposed by the off-the-shelf Faster-RCNN [19, 20] object detector, pointed out by [20], is also of concern. The performance of these models is evaluated on downstream tasks such as Visual Question Answering (VQA) [17], Natural Language for Visual Reasoning (NLVR) [23], and image/text retrieval, using separate heads that are added to the model followed by task-specific fine-tuning. While these models showed considerable improvements on downstream tasks over the previous, mostly non-transformer-based approaches, the baselines showed a large gap between human and model performance. These developments were followed by several probing studies using specially curated datasets that demonstrated that these models lack the understanding of attribution, relations, order, verbs, and counting [24, 25, 26]. We show that the image-text matching methodology is another factor contributing to the low robustness and poor performance observed by probing.
Image-text matching requires the creation of negative image-text pairs that are used for balancing the training data for the ITM binary classification head, and it inherits the well-known problems of sampling hard negatives observed previously in contrastive learning approaches. Since LXMERT was the best performing model on the VSR [13] benchmark, we include this model in our experiments as the representative baseline from this class of architectures and demonstrate the above-mentioned challenges quantitatively and qualitatively in the next section.

#### Dual-Encoder Transformers

These architectures, first introduced by CLIP [10] (trained on \(\sim\)400 million image-text pairs) and ALIGN [14] (using \(\sim\)1.8 billion image-text pairs), use contrastive learning on large datasets of image-text pairs to align holistic image and text representations. While CLIP demonstrated high performance on image, scene, and action classification benchmarks, multiple probing studies of fine-grained understanding have shown poor performance on tasks that require more compositional reasoning. For example, [23] showed on the CLEVR dataset that if a spatial clause is added to the caption, the performance of image-text matching is at the level of random chance. Follow-up works tried to combine the holistic representations of Dual-Encoders with the tokenized representations of Cross-Modal Encoders, like ALBEF [13], or added single-modality pretraining to this new architecture, like FLAVA [22]. The lack of compositionality and fine-grained understanding still remains, according to the findings of [26]. Since the pretraining corpus size of [13, 22] is notably smaller than that of [10] and [14], it is difficult to compare the effect of model architecture on the same footing. Another major drawback of this class of architectures is the inability to quantitatively study grounding, since they start from patch features as opposed to object detections.

#### Modulated Transformers

These models are trained end-to-end, can predict bounding box coordinates associated with noun phrases in the text description, and require correspondences between bounding boxes and noun phrases during training. This makes them suitable for the evaluation of phrase grounding, referring expression comprehension, and open-world object detection tasks. MDETR [1] is representative of this category; it is built on top of DETR [1] and trained for fine-grained alignment between regions of interest in the image and associated noun phrases using the Hungarian matching algorithm. Additional pretraining tasks include VQA and GQA with their own classification heads. GLIP [13] follows a similar approach but with a different architecture: a series of cross-attention layers between the language encoder and a transformer-based object detection module (named DyHead), followed by a contrastive learning module to learn the correspondences between detected regions and phrases, using \(\sim\)27 million examples. Another example of this category is OWL-ViT [12], which starts from a patch-based ViT for the alignment and does not train any object detector in the process. Due to the specific capabilities of MDETR (i.e., grounding and a pre-trained GQA head), we used this model as another baseline for our experiments.

**Encoder-Decoder Transformers** Extending the ideas introduced in MDETR, the General Purpose Vision (GPV) [13] model can handle, in addition to object localization, image classification, captioning, and _VQA_ tasks.
The task is specified as input to the model in natural language, enabling simultaneous multi-task training. The noun phrase localization task has bounding box ground truth available during training, while other tasks such as classification, question answering, and captioning have ground-truth text associated with images. Other task-agnostic vision-language models that are end-to-end trainable on multiple tasks and have a generative decoder branch for auto-regressing text include [11, 12, 13]. BLIP [14] combines Dual-Encoders, Cross-Encoders, and a Transformer Decoder to learn a more robust alignment between image-text pairs, as well as the caption generation task. SimVLM [12] feeds the image patches and a portion of the caption to the Transformer Encoder (following the _PrefixLM_ idea on the language side), then feeds the output to the Transformer Decoder to generate the rest of the caption in a causal LM setting. These later examples, along with CoCa [13], have shown high performance gains on multiple downstream tasks, but are not suitable for studying noun phrase grounding problems in a quantitative manner, as they do not generate bounding boxes. The localization and VQA capabilities of GPV make this model suitable for our experiments. We built our ranking model on top of the GPV Localization module.

### Spatial Relationship Understanding

Previous work on spatial relationship understanding uses the synthetic CLEVR [15] dataset, focuses on simpler spatial relationships, and neglects the challenges posed in real images by object detection and representation learning. More general approaches study spatial relationships as part of the VQA [1] or GQA [10] tasks. According to the GQA benchmark, only 8% and 22.4% of VQA and GQA, respectively, are allocated to _spatial questions_. The existing GQA questions that probe spatial understanding typically have binary YES/NO answers and hence inherit the well-known biases of the VQA/GQA task. VSR [15] is the most recent dataset curated for studying spatial understanding in a more visually realistic setting using MSCOCO [14] images. The authors collect annotations of spatial relationships including positive and negative spatial clauses. They report performance on image-text matching tasks by fine-tuning the existing models (LXMERT, VisualBERT, and ViLT) on the training portion of this dataset, which has \(\sim\)10K image-text pairs. We use this dataset in our approach and propose an alternative evaluation methodology for understanding spatial relationships.

## Probing Analysis of LXMERT on VSR

We start by pointing out some problems with the image-text matching approach on LXMERT, which was the best performing model on the VSR dataset. Since LXMERT takes as input ROI features obtained by a state-of-the-art object detector, we first quantify the performance of the object detector on the VSR dataset. Due to the simple grammatical structure of the captions in the VSR dataset, the process of splitting and extracting the _subject_, _relationship_, and _object_ from the captions is straightforward.

### Quantitative Input Analysis

We quantify the effect of Faster-RCNN failing to detect regions associated with nouns corresponding to one or both of the objects in the spatial clause. The rows marked \(1.\) to \(6.\) in Table 1 report the fraction of cases where LXMERT's prediction is right or wrong, given that one, both, or none of the subject and object are detected by Faster-RCNN; the left number is the exact match, while the right one is the WordNet [12, 13] synonym match.
In Table 1, **ZS** stands for the VSR Zero-shot split (in which there is no concept overlap between splits), while **Rand** stands for the VSR Random split (in which the entire dataset is randomly distributed into train/val/test). For example, row \(1.\) refers to the cases where the binary ITM prediction is successful (indicated as **S**). We get two numbers for each split (ZS or Rand) because, for the ZS split, among all the instances where the binary ITM label prediction was correct/successful, in only 27.08% of them were both subject and object phrases among the detected category labels of the Faster-RCNN outputs for that image; however, in 30.41% of those successful cases a WordNet synonym (not the exact word match) of both subject and object was among the Faster-RCNN detected labels. The same interpretation applies to the Rand columns. In row \(2.\), in 72.91% of the successful predictions only one of the subject and object phrases was among the detections, following the same interpretation as row \(1.\), while in row \(3.\), none of them was among the detected labels. On the other hand, the second half of Table 1 shows the cases where the binary ITM label prediction was incorrect, while we mostly focus on the first half to validate our hypothesis. Considering row \(1.\), these numbers imply that only for a limited portion of all the successful cases are both subject and object labels detected by the Faster-RCNN feature extractor. On the other hand, row \(3.\) indicates that \(\sim\)15% of the successful predictions happen when both subject and object were not among the detected labels (either exact or synonym matches).

\begin{table} \begin{tabular}{l l l l l l} \hline **Case** & **S** & **One** & **Both** & **ZS** & **Rand** \\ \hline \(Rand\) & – & – & – & 50 & 50 \\ \hline \(Acc.\) & – & – & – & 65.66 & 74.11 \\ \hline \(1.\) & ✓ & ✓ & ✓ & 27.08 / 30.41 & 27.93 / 32.93 \\ \(2.\) & ✓ & ✓ & ✗ & 72.91 / 55.41 & 72.06 / 50.73 \\ \(3.\) & ✓ & ✗ & ✗ & 16.25 / 14.16 & 18.93 / 16.33 \\ \hline \(1\)-\(Acc.\) & – & – & – & 34.34 & 25.89 \\ \hline \(4.\) & ✗ & ✓ & ✓ & 29.48 / 34.26 & 25.19 / 28.43 \\ \(5.\) & ✗ & ✓ & ✗ & 70.51 / 54.98 & 74.80 / 52.86 \\ \(6.\) & ✗ & ✗ & ✗ & 11.95 / 10.75 & 20.61 / 18.70 \\ \hline \end{tabular} \end{table} Table 1: LXMERT's results on the original VSR test set [15] and Faster-RCNN error analysis. ZS refers to the zero-shot setting in VSR, in which there is no concept overlap between the train/dev/test splits, while in Rand all the data is split randomly. (\(Acc\) stands for accuracy as the % of correctly predicted ITM binary labels.)
While these numbers are not directly indicative of the model's final ability to associate words with region features, the weak correlation between them can be inferred, and has been studied in previous works like [14] as a problem in this type of architecture design.

### Qualitative Output Relevancy Scores Analysis

We also use the method of [10] to directly calculate the relevancy scores of the contextualized region features and visualize the image input tokens with the highest relevancy scores, suggesting correct noun grounding in the caption. These results indicate that a big factor in understanding spatial relationships in images is the lack of noun grounding. Furthermore, the fact that image and text can be matched successfully in the absence of noun phrase grounding skews the conclusions made by ITM probing tasks.

Figure 2: **LXMERT Relevancy Scores**: Four different cases in which LXMERT predicts the image-text labels successfully. The first column shows the original image-text pairs from the VSR dataset. The second and third columns demonstrate the regions with the highest relevancy score computed from attention weights using [12], for the _subject_ and _object_, respectively. The first row shows an example where both _subject_ and _object_ attentions imply successful grounding, while the second row demonstrates relevant activations for the subject (_potted plant_) but irrelevant attentions for _bus_. The third and fourth rows depict irrelevant attentions for both subjects and objects, demonstrating inconsistency in LXMERT's fine-grained grounding even while predicting the binary labels correctly.

### Baselines Performance Comparison

In order to have a fair comparison between state-of-the-art VLMs from different architecture designs on the VSR benchmark, we chose: (1) LXMERT from the _Cross-Modal_, (2) MDETR from the _Modulated_, and (3) GPV from the _Encoder-Decoder_ Transformers. As the original VSR benchmark is curated as image-text pairs with binary label supervision (ITM), we had to unify this with the other models, as they don't have an ITM classification head in place. Therefore, we modified image-text matching into a "yes/no" VQA/GQA task by turning the captions into questions (using the same triplet information) and considering "yes" as label 1 and "no" as label 0. Then, we ran them on the gold test set (both the Zero-shot and Random splits of VSR); the results are shown in Table 2. Although these models (specifically their GQA classification heads) are already fine-tuned on multiple V+L tasks, including grounding and compositional reasoning (to some extent), they still perform only slightly above random chance in the zero-shot inference. Apart from that, even these numbers may not reflect the truth about these models' understanding of fine-grained concepts, due to the lack of explainability in binary ITM/VQA/GQA formulations of benchmarks and tasks.

\begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **ZS** & **Rand** \\ \hline Random Chance & 50 & 50 \\ \hline GPV VQA [14] & 52.25 & 54.69 \\ MDETR GQA [13] & 53.35 & 54.54 \\ LXMERT GQA [14] & **54.71** & **55.23** \\ \hline \hline \end{tabular} \end{table} Table 2: VSR's test set performance using SOTA models as zero-shot VQA.

## Methodology

We propose a more _compositional_ approach for spatial reasoning by _decoupling_ the process into: (1) grounding the \(Subject\) and \(Object\) of the \((Subject, Relationship, Object)\) triplet extracted from the caption, and (2) predicting the \(Relationship\) by training a multi-class classification model using location features obtained by the detector. This approach is motivated by earlier structured conditional random field methods for modeling image content using scene graphs [11], used for image retrieval tasks. Details of the process are explained as follows:

### Explicit Ranking Approach

The distinguishing feature of encoder-decoder models [13, 14] is the ability to query the locations of objects. We exploit this in our alternative ranking-based approach outlined in Figure 3. The **Grounding Module** takes as input an image and a spatial clause in the form \(\{Subject, Relation, Object\}\) and queries \(GPV_{Localizer}\) with \(Q_{i}\) = "_Locate the Subject_", getting back normalized bounding box coordinates \(l_{i}=[x_{i},y_{i},h_{i},w_{i}]\) and the confidence \(p(i)=Pr(o_{i}|I,Q_{i})\) of the most confident prediction given the query \(Q_{i}\); similarly for \(p(j)=Pr(o_{j}|I,Q_{j})\) with \(Q_{j}\) = "_Locate the Object_". The concatenation of the bounding box coordinates is fed to an \(MLP\) to generate the initial probability distribution over all spatial relationships \(r_{k}\) (\(k=1,\ldots,9\) in our case):
\[\big\{Pr_{ij}^{1},\ldots,Pr_{ij}^{9}\big\}=MLP([l_{i},l_{j}]),\]

where \(Pr_{ij}^{k}=Pr(r_{k}|l_{i},l_{j})\). The score of the spatial clause is then calculated using a simple scoring function, where the score \(S_{k}(I,T)\) is computed as follows:

\[S_{k}(I,T)=p(i)\,Pr(r_{k}|l_{i},l_{j})\,p(j).\]

The **Re-ranking Module** then uses the prior probability \(Pr[R_{k}(i,j)]\) of two objects \(i\) and \(j\) appearing in a certain relation \(k\), computed by counting all prior co-occurrences of \(i\) and \(j\) appearing in relation \(k\). The relation probability is then re-weighted as \(r_{k}(i,j)=Pr(r_{k}|l_{i},l_{j})\,Pr[R_{k}(i,j)]\), yielding the final ranking function:

\[S_{k}(I,T)=p(i)\,r_{k}(i,j)\,p(j).\]

## Experiments

**Data Pre-processing.** Because of the modified problem formulation from binary to multi-class classification, we first filtered the original VSR benchmark to keep only the instances with the positive (1) label. We then discarded the orientation-based clauses like _"facing away"_ and _"parallel to"_, which require pose or 3D information (VSR also reported this class of spatial clauses as the worst-performing one, even below random chance), as well as the depth-based clauses like _"in front of"_ and _"behind"_, which require a depth signal. Finally, we grouped the semantically similar spatial clauses in VSR together and ended up with 9 classes: (1) _below_, (2) _above_, (3) _far from_, (4) _right of_, (5) _left of_, (6) _inside_, (7) _outside_, (8) _near_, and (9) _contains_, to have more controlled experiments over primitive spatial clauses (details of the spatial clause merging/grouping can be found in Table 3). In total, we ended up with 3895 data samples, which we divided randomly into 3116 (80%) and 779 (20%) splits using stratified sampling, to be used as the fixed/gold train and test sets for our experiments. We also tried adding geometry-based features for further disambiguation by computing the direction of the vector from the subject to the object bounding box center, as well as the absolute distance (norm) of the vector. Finally, we tried augmenting the dataset (train set) by a factor of 2, such that for symmetric clauses (like _"near"_) we swap the subject and object and keep the spatial clause the same, while for asymmetric ones (like _"left of"_) we swap the subject and object and reverse the spatial clause.

### Implementation Details

For the localization part, we used the GPV-1-Loc checkpoint, pre-trained on the COCO split. For the spatial relationship reasoning, our MLP model consists of: (1) an input of size 8, (2) two hidden layers of 16 and 32 units, and (3) an output layer with 9 neurons. Both hidden layers are followed by BatchNorm1d layers, then ReLU activations. Training was done for 100 epochs until convergence, with a batch size of 12 and a learning rate of 1e-5, using the CrossEntropyLoss criterion and the Adam optimizer.
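As a concrete illustration, the following is a minimal PyTorch sketch of this relation classifier together with the two-stage scoring of the Explicit Ranking Approach. The class and function names are ours; the box coordinates and the confidences \(p(i)\), \(p(j)\) are assumed to come from the GPV localizer, and the co-occurrence prior is optional.

```python
import torch
import torch.nn as nn

class RelationMLP(nn.Module):
    """Relation classifier: concatenated subject/object boxes [x, y, h, w] -> 9 relations."""
    def __init__(self, n_relations: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Linear(16, 32), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Linear(32, n_relations),
        )

    def forward(self, box_subj: torch.Tensor, box_obj: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([box_subj, box_obj], dim=-1))

def rank_clauses(mlp, box_i, box_j, p_i, p_j, cooccurrence_prior=None):
    """S_k(I,T) = p(i) * Pr(r_k | l_i, l_j) * p(j), optionally re-ranked by Pr[R_k(i,j)]."""
    mlp.eval()  # BatchNorm1d requires eval mode for a single example
    with torch.no_grad():
        rel_probs = torch.softmax(mlp(box_i[None], box_j[None]), dim=-1)[0]
    if cooccurrence_prior is not None:  # re-ranking: r_k(i,j) = Pr(r_k|l_i,l_j) * Pr[R_k(i,j)]
        rel_probs = rel_probs * cooccurrence_prior
    return p_i * rel_probs * p_j  # one score per relation; sorting gives the clause ranking
```

Sorting the returned scores over the nine relation classes yields the top-1/top-3 rankings evaluated below.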
### Results Analysis

Table 4 shows the performance of our ranking approach, trained and tested on a subset of VSR. Our top-performing settings of 54.04% and 88.96% are 42.93% and 55.63% above the respective random chances, reflecting the % of times the correct spatial relationship is the _top-1_ or among the _top-3_ ranked spatial clauses. To have a fair comparison, we also ran our baseline models (LXMERT, MDETR, and GPV) in the binary VQA mode on our modified test set containing only 9 spatial relationships, and compare the performance of the SOTA models with random chance in Table 5. According to Table 5, MDETR _pre-trained_ on GQA Hudson and Manning (2019) achieves the highest performance on our test set, performing 45.63% above the binary random chance. Note that the biggest improvement was marked by models pre-trained on GQA (MDETR GQA and LXMERT GQA). We hypothesize that GQA contains more spatial clauses that resemble our modified dataset. Comparing our best performance after re-ranking (shown as **Ours Best** \(\Delta\) **= 55.63** in Table 4) to the best performing baseline model (shown as **SOTA Best** \(\Delta\) **= 45.63** in Table 5), we demonstrated that our approach outperforms the best SOTA (MDETR GQA) by 10% in terms of relative accuracy over random chance, on the same test set.

Figure 3: **Our Approach** consists of two main modules: (1) the Grounding module predicts locations of objects along with their confidences; an MLP takes the bounding box coordinates and predicts the distribution of spatial relationships, and these are then combined to compute the initial ranking of spatial clauses. (2) The Re-ranking module adjusts the ranking given the co-occurrence priors. This example shows the effectiveness of the Re-ranking Module in adjusting the spatial clause distribution (which brings _inside_ to the 1st rank), while the initial top-3 predictions were semantically correct.

## Discussion

The major limitation of spatial reasoning with LXMERT-style transformers is the lack of precise interpretability of the image-text matching evaluation. The proposed ranking-based approach with the GPV encoder-decoder quantifies the ability of the model to ground the noun phrases explicitly and handles well the intrinsic ambiguity of spatial relations, where multiple spatial clauses are suitable descriptions of the image. In the presented approach, the reasoning is done purely in 2D with decoupled locations and region ROI features. Further disambiguation of more complex relations requires knowledge of 3D, which can be revealed by depth and object pose estimates, novel larger datasets, additional means of supervision, or variations in model architectures. An additional drawback of the VSR dataset is the lack of ground-truth bounding box annotations for the subject and object; we therefore considered the GPV localization relevance scores as silver labels (the majority of the scores fall between 0.8 and 1.0 for the entire dataset). In terms of data augmentation, we noticed that our automated augmentation process of swapping the subject and object for both order-dependent and order-independent spatial clauses was not as helpful in our re-ranking technique. We think that a larger dataset, covering a wider range of visual/textual concepts and augmented with the above-mentioned supervision, would be beneficial for further performance improvements.
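For illustration, here is a minimal sketch of the swap-based augmentation rule just discussed; the exact partition of our nine clauses into symmetric and asymmetric ones is an assumption on our part, not taken from the paper's tables.

```python
SYMMETRIC = {"near", "far from"}                        # assumed order-independent clauses
INVERSE = {"left of": "right of", "right of": "left of",
           "above": "below", "below": "above",
           "inside": "contains", "contains": "inside"}  # assumed inverse pairs

def augment(subj: str, rel: str, obj: str) -> list:
    """Swap subject and object: keep symmetric clauses, reverse asymmetric ones.
    Clauses without a clean inverse (e.g., "outside") are left un-augmented here."""
    if rel in SYMMETRIC:
        return [(obj, rel, subj)]
    if rel in INVERSE:
        return [(obj, INVERSE[rel], subj)]
    return []
```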
## Conclusions and Future Works

We demonstrated the low zero-shot performance of several state-of-the-art VLMs on the spatial reasoning task, and the challenges of input grounding and image-text matching with LXMERT-style architectures that output only contextualized tokens along with the \(\mathbf{h}_{[IMG]}\) and \(\mathbf{h}_{[CLS]}\) tokens. We proposed a compositional approach for spatial reasoning using the outputs of the GPV encoder-decoder model with explicit quantification of grounding, and outperformed the SOTA models in terms of relative increase over random chance for each setting. Another major advantage of our approach is its modularity, which makes it possible to replace the localization module with upcoming SOTA models in the future. Future directions include extensions of the compositional framework to more complex reasoning and grounding tasks, such as referring expressions [22], and finer-grained understanding of spatial relations requiring 3D cues.
2306.01688
Packet Reception Probability: Packets That You Can't Decode Can Help Keep You Safe
This paper provides a robust, scalable Bluetooth Low-Energy (BLE) based indoor localization solution using commodity hardware. While WiFi-based indoor localization has been widely studied, BLE has emerged as a key technology for contact-tracing in the current pandemic. To accurately estimate distance using BLE on commercial devices, systems today rely on the Received Signal Strength Indicator (RSSI), which suffers from sampling bias and multipath effects. We propose a new metric: Packet Reception Probability (PRP), which builds on a counter-intuitive idea that we can exploit packet loss to estimate distance. We localize using a Bayesian-PRP formulation that also incorporates an explicit model of the multipath. To make deployment easy, we do not require any hardware, firmware, or driver-level changes to off-the-shelf devices, and require minimal training. PRP can achieve meter-level accuracy with just 6 devices with known locations and 12 training locations. We show that fusing PRP with RSSI is beneficial at short distances < 2m. Beyond 2m, fusion is worse than PRP, as RSSI becomes effectively de-correlated with distance. Robust location accuracy at all distances and ease of deployment with PRP can help enable wide-range indoor localization solutions using BLE.
Subham De, Deepak Vasisht, Hari Sundaram, Robin Kravets
2023-06-02T17:03:30Z
http://arxiv.org/abs/2306.01688v1
# Packet Reception Probability: Packets That You Can't Decode Can Help Keep You Safe

###### Abstract

This paper provides a robust, scalable Bluetooth Low-Energy (BLE) based indoor localization solution using commodity hardware. While WiFi-based indoor localization has been widely studied, BLE has emerged as a key technology for contact-tracing in the current pandemic. To accurately estimate distance using BLE on commercial devices, systems today rely on the Received Signal Strength Indicator (RSSI), which suffers from sampling bias and multipath effects. We propose a new metric: Packet Reception Probability (PRP), which builds on a counter-intuitive idea that we can exploit packet loss to estimate distance. We localize using a Bayesian-PRP formulation that also incorporates an explicit model of the multipath. To make deployment easy, we do not require any hardware, firmware, or driver-level changes to off-the-shelf devices, and require minimal training. PRP can achieve meter-level accuracy with just 6 devices with known locations and 12 training locations. We show that fusing PRP with RSSI is beneficial at short distances (\(\leq 2\,\mathrm{m}\)). Beyond \(2\,\mathrm{m}\), fusion is worse than PRP, as RSSI becomes effectively de-correlated with distance. Robust location accuracy at all distances and ease of deployment with PRP can help enable wide-range indoor localization solutions using BLE.

## 1 Introduction

Indoor positioning is a widely studied problem in academia and industry [48, 47, 24, 28, 57, 14, 44]. Coupled with the high penetration of consumer radio devices (e.g., smartphones), indoor positioning can re-imagine the use of indoor spaces like retail spaces, malls, museums, and warehouses. Today, the contact-tracing challenge due to the pandemic has put an urgent, renewed focus on developing a robust, low-cost, scalable indoor localization solution. Indoor-localization-based contact-tracing1 that helps us determine whether a pair of individuals are "social-distancing," i.e., separated by more than 6 ft, may help safely re-open the world economy.

Footnote 1: Contact-tracing requires us to calculate the relative distance between individuals. Inferring relative distance from location is straightforward.

Technological solutions for contact tracing that use smartphones are an important complement to normative (e.g., wearing a mask) and policy (e.g., stay-at-home) interventions for mitigating the effects of the pandemic. Bluetooth Low-Energy (BLE) is emerging as the key contact-tracing technology and is being used in contact-tracing apps around the world. For example, the Aarogya Setu contact-tracing app2 in India uses BLE and has been downloaded 120M times. The open-source, privacy-preserving contact-tracing framework BlueTrace3 (deployed in Singapore) uses BLE packets to detect presence (i.e., a smartphone that can hear another must be in proximity of the other), _not_ distance. BLE is preferable to WiFi for contact-tracing: BLE uses 10\(\times\) _less power_ than WiFi, and BLE can be easily used to infer the presence of nearby peers without the presence of WiFi infrastructure. The newly proposed Exposure Notification Service by Apple-Google4 also relies on BLE beacons and signal strength measurements.
Footnote 2: [https://www.mygov.in/aarogya-setu-app/](https://www.mygov.in/aarogya-setu-app/)

Footnote 3: [https://bluetrace.io](https://bluetrace.io)

Footnote 4: [https://www.apple.com/covid19/contacttracing/](https://www.apple.com/covid19/contacttracing/)

### _Overcoming Key Technological Limitations for Contact Tracing_

In this paper, we ask: _Can we develop robust Bluetooth-based contact tracing, with existing measurements, deployed on low-cost commodity hardware?_ To do so, we need to overcome four fundamental limitations--deployability, bias in RSSI, high packet loss in Bluetooth, and multipath effects.

**Deployability on commercial smartphones:** Bluetooth Low-Energy based apps for contact-tracing have two well-known shortcomings. These apps primarily use either RSSI (Received Signal Strength Indicator) or presence to determine the risk of COVID exposure. Prior work [3, 16, 58] demonstrates that RSSI-based methods experience large errors (on the order of several meters) in positioning, especially in the low-RSSI, large-distance regime. RSSI has an important benefit: it is present on all modern devices. In contrast, we cannot use CSI (Channel State Information) [24, 53, 48], a recent method that enables sub-meter accuracy, since off-the-shelf devices typically do not report CSI. A recent work [40] has enabled CSI for WiFi in some smartphones, but it cannot be applied to BLE. Some contact-tracing apps also use 'presence'--if one device can hear another--to determine if an individual is close to another infected person. Presence is a poor proxy for distance, since devices can hear Bluetooth beacons well beyond the 6 ft social-distancing radius, and also hear them across aisles and walls.

**Biased RSSI Estimates due to Packet Loss:** We explain with a conceptual example in Figure 1(a), which shows a Normally distributed RSSI at the receiver, for a fixed transmitter and receiver. In free space, with increasing distance between the transmitter and the receiver, the RSSI distribution shifts to the left, implying a decreasing RSSI at the receiver. RSSI-based methods [3, 30] empirically measure RSSI and use the mean RSSI estimate to infer distance. However, as the distance between the transmitter and the receiver increases (i.e., the RSSI distribution shifts to the left), packet loss increases, with almost certain packet loss at the low-RSSI decoding threshold. Since devices only report RSSI for successfully decoded packets, RSSI-based distance methods suffer from a sampling bias: they use RSSI from decoded packets only. Since they cannot know the RSSI values of packets they cannot decode, these methods introduce a systematic error in their mean RSSI estimates. This error increases with distance, so much so that at large distances (a few meters for BLE), as we shall show in this paper, the mean RSSI estimate becomes de-correlated with distance and is an unreliable indicator. This error is different from the typical reduction in SNR due to an increase in distance: it stems from a sampling bias fundamental to RSSI measurements.

**Packet Losses are Higher in BLE:** Packet loss is a fundamental problem in a low-power protocol like Bluetooth Low Energy. At distances as small as \(1\,\mathrm{m}\), in line of sight, around 10% of the packets get dropped in our empirical evaluation, as shown in Figure 1(b). The packet loss rate increases to 50% at \(3\,\mathrm{m}\). Thus, the sampling bias in RSSI measurements is a more significant challenge for BLE compared to RSSI methods based on the high-power WiFi protocol [3, 57, 10].
As pointed out in [7], BLE limits transmission power to reduce energy consumption. The maximum output power defined in BLE v4.0, v4.1, and v4.2 is 10 mW, which is 10\(\times\) lower than WiFi's.

**Multipath Effects:** Multi-path effects [51, 58] are the second-largest contributor to RSSI errors. Specifically, the error arises due to reflections of the radio signals by objects in the environment. Thus, the signals from the transmitter travel along multiple paths and combine at the receiver. This combination can be constructive (i.e., in-phase), increasing RSSI, or destructive (i.e., out of phase), reducing RSSI. Since this combination is a function of the environment and _not_ the distance between the devices, multipath introduces error in distance measurements.

### _A Counter-Intuitive Approach: Exploit Packet Loss to infer Distance_

In this paper, we ask a counter-intuitive question: _Could the loss of a packet be a clue to the distance between the transmitter and receiver?_ Intuitively, as the distance between transmitter and receiver increases, the ability to successfully receive packets decreases. In this paper, we build on this intuition to develop a new metric: Packet Reception Probability (PRP), which measures the probability that a receiver successfully receives packets from the transmitter. A simple experiment validates our intuition that PRP can encode distance. We collected packets from BLE beacons transmitting at -20 dBm power at distance values increasing from \(1\,\mathrm{m}\) to \(10\,\mathrm{m}\) in a line-of-sight (LOS) scenario. We use maximum likelihood estimates for PRP. We plot the PRP estimate as a function of distance in Figure 1(b). Notice that Figure 1(b) shows that the probability of receiving a packet _decreases_ with distance, implying that PRP encodes distance. We show in this paper that, for low-energy protocols including BLE, PRP is a good indicator of the distance between communicating devices. Our approach, Bayesian Packet Reception Probability (B-PRP), is suitable for public spaces including retail stores or libraries, places that are important to current social distancing and contact tracing efforts. B-PRP is a PRP-based approach that develops a novel Bayesian framework to explicitly model multipath reflections in the environment and deliver robust and accurate localization. The Bayesian framework helps to minimize system deployment costs. A public environment like a retail store contains obstructing materials in the form of stacks or shelves. The shelves (including the items placed on them) absorb or reflect the radio signals directed at them. This leads to a lower packet reception probability at the receiver. At a fixed distance, the packet reception probability will vary based on the number and type of obstacles in the signal path. B-PRP must tease apart the effects of distance from the interference effects of the obstacles when estimating distance. We observe that we can model public spaces, including retail stores, in a modular manner comprising open spaces separated by stacks. We explicitly capture the effect of such stacks by modeling the packet reception in the absence of stacks and in the presence of one stack, two stacks, and so on. While we use stacks to model retail spaces, we believe that the abstraction of modeling a geometric element is general enough to apply to other large indoor spaces like libraries, warehouses, factories, etc.

Fig. 1: (a) As the mean RSSI decreases, the error in the RSSI estimate increases because of lost packets.
(b) Packet reception in line-of-sight (LOS) with -20 dBm transmission power decreases with distance. (c) The Packet Reception Probability (PRP) technique is more accurate than RSSI [3, 16, 58] and more readily deployable on commercial devices than CSI [24, 53, 48].

Finally, we present a method to estimate inter-device distance using our approach, a primitive essential to contact-tracing. We evaluated B-PRP in two real-world public places, an academic library and a retail store, and demonstrate the efficacy of our techniques. In both cases, we did not control for human traffic. Our main results:

**Localization Accuracy:** B-PRP achieves a median localization error of \(1.03\,\mathrm{m}\) (library) and \(1.45\,\mathrm{m}\) (retail store). The state-of-the-art Bayesian RSSI system [30] has errors of \(1.30\,\mathrm{m}\) (library, 26.2% more error) and \(2.05\,\mathrm{m}\) (retail store, 41.3% more error) when trained with the same number of data points and packets per data point.

**Distance estimation for contact tracing:** Our contact-tracing distance estimation achieves a median error of \(0.97\,\mathrm{m}\) (library) and \(1.22\,\mathrm{m}\) (retail store) with PRP values. The errors with RSSI are \(1.69\,\mathrm{m}\) (library, 74.2% more error) and \(1.25\,\mathrm{m}\) (retail store, 2.4% more error). Using the COVID risk metric [46], we see that PRP does 1000\(\times\) better than RSSI in the library.

**B-PRP+RSSI Fusion:** Fusion of B-PRP and RSSI modestly improves the overall localization accuracy over B-PRP (Table II). We see the best fusion results at small distances (\(\leq 2\,\mathrm{m}\)). At larger distances (\(\geq 2\,\mathrm{m}\)), errors in RSSI cause fusion results to be significantly worse than B-PRP. PRP+RSSI also improves contact-tracing accuracy by 6% for both the library and the retail store.

**Robustness to Multipath:** Our multipath model improves the accuracy of PRP from \(1.41\,\mathrm{m}\) to \(1.03\,\mathrm{m}\) median error in the library (a 26.9% improvement) and from \(1.60\,\mathrm{m}\) to \(1.45\,\mathrm{m}\) (a 9.3% improvement) in the retail store.

**Number of Beacons:** As beacon density decreases, the B-PRP error is always within \(2\,\mathrm{m}\) while RSSI errors are higher than \(3\,\mathrm{m}\). With five beacons, B-PRP performs \(65\%\) better in the library and \(50\%\) better in the retail store.

**Low Training Overhead:** B-PRP can leverage unlabelled training data to train the B-PRP model, thereby reducing the deployment effort. Specifically, B-PRP can achieve 1.08 m median accuracy with just 8 labelled data points and 4 unlabelled data points.

For completeness, we note that the core limitation of a localization method like B-PRP, a limitation shared with methods including [9, 27, 12, 28], is that it needs the deployment of beacons in the public space to locate individuals. However, BLE beacons are inexpensive, and our method, B-PRP, provides meter-level accuracy. Peer-to-peer distance estimation, which uses devices like smartphones for both reception and transmission, is much more general. We believe that this tradeoff between some upfront infrastructure expense (multiple beacons) and increased localization accuracy is worthwhile in highly frequented public spaces.

## 2 Contributions

Our paper makes the following contributions:

**Use of Negative Information:** To the best of our knowledge, we are the first to build an indoor positioning system that can extract information from the _absence of packets_.
In contrast, state-of-the-art RSSI-based techniques [3, 57, 10] use observed RSSI to infer distance. We accomplish this through a Bayesian formulation of the packet reception probability, a metric that we show encodes distance. We develop generic stacking models of reception to address multipath effects. While we use PRP as a sole indicator of distance to highlight its benefits, we show that B-PRP, when combined with RSSI, improves the performance of the system at shorter distances. Our finding shows how to use BLE to robustly estimate indoor distances, thus opening the door to reliable BLE-based contact-tracing that incorporates distance.

**Distance estimation for contact tracing without localization:** We directly estimate the distance between two individuals _without localization_ by exploiting the well-known triangle inequality constraints of Euclidean geometry. In contrast, we may consider estimating the contact-tracing distance through localization: that is, we first estimate the locations of two persons independently and then calculate the Euclidean distance between the two locations. This approach is sub-optimal--we are estimating location while we are only interested in distance. Also, if we have localization errors for a particular individual, these errors will impact _all_ the distance estimations between this individual and other nearby persons. We extend our Bayesian framework to independently estimate distances between pairs of individuals. With the known beacons, we form triangles, and we impose triangle inequalities on these distances to rule out many distance configurations in the real world. We improve our distance estimates by \(\sim 10\%\) via triangle-inequality distance estimation.

**Sampling Bias in RSSI:** We show the effect of packet loss on mean RSSI measurements. Furthermore, we show that with increasing distance, the mean RSSI becomes highly unreliable due to sampling bias. Our finding is significant because state-of-the-art RSSI-based techniques [3, 57, 10], when applied to BLE, a low-power protocol, are highly unreliable in the \(2\,\mathrm{m}\) to \(6\,\mathrm{m}\) range (Table II). We highlight that \(2\,\mathrm{m}\approx 6\,\mathrm{ft}\), the social-distancing range.

**Readily Deployable Solution:** Our B-PRP framework does not require any hardware, firmware, or driver-level changes in off-the-shelf devices, and it incurs minimal deployment and re-training costs. In contrast, CSI [24, 53, 48], which can deliver sub-meter accuracy, requires firmware or hardware changes. This is significant: due to the simplicity of the packet reception framework, we can immediately deploy B-PRP as an application on off-the-shelf commodity smartphones.

## 3 Motivation

In this work, we focus on localizing individuals in indoor public spaces like retail stores and libraries. In these spaces, indoor positioning using BLE beacons can enable traditional applications like capturing behavioral data about shoppers, as well as novel applications like enforcing social distancing and contact-tracing. BLE offers a unique advantage for localization. Due to its low power budget, it can be turned on frequently and hence enables more frequent location updates compared to high-power protocols like Wi-Fi. Recall that BLE's maximum transmit power (10 dBm) is 10 times lower than that of Wi-Fi (20 dBm). This factor, in addition to its ubiquitous presence on off-the-shelf smartphones, has made BLE the natural choice for such applications.
Traditional BLE localization techniques use either RSSI (Received Signal Strength Indicator) [3, 16, 58] or CSI (Channel State Information) [24, 2]. CSI for BLE is not available on commercial devices like smartphones. On the other hand, RSSI measurements are noisy due to packet loss and multi-path effects. While multi-path effects [51, 58] are well documented, let's dig deeper into the challenge of packet loss. First, we identify that packet losses can be mainly attributed to two causes--random errors and low signal strength. Errors can occur uniformly at random, irrespective of the RSSI of the packet. As a result, such errors do not introduce any bias into the aggregate estimate of RSSI. On the other hand, all packets that are received with a signal strength below a certain decoding threshold get dropped. Since we cannot observe the RSSI values of these low-RSSI packets, and hence cannot include them in our aggregate estimates, we should expect to see a positive bias introduced in our RSSI measurements.

Let's mathematically validate our hypothesis of positive bias in aggregate RSSI estimates. Let us assume that the actual RSSI values at a certain location follow the Gaussian distribution \(\mathcal{N}(\mu,\sigma^{2})\). Let's further assume that the RSSI decoding threshold is \(\alpha\). Since we drop all packets below the threshold, our aggregate RSSI estimates will be based on a Normal distribution truncated at \(\alpha\). The mean of this truncated normal distribution is given by

\[\hat{\mu}=\mu+\frac{\phi(\alpha)}{1-\Phi(\alpha)}\,\sigma, \tag{1}\]

where \(\phi(\alpha)\) is the pdf of the normal distribution evaluated at \(\alpha\) (\(\phi(\alpha)\geq 0\)) and \(\Phi(\alpha)\) is the cdf value of the normal distribution at \(\alpha\) (\(\Phi(\alpha)<1\)). Thus, the estimate \(\hat{\mu}\) that we obtain by measuring received RSSI values is biased by a positive amount of \(\frac{\phi(\alpha)\sigma}{1-\Phi(\alpha)}\). As we move towards the lower-RSSI regime, \(\mu\) becomes closer to \(\alpha\). As a result, both \(\phi(\alpha)\) and \(\Phi(\alpha)\) increase with lower RSSI values, which leads to a higher bias in the estimated mean RSSI. Note that we cannot trivially estimate \(\mu\) from \(\hat{\mu}\) in Equation (1), since in practice multi-path effects alter the RSSI values of the received packets. Thus, recovering \(\mu\) and \(\sigma\) using, say, maximum likelihood estimation by assuming a value of \(\alpha\) is non-trivial.

Based on the above discussion, we identify that our solution requires three important properties--eliminating the positive bias due to packet loss, robustness to multi-path effects, and ease of deployability on commercial devices. CSI meets the first two properties but misses the important requirement of deployability. RSSI is deployable, but has a positive bias and is sensitive to multi-path.
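To make the bias concrete, here is a minimal numerical sketch of Equation (1), assuming--as is standard for a truncated normal--that \(\phi\) and \(\Phi\) are evaluated at the standardized threshold \((\alpha-\mu)/\sigma\); all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative values: true RSSI ~ N(mu, sigma^2) in dBm, decoding threshold alpha.
mu, sigma, alpha = -90.0, 5.0, -95.0

# Bias predicted by Eq. (1), with phi/Phi evaluated at the standardized threshold.
z = (alpha - mu) / sigma
predicted_mean = mu + sigma * norm.pdf(z) / (1.0 - norm.cdf(z))

# Monte-Carlo check: mean RSSI over decoded packets only (dropped ones are unobserved).
rng = np.random.default_rng(0)
rssi = rng.normal(mu, sigma, size=1_000_000)
measured_mean = rssi[rssi >= alpha].mean()

print(predicted_mean, measured_mean)  # both ~ -88.6 dBm > mu = -90 dBm: a positive bias
```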
## 4 System Design

In this paper, we solve the challenges with RSSI by asking a different question--_can we use the loss of packets as a signature itself to measure distance?_ We define a random variable, the packet reception probability \(prp(b)\) for a beacon \(b\), whose expected value is defined as:

\[\mathbb{E}(prp(b))=\frac{\sum_{i}\mathbf{1}_{i=b}}{R\,(t_{l}-t_{f})} \tag{2}\]

Here, \(\mathbf{1}\) is the indicator function that is \(1\) if and only if packet \(i\) is received from beacon \(b\), \(R\) is the sending rate of the beacon, and \(t_{l}\) and \(t_{f}\) are the timestamps of the last and the first packet received from beacon \(b\). Notice that the right-hand side of Equation (2) is just the frequentist estimate of the probability of packet reception from beacon \(b\): the number of packets received divided by the total number of packets sent by beacon \(b\). One might wonder if PRP provides additional information beyond RSSI measurements. Notice that by directly modeling packet reception, we are leveraging the absence of information (packet loss). RSSI is measured for packets that are successfully received, but not for dropped packets. Therefore, a system that drops 90% of the packets and one that drops 50% may have the same measured RSSI, but we know that one of them has a lower true RSSI, and hence is farther off, by looking at the packet reception probability. Also, **packets that are successfully received but influenced by multipath effects only impact the RSSI mean estimate, not the expected PRP value \(\mathbb{E}(prp(b))\)**.
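The estimator in Equation (2) is straightforward to compute from a reception log; below is a minimal sketch, with the function name and the synthetic log being our own.

```python
import numpy as np

def prp_estimate(recv_times, send_rate):
    """Frequentist PRP estimate of Eq. (2): packets received from one beacon divided
    by the number the beacon sent, R * (t_l - t_f), between first and last reception."""
    t = np.sort(np.asarray(recv_times))
    return len(t) / (send_rate * (t[-1] - t[0]))

# Illustrative log: a beacon advertising at R = 10 packets/s with ~40% packet loss.
rng = np.random.default_rng(0)
sent = np.arange(0.0, 30.0, 0.1)                 # 300 packets over 30 s
received = sent[rng.random(sent.size) < 0.6]     # ~60% decoded
print(prp_estimate(received, send_rate=10.0))    # ~0.6
```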
Now we will focus on how to use PRP to measure location.

### _Estimating Location using PRP_

Recall from Figure 1(b) that PRP degrades with distance. In this section, we discuss how we can model the relationship between PRP and distance, and use this relationship to infer location. Specifically, PRP (\(prp\)) depends on three factors: (a) the distance (\(d\)), (b) the sending rate (\(R\)), and (c) the transmission power (\(p_{0}\)). In this subsection, we model the relationship in free space (Figure 3(A)). We will incorporate the effect of multipath in subsequent sections. We use a Bayesian model to capture the relationship between PRP estimates from multiple beacons and the underlying physical location. Our choice of the Bayesian approach is motivated by two key design benefits: (a) it allows us to infer not just the location, but also to quantify the uncertainty in the location estimate. Such estimates are very helpful when the location is used for higher-layer applications like customer behavior analytics and contact tracing. (b) It can be extended to scenarios where the beacon location itself is unknown or the training set is small. As we show in Section 5, this reduces the deployment costs.

Fig. 2: **Graphical model**: Shaded nodes are observed, while we need to estimate the unshaded ones. We use the data on the number of received packets \(c_{i}\) measured from \(B\) beacons at \(N_{R}\) reception locations to train the PRP parameters \([w]\). During tracking, we use the trained parameters \([w]\) and \(c_{i,t}\) to estimate the location \(l_{t}\).

We model \(prp\) as a function \(g\) of the distance \(d\), the sending rate \(R\), and the power \(p_{0}\) of the beacon. Assume that we receive a packet from a beacon at \((x_{b},y_{b})\) at location \((x_{r},y_{r})\). We calculate the Euclidean distance \(d\) between the beacon and receiver. Then, assuming that we know the sending rate \(R\) and transmission power \(p_{0}\), we can model the number of packets \(c\) received at \((x_{r},y_{r})\) as drawn from a binomial distribution with parameter \(prp\):

\[c\sim Bin\left(N,prp\right)\quad\text{(binomial distribution)},\]
\[prp=g(d,R,p_{0})\quad\text{(PRP link function)},\]
\[d=\sqrt{(x_{b}-x_{r})^{2}+(y_{b}-y_{r})^{2}}\quad\text{(distance to beacon }b\text{)}.\]

\(N\), the total number of packets sent out by the beacon, is proportional to the product of the sending rate \(R\) and the time \(T_{r}\) spent at location \(r\). The function \(g(d,R,p_{0})\) is a link function that connects the underlying infrastructure parameters (\(R,p_{0}\)) and the physical distance \(d\) to the packet reception probability. In identifying the right representation of \(g\), we need to keep two considerations in mind: (a) the value of \(g\) has to be between 0 and 1, and (b) \(g\) must encapsulate the relationships between \(d\), \(R\), and \(p_{0}\), not just their direct effects on \(prp\). Therefore, we model \(g(d,R,p_{0})\) as a logistic function of quadratic interactions between the parameters:

\[\text{logit}\{g(d,R,p_{0})\}=w_{0}+\sum_{i}w_{i}\theta_{i}+\sum_{i,j}w_{i,j}\theta_{i}\theta_{j} \tag{3}\]

where \(\text{logit}(p)=\log(p/(1-p))\), and \(\theta_{1},\theta_{2},\theta_{3}\) correspond to the variables \(d,R,p_{0}\), respectively. The coefficients \([w]=[w_{i},w_{i,j}]\) are drawn from a non-informative prior \(N(0,\sigma)\)--a zero-mean Normal distribution with variance \(\sigma\). We choose \(\sigma\) to be large in our system to allow for a large range of values. Our Bayesian formulation above is shown in Figure 2(a). Our framework operates as follows:

**Training Phase:** During training, we use a data set \(D\) collected in an environment to estimate the underlying parameters. Specifically, we need to estimate the posterior distribution of the unknown parameters \([w]\) given the data \(D\), i.e., \(P([w]\mid D)\). The training set \(D\) comprises BLE logs. Specifically, to obtain \(D\), we stand at \(N_{R}\) locations in our testing area and listen to the packets from \(B\) beacons. Assume further that we know the \(B\) beacon locations \((x_{b},y_{b})\), \(b\in\{1,\ldots,B\}\), and the \(N_{R}\) reception locations \((x_{r},y_{r})\), \(r\in\{1,\ldots,N_{R}\}\). We will relax this assumption in Section 5.

**Test Phase:** During the test phase, we do not know the reception locations \((x_{r},y_{r})\), \(r\in\{1,\ldots,N_{R}\}\). We use the measured \(prp\) and the parameters estimated during the training phase to estimate the receiver location. We use the PyMC3 [39] framework to do the inference.

**Adding Human Mobility:** Finally, we note that human locations across time are not independent. Rather, locations are constrained by the time between them and the average moving speed of a person. If we wish to track individuals at temporal resolution \(\delta\), and a person reaches a location at time \(t\) with speed \(s_{t}\), we can constrain that location in terms of the previous location at \(t-1\):

\[s_{t}\sim U(0,S_{max})\quad\text{(speed)},\]
\[x_{t}\mid x_{t-1}\sim\mathcal{N}(0,s_{t}\,\delta)\quad(x_{t}\text{ constrained by }s_{t}\times\delta),\]
\[y_{t}\mid y_{t-1}\sim\mathcal{N}(0,s_{t}\,\delta)\quad(y_{t}\text{ constrained by }s_{t}\times\delta),\]

where \(S_{max}\) is a constant in our model denoting the maximum movement speed of a human (similar to [11]). We estimate the speed and location of a person from \(prp\) data.
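As a rough illustration of the test phase, the following is a minimal PyMC3 sketch of the free-space model: it infers a posterior over the receiver location given packet counts from beacons with known positions. Since \(R\) and \(p_{0}\) are fixed at test time, their terms in Equation (3) fold into the coefficients, so the link function below is quadratic in \(d\) only; the room bounds, coefficient handling, and all names are our own simplifying assumptions.

```python
import pymc3 as pm

def locate(beacon_xy, counts, n_sent, w, bounds=(0.0, 20.0)):
    """Infer the receiver location (x_r, y_r) from per-beacon packet counts.
    beacon_xy: (B, 2) array of known beacon locations; counts: B received counts;
    n_sent: packets each beacon sent (R * T_r); w: trained link coefficients."""
    with pm.Model():
        x_r = pm.Uniform("x_r", *bounds)   # uninformative prior over the area
        y_r = pm.Uniform("y_r", *bounds)
        d = pm.math.sqrt((beacon_xy[:, 0] - x_r) ** 2 + (beacon_xy[:, 1] - y_r) ** 2)
        prp = pm.math.sigmoid(w[0] + w[1] * d + w[2] * d ** 2)  # Eq. (3), d-terms only
        pm.Binomial("c", n=n_sent, p=prp, observed=counts)      # likelihood of counts
        trace = pm.sample(2000, tune=1000, return_inferencedata=False)
    return trace["x_r"].mean(), trace["y_r"].mean()
```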
Estimating this geometric information is useful both for combating multipath and for higher-layer applications. For instance, retail store apps need to estimate what aisle a customer is shopping in, and contact-tracing apps want to discount infection spread if customers are close but across aisles. To estimate the stack separation, we divide the store layout in Figure 3 into five portions based on the given beacon: free space (F-S), one stack away (1-S), two stacks away (2-S), corridor (C), and desk (D). In the figure, the packets to receiver 1 in F-S do not have to go through any obstacles. The packets to receiver 2 in 1-S and receiver 3 in 2-S go through one and two interfering stacks, respectively. Receiver 4 is in a corridor. We limit ourselves to two stacks away in the model, because we empirically observe that two or more stacks have similar effects on packet reception (high loss). Then, we parameterize our link function with a variable \(\gamma\) that denotes the geometric-element separation. We represent the new link function as \(g_{\gamma}(d,R,p_{0})\). At training time, we estimate parameters for the functions: free space \(g_{F-S}\), one stack \(g_{1-S}\), two stacks \(g_{2-S}\), and corridor \(g_{C}\). We use a Bayesian training procedure similar to the free-space scenario. We segment our training data into the different scenarios, and use the segment-specific data to learn the parameters in each \(g_{\gamma}\). For example, the data with one-stack separation is used to train \(g_{1-S}\). During testing, B-PRP uses the maximum-likelihood model to identify the underlying location as well as the stack separation. At first blush, it might seem very complex to identify \(\gamma\) for each of the \(B\) beacons. We exploit the knowledge of the store geometry and beacon arrangements within the store to significantly reduce the number of unknowns. Assume that beacons \(a\) and \(b\) are in the same aisle, adjacent to each other. Then, _regardless of where the individual is, beacons \(a\) and \(b\) must have the same model type \(\gamma\) with respect to the receiver._ Similarly, if beacons \(a\) and \(b\) are in neighboring aisles, and the model type \(\gamma\) is \(F-S\) for \(a\), then \(\gamma\) must be \(1-S\) for \(b\). Thus, given a location \(x_{t},y_{t}\), knowledge of the store geometry and beacon arrangements helps fix the model type for _all_ beacons, given the model type for _any one_ beacon.

### _Estimating Distance for Contact Tracing_

Given our framework, we could trivially estimate the distance between two individuals using a two-step approach: first, estimate their locations independently, and second, calculate the Euclidean distance between the two locations. We could then use this distance to ascertain whether two individuals were in contact, for contact tracing. However, this approach is sub-optimal. It requires us to determine four unknowns--\((x,y)\) for each of the two devices--while we are only concerned about the final distance estimate between the two devices. Also, if we make errors in location estimation for an individual, those errors impact all the distance estimates of this individual with other neighboring persons. In other words, we get correlated errors for independent distances between different pairs of individuals. **Can we do better?** At a high level, we can improve the distance estimation process using two insights. _First_, we don't need to model individual locations if we just care about distance.
Therefore, we explicitly incorporate the distance between two devices as part of our Bayesian model. This reduces the number of unknowns in our framework, and also lets us model the distance between each pair of individuals as an independent unknown. _Second_, we leverage the triangle inequality. The triangle inequality states that, in a triangle, the sum of any two edges has to be greater than or equal to the third edge. This helps us rule out many triangular distance configurations. We present a detailed formulation of these insights below. Finally, one might wonder: why do we need all this complexity? Why don't we just use the direct transmission between two devices to estimate distance--device A transmits to device B, device B measures PRP, and we convert that to distance? This approach would create a posterior distribution for distance, but with a large variance, because interference by other nearby persons increases the uncertainty in our posterior distance distribution. To shrink this variance, we need other distance measurements, either to known beacons or to many other peers. In this paper, we adopt the infrastructure-assisted approach described above to compute distances between pairs of individuals. Given that public indoor spaces like retail stores or restaurants (or even businesses) are more likely to be crowded, an infrastructure-assisted approach is reasonable in these settings.

**Approach:** We explain our approach using a toy example (pictorially represented in Figure 4) which contains two beacons \(b1\), \(b2\) and two receivers \(r1\), \(r2\). We are interested in finding the distance \(d_{r1,r2}\). The latent variables are \((d_{b1,r1},d_{b1,r2},d_{b2,r1},d_{b2,r2})\), on which we have \(prp\) data. \(d_{b1,b2}\) is known. First, we infer the latent variables \((d_{b1,r1},d_{b1,r2},d_{b2,r1},d_{b2,r2})\) by constructing a joint likelihood function with two components: observed \(prp\) values, and triangle inequalities. Taking the receiver \(r1\) as an example, we have the triangle \((r1,b1,b2)\), which gives us three triangle inequalities that can be converted to likelihood values as \[L_{T}=\log P(d_{r1,b1}+d_{r1,b2}-d_{b1,b2}>0)+\log P(d_{r1,b1}+d_{b1,b2}-d_{r1,b2}>0)+\log P(d_{r1,b2}+d_{b1,b2}-d_{r1,b1}>0) \tag{4}\] For estimating the latent variables involving the receiver \(r1\), we can write down the joint log-likelihood function as: \[\max[\log P(prp_{r1,b1}|d_{r1,b1})+\log P(prp_{r1,b2}|d_{r1,b2})+L_{T}]\] Second, we infer \(d_{r1,r2}\) by maximizing the likelihood of triangle inequalities involving triangles with two receivers and one beacon. We have two triangles \(T_{1}=(r1,r2,b1)\) and \(T_{2}=(r1,r2,b2)\). We can construct the likelihood functions for \(T_{1}\) and \(T_{2}\) similarly to Equation (4). We maximize \[L=\max_{(d_{r1,r2})}[L_{T_{1}}+L_{T_{2}}|d_{b1,r1},d_{b1,r2},d_{b2,r1},d_{b2,r2}].\] We use PyMC3 potentials to construct these joint likelihood functions and then apply MCMC sampling techniques to solve them. We can also use RSSI or PRP+RSSI instead of PRP in our likelihood functions, which will serve as our different methods in Section 8.3.

Fig. 4: Modeling the contact-tracing distance by optimizing the joint likelihood of observed PRP values and triangle inequalities.

Fig. 3: **Modelling Obstacles and Multipath:** In (A), there is no obstruction in the path of the receiver. In the retail layout (B), receiver \(1\) is in free space with the beacon, \(2\) is one stack away, and \(3\) is two stacks away. \(4\) is an open region of the layout, i.e., the corridor. We segregate the retail layout in (C) into geometric elements based on the relative position of beacon and receiver.
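As a concrete illustration of the formulation above, the sketch below encodes the triangle-inequality terms of Equation (4) as PyMC3 potentials. It is our own toy construction, not the paper's code: the smooth log-sigmoid surrogate for \(\log P(\cdot>0)\), the slope \(k\), and the distance bounds are all assumptions, and the PRP likelihood terms are omitted for brevity.

```python
# Toy PyMC3 sketch of the contact-tracing model (our construction, not the
# paper's code): latent pairwise distances with soft triangle-inequality
# potentials for receivers r1, r2 and beacons b1, b2.
import pymc3 as pm

d_b1b2 = 4.0  # known inter-beacon distance in meters (hypothetical)

def soft_gt0(x, k=10.0):
    # Smooth surrogate for log P(x > 0): a log-sigmoid with slope k
    # (an assumption; any soft constraint of this shape would do).
    return pm.math.log(pm.math.sigmoid(k * x))

with pm.Model() as tracing:
    # Latent beacon-receiver distances (PRP data exist for these) and the
    # receiver-receiver distance d_{r1,r2} we actually care about.
    names = ["r1b1", "r1b2", "r2b1", "r2b2", "r1r2"]
    d = {n: pm.Uniform("d_" + n, lower=0.1, upper=15.0) for n in names}
    # PRP likelihood terms log P(prp | d) would be added here through the
    # trained link g(d); omitted in this sketch.
    # Triangle (r1, b1, b2), cf. Eq. (4):
    pm.Potential("T_r1",
                 soft_gt0(d["r1b1"] + d["r1b2"] - d_b1b2)
                 + soft_gt0(d["r1b1"] + d_b1b2 - d["r1b2"])
                 + soft_gt0(d["r1b2"] + d_b1b2 - d["r1b1"]))
    # Triangle T1 = (r1, r2, b1) constrains the target distance; the second
    # triangle T2 = (r1, r2, b2) is built the same way.
    pm.Potential("T1",
                 soft_gt0(d["r1b1"] + d["r2b1"] - d["r1r2"])
                 + soft_gt0(d["r1b1"] + d["r1r2"] - d["r2b1"])
                 + soft_gt0(d["r2b1"] + d["r1r2"] - d["r1b1"]))
    trace = pm.sample(1000, tune=1000)
```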
## 5 System Deployment and Optimization

To summarize, the B-PRP system operates in the following steps:

* **Deployment:** We deploy BLE beacons at known locations in an environment like a retail store. The locations of the beacons as well as the floor plan are uploaded to a B-PRP server. The server can reside on the cloud or be an edge device local to each environment.
* **Training:** A user walks to fixed locations in the store with a smartphone app or another BLE receiver and measures the PRP values. The PRP values are uploaded to the server. The server uses these labelled PRP values, the beacon locations, and the floor plan to train the B-PRP model.
* **Localization and contact tracing:** Finally, when new users walk in, they measure PRP for the beacons already deployed in the store. The app on the smartphone uploads the PRP values to the server. The server uses the trained model to infer the location of each user and sends it back to the user. The server also uses the PRP values from multiple users to infer the proximity distance between them.

Note that this system is centered on the user. If the user chooses not to share the PRP values with the server, no location estimation and contact tracing can be performed. Furthermore, the design also conserves power on the smartphone because the user never has to transmit any BLE packets. Finally, beacons transmit using the BLE advertising mode. This removes the need for any explicit connection between the user device and the beacon. The user device can ignore the advertising beacons to avoid localization.

**Reducing Deployment Overhead:** Deploying the localization infrastructure has two major overheads--setting up the beacons at exact locations, and training. Determining the exact locations of beacons deployed across a large store is labor-intensive. Similarly, training involves standing at multiple known locations inside the layout and collecting data for a certain period of time. We ask two questions: _1. Instead of costly human labor, can we infer most beacon locations from training data? 2. Can we leverage data from unlabeled locations of store workers to train our model?_ As it turns out, we can affirmatively answer both of these questions in our formulation. We can leverage unlabelled data (without location information), collected by store workers as they move around the store, to help train the model as well as to infer most beacon locations. We use the data \(D\) collected by store workers to solve both problems. \(D\) contains the number of packets received from all \(B\) beacons at all \(N_{R}\) training locations. Let us assume that we know the locations of a small number \(b\ll B\) of _primary_ beacons, with the remaining \(B-b\) beacon locations unknown; ideally, we would like \(b\) to be as close to \(0\) as possible. Also assume that only a small number \(r\ll N_{R}\) of locations are known, with the remaining \(N_{R}-r\) locations unknown. Our goal is to infer the \(B-b\) beacon and \(N_{R}-r\) training locations from \(D\), along with the packet-reception model parameters \([w]\). To enable this, we view the model through a generative process. We initialize the \((B-b)\) beacon and \((N_{R}-r)\) unknown reception locations from a uniform prior over the testing area, which is of dimension \(W\times L\).
We want to jointly estimate the distribution of the unknown beacon locations \(\{l_{j}\},j\in\{1,\dots,B-b\}\), the unknown reception locations \(\{l_{k}\},k\in\{1,\dots,N_{R}-r\}\), and the packet-reception model parameters \([w]\), given data \(D\). In other words, we want to estimate the posterior distribution \(P([l_{j},l_{k},w]\mid D)\). This can be easily achieved, given the Bayesian nature of our model. We use standard **Markov Chain Monte Carlo (MCMC)** based Bayesian inference techniques to compute the posterior distribution over the unlabelled data points and beacons. We use **No-U-Turn sampling (NUTS)** [17], included with PyMC3 [39], to perform MCMC sampling. Therefore, B-PRP can leverage unlabelled data as well as unlabelled beacon locations to improve its location estimates and reduce the deployment overhead.

## 6 Experimental Set Up

We evaluate B-PRP in two testbeds--an academic library and a retail store. Both spaces have shelves segregating the floor space into rectangular regions, i.e., aisles and corridors. The two environments differ in three main aspects: the layout, i.e., the arrangement of rectangular areas and the presence of walls around the space; the material of the shelves; and human interference. The retail store had more dynamic customer traffic during the experiments.

**Library:** We show the layout of the library space, \(14m\) by \(8m\), in Figure 5(a). It has three wooden shelves (each \(11m\) long & \(0.5m\) wide). The aisles between two stacks are \(0.7m\) wide. We placed two rows of 12 beacons on each stack. We manually measured each inter-beacon distance. The distance between two adjacent beacons on the same row is \(0.91m\). The distance between two devices placed opposite each other on the same shelf, but facing two different aisles, is \(0.43m\). We carried out our experiments during regular library hours.

**Retail Store:** Figure 5(b) shows a retail store with dimensions \(10m\) by \(10m\). The environment has four steel stacks (\(1.27m\) wide each; three are \(7.5m\) long, one is \(6m\) long). The aisles between two stacks are \(1.8m\) wide. We place two rows of beacons on each stack. The inter-beacon distance on the same row is \(1m\). The retail store is a challenging environment due to the presence of steel structures as well as worker and customer movement during the experiments.

### _Devices_

We use the following devices for our experiments--Bluvision iBeeks [20], Blufi [6], a TI packet sniffer, a laptop, and Android smartphones (Nexus5X, NuuA4L). iBeeks, or iBeacons, are battery-operated BLE beacons. They support a wide range of broadcasting powers from \(-40dBm\) to \(+5dBm\): \(-40dBm\) translates to a \(3m\) line-of-sight range, while \(+5dBm\) gives us a range of \(150m\). For our experiments, the beacons send 10 packets per second at -15 dBm power. We deploy 60 iBeacons in the library and 38 beacons in the retail store. We use three receiver devices for BLE: a Texas Instruments packet sniffer (CC2540 dongle), a Nexus 5X smartphone, and a NuuA4L smartphone. iBeacons broadcast BLE packets on three channels--37, 38, and 39. The sniffer can filter out packets from specific channels. We connect the sniffer to a Windows laptop and use it for packet reception from beacons. For the Android phones, we built an Android app using the AltBeacon [1] library to scan BLE channels.

### _Baselines_

We compare B-PRP against the state of the art in RSSI-based positioning:

* **Horus** [57] is an RSSI fingerprinting technique that was originally tested with WiFi. We extend it to BLE.
For fairness, we use Horus with the same number of training locations as the other baselines--12 for the library and 9 for the retail store. The inter-state distance is \(3.5m\) for the library and \(1.85m\) for the retail store.
* **Bayesian RSSI** [30] uses a generative model based on RSSI to determine location. We set the priors and parameter values following the recommendations in [30].
* **Bayesian RSSI Fingerprinting** (or Bayesian FP) [8] is a Bayesian fusion technique applied to a fingerprinting-based method for BLE devices. It stores fingerprints like Horus, but employs a fusion technique to combine the current RSSI and prior location information.
* **MCL** [18] is a range-free localization technique that uses proximity rather than ranging information to localize nodes. It observes whether a packet was received from a device and infers whether the reception location is inside or outside a threshold distance from the beacon.

To ensure a fair comparison, we use the same training data across all techniques. Furthermore, for RSSI-based techniques, we use mean RSSI values over all packets used by the PRP technique. That is, **if PRP uses \(k\) packets at a location, we use the mean RSSI value over the _same \(k\) packets_.** This removes inter-packet RSSI variance at the same location, _improving_ RSSI localization. RSSI results are significantly worse without averaging. There is more recent work in CSI-based positioning [24, 53, 47], but CSI data is not available on most commercial smartphones. Hence, we do not cover these baselines. For reference, the state-of-the-art CSI-based method achieves a median localization error of 86 cm [2]. However, this work requires CSI data on phones and multi-antenna beacons, both of which are not mainstream yet, and hence cannot be deployed at scale for applications like contact tracing.

### _Data Collection_

We collected data for both layouts in two phases--training and localization. We collected data at stationary spots to train B-PRP and the competing baselines. We marked some fixed places in each layout and stood there for 1 minute to receive data from the beacons. We used 12 such spots for the library layout and 9 locations for the retail store layout. We collected data in both testbeds to compare the accuracy of the localization and contact-tracing techniques. To track and test on data from a moving person, we asked users to move naturally inside the layout with the laptop and sniffer in hand. We used fixed movement paths and marked spots along each path. Each path or trace is a simulated movement carried out in real time between such marked spots. We stop at each marked place for 10 seconds, and we move at a normal walking speed of \(0.5m/sec\) between the spots. We can then calculate the ground-truth location at any time within the movement trace. Please note that **we evaluate our location estimates throughout the movement trajectory**; they are not restricted to the marked fixed spots.

## 7 Micro Benchmark

We present microbenchmarks to better understand PRP:

**Relationship to RSSI:** First, we ask how PRP varies with RSSI and whether packet reception is directly dependent on RSSI. We plot this relationship in Figure 6. As seen in the figure, there is an expected trend between the two parameters, but there is also significant variance for each value of PRP. This implies that the relationship between packet reception and RSSI is not determined by a hard threshold, but is instead more probabilistic.
The probability of packet reception goes down with RSSI, but several other factors, including random noise, come into play.

**Translation across devices:** Does the relationship of PRP with distance depend on the device? To answer this, we collect PRP values at the same location with two Android smartphones: Nexus5X and NuuA4L. As shown in Figure 6, we see very close trends in PRP vs. distance, with minor variations (the experiments were conducted on different days for each smartphone).

**Robustness to interference:** Does interference from other in-band transmissions like WiFi hurt PRP? To understand this, we conduct the following experiment. We set up a WiFi router on the 2.4GHz WiFi band and use two laptops to saturate the link using the iperf utility [21]. We measure the PRP-distance relationship with WiFi interference turned on and off. We see negligible variation in the relationship between PRP and distance (Figure 6, right). This is because the three advertising channels of BLE fall between or outside the main frequencies used for IEEE 802.11, allowing for better coexistence with WiFi. Does interference from many co-located beacons hurt PRP? BLE beacons send out short advertising messages in passive mode containing a payload of at most 31 bytes. As pointed out in [13], the small size of the advertising messages helps avoid significant collisions even with 200 or more co-located devices. Similarly, the co-location of many receivers or scanning devices does not impact PRP. In our set-up, the receiver receives the advertising message in a passive scanning mode and does not respond in any way. As a result, many scanning devices do not lead to any interference.

Fig. 5: **Experimental Testbed:** We conduct our experiments in a library (a) and a retail store (b) using the devices shown in (c)--Beacon, Blufi, Sniffer, Laptop, and Nexus5X and NuuA4L Android smartphones.

## 8 Results

We compare the localization performance of the baselines against B-PRP in Section 8.1. For these results, we assume that all beacon and reception locations are known (for all methods). In Section 8.2, we evaluate the robustness to the number and placement of beacons. In Section 8.3, we evaluate the contact-tracing performance of PRP against RSSI. In Sections 8.4 and 8.5, we list the results for B-PRP when we reduce the beacon set-up costs and the number of labelled training locations. In summary:

* Median error for B-PRP is \(1.03m\) and \(1.45m\) in the library and retail store. The corresponding errors for the best baseline, Bayesian RSSI, are \(1.3m\) and \(2.05m\).
* Median error for contact-tracing distance estimation with PRP is \(0.97m\) and \(1.22m\) in the library and retail store. The corresponding errors with RSSI are \(1.69m\) and \(1.25m\).
* B-PRP is more robust than RSSI to a decreasing number of beacons. With \(5\) beacons, B-PRP performance is \(65\%\) better in the library and \(50\%\) better in the retail store.
* B-PRP performs better than Bayesian RSSI when we use only non-line-of-sight (NLOS) or far-away beacons. With beacons placed at more than \(6m\) distance, B-PRP gives errors of \(1.53m\) and \(2.07m\) in LOS and NLOS; the RSSI errors are \(3.85m\) and \(5.15m\).
* B-PRP can reduce set-up cost by learning most beacon locations. Given data from 12 training locations, B-PRP needs to know the exact locations of only \(6\) beacons and can infer the remaining \(54\) beacon locations while giving an accuracy of \(1.05m\).
* B-PRP can reduce retraining efforts by leveraging data from unknown locations.
Having data from 12 known locations vs. (6 known + 6 unknown) locations gives the same accuracy level. We can improve accuracy by \(\sim 40\%\) by adding data from unlabeled spots.

### _Localization Accuracy Evaluation_

We compare the accuracy of B-PRP against the baselines. We use the Euclidean distance to measure the error between actual and estimated locations for each time window. We show the cumulative distribution over errors in Figure 7 and the median error in Table I. First, observe that B-PRP achieves a median error of \(1.03m\) and \(1.45m\) in the library and retail store. The next best method, Bayesian RSSI, achieves errors of \(1.3m\) and \(2.05m\). The errors for all methods are higher for the retail store, which has more human traffic than the library. B-PRP outperforms the baselines for two reasons: (a) B-PRP can extract information even from lost packets, and (b) it incorporates a new multipath model that can work in the presence of obstacles. The stack model helps reduce the median error of B-PRP from \(1.41m\) to \(1.03m\) in the library and from \(1.6m\) to \(1.45m\) in the retail store. B-PRP performs much better than RSSI with non-line-of-sight (NLOS) beacons. The median error for RSSI is \(2.34m\) with NLOS beacons in the library compared to \(1.63m\) with B-PRP.

**B-PRP+RSSI:** One might wonder if B-PRP can be augmented with RSSI to achieve even better performance. We augment B-PRP with RSSI to test this hypothesis. As shown in Figure 7, the combined method performs approximately on par with B-PRP. As we demonstrate in the next subsection, this is because at smaller distances, RSSI experiences little packet loss and helps our model make better inferences. However, at large distances, RSSI experiences larger sampling bias and consequently just acts as noise, thereby hurting the model.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Environment & **B-PRP** & B-PRP + RSSI & Bayesian RSSI [30] & Horus [57] & Bayesian FP [8] & MCL [18] \\ \hline Library & 1.03m & 0.91m (\(\downarrow 11.6\%\)) & 1.3m (\(\uparrow 26.2\%\)) & 1.83m (\(\uparrow 77.6\%\)) & 1.93m (\(\uparrow 87.4\%\)) & 2.26m (\(\uparrow 119\%\)) \\ Retail Store & 1.45m & 1.46m (\(\uparrow 0.6\%\)) & 2.05m (\(\uparrow 41.4\%\)) & 1.85m (\(\uparrow 27.6\%\)) & 1.95m (\(\uparrow 34.5\%\)) & 2.93m (\(\uparrow 102\%\)) \\ \hline \hline \end{tabular} \end{table} TABLE I: **Median error (in \(m\)) of B-PRP and baselines:** B-PRP performs best in both environments, followed by Bayesian-RSSI in the library and Horus in the retail store. Fusion of B-PRP and RSSI performs slightly better in the ideal library environment with many beacons at close distance. Horus and Bayesian FP underperform as they require more training states for better accuracy. All methods perform worst in the harsh retail environment.

Fig. 6: **Microbenchmarks:** (left) PRP and RSSI are not directly related, but follow an expected trend. (middle) PRP variation is similar across two different Android devices--Nexus5X and NuuA4L. (right) PRP is robust to ambient WiFi interference.

Fig. 7: **CDF error distribution for Bayesian PRP and baselines in the library and retail store.** For RSSI techniques, we have averaged the RSSI value across the same number of packets that was used by the PRP technique.

### _Beacon Number and Placement_

We evaluate the robustness of B-PRP against the best-performing baseline, Bayesian-RSSI, with respect to two factors--the number of beacons and the placement of beacons.
**Beacon Number:** We evaluate the accuracy with a smaller number of beacons (the lower bound is set to three beacons, the minimum required to localize). A lower number of beacons reduces the localization infrastructure cost. In Figure 8, we see that B-PRP performance degrades more slowly than Bayesian-RSSI with decreasing beacon density. The median localization error for B-PRP is always within \(2m\). For Bayesian-RSSI, with fewer beacons, the error is as high as \(3m\). With \(5\) beacons, B-PRP performance is \(65\%\) better than Bayesian-RSSI in the library and \(50\%\) better in the retail store. Also, note that with just \(5\) beacons, B-PRP performs better than or on par with Bayesian-RSSI using up to \(60\) beacons. This, yet again, demonstrates that the errors in RSSI-based positioning cannot be solved by just additional deployments, but are fundamental (sampling bias and multipath).

**Beacon Placement:** _How does the placement of a beacon with respect to the receiver impact the localization accuracy of B-PRP and RSSI?_ If we use only beacons that are closer than \(2m\) to the reception location, both PRP and RSSI errors are low (cf. Table II). RSSI performs slightly better in the line-of-sight scenario due to the lower variance in RSSI values and more distance information at very close range. Fusion of B-PRP and RSSI also yields lower errors. When beacon distances become greater than \(2m\), RSSI errors dramatically increase due to variance in RSSI values caused by multipath and sampling bias. In comparison, PRP errors are much lower, on the order of \(1.53m\) and \(2.07m\), when beacons are more than \(6m\) away from the receiver. Errors in RSSI also cause the fusion results to be worse. Errors for all approaches are high when we use only beacons that are in a non-line-of-sight (NLOS) scenario at a distance between \(2m\) and \(6m\) from the receiver. This experiment highlights the importance of PRP. As RSSI estimates suffer from higher sampling bias with increasing distance, the underlying location information gets corrupted. This is why, at larger distances, both Bayesian RSSI and B-PRP+RSSI do worse.

### _Evaluating Contact Tracing Distance Estimates_

We compare the accuracy of PRP against RSSI and PRP+RSSI in contact-tracing distance estimation. We measure the absolute error between actual and estimated distances for each pair of persons or receivers. We show the cumulative distribution over errors in Figure 9. PRP achieves a median distance error of \(0.97m\) and \(1.22m\) in the library and retail store. RSSI achieves distance errors of \(1.69m\) and \(1.25m\). PRP+RSSI performs the best, with median distance errors of \(0.91m\) and \(1.15m\).

### _Minimizing beacon set-up cost_

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Line-Of-Sight & \multicolumn{3}{c}{distance \(<2m\)} & \multicolumn{3}{c}{\(2m<\text{distance}<6m\)} & \multicolumn{3}{c}{distance \(>6m\)} \\ \cline{2-10} Condition & B-PRP & RSSI & B-PRP+RSSI & B-PRP & RSSI & B-PRP+RSSI & B-PRP & RSSI & B-PRP+RSSI \\ \hline LOS & 0.89m & 0.5m & 0.5m & 0.63m & 3.57m & 0.62m & 1.53m & 3.85m & 1.56m \\ Non-LOS & 0.85m & 1.05m & 0.57m & 5m & 5.95m & 5.62m & 2.07m & 5.15m & 2.7m \\ \hline \hline \end{tabular} \end{table} TABLE II: **Robustness To Beacon Placement:** Recall, our median error using all beacons is \(1.03m\). With only beacons that are closer than \(2m\) to the receiver, both PRP and RSSI errors are low. Fusion of PRP and RSSI gives even lower errors of \(0.5m\). In this range, RSSI values have less variance and more distance information. With beacons further than \(2m\), RSSI variances increase, which causes the fusion results to be worse.
Fig. 8: **Variation in median error for B-PRP with beacon number.** The error is within \(2m\) for all cases. With \(5\) beacons, B-PRP performance is better than Bayesian RSSI: \(65\%\) (library) and \(50\%\) (retail store).

\begin{table} \begin{tabular}{c c c c c} \hline \(N_{R}\) & \(b=60\) & \(b=6\) & \(b=3\) & \(b=1\) \\ \hline 12 & 1.03 & 1.05 & 1.24 & 1.38 \\ 8 & 1.05 & 1.22 & 1.82 & 2.15 \\ 4 & 1.05 & 2.88 & 3.74 & 3.48 \\ \hline \hline \end{tabular} \end{table} TABLE III: B-PRP’s median localization error (in \(m\)) with a varying number of known beacon locations \(b\) and number of training locations \(N_{R}\). The error increases as we decrease \(b\) (along each row) and as we decrease \(N_{R}\) (along each column). For \(N_{R}=12\), performance is almost the same with \(b=60\) and \(b=6\). Decreasing \(N_{R}\) impacts accuracy more than does \(b\).

Fig. 9: **CDF error distribution of contact-tracing distance for PRP, RSSI, and PRP+RSSI in the library and retail store.** PRP and PRP+RSSI give the best median errors in both environments.

Recall that \(B\) is the total number of beacons and \(b\) is the number of beacons with known location information. We use the data to estimate the \(B-b\) unknown beacon locations. We then use these estimated values to track a receiver. We vary the number of known beacon locations \(b=\{1,3,6,60\}\); \(b=60\) corresponds to the case when we know all beacon locations. We also vary the value of \(N_{R}\), i.e., the total number of training locations. Ideally, we would like to have fewer known beacons \(b\) and fewer training locations \(N_{R}\). We show the results in Table III. We highlight three observations. _First_, when \(N_{R}\in\{12,8\}\), there is negligible difference in the CDF of the tracking errors between the cases of \(b=60\) and \(b=6\). _Second_, for any value of \(N_{R}\), the errors increase when we decrease \(b\), with the effects most pronounced for \(N_{R}=4\). _Finally_, the figures suggest that the effect of unknown beacon locations is _less significant_ than the effect of the number of training locations. B-PRP can give the same level of performance with as few as \(b=3\) primary beacons when the number of training locations \(N_{R}\) is high. If we reduce \(N_{R}\) to 8, we need at least \(b=6\) known beacons. These results highlight that B-PRP can be deployed in public spaces with little overhead. A retail store operator can simply place beacons at random locations and move around with a smartphone to some known locations. B-PRP can infer the beacon locations on its own (for most beacons) and still achieve competitive performance.

### _Reducing training efforts_

Until now, we have used the location information of all \(N_{R}\) training spots. Now, **we use the location information for only \(r<N_{R}\) training spots** and estimate the remaining \(N_{R}-r\) locations using our framework. We vary the number of known training locations \(r=\{12,8,6,4,2,0\}\), with \(N_{R}=12\). Figure 10 shows the results. In the leftmost sub-figure, we see that as \(r\) decreases, the error increases; but notice that we can cut the known locations in half, from \(r=12\) to \(r=6\), without an appreciable increase in error. This means that we can collect data from 12 spots but need to annotate only half of them, and B-PRP can still maintain the same accuracy level.
One might wonder: _do we really gain any performance improvement by adding data from unknown locations?_ Figure 10 (two right sub-figures) validates this conjecture. Suppose our training dataset contains data from \(12\) training locations in total, where \(8\) of those are labeled with location information while \(4\) are unlabeled. If we train the PRP parameters using only the \(8\) labeled data locations, our median error from the trained model is \(1.82m\). In contrast, if we use the entire dataset and treat the locations of the \(4\) unlabeled data points as random variables in our framework, we improve the median error to \(1.08m\). Similarly, if we have \(4\) labeled and \(8\) unlabeled locations, using all the locations improves our errors from \(3.8m\) to \(2.4m\). Thus, data from unlabeled locations are valuable for training the PRP parameters. This further eases the deployment cost by allowing operators to collect fewer labelled data points.

## 9 Discussion and Limitations

A few points are worth noting:

**Applicability to general indoor environments:** We design B-PRP with a focus on public indoor environments like retail stores that have stacked layouts. This layout is applicable to multiple spaces like libraries, warehouses, pharmacies, etc., and covers an important application area. While the current multipath-resilience model of B-PRP does not directly apply to other environments like homes, we believe PRP itself is applicable to such environments and provides the unique advantage of robustness at large distances. Furthermore, in such environments, obstacles like walls can be modelled using the approach followed in B-PRP.

**Access to Layouts:** We design the layout requirement for B-PRP to be low-effort. The layout and stacks can simply be extracted from the floorplan of the store, either manually or through an app. This makes the deployment effort low. Furthermore, B-PRP can apply to store layouts with more stacks than the ones used in this paper. We may encounter geometric elements like _three stacks away_ (\(3-S\)), _four stacks away_ (\(4-S\)), etc. We do not necessarily need a separate PRP function for each of these elements. Since PRP becomes very low after a certain number of stacks, we can group these spaces into one geometric element and learn a single model.

**Computational complexity:** Bayesian MCMC techniques may take more time to infer location. We ran our computations in Python on a MacBook Pro laptop with a \(2.5GHz\) Intel Core i7 processor and \(16GB\) RAM. With \(60\) beacons, it took us \(\sim 3\) seconds to find the next location, within our time resolution (\(\delta=10\)s) for localization. We can speed this up further by using native code and parallelizing the inference.

Fig. 10: **Reducing retraining efforts:** CDFs comparing the errors of B-PRP when we train using data from some known and mostly unknown locations. If we have data from 12 known locations vs. (6 known + 6 unknown) locations, we get the same accuracy level. In the right two subfigures, we show that we improve accuracy by \(\sim 40\%\) by adding data from unknown spots rather than only using data from known spots.

**Scalability to the number of packets:** One limitation of B-PRP is that it needs more than one packet to localize. We can reduce the number of packets used for localization by changing the advertising frequency. We observe in our experiments that as we lower the sending rate from \(10Hz\) to \(1Hz\), while keeping the localization rate at once per 10 seconds, the median error increases by just \(0.2m\).
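To make the semi-supervised idea of Sections 5 and 8.5 concrete, the sketch below treats an unlabelled reception spot as a latent variable with a uniform prior over the floor, sampled jointly with the link coefficients. It is a toy illustration under our own simplifying assumptions (a single unlabelled spot, a distance-only link, hypothetical coordinates and counts), not the deployed system.

```python
# Sketch of semi-supervised training (our illustration): the coordinates of
# an unlabelled reception spot are latent Uniform variables over the W x L
# floor, inferred jointly with the link coefficients [w] via NUTS.
import numpy as np
import pymc3 as pm

W, L = 10.0, 14.0                               # floor dimensions (m)
beacon_xy = np.array([[0.0, 0.0], [5.0, 0.0]])  # known primary beacons
counts = np.array([52, 18])                     # packets heard at the
N_sent = 60                                     # unlabelled spot (toy data)

with pm.Model() as semi_supervised:
    w = pm.Normal("w", mu=0.0, sigma=10.0, shape=3)
    # Latent coordinates of the unlabelled reception spot.
    x = pm.Uniform("x", lower=0.0, upper=W)
    y = pm.Uniform("y", lower=0.0, upper=L)
    d = pm.math.sqrt((x - beacon_xy[:, 0]) ** 2 + (y - beacon_xy[:, 1]) ** 2)
    prp = pm.math.sigmoid(w[0] + w[1] * d + w[2] * d ** 2)
    pm.Binomial("c", n=N_sent, p=prp, observed=counts)
    trace = pm.sample(1000, tune=1000)  # NUTS, as in the paper
```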
## 10 Related Work

We can classify prior localization work along different factors--the communication signal used for localization, and the models relating distance to signal properties. Most works use signals exchanged with anchor nodes (at known locations) to infer the location of a target. Anchor nodes can be WiFi access points [3], Bluetooth beacons [60, 52, 32], FM radios [9], Zigbee devices [27], ultra-wideband (UWB) devices [12], RFID tags [49, 56, 23, 22], ultrasound emitters [19], light emitters [28, 26, 59, 61], 60GHz devices [34, 5], or sub-centimeter-sized devices [33]. In contrast, we use BLE beacons, which offer advantages over the others. WiFi access points and cameras require continuous power and are more expensive than BLE beacons, which run on long-lasting batteries (lasting 3 to 5 years). A store can deploy hundreds of BLE beacons at a lower cost than WiFi access points or video cameras. We can scale BLE-based systems through past work in opportunistic listening that ensures better channel sharing [13]. WiFi, while widely available in public spaces such as malls and coffee shops, is often absent in large indoor retail stores (e.g., Walmart), in part because the presence of WiFi allows individuals in the store to comparison shop, putting the physical store at a competitive disadvantage. While [12] shows the promise of low-cost UWB sensing, the solution requires the widespread adoption of UWB tags to track objects. With BLE, we can track consumers via their Bluetooth-enabled smartphones. Localization techniques use different signal properties--received signal strength (RSS) [3, 10, 55], channel state information (CSI) [45, 50], angle of arrival (AoA) [54, 53, 25], and time-of-flight (ToF) [31, 42, 41]. AoA, ToF, and CSI systems require hardware-level changes on the receiver side and thus cannot be used by a retail store with customers who use commodity smartphones. Range-free techniques use less accurate proximity information [18, 38, 15]. We use a new property--packet reception probability--which is lightweight and can be easily deployed on commercial smartphones. Received signal strength (RSSI) systems are broadly of two types--model-based and fingerprint-based. Model-based techniques [30, 10, 4] represent the RSSI loss between anchor and target as a function of distance. Fingerprint-based techniques [3, 57] build a map of probable RSS values from anchor nodes at sampled locations. Here we use a more robust property and design an easy-to-configure framework. In this paper, we study tracking for public spaces like retail stores, which have attracted attention due to proximity marketing [35]. Radhakrishnan et al. [36, 37] look at the problem of inferring item interaction in stores using wearable sensors. iBILL [52] jointly uses an iBeacon RSSI model and inertial sensors to localize in supermarkets with \(90\%\) of errors less than \(3.5m\). Tagbooth [29] and ShopMiner [43] track customer interactions with commodities using RFID tags in retail stores. The closest approach to our work is [11], which counts packets to estimate distance. In contrast, we estimate distance using the packet reception probability (PRP): we show that PRP is a robust estimator of distance and propose a Bayesian framework to estimate distance using PRP.

## 11 Conclusion

This paper establishes the feasibility of using Bluetooth Low-Energy (BLE) to provide a robust, scalable indoor localization solution using commodity hardware.
Demonstrating the feasibility of a BLE-based distance estimation technique is particularly important during the current pandemic, where BLE has emerged as a key technology for contact tracing. BLE-based distance estimation today relies on either RSSI or mere presence, both of which have publicly documented failure modes. We analyze the fundamental underpinnings of these failure modes and demonstrate robust localization through the Bayesian formulation of a new metric--Packet Reception Probability--that _exploits the absence of received packets_. We show significant improvements over state-of-the-art RSSI methods in two typical public spaces--a retail store and a library. We show that fusing B-PRP with RSSI is beneficial at short distances (\(\leq 2\,\mathrm{m}\)). Beyond \(2\,\mathrm{m}\), fusion is worse than B-PRP alone, as RSSI-based estimates at such distances are effectively de-correlated with distance. Our solution does not require any hardware, firmware, or driver-level changes to off-the-shelf devices, and involves minimal deployment and re-training costs. We have developed a triangle-inequality-based joint likelihood framework that directly estimates the contact-tracing distance between two individuals rather than estimating their locations first, which gives us a 10% performance improvement. While our solution is a first step toward robust, reliable indoor contact tracing, we are extending our framework to peer-to-peer distance estimation without beacons (i.e., using only smartphones) for outdoor settings.
2305.18689
Gravitational lensing aided luminosity distance estimation for compact binary coalescences
The luminosity distance is a key observable of gravitational-wave (GW) observations. We demonstrate how one can correctly retrieve the luminosity distance of compact binary coalescences (CBCs) if the GW signal is strongly lensed. We perform a proof-of-concept parameter estimation for the luminosity distance supposing (i) strong lensing produces two lensed GW signals emitted from a CBC, (ii) the Advanced LIGO-Virgo network detects both lensed signals as independent events, and (iii) the two events are identified as strongly lensed signals originated from the same source. Taking into account the maximum magnification allowed in two lensing scenarios and simulated GW signals emitted from four different binary black holes, we find that the strong lensing can improve the precision of the distance estimation of a CBC by up to a factor of a few compared to that can be expected without lensing.
Kyungmin Kim, Eungwang Seo, Chunglee Kim
2023-05-30T02:13:29Z
http://arxiv.org/abs/2305.18689v3
# Gravitational lensing aided luminosity distance estimation for compact binary coalescences

###### Abstract

The luminosity distance is a key observable of gravitational-wave observations. We demonstrate how one can correctly retrieve the luminosity distance of compact binary coalescences if the gravitational-wave signal is strongly lensed. We perform a proof-of-concept parameter estimation for the luminosity distance supposing (i) strong lensing produces two lensed gravitational-wave signals, (ii) the advanced LIGO-Virgo network detects both lensed signals as independent events, and (iii) the two events are identified as strongly lensed signals originating from a single compact binary coalescence. Focusing on the maximum magnification allowed in the given lensing scenario, we find that strong lensing can improve the precision of the distance estimation by up to a factor of two compared to what can be expected for a signal experiencing no lensing. Our results imply that strong lensing of gravitational waves can be helpful for better constraining the distance to the source and, furthermore, the Hubble constant.

_Introduction.--_The luminosity distance \(D_{L}\) to a source has significant implications in astronomy as well as in cosmology. It is one of the direct observables available when gravitational-wave (GW) signals are detected. The precision of parameter estimation for an observed GW signal is in general subject to the signal-to-noise ratio (SNR) of the data obtained by a detector with a given sensitivity [1]. As the strain amplitude of the GW signal is inversely proportional to \(D_{L}\), any increase in the SNR of the data is helpful to better constrain the distance to the source. This work is motivated by the fact that GWs can be strongly lensed [2; 3; 4; 5; 6; 7; 8; 9]. When certain conditions are satisfied, lensing of GWs can magnify the GW signal from a source, which implies an increase in the SNR of the observed GW signal. Based on forecast studies [10; 11; 12; 13; 14], \(\sim\mathcal{O}(1)\) strongly lensed GW events per year are expected to be observed with the design sensitivities of the Advanced LIGO [15] and the Advanced Virgo [16]. Although there has been no confirmed strongly lensed GW event from the previous observing runs yet [17; 18; 19; 20], searching for strongly lensed GW signals is ongoing. The precision of the distance measurement of a GW source is important information for follow-up observations and for understanding the astrophysics of the source. Furthermore, the distance measurement can govern the quality of a Hubble constant \(H_{0}\) estimation [1]. The Hubble-Lemaitre law is \(H_{0}=v/d\) [21; 22], where \(v\) is the recessional velocity of an astronomical source and \(d\) is the distance to the source. The law can be rewritten with \(d=D_{L}\) when \(D_{L}\) is measured by GW observations. Many studies have discussed methods and implications of \(H_{0}\) measurements enabled by observing GWs [23; 24; 25; 26; 27; 28; 29; 30; 31]. These studies consider GW signals from compact binary coalescences (CBCs) consisting of black holes or neutron stars. Among the known CBCs, GW170817 [32] is the most successful example used to estimate \(H_{0}\) based on a GW observation [33]. This provides an independent constraint on \(H_{0}\) in addition to those from electromagnetic (EM) observations, e.g., the cosmic microwave background observation [34], the Type Ia supernova survey [35], or measuring time delays between strongly lensed multiple images of quasars [36].
In this Letter, we examine the best precision in distance measurement achievable with the advanced HLV detector network sensitivity for strongly lensed GW signals. We assume a binary black hole (BBH) as a GW source and conduct a proof-of-concept parameter estimation (PE) for \(D_{L}\). We compare posterior probability density functions (PDFs) of \(D_{L}\), \(p(D_{L})\), for lensed and unlensed GW signals with different detector sensitivities.

_Strong lensing of GWs.--_We adopt the lens configuration described in [9]. We consider a lens located between a BBH and an observer (i.e., the GW detector network on Earth). We assume two strongly lensed GW signals are generated and propagated toward the observer as an originally unlensed GW signal radiated from the BBH passes through the lens. In this work, we assume a galaxy-like lens and apply the thin-lens approximation. Then we can obtain the lensed GW signal \(h_{l}(f)\) from the unlensed signal \(h_{u}(f)\) by a simple relation: \(h_{l}(f)=F(f)h_{u}(f)\), where \(F(f)\) is an amplification factor that determines the lensing characteristics. As we consider a galaxy-like lens, \(F(f)\) can be obtained in the geometrical optics limit. In this work, we consider two lens models, the point-mass (PM) and the singular isothermal sphere (SIS). The amplification factor for both lens models is given as \(F(f)=\sqrt{|\mu_{+}|}-i\sqrt{|\mu_{-}|}e^{2\pi if\Delta t}\). Here, \(\mu_{+}\) and \(\mu_{-}\) are the individual magnification factors corresponding to the lensed GW signals \(h_{l}^{\rm I}(f)\) and \(h_{l}^{\rm II}(f)\), respectively. Also, \(\Delta t\) is the time delay between the arrival times of \(h_{l}^{\rm I}(f)\) and \(h_{l}^{\rm II}(f)\) at the observer. For each lens model, \(\mu_{\pm}\) and \(\Delta t\) can be written as follows: \[\text{PM:}\ \ \ \mu_{\pm}=\frac{1}{2}\pm\frac{y^{2}+2}{2y\sqrt{y^{2}+4}}\,,\] \[\text{SIS:}\ \ \ \mu_{\pm}=\pm 1+\frac{1}{y}\,, \tag{1}\] and \[\text{PM:}\ \ \ \Delta t=\frac{4GM_{ls}}{c^{3}}\left[\frac{y\sqrt{y^{2}+4}}{2}+\ln\left\{\frac{\sqrt{y^{2}+4}+y}{\sqrt{y^{2}+4}-y}\right\}\right],\] \[\text{SIS:}\ \ \ \Delta t=\frac{8GM_{ls}y}{c^{3}}. \tag{2}\] In Eqs. (1) and (2), \(y\) denotes the parameterized source position following the lens configuration used in [9]. The range of \(y\) is constrained to \([0.1,\ 1.0)\) in this work. The expected occurrence rate of strongly lensed GWs [37; 38] sets the lower limit of \(y\geq 0.1\). The upper limit is given by the SIS model, i.e., the \(y\)-dependent validity of \(F(f)\) requires \(y\!<\!1\) in order to produce two lensed signals with SIS [9]. The expression of \(F(f)\) implies that PM always produces two lensed signals for any \(y\). As shown in Eq. (2), the time delay is proportional to the redshifted mass \(M_{ls}\!=\!M_{l}(1+z_{l})\) of a lens at redshift \(z_{l}\). As a representative value, we set \(M_{ls}\!=\!10^{11.5}M_{\odot}\), which results in a time delay from weeks to months between \(h_{l}^{\rm I}(f)\) and \(h_{l}^{\rm II}(f)\) in the range of \(y\) considered in this work.

_Parameter estimation for unlensed signals._--Let the unlensed signal \(h_{u}(f)\) denote the true (unchanged) GW signal from a CBC when there is no lens between the source and the observer. We use the Bilby pipeline [39; 40; 41] and perform PE for an artificially generated \(h_{u}(f)\). Bilby utilizes the Dynesty nested sampler [42].
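For intuition about the magnitudes involved, the following small Python sketch (our own illustration, not part of the analysis pipelines) evaluates Eqs. (1) and (2) for both lens models at the source position \(y=0.1\) and the lens mass \(M_{ls}=10^{11.5}M_{\odot}\) used in this work; it reproduces the week-scale time delays and the relative magnification \(\mu_{\rm rel}=|\mu_{-}/\mu_{+}|\) discussed below.

```python
# Sketch evaluating Eqs. (1)-(2) for the PM and SIS lens models
# (illustrative; SI constants rounded).
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_sun = 1.989e30         # kg
M_ls = 10**11.5 * M_sun  # redshifted lens mass assumed in the text

def pm_lens(y, M):
    mu_p = 0.5 + (y**2 + 2) / (2 * y * np.sqrt(y**2 + 4))
    mu_m = 0.5 - (y**2 + 2) / (2 * y * np.sqrt(y**2 + 4))
    dt = (4 * G * M / c**3) * (y * np.sqrt(y**2 + 4) / 2
          + np.log((np.sqrt(y**2 + 4) + y) / (np.sqrt(y**2 + 4) - y)))
    return mu_p, mu_m, dt

def sis_lens(y, M):
    return 1 + 1 / y, -1 + 1 / y, 8 * G * M * y / c**3

for name, lens in [("PM", pm_lens), ("SIS", sis_lens)]:
    mu_p, mu_m, dt = lens(0.1, M_ls)
    mu_rel = abs(mu_m / mu_p)  # relative magnification (cf. Fig. 2)
    print(f"{name}: mu+={mu_p:.2f}, mu_rel={mu_rel:.2f}, "
          f"dt={dt / 86400:.1f} days")
```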
We consider the design power spectral densities (PSDs) [43] of the detector network consisting of Advanced LIGO-Hanford (H), Advanced LIGO-Livingston (L), and Advanced Virgo (V); the HLV detector network hereafter. As we aim to find the best possible precision for the distance estimation via GW observation, a "zero-noise" model is assumed for a baseline PE with \(h_{u}(f)\) [44; 45]. In order to generate the templates and injection GW signals used in PE, we use the IMRPhenomD waveform model [46; 47]. As for a CBC source, we assume an equal-mass BBH with \(m_{1}=m_{2}=30M_{\odot}\). The chirp mass of the binary is \(\mathcal{M}=(m_{1}m_{2})^{3/5}(m_{1}+m_{2})^{-1/5}\simeq 26M_{\odot}\). We set \(D_{L}\) to the BBH to be \(3\,\text{Gpc}\). For the sky location and the polarization angle, we adopt the observed median values of GW150914 [48]. We assume an arbitrary event time that makes the inclination angle to the L detector zero. Other parameters, such as spins, eccentricity, or higher-order modes, are ignored for simplicity. By choosing these parameters, we obtain a network SNR of 8 for \(h_{u}(f)\) given the HLV design PSDs with zero-noise. We assume priors of parameters as follows. A power-law distribution in comoving volume with a power-law index \(\alpha=2\) [49] is used for the distance prior, with a range of \([0.1,5]\,\text{Gpc}\). For the other source parameters, including the chirp mass with a range of \(\mathcal{M}=[10,100]M_{\odot}\), we adopt the prior distributions described in [41]. Details of the selected parameters are summarized in Table 1. Fig. 1 presents \(p(\mathcal{M})\) and \(p(D_{L})\) for \(h_{u}(f)\) with the HLV detector network at design PSD and zero-noise. For lensed signals, \(p(\mathcal{M})\) is almost the same, but \(p(D_{L})\) becomes different under strong lensing (see Fig. 3).

_Parameter estimation for lensed signals._--When performing PE analyses, we assume that both the waveform model for a GW signal and the lens model generating two lensed GW signals are known. In this case, the precision of distance estimation is governed by the SNR. As for the most optimistic scenario, we then find the maximum amplification condition for the SIS and PM lens models, respectively. Two lensed signals \(h_{l}^{\rm I,II}(f)\) are generated by incorporating \(y=0.1\) in Eq. (1) for each lens model; this is when \(\mu_{\pm}\) is maximum. Let us assume the physical association of \(h_{l}^{\rm I,II}(f)\) is identified. By calculating a joint likelihood using both signals, one can retrieve the posterior for the true luminosity distance, \(p(D_{L})\), and other source parameters.

\begin{table} \begin{tabular}{l l l l} Parameter & Unit & Value & Prior distribution \\ \hline Component masses, \(m_{1}\) \& \(m_{2}\) & \(M_{\odot}\) & 30.0 & Uniform \\ Chirp mass, \(\mathcal{M}\) & \(M_{\odot}\) & 26.1 & Uniform \\ Luminosity distance, \(D_{L}\) & Gpc & 3 & Power-law \\ Right Ascension & rad & 1.3750 & Uniform \\ Declination & rad & -1.2108 & Isotropic \\ \end{tabular} \end{table} Table 1: Parameters used to generate an unlensed GW signal \(h_{u}(f)\) from a BBH, along with prior assumptions.

Figure 1: Posterior PDFs for the chirp mass \(\mathcal{M}\) (left) and luminosity distance \(D_{L}\) (right) recovered from an unlensed GW signal. Black solid lines represent the injected parameters. Vertical orange dotted lines are the lower and upper bounds of the 99% C.I., respectively.

When detected, the two lensed signals \(h_{l}^{\rm I,II}(f)\) would likely be identified as two independent events separated by time in GW observation.
If the SNRs of both signals are large enough, the most likely values of \(\mathcal{M}\) obtained from the two lensed signals would be almost identical within the uncertainty attributed to the sensitivity of the detector network and the analysis pipelines. However, the inferred distances for the two lensed signals are different, depending on the individual amplification factors. Hence, two _apparent_ luminosity distances \(D_{L\pm}=D_{L}/\sqrt{|\mu_{\pm}|}\) are obtained from the PE analysis when each signal is analyzed individually. In order to reflect this realistic observation scenario, we inject the apparent distances \(D_{L\pm}\) to simulate the two lensed signals \(h_{l}^{\rm I,II}(f)\), but use the same value for the chirp mass, i.e., \(\mathcal{M}_{+}=\mathcal{M}_{-}=\mathcal{M}=26.1M_{\odot}\). We ignore time dilation and phase shift, as they do not affect luminosity distance estimation in our scenario. We also assume the apparent sky locations of the two lensed signals are the same, because the subtle differences of the lensed signals in the sky cannot be distinguished with the sensitivity of the current advanced detector network. As for the PE analysis, we use the Golum [50] pipeline, which is constructed based on Bilby. The pipeline enables us to infer the apparent source parameters of strongly lensed signals, for instance, \(D_{L\pm}\) and \(\mathcal{M}_{\pm}\) for \(h_{l}^{\rm I,II}(f)\), respectively. In addition, it allows us to infer lensing parameters such as the relative magnification factor \(\mu_{\rm rel}\). Based on what is discussed earlier, we use the same injection parameters as for the unlensed signal, except \(D_{L\pm}\). PE for lensed signals involves assumptions on lensing parameters and \(D_{L\pm}\) in addition to the typical assumptions used for an unlensed signal. We assume uniform prior distributions for the lensing parameters, e.g., \(\mu_{\rm rel}\), the time delay, and the Morse phase shift parameter [50]. As for the other parameters, including the distance and chirp mass, we apply the same distributions used in the PE for \(h_{u}(f)\). In the geometrical optics limit, \(\mu_{\rm rel}\) can be rewritten in terms of the apparent distances for both the PM and SIS models, that is, \(\mu_{\rm rel}\equiv|\mu_{-}/\mu_{+}|=(D_{L+}/D_{L-})^{2}\). The likelihood of \(\mu_{\rm rel}\), \(\mathcal{L}(\mu_{\rm rel})\), can then be obtained from the ratio \([\mathcal{L}(D_{L+})/\mathcal{L}(D_{L-})]^{2}\). It is straightforward to compute \(p(\mu_{\rm rel})\) given the uniform prior for \(\mu_{\rm rel}=[0.01,0.9]\). The minimum value of \(\mu_{\rm rel}\) is chosen in order to avoid zero; this range is also well within the allowed constraints and would not create any railing in the posteriors. We obtain the maximum likelihood value \(\mu_{\rm rel,\,max}\) from \(p(\mu_{\rm rel})\) and find the corresponding \(y(\mu_{\rm rel,\,max})\) from Fig. 2. It is then straightforward to calculate \(\mu_{\pm}\) using Eq. (1). When \(\mu_{\pm}\) and \(D_{L\pm}\) are at hand, we can obtain \(D_{L}\) by \(D_{L}=\sqrt{|\mu_{+}|}D_{L+}\) or \(D_{L}=\sqrt{|\mu_{-}|}D_{L-}\). It is shown in [50] that we can constrain the lensing and source parameters better by _reweighting_ the posterior samples of the individual lensed signals. For the same CBC, \(p(D_{L})\) can be obtained from a reweighted posterior for the apparent distance \(p(D_{L+})\) by \(p(D_{L})=\sqrt{|\mu_{+}|}p(D_{L+})\).
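As a quick worked example (our own arithmetic, using the injected \(D_{L}=3\,\text{Gpc}\) and the SIS magnifications at \(y=0.1\), i.e., \(\mu_{+}=11\) and \(\mu_{-}=9\)): \[D_{L+}=\frac{3\,\text{Gpc}}{\sqrt{11}}\simeq 0.90\,\text{Gpc},\qquad D_{L-}=\frac{3\,\text{Gpc}}{\sqrt{9}}=1.0\,\text{Gpc},\qquad \mu_{\rm rel}=\left(\frac{D_{L+}}{D_{L-}}\right)^{2}\simeq 0.81,\] consistent with the peak of \(p(\mu_{\rm rel})\) reported in the Results below.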
The individual magnification factor \(\mu_{+}(y)\) is determined by \(y=y(\mu_{\rm rel,\,max})\). One can choose either \(h_{l}^{\rm I}\) or \(h_{l}^{\rm II}\) when combining the two likelihoods in order to determine the true distance posterior of the source. In this work, we choose \(h_{l}^{\rm I}\), assuming (i) \(h_{l}^{\rm I}\) arrives earlier than \(h_{l}^{\rm II}\) and (ii) \(h_{l}^{\rm I}\) experiences stronger magnification than \(h_{l}^{\rm II}\). We perform the same reweighting procedure--described in Equation (15) and Appendix A of [50]--and calculate the posteriors of the lensing and apparent source parameters of \(h_{l}^{\rm I}\). Different detector sensitivities are considered in this work: (i) the HLV design PSDs with zero-noise (Case A) and (ii) the HLV O3a PSDs, based on the first three months of O3, with a Gaussian noise (Case B). (The corresponding PSD data for L, H, and V can be found at [https://dcc.ligo.org/LIGO-T2000012/public](https://dcc.ligo.org/LIGO-T2000012/public).)

_Results._--The maximum magnification (\(y=0.1\)) of the two lens models implies \(\mu_{+}^{\rm SIS}=11\) and \(\mu_{+}^{\rm PM}\simeq 5.5\), or \(\text{SNR}_{\rm SIS}=24.6\) and \(\text{SNR}_{\rm PM}=16.3\), respectively. This implies that a distance posterior based on the SIS model can be better constrained than that of PM.

Figure 2: Relation between the relative magnification factor \(\mu_{\rm rel}\) and the source position \(y\). The green and blue lines are results from the point-mass (PM) and singular isothermal sphere (SIS) models, respectively.

Figure 3: Posteriors of \(\mu_{\rm rel}\), \(\mathcal{M}_{+}\), and \(D_{L+}\) obtained for \(h_{l}^{\rm I}\) based on the PM (green) and SIS (blue) models. Black solid lines indicate \(\mu_{\rm rel}(y=0.1)\) and the injected values of \(\mathcal{M}_{+}\) and \(D_{L+}\). Vertical dashed lines are the lower and upper bounds of the 99% C.I.'s centered around the median value of each parameter.

For example, Fig. 3 shows the PE results for \(h_{l}^{\rm I}\).
Comparing the widths at the 67% C.I., we expect 19% (PM) and 34% (SIS) improvements from strong lensing.

_Discussions._--Identification of the two lensed signals is assumed here, but in practice this remains a challenge in data analysis for O4 and future observations. Also, any mismatch in the GW waveform or the lens model would introduce systematic biases that can further degrade the quality of PE. We consider the two simplest lens models and assume the two lensed signals are produced by a lens in the geometrical optics limit. The definitions and/or the relation between \(y\) and \(\mu_{\rm rel}\) can become nontrivial when more lens parameters are needed to describe an amplification factor \(F(f)\). More than two lensed signals are predicted by models such as the singular isothermal ellipsoid [51], the Navarro-Frenk-White model [52], or a more complex macrolens containing multiple microlenses [53; 54; 55]. Future studies on more realistic and/or complicated GW lensing will be useful for understanding the effects of strong lensing on GW parameter estimation in more detail. More precise and accurate distance estimation is valuable not only for understanding the formation and evolution of various CBC populations but also for constraining the Hubble constant. When strongly lensed GW signals are detected, the PE procedure presented in this Letter can be used to estimate \(D_{L}\) with better precision, which can be used directly in Hubble's law. Strongly lensed GW signals are also likely to be detected with next-generation detectors [56; 57] such as the Einstein Telescope [58] and the Cosmic Explorer [59]. More precise distance estimation by strong lensing can shed light on the Hubble tension in the coming decades.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Case A: HLV design PSDs with zero-noise** & & & & \\ \hline Signal & \(\mathcal{W}_{99}\) [Mpc] & \(\mathcal{R}_{99}\) & \(\mathcal{W}_{67}\) [Mpc] & \(\mathcal{R}_{67}\) \\ \hline Unlensed & 2,697 & 1.00 & 1,085 & 1.00 \\ Lensed (PM) & 1,429 & 1.89 & 653 & 1.66 \\ Lensed (SIS) & 1,281 & 2.11 & 602 & 1.80 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparisons of the distance estimation for the unlensed, PM, and SIS models based on Fig. 4. The 2nd column presents \(\mathcal{W}_{99}\), the width of \(p(D_{L})\) at the 99% C.I. The 3rd column shows \(\mathcal{R}_{99}\), the ratio between \(\mathcal{W}_{99}\) values (see text for the definition). The 4th and 5th columns present the widths and ratios at the 67% C.I. of \(p(D_{L})\).

Figure 4: One-dimensional posterior PDFs of \(D_{L}\) recovered from \(h_{l,\text{PM}}^{\text{I}}(f)\) (green) and \(h_{l,\text{SIS}}^{\text{I}}(f)\) (blue). The distance posterior from an unlensed signal, \(p(D_{L}^{\text{UL}})\), is shown as orange solid lines for comparison. We compare results from the HLV design PSDs with zero noise (left) and the HLV O3a PSDs with Gaussian noise (right). Black solid lines are the injected value of \(3\) Gpc, and dashed lines indicate the lower and upper bounds of the 99% C.I.'s.

_Acknowledgements_ This work is supported by National Research Foundation of Korea (NRF) grants funded by the Ministry of Science and ICT of the Korea Government (NRF-2020R1C1C1005863, NRF-2021R1F1A1062969, and NRF-2021M3F7A1082056). E.S. is partially supported by grants from the Research Grants Council of Hong Kong (Project No.
CUHK 24304317), The Croucher Foundation of Hong Kong, the Research Committee of the Chinese University of Hong Kong, and the Science and Technology Facilities Council (Grant No. ST/L000946/1). We are grateful for computational resources provided by the LIGO Laboratory, supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. K.K. also appreciates the warm hospitality of the Korea Astronomy and Space Science Institute, where part of this work was done. K.K. and E.S. contributed equally to this work as co-first authors.
2301.12135
AdaSfM: From Coarse Global to Fine Incremental Adaptive Structure from Motion
Despite the impressive results achieved by many existing Structure from Motion (SfM) approaches, there is still a need to improve the robustness, accuracy, and efficiency on large-scale scenes with many outlier matches and sparse view graphs. In this paper, we propose AdaSfM: a coarse-to-fine adaptive SfM approach that is scalable to large-scale and challenging datasets. Our approach first performs a coarse global SfM which improves the reliability of the view graph by leveraging measurements from low-cost sensors such as Inertial Measurement Units (IMUs) and wheel encoders. Subsequently, the view graph is divided into sub-scenes that are refined in parallel by a fine local incremental SfM regularised by the result from the coarse global SfM to improve the camera registration accuracy and alleviate scene drifts. Finally, our approach uses a threshold-adaptive strategy to align all local reconstructions to the coordinate frame of the global SfM. Extensive experiments on large-scale benchmark datasets show that our approach achieves state-of-the-art accuracy and efficiency.
Yu Chen, Zihao Yu, Shu Song, Tianning Yu, Jianming Li, Gim Hee Lee
2023-01-28T09:06:50Z
http://arxiv.org/abs/2301.12135v1
# AdaSfM: From Coarse Global to Fine Incremental Adaptive Structure from Motion

###### Abstract

Despite the impressive results achieved by many existing Structure from Motion (SfM) approaches, there is still a need to improve the robustness, accuracy, and efficiency on large-scale scenes with many outlier matches and sparse view graphs. In this paper, we propose AdaSfM: a coarse-to-fine adaptive SfM approach that is scalable to large-scale and challenging datasets. Our approach first performs a coarse global SfM which improves the reliability of the view graph by leveraging measurements from low-cost sensors such as Inertial Measurement Units (IMUs) and wheel encoders. Subsequently, the view graph is divided into sub-scenes that are refined in parallel by a fine local incremental SfM regularised by the result from the coarse global SfM to improve the camera registration accuracy and alleviate scene drifts. Finally, our approach uses a threshold-adaptive strategy to align all local reconstructions to the coordinate frame of the global SfM. Extensive experiments on large-scale benchmark datasets show that our approach achieves state-of-the-art accuracy and efficiency.

## I Introduction

Structure from Motion (SfM) is an important topic that has been studied intensively over the past two decades. It has wide applications in augmented reality and autonomous driving for visual localization [1, 2, 3], and in multi-view stereo [4, 5] and novel view synthesis [6] by providing camera poses and optional sparse scene structures. Despite the impressive results from many existing works, SfM remains challenging in two aspects.

The first challenge is outlier feature matches caused by the diversity of scene appearance, e.g. texture-less, self-similar, or non-Lambertian surfaces. These diverse features impose challenges on sparse feature extraction and matching, which result in outliers that are detrimental to the subsequent reconstruction process. Incremental SfM [7, 8] is notoriously known to suffer from drift due to error accumulation, though it is robust in handling outliers. Global SfM methods [9, 10, 11] are proposed to handle drift, but fail to solve the scale ambiguities [12] of camera positions and are not robust to outliers [13, 14].

The second challenge is the sparse view graphs of some large-scale datasets. Incremental SfM is known to be inefficient on large-scale datasets. Several works [16, 17, 18, 19] have been proposed to handle millions of images. These are divide-and-conquer SfM methods that deal with very large-scale datasets by grouping images into partitions. Each partition is processed concurrently by a cluster of servers, which circumvents the memory limitation. However, these methods [16, 17, 18, 19] are often limited to internet datasets or aerial images where the view graphs are very densely connected. The dense connections in the view graph ensure that there are sufficient constraints between the graph partitions. Nonetheless, divide-and-conquer methods often fail on datasets with weak associations between images for local reconstruction alignment, or with a lack of visual constraints for stable camera registration. An example of such a setting is autonomous self-driving cars, where the interval between consecutive images can be large.

In view of the challenges that outlier feature matches and sparse view graphs pose to existing SfM approaches, we propose AdaSfM: a coarse-to-fine adaptive SfM pipeline that enhances the robustness of SfM in dealing with large-scale challenging scenes.
Specifically, we first solve the global SfM at a coarse scale, and the result of the global SfM is then used to enhance the scalability of the local incremental reconstruction. Both the scale ambiguities and the outlier ratio in global SfM can be significantly reduced by incorporating measurements from the IMU and wheel encoder, which are often available on mobile devices or autonomous self-driving cars. We preintegrate [20] the IMU measurements to get the relative poses of consecutive frames \(\mathcal{P}_{t}=\{\mathbf{P}_{t_{0}},\mathbf{P}_{t_{1}},\cdots\}\), and use the measurements from the wheel encoder to constrain scale drifts of the IMU preintegration [21]. We then replace the relative poses of the consecutive frames in the view graph formed by two-view geometry [22, 8] with \(\mathcal{P}_{t}\) estimated by the IMU and wheel encoder. This augmented view graph is then used to estimate the global poses. Consequently, we obtain a coarse scene structure and camera poses, where the latter can be used to filter wrong feature matches. After that, we partition the view graph with the existing graph cut method [23] and then extend the sub-graphs with a novel adaptive flood-fill method to enhance the constraints of separators [24]. We define separators as images that connect different sub-graphs. For each local SfM, the poses from the global SfM are used for camera registration and to constrain the global refinement of 3D points and camera poses. Finally, we design an adaptive global alignment strategy to merge local reconstructions, with the coordinate frame of the global SfM set as the reference frame. We illustrate the pipeline of our method in Fig. 2.

We evaluate our method extensively on large-scale challenging scenes. Experimental results show that our AdaSfM is adaptive to different scene structures. Furthermore, we achieve better robustness and comparable efficiency in comparison to existing state-of-the-art SfM methods.

Fig. 1: When combined with global SfM, our AdaSfM is more robust than traditional incremental SfM (tested on the public 4Seasons dataset [15]).

## II Related Work

**Incremental SfM.** Agarwal _et al._ [7] apply preconditioned conjugate gradient [25] to accelerate large-scale BA [26]. The drift problem is alleviated in [27] with a re-triangulation (RT) step before global BA. Schonberger and Frahm [8] augment the view graph by estimating multiple geometric models in geometric verification, and improve the robustness of image registration with next-best-view selection. In addition to the RT before BA [27], RT is also performed after BA in [8]. To reduce the time complexity of repetitive image registration, Cui _et al._ [28] select a batch of images for registration, and select a subset of good tracks for BA.

**Global SfM.** The simplest configuration of a global SfM method only requires 1) estimating the global rotations by rotation averaging (RA), 2) obtaining the global positions by translation averaging (TA), and 3) triangulating 3D points and performing a final global BA. Govindu [29] represents rotations by their Lie algebra, and the global rotations and global positions are estimated simultaneously. Chatterjee and Govindu [30, 31] improve the rotation estimation of [29] by a robust \(l_{1}\) initialization followed by a refinement of the rotations with iteratively reweighted least squares (IRLS) [32]. To solve the TA problem, Wilson _et al._ [33] project relative translations onto 1D subspaces to identify outliers.
Relative translations that are inconsistent with the translation directions that have the highest consensus are removed. A nonlinear least-squares problem is then solved to get the global positions. Goldstein _et al._ [34] relax the scale constraints of [33] to linear scale factors, and the resulting convex linear programming problem is solved by ADMM [35]. Ozyesil and Singer [12] utilize parallel rigidity theory to select the images whose positions can be estimated uniquely, and solve the problem as a constrained quadratic program. By minimizing the \(\sin\theta\) between two relative translations, Zhuang _et al._ [36] improve the robustness of TA to narrow baselines. The robustness of TA is also improved in [36] by incorporating global rotations.

**Hybrid SfM.** Cui _et al._ [37] obtain orientations by RA and then register camera centers incrementally with the perspective-2-point (P2P) algorithm. Bhomick _et al._ [16] propose to divide the scene graph, where the graph is built from the similarity scores between images. Feature matching and local SfM can then be executed in parallel, and the local reconstructions are merged [16]. Zhu _et al._ [18, 19] adopt a similar strategy to divide the scene, where the graph is constructed after feature matching. The relative poses are collected after merging all local incremental reconstruction results. Outliers are filtered during local reconstruction, global rotations are fixed by RA, and camera centers are registered with TA at the cluster level. Based on [18], Chen _et al._ [17] find a minimum spanning tree (MST) to solve the final merging step. The MST is constructed at the cluster level, and the most accurate similarity transformations between clusters are given by the MST. Locher _et al._ [38] filter wrong epipolar geometries by RA before applying the divide-and-conquer method [18]. Jiang _et al._ [39] use a visual-inertial navigation system (VINS) [40] to first estimate the camera trajectories with loop detection and loop closure [41]. Images are then divided into sequences according to timestamps. However, [39] requires two carefully designed systems: one for VINS with loop detection and the other for SfM. Loop detection is also a challenge in real-world scenes.

## III Notations

We denote the absolute camera poses as \(\mathcal{P}=\{\mathbf{P}_{i}=[\mathbf{R}_{i}|\mathbf{t}_{i}]\}\), where \(\mathbf{R}_{i},\mathbf{t}_{i}\) are the rotation and translation of the \(i\)-th image, respectively. The absolute camera poses project 3D points \(\mathcal{X}=\{\mathbf{X}_{k}\}\) from the world frame to the camera frame. The camera centers are denoted by \(\{\mathbf{C}_{i}\}\). The relative pose from image \(i\) to image \(j\) is denoted as \(\mathbf{P}_{ij}=[\mathbf{R}_{ij}|\mathbf{t}_{ij}]\), where \(\mathbf{R}_{ij},\mathbf{t}_{ij}\) are the relative rotation and translation, respectively. We define the view graph as \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}\) denotes the collection of images and \(\mathcal{E}\) denotes the two-view geometries, i.e. the relative poses and inlier matches between the image pairs. For two rotations \(\mathbf{R}_{i},\mathbf{R}_{j}\), we use \(\log(\mathbf{R}_{i},\mathbf{R}_{j})=\log(\mathbf{R}_{j}\mathbf{R}_{i}^{\top})\) to denote the angular error and \(\|\mathbf{R}_{i}-\mathbf{R}_{j}\|_{F}\) to denote the chordal distance. Additionally, the keypoints and the normalized keypoints after applying the intrinsic matrix \(\mathbf{K}\) are denoted by \(\mathbf{u}\) and \(\hat{\mathbf{u}}\), respectively.

Fig. 2: **The pipeline of our proposed SfM method.** Our method takes images and measurements from low-cost sensors as inputs. The view graph is built after feature matching and refined by the result of global SfM. The absolute poses from the global SfM are used as priors in the subsequent local SfM process. The final reconstruction result is merged into the global SfM reference frame.
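To make the view-graph notation concrete before the pipeline details, the following is a hypothetical sketch of how \(\mathcal{G}\) could be stored and how the odometry-based edge augmentation of Sec. IV-A might operate; the container layout, field names, and the `augment` routine are illustrative assumptions, not the authors' implementation.

```python
# Edges hold the relative pose (R_ij, t_ij) and the inlier matches of an image
# pair; `augment` swaps in IMU/wheel-odometry relative translations for
# consecutive frames whose time gap is below eps_T, while keeping the (more
# accurate) image-based relative rotations.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ViewGraph:
    nodes: set = field(default_factory=set)    # image ids (the set V)
    edges: dict = field(default_factory=dict)  # (i, j) -> {"R": R_ij, "t": t_ij, "matches": [...]}

    def augment(self, odometry, timestamps, eps_T=0.5):
        """odometry: (i, j) -> (R_ij, t_ij) from IMU preintegration + wheel scale."""
        for (i, j), (R, t) in odometry.items():
            if abs(timestamps[j] - timestamps[i]) <= eps_T:
                e = self.edges.setdefault((i, j), {"matches": []})
                e.setdefault("R", R)            # prefer the image-based rotation if present
                e["t"] = t / np.linalg.norm(t)  # relative translations are normalized in E_aug
```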
## IV Coarse Global to Fine Incremental SfM

In this section, we introduce our method in detail. In Sec. IV-A, we introduce our global SfM, which can effectively cope with outliers in challenging scenes. A refinement step is also introduced to remove outlier matches after global SfM. In Sec. IV-B, we describe our parallel incremental SfM approach, which utilizes the results from the coarse global SfM to mitigate the problems caused by sparse view graphs.

### _Coarse Global SfM_

We first obtain the absolute rotations by solving the rotation averaging problem: \[\operatorname*{arg\,min}_{\{\hat{\mathbf{R}}_{i}\}}\sum_{\begin{subarray}{c}i\in\mathcal{V}\\ (i,j)\in\mathcal{E}\end{subarray}}d(\hat{\mathbf{R}}_{j}\hat{\mathbf{R}}_{i}^{\top},\mathbf{R}_{ij}), \tag{1}\] where \(\hat{\mathbf{R}}_{i}\) denotes the absolute rotations obtained by rotation averaging, and \(d(\cdot)=\|\cdot\|_{F}\) denotes the chordal distance. Eq. (1) can be solved robustly and efficiently by [42].

We then obtain the absolute camera positions by solving the translation averaging problem. However, existing translation averaging methods often fail to recover the camera positions in challenging scenes due to two main factors: 1) the high ratio of outliers in the relative translations; and 2) the view graph being solvable only when the parallel rigid graph condition [12] is satisfied. To alleviate the first problem, we first remove erroneous matching pairs by checking the discrepancy of relative rotations, \(\log(\mathbf{R}_{ij}^{\top}\hat{\mathbf{R}}_{j}\hat{\mathbf{R}}_{i}^{\top})>\epsilon_{\mathbf{R}}\), and then the relative translations [12] are refined in parallel by: \[\operatorname*{arg\,min}_{\mathbf{t}_{ij}}\ \|\hat{\mathbf{u}}^{\prime\top}([\mathbf{t}_{ij}]_{\times}(\hat{\mathbf{R}}_{j}\hat{\mathbf{R}}_{i}^{\top}))\hat{\mathbf{u}}\|,\quad\text{s.t.}\quad\|\mathbf{t}_{ij}\|=1. \tag{2}\] We do not extract the parallel rigid graph [12] to solve the scale ambiguities, since solving the polynomial equations involved is time-consuming; furthermore, the state-of-the-art method for establishing the solvability of a view graph is limited to 90 nodes [43]. Instead, we improve the solvability of the view graph by augmenting it with the relative translations in \(\mathcal{P}_{t}\) of the consecutive frames from the IMU and wheel encoder. We do not augment the relative rotations because they are more accurate from the image-based two-view geometry. Note that errors accumulate in the augmented relative poses during the motion of the device, due to the biases of the accelerometers and gyroscopes in the IMU and to drifts in the wheel encoder caused by friction and wheel slippage. To circumvent this problem, we only use the relative poses whose time difference is below a threshold \(\epsilon_{T}\). Once we have obtained the _augmented view graph_ \(\mathcal{G}_{\text{aug}}=\{\mathcal{V},\mathcal{E}_{\text{aug}}\}\), the rigidity of the original view graph is enhanced and the scale ambiguities of some images can be eliminated.
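Before solving for the positions, the rotation-consistency edge filter just described can be sketched as follows; this is an assumed implementation (the function name and containers are illustrative), with `eps_R_deg` set to the 5-degree value reported in Sec. V-A.

```python
# Drop view-graph edges whose measured relative rotation R_ij disagrees with
# the rotation-averaging estimates by more than eps_R (angular error, degrees).
import numpy as np
from scipy.spatial.transform import Rotation

def filter_edges(edges, R_hat, eps_R_deg=5.0):
    """edges: (i, j) -> {"R": R_ij, ...}; R_hat: i -> absolute 3x3 rotation."""
    kept = {}
    for (i, j), e in edges.items():
        R_err = e["R"].T @ R_hat[j] @ R_hat[i].T   # identity for a consistent edge
        angle = np.degrees(np.linalg.norm(Rotation.from_matrix(R_err).as_rotvec()))
        if angle <= eps_R_deg:
            kept[(i, j)] = e
    return kept
```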
We can then solve the translation averaging problem: \[\operatorname*{arg\,min}_{\substack{\hat{\mathbf{C}}_{i},\,i\in\mathcal{V};\\ s_{ij},\,(i,j)\in\mathcal{E}_{\text{aug}}}}\sum_{(i,j)\in\mathcal{E}_{\text{aug}}}\|s_{ij}(\hat{\mathbf{C}}_{i}-\hat{\mathbf{C}}_{j})-\mathbf{R}_{j}^{\top}\mathbf{t}_{ij}\|,\quad\text{s.t.}\quad s_{ij}\geq 0,\ \forall(i,j)\in\mathcal{E}_{\text{aug}};\quad\sum_{i\in\mathcal{V}}\hat{\mathbf{C}}_{i}=0. \tag{3}\] Eq. (3) can be solved efficiently and robustly under the \(l_{1}\)-norm by collecting all the constraints. Note that all the relative translations in \(\mathcal{E}_{\text{aug}}\) are normalized. The right panel of Fig. 3 shows our global SfM result obtained by solving Eq. (3). After translation averaging, we triangulate the 3D points and perform an iterative global bundle adjustment to refine the camera poses. It is worth mentioning that global SfM generates more tracks than incremental SfM: since its camera poses are less accurate, it fails to merge some tracks that are physically the same. Besides, according to [28], tracks are redundant for optimisation, so we can reduce the computation and memory burden with fewer tracks. Though a well-designed algorithm may help with the selection of tracks, we simply create tracks with a stricter threshold: a track is deemed valid only when the angle between the two rays that pass through the 3D point and the two camera centers is larger than 5 degrees. Note that, for numerical stability during optimization, the coordinates are normalized after each iteration.

#### IV-A1 Matches Refinement

The camera poses recovered by our global SfM, with the relative poses from the low-cost sensors eliminating wrong two-view geometry estimates, can be further utilized to filter out wrong image feature matches. For a calibrated camera with known intrinsics, we can recover the essential matrix between images \(i\) and \(j\) as \(\hat{\mathbf{E}}=[\hat{\mathbf{t}}_{ij}]_{\times}\hat{\mathbf{R}}_{ij}\), where \((\hat{\mathbf{t}}_{ij},\hat{\mathbf{R}}_{ij})\) are computed from the absolute rotations \((\hat{\mathbf{R}}_{i},\hat{\mathbf{R}}_{j})\) and translations \((\hat{\mathbf{t}}_{i},\hat{\mathbf{t}}_{j})\) obtained by rotation and translation averaging. True matches \(\hat{\mathbf{u}}^{\prime}\leftrightarrow\hat{\mathbf{u}}\) must pass the check on the total point-to-epipolar-line distance [22] over the two views, i.e. \[d_{\perp}(\hat{\mathbf{u}},\hat{\mathbf{E}}\hat{\mathbf{u}}^{\prime})+d_{\perp}(\hat{\mathbf{u}}^{\prime},\hat{\mathbf{E}}^{\top}\hat{\mathbf{u}})\leq\epsilon_{M}, \tag{4}\] where \(d_{\perp}(\mathbf{x},\mathbf{l})\) gives the shortest distance between a point \(\mathbf{x}\) and a line \(\mathbf{l}\). The epipolar lines on the two images are given by \(\mathbf{l}=\hat{\mathbf{E}}\hat{\mathbf{u}}^{\prime}\) and \(\mathbf{l}^{\prime}=\hat{\mathbf{E}}^{\top}\hat{\mathbf{u}}\), and \(\epsilon_{M}\) is the threshold for the check. The effectiveness of global SfM in filtering wrong matches can be seen in Fig. 7. We build a pseudo ground truth with COLMAP [8] to evaluate the accuracy of the global SfM. The ratio test is performed after NN matching by default. Fig. 4 shows the inlier ratio distribution after NN+RANSAC and after matches refinement with relative poses obtained from global SfM and incremental SfM, respectively.

Fig. 3: Comparison of global SfM results: [12] (left) and Eq. (3) (right). Red and black colors respectively denote vehicle trajectories and sparse point clouds.
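A minimal sketch of the check in Eq. (4), assuming normalized homogeneous keypoints \((x, y, 1)\); the function and container names are illustrative, and \(\epsilon_{M}\) is the threshold reported in Sec. V-A.

```python
# Build the essential matrix from the globally estimated relative pose and
# keep only matches passing the symmetric point-to-epipolar-line check.
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

def point_line_dist(u, l):
    """Shortest distance from the homogeneous point u to the line l."""
    return abs(u @ l) / np.hypot(l[0], l[1])

def epipolar_inliers(matches, R_ij, t_ij, eps_M=4.0):
    """matches: list of (u_prime, u) pairs of normalized keypoints."""
    E = skew(t_ij) @ R_ij
    return [(u_p, u) for u_p, u in matches
            if point_line_dist(u, E @ u_p) + point_line_dist(u_p, E.T @ u) <= eps_M]
```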
Table I gives the relative pose estimation AUC of NN+RANSAC and global SfM with respect to incremental SfM. It can be seen that our coarse global SfM obtains accuracy comparable to COLMAP [8] in the refinement of the matches.

### _Finer Parallel Incremental SfM_

Although we have obtained the absolute camera poses by global SfM, these coarse poses are not accurate enough for localization. To improve the accuracy, we refine the camera poses and scene structure with divide-and-conquer incremental SfM.

#### IV-B1 Adaptive Graph Partition

Existing approaches [18, 17] use a cut-and-expand scheme to create overlapping areas between partitions. However, these approaches have two main drawbacks: 1) The overlapping areas are not sufficient for the final merging when the view graph becomes too sparse. This can be seen from Fig. 5(a). Edges (3, 20), (7, 9), (8, 9), (8, 20), (16, 19), (17, 18) are collected after the graph cut, and then the images on these edges are added as separators of the partitions. In Fig. 5(a), only images \(\{3,7,8,9,16,17,18,19,20\}\) can be used to create the overlapping areas (Fig. 5(b)). However, these separator images are insufficient to compute the similarity transformations for merging all local reconstructions, due to the sparsity of the view graph. 2) Graph cut tends to separate partitions along edges with weak associations. This means the separators are often weakly constrained, and thus their poses might not be accurate enough during reconstruction.

We propose a flood-fill graph partition algorithm to overcome the above-mentioned disadvantages (a code sketch is given at the end of this subsection). We refer to the nodes added to a cluster after an expansion operation as a _layer_. The separators are collected to form a layer after the graph cut on the complete view graph. Fig. 5(a) shows examples of the separators, marked green. We have separators \(\mathcal{S}_{1}=\{\{3,7,8\},\{9,16,17\},\{18,19,20\}\}\) in the first layer. We then collect all the adjacent images of every separator for each partition. We find one adjacent image that does not belong to partition \(k\), and add it to the second layer of separators \(\mathcal{S}_{2}\) in partition \(k\). Adjacent images are sorted in descending order according to the weights of the edges, i.e. the number of inlier matches. Fig. 5(b) shows the separators \(\mathcal{S}_{2}=\{\{9,20\},\{8,18\},\{8,16\}\}\) at the second layer after traversing all separators in \(\mathcal{S}_{1}\). The expansion step is repeated until the number of overlapping images reaches the overlapping threshold \(\tau_{\text{ot}}\) (e.g. 30%). Fig. 5(c) shows the separators \(\mathcal{S}_{3}\) at the third layer.
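The expansion just described can be sketched as follows; this is a hypothetical reading of the algorithm (adjacency containers and names are illustrative, and the view graph is assumed undirected).

```python
# Grow one partition layer by layer with separator images, strongest edges
# first, until the overlap reaches tau_ot of the partition size.
def expand_partition(adj, weight, part, tau_ot=0.3):
    """adj: node -> set of neighbours; weight: (i, j) -> #inlier matches."""
    cluster, added = set(part), set()
    target = max(1, int(tau_ot * len(part)))
    w = lambda i, j: weight.get((i, j), weight.get((j, i), 1))
    # first layer: images outside the partition that lie on cut edges
    layer = {v for u in cluster for v in adj[u] if v not in cluster}
    while layer and len(added) < target:
        next_layer = set()
        for s in sorted(layer, key=lambda v: -max(w(v, u) for u in adj[v] & cluster)):
            if len(added) >= target:
                break
            cluster.add(s)
            added.add(s)
            outside = sorted(adj[s] - cluster, key=lambda v: -w(s, v))
            if outside:                  # one strongest adjacent image per separator
                next_layer.add(outside[0])
        layer = next_layer - cluster
    return cluster
```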
#### IV-B2 Local Incremental SfM

We perform incremental SfM in parallel after graph partitioning. For local incremental SfM, we utilize the result of global SfM, \(\hat{\mathcal{P}}_{\text{global}}\), to improve the robustness of the image registration step and to further constrain the camera poses during global optimization.

Image Registration. We follow [8] for the two-view initialization. We then select a batch of next-best images to register, where any image that sees at least \(v_{p}\) scene points is put into one batch, sorted in descending order. For each candidate image \(i\), we first use P3P [44] to compute the initial pose \(\mathbf{P}_{i}^{\text{p3p}}\). However, images can be registered wrongly due to wrong matches or scene degeneration. We therefore propose to also compute the image pose \(\mathbf{P}_{i}^{\text{gb}}=[\mathbf{R}_{i}^{\text{gb}}\mid\mathbf{t}_{i}^{\text{gb}}]\) using \(\hat{\mathcal{P}}_{\text{global}}\). We first collect the set of registered images that are co-visible with image \(i\); the rotation of image \(i\) can then be computed by single rotation averaging [45]: \[\operatorname*{arg\,min}_{\mathbf{R}_{i}^{\text{gb}}}\sum_{k}\|\log(\hat{\mathbf{R}}_{ki}\mathbf{R}_{k},\mathbf{R}_{i}^{\text{gb}})\|,\quad\text{where}\quad\hat{\mathbf{R}}_{ki}=\hat{\mathbf{R}}_{i}\hat{\mathbf{R}}_{k}^{\top}, \tag{5}\] where \(k\) indexes the images that are co-visible with image \(i\). For the image translation, we first compute the translation of image \(i\) from each co-visible image and simply adopt the per-dimension median: \[\mathbf{t}_{i}^{\text{gb}}=\text{median}\{\hat{\mathbf{t}}_{ki}+\hat{\mathbf{R}}_{ki}\mathbf{t}_{k}\},\quad\text{where}\quad\hat{\mathbf{t}}_{ki}=\hat{\mathbf{t}}_{i}-\hat{\mathbf{R}}_{ki}\hat{\mathbf{t}}_{k}. \tag{6}\] To select the best initial pose, we reproject all visible 3D points of image \(i\) to compute the reprojection errors, and mark a 3D point with a reprojection error of less than 8 px as an inlier. Finally, we select the pose with the most inliers.

Bundle Adjustment. To alleviate the drift problem in local incremental SfM, we perform global optimization using classical bundle adjustment, with the absolute poses obtained from global SfM as supervision for the incrementally registered poses, i.e. \[\operatorname*{arg\,min}_{\mathbf{R},\mathbf{C},\mathbf{X}}\Big\{\sum_{i}\sum_{k}\|\mathbf{\Pi}(\mathbf{R}_{i},\mathbf{C}_{i},\mathbf{X}_{k})-\mathbf{u}_{ik}\|+\sum_{(i,j)\in\mathcal{E}_{\text{aug}}}\Big(\|\log(\mathbf{R}_{ij},\hat{\mathbf{R}}_{ij})\|+d_{\angle}(\mathbf{t}_{ij},\hat{\mathbf{t}}_{ij})\Big)\Big\}, \tag{7}\] where \(\mathbf{\Pi}(\cdot)\) reprojects a 3D point back to the image plane and \(d_{\angle}(\cdot)\) denotes the angle between two vectors. Note that we do not impose the hard constraint that the translation part of \(\hat{\mathbf{P}}_{ij}^{-1}\mathbf{P}_{ij}\) be a zero vector. Instead, we use \(d_{\angle}(\mathbf{t}_{ij},\hat{\mathbf{t}}_{ij})=d_{\angle}(\mathbf{C}_{i}-\mathbf{C}_{j},\hat{\mathbf{C}}_{i}-\hat{\mathbf{C}}_{j})\) to constrain only the translation direction of the camera poses, because the absolute positions obtained from global SfM are not sufficiently accurate.

Fig. 4: **Inlier ratio distribution** of NN+RANSAC, global SfM and incremental SfM (ground truth) on the 711 (left) and B6 (right) datasets.

#### IV-B3 Adaptive Global Alignment

The global alignment step is crucial for divide-and-conquer SfM, since a wrong similarity transformation can cause catastrophic failure of the reconstruction. The difficulties in estimating a reliable similarity transformation are due to: 1) the existence of outliers in the registered camera poses. Although outliers can be identified by RANSAC [46], the threshold that indicates outliers is hard to determine, because SfM loses the absolute scale of the real world without additional information such as GPS; this implies that _the optimal outlier threshold varies for each cluster_. 2) The estimated similarity transformation can overfit when there are insufficient sample points.
Existing divide-and-conquer methods [16, 18, 19, 47, 17] suffer from these two issues because the similarity transformations can only be estimated from the overlapping areas between pairwise local partitions. To tackle the first issue, we propose an adaptive strategy to determine the inlier threshold \(\tau_{\text{inlier}}\). Given an initial inlier threshold \(\tau_{\text{init}}\), we first estimate the similarity transformation by RANSAC [46]. We then compute the inlier ratio \(r_{\text{inlier}}\) and increase the inlier threshold if \(r_{\text{inlier}}<r_{\text{min}}\). Conversely, we decrease the threshold if \(r_{\text{inlier}}\geq r_{\text{max}}\) to prevent the threshold from becoming too large; a large threshold allows more outliers to be falsely selected and thus harms the similarity transformation estimation. The second issue can be solved easily within our framework. We set the coordinate frame of the global SfM as the reference frame, and align each local SfM into the reference frame. Therefore, for each partition, we can have as many sample points as the number of images registered in common between the global SfM and the local partition to compute the similarity transformation. We also show the effectiveness of the algorithm in merging local reconstructions in Fig. 6: when zooming in, we can observe that our adaptive strategy perfectly closes the loop while the fixed-threshold trials fail.

## V Experimental Results

In this section, we perform extensive experiments to demonstrate the accuracy, efficiency, and robustness of our proposed method.

### _Implementation Details_

We use HFNet [48] as the default feature extractor and use NN search for matching. A maximum of 500 feature points are extracted from each image and matched to the top 30 most similar images based on the global descriptors from HFNet. We assume cameras are pre-calibrated and use the ceres-solver [49] for bundle adjustment. We did not compare our method against [39], as VINS [40] fails to find the right loops in our datasets. All methods are run on the same computer with 40 CPU cores and 96 GB RAM.

**Evaluation Datasets:** We evaluate our method on our self-collected outdoor datasets and on the 4Seasons [15] dataset. Our self-collected datasets are captured by low-speed autonomous mowers, whose running environments contain many plants and texture-less areas. The 4Seasons dataset is a cross-season dataset that includes multi-sensor data such as IMU, GNSS, and stereo images. It also provides camera poses computed by VI-Stereo-DSO [50, 51] and ground-truth camera poses obtained by fusing multi-sensor data in a SLAM system. See our attached video for a more extensive qualitative and quantitative evaluation on the 4Seasons dataset.

Fig. 5: **Pipeline of adaptive flood-fill graph partition**. In the view graph, nodes are denoted by blue circles and edges by blue solid lines. Separators are marked by green circles.

Fig. 6: **Vehicle trajectories of different threshold trials when merging sub-reconstructions**. The last figure is obtained by our method, which starts from an initial inlier threshold \(\tau_{\text{init}}\). The others are the results of using a fixed threshold during the alignment to merge all local reconstructions.

**Running Parameters:** Empirically, we use the time threshold \(\epsilon_{T}=500\) ms to adopt the fused relative poses in \(\mathcal{G}_{\text{aug}}\), and \(\epsilon_{\text{R}}=5\) degrees to check the relative rotation discrepancy. The point-to-epipolar-line distance threshold is \(\epsilon_{M}=4\) px. Besides, we set the overlapping ratio \(\tau_{\text{ot}}=0.3\) in the graph partition, \(v_{p}=10\) for an image to be a registration candidate, and \(r_{\min}=0.7\), \(r_{\max}=0.9\), \(\tau_{\text{init}}=1.0\), \(\alpha_{\text{inc}}=0.2\), \(\alpha_{\text{dec}}=0.1\) in the global alignment (sketched below).
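Using the parameter values above, the adaptive inlier-threshold alignment of Sec. IV-B3 can be sketched as follows; `estimate_sim3_ransac` stands in for a RANSAC similarity-transformation solver and is an assumed interface, not the authors' code.

```python
# Raise the threshold while the inlier ratio is too low, lower it when too
# high, and accept the transform once the ratio lies in [r_min, r_max).
def adaptive_align(local_centers, global_centers, estimate_sim3_ransac,
                   tau_init=1.0, r_min=0.7, r_max=0.9,
                   alpha_inc=0.2, alpha_dec=0.1, max_iters=20):
    """Sample points are camera centers of images registered in both frames."""
    tau = tau_init
    for _ in range(max_iters):
        sim3, inliers = estimate_sim3_ransac(local_centers, global_centers, tau)
        ratio = len(inliers) / len(local_centers)
        if ratio < r_min:
            tau += alpha_inc    # too strict: admit more correspondences
        elif ratio >= r_max:
            tau -= alpha_dec    # too loose: tighten before outliers leak in
        else:
            break
    return sim3
```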
### _How Matching Refinement Saves SfM?_

In addition to running our experiments with HFNet, we also evaluate different trials. We first show reconstruction results on a challenging scene in Fig. 7, where it is difficult for visual methods to identify the wrong feature matches due to specular reflections. We use two different combinations of methods for feature extraction and matching in each scene. In the first combination, we use HFNet [48] for feature extraction and NN search for feature matching. In the second combination, we use Superpoint [52] for feature extraction and Superglue [53] for feature matching. Both settings use RANSAC [46] to remove matching outliers that do not satisfy the point-to-epipolar-line constraint. In each sub-figure, the left and right images are the results without and with matching refinement, respectively. For HFNet + NN, while both settings fail to reconstruct the two datasets, the result with our matches refinement is visually better than the one without. Superpoint + Superglue, the state-of-the-art methods for feature extraction and matching respectively, also fails on the dataset without refined matches. In contrast, our method can correctly identify the wrong matching pairs and then leverage the refined matches to greatly improve the reconstruction quality in both settings.

### _Qualitative Evaluation on Real-World Datasets_

We evaluate our full pipeline on several outdoor datasets. We use the number of registered images \(N_{c}\), the number of recovered 3D points \(N_{p}\), the average track length \(\bar{L}\), and the root mean square error (RMSE) to assess accuracy. As shown in Table II, our method registers the most images on almost all the datasets, while [17] registers the fewest. In terms of efficiency, our method is moderately slower than GraphSfM [17] on most datasets, since it requires an additional global SfM reconstruction step. Interestingly, GraphSfM [17] is almost \(1\times\) slower than our method on the A4 dataset. We conjecture that this is due to the frequent failure of GraphSfM in selecting suitable images to register, so that more trials are required to register as many images as possible. Our method, in contrast, is robust in this case, since we obtain the initial poses of the images from P3P or global SfM. Our explanation is supported by Table II, where GraphSfM [17] recovers only 4,235 poses out of 5,184 images, almost 20% fewer than our method. We further notice that the average track length of global SfM is remarkably shorter than those of the other methods, which indicates that the poses from global SfM are less accurate.

## VI Conclusion

In this paper, we proposed a robust SfM method that is adaptive to scenes of different scales and environments. By integrating data from low-cost sensors, our initial global SfM benefits from the augmented view graph, in which the solvability of the original view graph is enhanced. The global SfM result is used as a reliable pose prior to improve the robustness of the subsequent local incremental SfM and the final global alignment steps.
Comprehensive experiments on different challenging scenes demonstrate the robustness and adaptivity of our method, at the cost of the additional computation of a global SfM step.

**Acknowledgement.** This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-024), and by the Tier 2 grant MOE-T2EP20120-0011 from the Singapore Ministry of Education.

Fig. 7: **Vehicle trajectories after matches refinement on the B6 dataset.** In Fig. (a) and Fig. (b), the results in each sub-figure are reconstructed without (left) and with (right) matches refinement. Fig. (c) shows some of the wrong matching pairs that are filtered by our method.
2308.08472
An Ambient Intelligence-based Approach For Longitudinal Monitoring of Verbal and Vocal Depression Symptoms
Automatic speech recognition (ASR) technology can aid in the detection, monitoring, and assessment of depressive symptoms in individuals. ASR systems have been used as a tool to analyze speech patterns and characteristics that are indicative of depression. Depression affects not only a person's mood but also their speech patterns. Individuals with depression may exhibit changes in speech, such as slower speech rate, longer pauses, reduced pitch variability, and decreased overall speech fluency. Despite the growing use of machine learning in diagnosing depression, there is a lack of studies addressing the issue of relapse. Furthermore, previous research on relapse prediction has primarily focused on clinical variables and has not taken into account other factors such as verbal and non-verbal cues. Another major challenge in depression relapse research is the scarcity of publicly available datasets. To overcome these issues, we propose a one-shot learning framework for detecting depression relapse from speech. We define depression relapse as the similarity between the speech audio and textual encoding of a subject and that of a depressed individual. To detect depression relapse based on this definition, we employ a Siamese neural network that models the similarity between two instances. Our proposed approach shows promising results and represents a new advancement in the field of automatic depression relapse detection and mental disorders monitoring.
Alice Othmani, Muhammad Muzammel
2023-08-16T16:21:43Z
http://arxiv.org/abs/2308.08472v1
An Ambient Intelligence-based Approach For Longitudinal Monitoring of Verbal and Vocal Depression Symptoms

###### Abstract

Automatic speech recognition (ASR) technology can aid in the detection, monitoring, and assessment of depressive symptoms in individuals. ASR systems have been used as a tool to analyze speech patterns and characteristics that are indicative of depression. Depression affects not only a person's mood but also their speech patterns. Individuals with depression may exhibit changes in speech, such as a slower speech rate, longer pauses, reduced pitch variability, and decreased overall speech fluency. Despite the growing use of machine learning in diagnosing depression, there is a lack of studies addressing the issue of relapse. Furthermore, previous research on relapse prediction has primarily focused on clinical variables and has not taken into account other factors such as verbal and non-verbal cues. Another major challenge in depression relapse research is the scarcity of publicly available datasets. To overcome these issues, we propose a one-shot learning framework for detecting depression relapse from speech. We define depression relapse as the similarity between the speech audio and textual encoding of a subject and that of a depressed individual. To detect depression relapse based on this definition, we employ a Siamese neural network that models the similarity between two instances. Our proposed approach shows promising results and represents a new advancement in the field of automatic depression relapse detection and mental disorders monitoring.

Keywords: Ambient Intelligence, Automatic speech recognition (ASR), one-shot learning, depression relapse, clinical depression.

## 1 Introduction

Major Depressive Disorder (MDD) is a mood disorder that has detrimental effects on an individual's cognition, emotions, and daily functioning. It is primarily characterized by persistent feelings of sadness, anger, and loss of interest in activities. MDD is among the most prevalent mental disorders, impacting over 300 million people globally [14]. Furthermore, recent studies have indicated a significant rise in mental health issues, including anxiety, stress, and depression, during the COVID-19 pandemic [27]. Depression relapse refers to the re-occurrence of depressive symptoms after a period of partial or complete remission. It means that a person who previously experienced an episode of depression and showed improvement or recovery from their symptoms subsequently experiences a return or worsening of those symptoms [16]. Relapse can happen during or after treatment for depression, and it is often characterized by a recurrence of the emotional, cognitive, and behavioral manifestations associated with depression. Relapse and recurrence rates are high in MDD patients: 60% after 5 years, 67% after 10 years, and 85% after 15 years [10]. Thus, there is a pressing need for automatic monitoring systems that can detect depression relapse or recurrence at an early stage, thereby facilitating timely intervention. Despite the increasing utilization of machine learning (ML) for analyzing Major Depressive Disorder (MDD) and other mental health issues, and its potential to enhance decision-making for mental health practitioners [19, 6, 7, 28], there is a noticeable lack of studies utilizing ML to address the issue of depression relapse.
Additionally, previous research on relapse prediction primarily relies on clinical variables such as age, gender, medication types, number of episodes, symptom severity, cognitive markers, and medical image data [1, 4, 26, 25]. However, these approaches have overlooked other important attributes, such as the analysis of speech patterns and facial expressions. On the other hand, a major limitation of depression research studies is the lack of public datasets. Moreover, due to privacy, safety, expense, and ethical concerns [29], acquiring examples to train a model for depression relapse is a difficult task. This motivates research on approaches that take into account the data scarcity problem in this area. Few-shot learning is a subfield of machine learning that deals with the challenge of learning new concepts or tasks with limited labeled training data. In traditional machine learning approaches, a large amount of labeled data is typically required to train models effectively. However, in few-shot learning, the goal is to develop algorithms that can generalize and learn from only a few examples or instances of a particular class or task.

In this paper, we propose a robust approach that deals with depression relapse data scarcity. The proposed approach is based on one-shot learning. We define depression relapse as the closeness of the speech encoding of a subject to that of a depressed subject. By modeling the similarity between two instances, the Siamese neural network is a suitable candidate for depression relapse detection under the proposed definition. The proposed approach investigates the predictive power of audio and textual encodings of speech for depression relapse prediction using a Siamese neural network architecture. Three Siamese networks, built using (1) MFCC audio features, (2) VGGish audio features, and (3) fused audio-textual features, are compared for the relapse identification task. To our knowledge, this work is the first to tackle the prediction of depression relapse based on verbal and non-verbal cues in speech, and to employ one-shot learning for this task.

The paper is structured as follows. In the following section, a literature review identifies the major previous works on predicting depression relapse and recurrence; an overview of n-shot learning and Siamese networks is also included (section 2). In section 3, the different steps of our methodology are described. Finally, the obtained results and a discussion of the performed tests are presented in section 4.

## 2 Related work

Relapse and recurrence can be attributed to various factors. It has been suggested that recurrence is ascribed to a genetic vulnerability [2]. Likewise, some findings suggest that inflammation can provide a possible mechanism for recurrent depression [13]. Another factor can be neuropsychological functioning [25]. Factors triggering depression relapse may vary among individuals, and in many cases it is difficult to pinpoint the trigger factors leading to a relapse episode. This motivates the development of automatic monitoring systems capable of detecting relapse. Research on predicting depression relapse often relies on clinical variables such as medication type, episode number, symptom severity, cognitive markers, and occasionally medical imaging data. For example, Borges et al.
(2018) [1] have explored the use of machine learning algorithms to predict relapse in bipolar patients based on their clinical data, with Random Forests showing promising results (68% for the Relapse Group and 74% for the No Relapse Group). Another study by Chanda et al. (2020) [4] aimed to develop a recurrence classification platform based on gender, age, medication, and treatment time, where K-Nearest Neighbor outperformed SVM and RF with an 83% accuracy. Emotional biases were investigated by Ruhe et al. (2019) [25] as potential biomarkers for depression relapse, resulting in a linear SVM model with a 75% accuracy. Moreover, a multimodal approach proposed by Cearns et al. (2019) [3], incorporating various modalities including clinical, blood-biomarker, genetic, bio-electrical impedance, electrocardiography, and structural imaging data, achieved an accuracy of 65.72% using SVM. Further, a holistic approach involving the analysis of depression score trajectories and the prediction of individual treatment outcomes was proposed using smoothing splines, K-means clustering, and collaborative modeling [12]. Recently, three research studies have emerged that propose utilizing video data for the prediction of depression relapse [22, 17, 21]. In [22], a Model of Normality (MoN) was utilized along with a CNN to detect depression and relapse. The approach of [17] proposed a preliminary study based on one-shot learning, owing to its capacity to learn from very few examples. Both of these state-of-the-art approaches rely on audio and visual features for monitoring depression relapse. In [21], a deep learning-based approach is proposed for depression recognition and depression relapse prediction using videos of clinical interviews. Their approach is based on a correlation-based anomaly detection framework and a measure of similarity to depression, where depression relapse is detected when the deep audiovisual patterns of a depression-free subject become close to the deep audiovisual patterns of depressed subjects. Thus, the correlation between the audiovisual encoding of a test subject and a deep audiovisual representation of depression is computed and used for monitoring depressed subjects and for predicting relapse after depression.

## 3 Methodology

Depression relapse is defined in this work as the similarity (dissimilarity) between the audio-textual speech encoding of a subject and the speech encoding of a diagnosed depressed (non-depressed) subject. A one-shot learning-based Siamese network is chosen in this work for modeling depression relapse, as it models the similarity (dissimilarity) between two samples. The proposed framework is composed of four stages: (1) pre-processing and (2) audio data augmentation (section 3.1), (3) audio-textual feature extraction (section 3.2), and (4) one-shot learning-based depression relapse detection (section 3.3). Each of these steps is detailed in the following.

Figure 1: Proposed multimodal-based Siamese networks for depression relapse detection.

### Pre-processing & Data Augmentation

#### 3.1.1 Audio Preprocessing

In the pre-processing step, unvoiced segments are first filtered out and the speech of the subject is extracted from the audio signal. The speech signal is then divided into speech segments of \(n=7.6\) seconds. To increase the number of data samples and the system's robustness to noise, data augmentation is performed. The signals are perturbed using two audio augmentation techniques [23, 18, 20] (see the sketch below): 1) _Noise Injection_: the audio signal is perturbed through the injection of random noise. Let \(x\) be an audio signal and \(\alpha\) a noise factor; the noise-perturbed signal \(x_{N}\) is given by \(x_{N}=x-\alpha\times rand(x)\), with \(\alpha=0.01\), \(0.02\), and \(0.03\). 2) _Pitch Augmentation_: the pitch is lowered by 0.5, 2, and 2.5 semitones.
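A minimal sketch of these two augmentations, assuming 16 kHz mono signals stored as NumPy arrays; the noise rule follows the formula above, and pitch shifting is delegated to librosa.

```python
import numpy as np
import librosa

def inject_noise(x, alpha):
    """x_N = x - alpha * rand(x), with alpha in {0.01, 0.02, 0.03}."""
    return x - alpha * np.random.rand(len(x))

def lower_pitch(x, sr=16000, semitones=0.5):
    """Lower the pitch by 0.5, 2, or 2.5 semitones."""
    return librosa.effects.pitch_shift(y=x, sr=sr, n_steps=-semitones)

x = np.zeros(int(7.6 * 16000))   # placeholder for one 7.6 s speech segment
augmented = ([inject_noise(x, a) for a in (0.01, 0.02, 0.03)] +
             [lower_pitch(x, semitones=s) for s in (0.5, 2.0, 2.5)])
```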
#### 3.1.2 Text Preprocessing

The clinical interviews are recorded and accompanied by transcriptions of the conversations between the participant and the interviewer. Unlike for the audio data, we analyze the transcriptions of both the interviewer and the participant. This is because the verbal reactions of the interviewer following the participant's responses can contain valuable information about the participant's emotions. For example, when the participant responds negatively, the interviewer may express phrases like "that sucks" or "I'm sorry to hear that," which provide insight into the participant's depressive state. Our decision to focus solely on the audio patterns of the participant is based on the fact that the interviewer's audio patterns do not exhibit signs of depression; however, the words used by the interviewer can indicate sympathy when the patient is depressed.

### Audio-Textual Features Extraction

Following data augmentation, audio-textual features are extracted. These comprise MFCC and VGGish audio features, and textual word-embedding features.

#### 3.2.1 MFCC Features Extraction from the Audio Signal

MFCC features represent the audio cepstrum on the non-linear Mel scale; such a representation is said to approximate the human auditory system. The MFCC feature extraction steps are detailed below.

_Windowing_: The signal is split into 60 millisecond frames. A Hamming window is then applied to each frame to taper the signal towards the frame boundaries. Given a signal \(s[n]\) and a Hamming window \(w[n]\) of length \(N\), the windowed frame is given by: \[x[n]=w[n]s[n]\quad\text{with}\quad w[n]=\alpha-\beta\cos\left(2\pi\frac{n}{N-1}\right), \tag{1}\] where \(\alpha=0.54\), \(\beta=0.46\), and \(0\leq n\leq N-1\).

_DFT spectrum_: The Discrete Fourier Transform (DFT) is then performed on each windowed frame to get the magnitude spectrum: \[X[k]=\sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi}{N}kn};\quad 0\leq k\leq N-1. \tag{2}\]

_Mel spectrum_: Triangular Mel-scale filter banks are then multiplied by the magnitude spectrum to compute the Mel spectrum: \[Y_{t}[m]=\sum_{k=0}^{N-1}W_{m}[k]|X_{t}[k]|^{2};\quad 0\leq m\leq M-1, \tag{3}\] where \(W_{m}\) represents the \(m^{th}\) triangular Mel-scale filter bank, \(M\) is the total number of filters, and \(k\) corresponds to the DFT bin number.

_Discrete cosine transform (DCT)_: The Mel spectrum is then represented on a log scale and the DCT is applied, generating a set of cepstral coefficients. The MFCCs are thus calculated as: \[c[n]=\sum_{m=0}^{M-1}\log_{10}\left(Y_{t}[m]\right)\cos\left(n(m-0.5)\frac{\pi}{M}\right). \tag{4}\]

For this work, 60-dimensional MFCC features are extracted, leading to a matrix of \(378\times 60\) for each 7.6-second segment.
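The MFCC front end can be sketched with librosa. The 20 ms hop and the `center=False` framing are assumptions chosen so that a 7.6 s segment at 16 kHz yields the stated \(378\times 60\) matrix; the paper does not state the hop length.

```python
import numpy as np
import librosa

def mfcc_features(x, sr=16000):
    """60 MFCCs over 60 ms Hamming windows, with an assumed 20 ms hop."""
    m = librosa.feature.mfcc(y=x, sr=sr, n_mfcc=60,
                             n_fft=int(0.060 * sr),       # 60 ms frames
                             win_length=int(0.060 * sr),
                             hop_length=int(0.020 * sr),  # assumed hop
                             window="hamming", center=False)
    return m.T                                            # (frames, 60)

x = np.zeros(int(7.6 * 16000))     # placeholder 7.6 s segment
print(mfcc_features(x).shape)      # (378, 60)
```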
#### 3.2.2 VGGish Features Extraction from the Audio Signal

In this work, VGGish features [11] are extracted from the audio segments. VGGish converts audio features into semantically meaningful, compact, high-level 128-dimensional embeddings. These embeddings can then be fed to a shallow or deep classification model. To compute the VGGish features, a one-sided Short-Time Fourier Transform (STFT) is first applied to the audio clip using a 25 ms periodic Hann window with a 10 ms hop and a 512-point DFT. The complex spectral values are then converted to magnitudes, and the phase information is discarded. The one-sided magnitude spectra are passed to a 64-band mel-spaced filter bank, and the magnitudes in each band are summed. The obtained mel spectrograms are then converted to a log scale and buffered into overlapping segments consisting of 96 spectra each. These log-mel spectrograms are passed through the VGGish network to obtain a 14x128 feature matrix [8, 11]. The raw VGGish feature matrix contained zero values; to resolve this issue, the VGGish model was released with a precomputed principal component analysis (PCA) matrix [8, 11]. First, we subtract the precomputed 1x128 PCA mean vector from the 14x128 feature matrix, and then multiply the result by the precomputed 128x128 PCA matrix.

#### 3.2.3 Textual Features Extraction

For each audio segment (7.6 s) of the clinical interviews, the words spoken by both the participant and the interviewer are converted into sequences of vectors using word embeddings as textual features. Word embedding is a technique that maps words to fixed-dimensional vectors, such that words with similar meanings, or words that frequently occur together in context, are represented by vectors that are close to each other in the embedding space. In particular, each frame transcript is represented by a matrix \(E=(e_{1},\ldots,e_{k},\ldots e_{nw})\), where \(e_{k}\) is the word vector corresponding to the \(k^{th}\) word and \(nw\) is the number of words in the frame transcript. For word embedding, we utilize the pretrained fastText network [15], which was trained on Common Crawl1 and incorporates sub-word information. This network produces word vectors of size 300. In cases where certain words are not present in the pretrained model, we replace them with their synonyms. Additionally, the resulting word vector matrix for each transcript frame is resized to \(60\times 9\), where 60 matches the number of MFCC coefficients and 9 is the minimum number of words present in a single frame. Footnote 1: https://commoncrawl.org/2017/06

### One-Shot Learning Framework for Depression Relapse Detection

In this work, 1D convolutional Siamese networks are proposed for depression relapse detection based on audio-textual cues. The proposed architectures are summarized in Fig. 1.

One-shot learning. One-shot learning refers to classification tasks where the number of available instances per class is limited; in some cases, a class may have only a single example during the model training phase. The objective of one-shot learning is to train models that can determine the similarity between a pair of inputs and identify whether they belong to the same class or not. Siamese neural networks, also known as twin neural networks, are a type of one-shot learning model that aims to learn a distance function between pairs of input vectors. In the architecture of a Siamese neural network, two identical sub-networks with shared weights are employed. Each sub-network takes one input of the pair and produces an output representation. The network then computes a distance metric, such as the Euclidean distance or cosine similarity, between the outputs of the sub-networks. This distance metric reflects the similarity between the two input vectors [5].
The training can be done on different pairs of inputs from all possible classes. To obtain a prediction for an instance, the instance is compared against reference instances of all classes; the final prediction is then obtained from the resulting similarity scores. Training a Siamese network requires dataset pre-processing: a pairing procedure is performed in which pairs of data points are created, namely similar-sample pairs and dissimilar-sample pairs. Similar samples are assigned positive labels, while dissimilar samples are assigned negative labels. The pairs are then fed to the Siamese architecture. The pairing procedure employed in this paper is detailed in section 4.1.

#### 3.3.1 Proposed Siamese Architectures

We propose three Siamese networks, built using (1) MFCC audio features, (2) VGGish audio features, and (3) fused audio-textual features.

**Audio-based Siamese Network:** The proposed MFCC and VGGish Siamese models have similar architectures. The architecture consists of two blocks of convolutional, ReLU, dropout, and flatten layers. Two fully connected layers are used to compute the encodings, and a Euclidean distance is measured between the two encodings. In other words, both Net-1 (A and B) from Fig. 1 are connected with Net-3 through the fully connected layers. A last layer of size two with a sigmoid activation function is added for the binary prediction of relapse.

**Audio-Textual Siamese Network:** In Fig. 1, the 60 MFCC features are fed to a Convolutional Neural Network composed of two blocks of 1D convolutional and ReLU layers, followed by dropout and flatten layers. Similarly, the 14 VGGish features are fed to a similar Convolutional Neural Network consisting of two blocks of 1D convolutional and ReLU layers, followed by dropout and flatten layers. These high-level MFCC and VGGish features are then concatenated with the textual features into a feature vector of size 540 and fed to two fully connected layers to obtain the audio-textual encoding. Afterwards, a Euclidean distance is computed between the two encodings: one of a non-diagnosed subject and one of a reference depressed subject. A last layer of size two is also added for the binary prediction of relapse.

## 4 Results and Discussion

### Dataset

We use the Distress Analysis Interview Corpus Wizard-of-Oz dataset (DAIC-WOZ) [9], introduced in the AVEC 2017 challenge [24]. 189 subjects took part in the data collection, where they were interviewed by a virtual agent manipulated in a Wizard-of-Oz setting. The subjects' speech and the interview transcripts were saved and made publicly available for research. The average length of the audio samples is 15 minutes, sampled at \(16kHz\). The dataset includes depression scores in two formats: binary and severity level. Binary depression and severity labels were collected via the self-report Patient Health Questionnaire (PHQ) [30]. Only 182 subjects were used in the evaluation. We use a percentage-split strategy where the dataset is randomly divided into 80% training, 10% validation, and 10% testing. Out of the 182 subjects, the training set includes 146 subjects, while the validation and test sets include 18 subjects each.

Figure 2: Confusion matrices of Siamese networks for depressive state similarity detection. NS: Non-Similar, S: Similar.

**Pairing:** Feature pairs are then generated to train, validate, and test the one-shot learning model. Samples from the training set are randomly paired with each other to form the training pairs; no sample is paired with itself. To generate the test and validation pairs, samples from these sets are paired with samples from the training set. The binary depression score provided in DAIC-WOZ consists of two classes: depressed and non-depressed. Binary depression relapse is defined as the paired feature groups of the couple (Non-Depressed, Depressed). In both classes (i.e., Depressed, Non-Depressed), 50% of the feature matrix groups are paired with the same class, while the remaining 50% are paired with the other class. Pairs are also created for the PHQ-score provided in the DAIC-WOZ dataset: for all 24 classes, 50% of the feature matrix groups are paired with the same class, while the remaining 50% are paired with the other classes. A sketch of this pairing step follows.
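A hypothetical sketch of this pairing step for the binary labels; `make_pairs` and its argument layout are illustrative, not the released preprocessing code.

```python
# Build 50% similar (same-class) and 50% dissimilar (cross-class) pairs,
# never pairing a sample with itself.
import random

def make_pairs(features, labels, n_pairs, seed=0):
    """labels: 0 = non-depressed, 1 = depressed; returns (pairs, targets)."""
    rng = random.Random(seed)
    by_class = {c: [i for i, y in enumerate(labels) if y == c] for c in set(labels)}
    classes = sorted(by_class)
    pairs, targets = [], []
    for _ in range(n_pairs // 2):
        c = rng.choice(classes)
        i, j = rng.sample(by_class[c], 2)        # similar pair, no self-pairing
        pairs.append((features[i], features[j])); targets.append(1)
        c2 = rng.choice([k for k in classes if k != c])
        pairs.append((features[rng.choice(by_class[c])],
                      features[rng.choice(by_class[c2])])); targets.append(0)
    return pairs, targets
```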
No sample was paired with itself. To generate the test and validation pairs, samples from these sets are paired with samples from the training set. The binary depression score provided in DAIC-WOZ consists of two classes: depressed and non-depressed. Binary depression relapse is defined as the paired feature groups of the couple (Non-Depressed, Depressed). In both classes (i.e., Depressed, Non-Depressed), 50% of the feature-matrix groups are paired with the same class, while the remaining 50% are paired with the other class. Pairs are also created for the PHQ score provided in the DAIC-WOZ dataset: for each PHQ-score class, 50% of the feature-matrix groups are paired with the same class, while the remaining 50% are paired with the other classes.

### Network Implementation Details

The convolutional layers in the presented CNN architectures use 64 filters with a ReLU activation function. The stride and filter sizes are set to 1 and 3, respectively. The dropout fraction is set to 0.01%. The dense layers' sizes are set to 1024, and the hyperbolic tangent (tanh) is used as their activation function. To predict the similarity between two feature sets for depression relapse detection based on the binary PHQ label, the output layer is a dense layer with a sigmoid activation function and a size of 2. To predict the similarity based on PHQ-score pairs, the output dense layer has a size of 25. To train the models, an initial learning rate of \(10^{-5}\) and a decay of \(10^{-6}\) are used. The batch size and number of epochs are set to 100 and 300, respectively. Both models are trained with the RMSProp optimizer using the Root Mean Square Error as the loss function. Early stopping is applied if the loss stops decreasing for 10 epochs.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline **Network** & **Acc. (\%)** & **RMSE** & **CC** \\ \hline \hline MFCC Siamese & 65.88 & 0.4841 & 0.3177 \\ \hline VGGish Siamese & 66.45 & 0.4792 & 0.3290 \\ \hline Audio-Textual Siamese & **73.12** & **0.4585** & **0.4631** \\ \hline \end{tabular} \end{table} Table 1: Performance of the proposed Siamese networks for depressive state similarity in terms of accuracy, Root Mean Square Error (RMSE), and Pearson Correlation Coefficient (CC).

### Performance Analysis of Siamese Networks

The proposed Siamese networks are evaluated using accuracy, Root Mean Square Error (RMSE), and Pearson Correlation Coefficient (CC) metrics. The proposed framework reaches an accuracy of 65.88% and 66.45% when using only the MFCC features and the VGGish features, respectively. The fusion of MFCC, VGGish, and textual features notably increases performance, reaching an accuracy of 73.12%. A minor decrease in RMSE and a minor increase in CC are noted for the VGGish model compared to the MFCC-based one. Further, the CC values of the MFCC- and VGGish-based Siamese networks are 0.3177 and 0.3290, which increase to 0.4631 with the fusion of audio and textual features. Fig. 2 shows the confusion matrices of the MFCC, VGGish, and audio-textual Siamese networks. From the figure, one can notice that fusing textual features with audio features considerably improves the performance of the one-shot learning Siamese network. For the classification of non-similar feature pairs, the audio-textual fusion network achieves 9.02% and 8.65% better results than the MFCC- and VGGish-based networks, respectively. Also, for non-similar feature pairs, a notable decrease in false positives is observed for the audio-textual fusion network.
For the classification of similar feature pairs (i.e., when both feature sets in a pair belong to the same class), the audio-textual fusion network obtains 5.45% and 4.70% better results than the MFCC- and VGGish-based networks, respectively. A considerable decrease in false positives is also observed for the audio-textual fusion network compared to the MFCC- and VGGish-based Siamese networks. Furthermore, we also investigate pair matching for the PHQ score using the multi-class audio-textual Siamese network and obtain an RMSE of 4.025 (normalized RMSE of 0.161).

## 5 Conclusion and Future Work

In this work, an ASR framework is proposed for depression relapse detection, modeling the similarity of audio and textual speech encodings between a new subject and a diagnosed depressed subject using one-shot learning. The proposed model gave reliable results using the audio and textual cues of speech. The fusion of audio and textual features enhanced the performance of the one-shot learning model, making it reliable for detecting depression relapse. Further, the proposed ASR system could help depression patients monitor their recovery. Lastly, in future work we plan to also incorporate visual cues into the proposed framework.

## 6 Acknowledgments

This work is funded under grant number IF040-2021 (MATCH2021: Malaysia France Bilateral Research Grant).
2306.00582
Anomaly Detection with Variance Stabilized Density Estimation
We propose a modified density estimation problem that is highly effective for detecting anomalies in tabular data. Our approach assumes that the density function is relatively stable (with lower variance) around normal samples. We have verified this hypothesis empirically using a wide range of real-world data. Then, we present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples while minimizing the variance of the density around normal samples. To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution. We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results while alleviating the need for data-specific hyperparameter tuning. Finally, we have used an ablation study to demonstrate the importance of each of the proposed components, followed by a stability analysis evaluating the robustness of our model.
Amit Rozner, Barak Battash, Henry Li, Lior Wolf, Ofir Lindenbaum
2023-06-01T11:52:58Z
http://arxiv.org/abs/2306.00582v2
# Anomaly Detection with Variance Stabilized Density Estimation

###### Abstract

Density estimation based anomaly detection schemes typically model anomalies as examples that reside in low-density regions. We propose a modified density estimation problem and demonstrate its effectiveness for anomaly detection. Specifically, we assume the density function of normal samples is uniform in some compact domain. This assumption implies the density function is more stable (with lower variance) around normal samples than around anomalies. We first corroborate this assumption empirically using a wide range of real-world data. Then, we design a _variance stabilized density estimation_ problem for maximizing the likelihood of the observed samples while minimizing the variance of the density around normal samples. We introduce an ensemble of autoregressive models to learn the _variance stabilized_ distribution. Finally, we perform an extensive benchmark with 52 datasets demonstrating that our method leads to state-of-the-art results while alleviating the need for data-specific hyperparameter tuning.

## 1 Introduction

Anomaly detection (AD) is a crucial task in machine learning that involves identifying patterns or behaviors that deviate from the norm in a given dataset. Accurate identification of anomalous samples is essential for the success of various applications such as fraud detection [24], medical diagnosis [20, 26], explosion detection [7, 33], and more. An intuitive and well-studied perspective on anomaly detection is via the lens of density estimation. During training, a probabilistic model learns to maximize the average log-likelihood of non-anomalous, i.e., "normal" samples. _Outlier_ samples are then equated to low-likelihood points under the learned density function. Examples include the Histogram-based Outlier Score (HBOS) [21], which uses histograms of the features to score anomalies in the dataset. Variational autoencoders [2] use a Gaussian prior for estimating the likelihood of the observations. The Copula-Based Outlier Detection method (COPOD) [31] models left- and right-tail empirical cumulative distribution functions and computes a skewness coefficient; it uses the maximum value among the left-tail empirical copula, the right-tail empirical copula, and the skewness-corrected empirical copula as an anomaly score. While the low-likelihood assumption for modeling anomalous samples seems realistic, density-based anomaly detection methods often underperform compared to geometric or one-class classification models [22]. Several authors have proposed reasons for this gap. One possible explanation is the curse of dimensionality, which makes density estimation challenging in high dimensions [37, 38, 49]. Another argument is that "simple" examples may lead to a high likelihood even if they are not seen during training [15, 39]. To mitigate this problem, we propose a simple modified density estimation problem that significantly improves the ability to distinguish between normal and abnormal samples. We base our work on a new assumption on the properties of the density function around normal samples. Specifically, we argue that the density function of normal samples is approximately uniform in some compact domain. This uniformity translates to a more stable (with lower variance) density function around inliers than around outliers. We first provide empirical evidence supporting this claim (see Figure 2).
Then, we propose a variance-stabilized density estimation problem, realized as a regularized maximum likelihood problem. To learn a reliable, stable density estimate, we use an ensemble of multiple autoregressive models implemented using a probabilistically normalized network (PNN) [30], each trained to learn a density representation of normal samples that is uniform in some compact domain (a schematic illustration of this procedure appears in Figure 1). We perform an extensive benchmark with 52 real-world datasets, demonstrating that our approach is a new state-of-the-art anomaly detector.

## 2 Related work

One popular line of solutions for AD relies on the geometric structure of the data. These include methods such as Local Outlier Factor (LOF) [8], which locates anomalous data by measuring local deviations between the data point and its neighbors. Another example is using the distance to the \(k\) nearest neighbors (\(k\)NN) to detect anomalies. Several authors have used an AutoEncoder (AE) for this task by modeling anomalies as harder-to-reconstruct samples [32, 51]. Chen et al. [12] have improved upon this approach by presenting an ensemble of AEs with different dropout connections. Another well-studied paradigm for anomaly detection is one-class classification. Deep One-Class Classification [45] trains a deep neural network to learn a transformation that minimizes the volume of a data-enclosing hypersphere centered on a pre-determined point. Anomalies are detected based on their distance to the hypersphere's center. Several works have used self-supervision to improve the classifier's power to distinguish between normal and abnormal samples. Examples include [42], which applies affine transformations to non-image datasets and uses the likelihood of a contrastive predictor to detect anomalies. Shenkar and Wolf [47] presented Internal Contrastive Learning (ICL), which relies on a special masking scheme for learning an informative anomaly score. Density-based anomaly detection rests on the logic that anomalous events happen rarely: unlikely samples are assigned a low likelihood, while normal samples are assigned a high probability density. Multiple works are based on this intuition implicitly or explicitly [6, 23, 35], even in classification-based [4, 11, 43, 45] or reconstruction-based [13, 14] anomaly detection. Recently, numerous works pointed out that anomaly detection based on simple density estimation has multiple flaws. Le Lan and Dinh [29] claimed that methods based on likelihood scoring are unreliable even when provided with a perfect density model of in-distribution data. Nalisnick et al. [39] demonstrated that regions of high likelihood in a probability distribution may not be associated with regions of high probability mass, especially as the number of dimensions increases. Caterini and Loaiza-Ganem [10] focus on the impact of the entropy term in anomaly detection and suggest looking for lower-entropy representations of data before performing likelihood-based anomaly detection.

## 3 Method

**Problem Definition.** Given samples \(X=\{x_{1},\ldots,x_{N}\}\), where \(x_{i}\in\mathbb{R}^{D}\), we model the data by \(X=X_{N}\cup X_{A}\), where \(X_{N}\) are normal samples and \(X_{A}\) are anomalies. Our goal is to learn a score function \(S:\mathbb{R}^{D}\rightarrow\mathbb{R}\), such that \(S(x_{n})>S(x_{a})\), for all \(x_{n}\in X_{N}\) and \(x_{a}\in X_{A}\), while training solely on \(x\in X_{N}\).
In this study, we consider the modeling of \(S(\cdot)\) by estimating a regularized density of the normal samples.

**Intuition.** One widely used assumption in the anomaly detection literature is that normal data has a simple underlying structure. In contrast, anomalies do not follow a clear pattern since they can stem from many unknown factors [1]. Density-based models for anomaly detection [6], on the other hand, assume that the density of the data \(p_{X}(\cdot)\) is typically higher for normal samples than anomalies, that is, \(p_{X}(x_{n})>p_{X}(x_{a})\) for \(x_{n}\in X_{N}\) and \(x_{a}\in X_{A}\). In recent years, multiple works showed the flaws of scoring a density model based solely on the likelihood (Sec. 2). Here, we introduce a new assumption for modeling the density function of normal samples. Specifically, our working hypothesis is that the density function around normal samples is stable (with lower variance) compared to the density around anomalous samples. Namely, \(\sigma_{n}^{2}<\sigma_{a}^{2}\), with \(\sigma_{n}^{2}=\underset{x\in X_{N}}{\mathbb{E}}(p_{X}(x)-\mu_{n})^{2}\), \(\sigma_{a}^{2}=\underset{x\in X_{A}}{\mathbb{E}}(p_{X}(x)-\mu_{a})^{2}\), and \(\mu_{n},\mu_{a}\) the means of the density computed over the normal and anomalous samples, respectively. First, we perform a simple evaluation using a diverse set of 52 publicly available anomaly detection datasets to validate our low-variance assumption. For each dataset, we estimate the variance of the log-likelihood of normal samples, \(\sigma_{n}^{2}\), and anomalous samples, \(\sigma_{a}^{2}\). In Figure 2, we visualize the log-likelihood variance ratio between anomalous and normal samples. Each bar represents the variance ratio for a single dataset. As indicated by this figure, in most of the datasets (46 out of 52), the variance ratio is larger than 1, thus supporting our working hypothesis. Similar empirical evidence can be seen in [22], in which the authors demonstrate that multiple classifiers trained on normal samples have lower variance compared with those trained on anomalous samples. We now exploit this assumption to derive a modified density estimation for learning a stabilized density of normal samples.

Figure 1: The proposed framework for anomaly detection. We use several permuted versions of tabular data. Each permutation is fed into a probabilistic normalized network (PNN) designed to model normal samples' density as uniform in some compact domain. Each PNN is trained to minimize a regularized negative log-likelihood loss (see Eq. 1). Since our PNN is implemented using an autoregressive model, we use a spectral ensemble of the learned log-likelihood functions as an anomaly score for unseen samples.

**Regularized density estimation.** Following recent anomaly detection works [4, 43, 47], during training we only assume access to normal samples, \(\mathcal{X}_{train}\subset X_{N}\). Therefore, by incorporating our low-variance assumption, we can formulate a modified density estimation problem where we impose stability of the density function. Specifically, we minimize a regularized version of the negative log-likelihood. Denoting a density estimator parameterized by \(\theta\) as \(\hat{p}_{\theta}(x)\), our optimization problem can be written as \[\min_{\theta}\underset{x\in X_{N}}{\mathbb{E}}\big{[}-\log\hat{p}_{\theta}(x)+\lambda \hat{\sigma}_{n}^{2}\big{]}, \tag{1}\] where \(\hat{\sigma}_{n}^{2}\) is the sample variance of the estimated log-likelihood, and \(\lambda\) is a regularization parameter that controls the strength of the variance penalty.
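Concretely, Eq. (1) amounts to a one-line penalty on the batch statistics of the model's log-likelihood. A minimal PyTorch sketch, assuming `log_probs` holds the per-sample log-likelihoods of a normal-only training batch (the default \(\lambda=3.33\) matches the value used in the stability analysis later in the paper):

```python
import torch

def variance_stabilized_nll(log_probs, lam=3.33):
    # Eq. (1): mean negative log-likelihood plus lambda times the sample
    # variance of the log-likelihood over the (normal-only) batch.
    return -log_probs.mean() + lam * log_probs.var()
```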
Specifically, for \(\lambda=0\), Eq. 1 boils down to a standard maximum likelihood problem, and using larger values of \(\lambda\) encourages a more stable (lower variance) density estimate. In recent years, many deep-learning methods have been proposed for density estimation. Here, we chose an autoregressive model to learn \(\hat{p}_{\theta}(x)\) due to its superior performance on density estimation benchmarks, though normalizing flows are a well-studied alternative [16, 17, 28, 36].

Figure 2: Mean log-likelihood variance ratio between anomalous and normal samples for the different datasets used in this paper. Values above the dashed line (greater than 1) are marked in green. These confirm the need for our proposed variance loss term in most datasets.

Based on an autoregressive probabilistic model, the likelihood of a sample \(x\in\mathcal{X}_{train}\) is expressed as: \[\hat{p}_{\theta}(x)=\prod_{i=1}^{D}\hat{p}_{\theta}(x^{(i)}|x^{(<i)})\implies \log\hat{p}_{\theta}(x)=\sum_{i=1}^{D}\log\hat{p}_{\theta}(x^{(i)}|x^{(<i)}), \tag{2}\] where \(x^{(i)}\) is the \(i\)-th feature of \(x\), and \(D\) is the input dimension. To alleviate the influence of variable order on our estimate, we present below an ensemble of likelihood estimates, each based on a different permutation of the features. To learn the stabilized density, we rely on a recently proposed probabilistic normalized network (PNN) [30]. Assuming the density of any feature \(x^{(i)}\) is compactly supported on \([A,B]\subset\mathbb{R}\), we can define the cumulative distribution function (CDF) of an arbitrary density as \[\hat{P}(X^{(i)}\leq x^{(i)})=\frac{F_{\theta}(x^{(i)})-F_{\theta}(A)}{F_{ \theta}(B)-F_{\theta}(A)}, \tag{3}\] where \(F_{\theta}\) is an arbitrary neural network function with strictly positive weights \(\theta\), and is thus monotonic. Since each strictly monotonic CDF is uniquely mapped to a corresponding density, we now have unfettered access to the class of all densities on \([A,B]\subset\mathbb{R}\), up to the expressiveness of \(F_{\theta}\), via the relation \[\hat{p}_{\theta}(x^{(i)})=\nabla_{x^{(i)}}\hat{P}(X\leq x^{(i)}). \tag{4}\] By conditioning each \(F_{\theta}(x^{(i)})\) on \(x^{(<i)}\), we obtain in their product an autoregressive density on \(x\). This formulation enjoys much greater flexibility than other density estimation models in the literature, such as flow-based models [16, 17, 19, 25] or even other autoregressive models [46, 48] that model \(x^{(i)}\) using simple distributions (e.g., mixtures of Gaussian or logistic distributions). Our \(\hat{p}_{\theta}\), represented by \(F_{\theta}\), is provably a universal approximator for arbitrary compact densities on \(\mathbb{R}^{D}\) [30], and therefore more expressive while still being end-to-end differentiable. The model \(F_{\theta}\) is composed of \(n\) layers defined recursively by the relation \[a_{l}=\sigma(h_{A}(A_{l})^{T}a_{l-1}+h_{b}(b_{l},A_{l})), \tag{5}\] where \(l\) indexes the layers of the PNN, \(a_{0}:=x\), \(\sigma\) is the sigmoid activation, and \(A_{l},b_{l}\) are the weights and biases of the \(l\)th layer. The final layer is defined as \(F_{\theta}(x)=\mathrm{softmax}(A_{n})^{T}a_{n-1}\).

**Feature permutation ensemble.** Since our density estimator is autoregressive (Eq. 2), different input feature permutations may lead to different density estimates. While this seems like a limitation, we leverage this property to robustify our estimate and propose an ensemble-based approach to density estimation based on randomized permutations of the features.
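Before the ensemble details, the monotone-CDF construction of Eqs. (3)-(5) can be made concrete with a single-feature sketch. The hidden width, the interval \([A,B]\), and the use of softplus-constrained weights are illustrative assumptions, not the exact PNN of [30]:

```python
import torch
import torch.nn as nn
from torch.nn.functional import softplus

class MonotoneCDFDensity(nn.Module):
    # A 1D density on a compact interval [A, B]: positive weights make F
    # monotone in x; the normalized CDF is differentiated with autograd.
    def __init__(self, hidden=32, A=-1.0, B=1.0):
        super().__init__()
        self.A, self.B = A, B
        self.w1 = nn.Parameter(torch.randn(1, hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1))

    def F(self, x):  # strictly increasing function of x
        h = torch.sigmoid(x @ softplus(self.w1) + self.b1)
        return h @ softplus(self.w2)

    def log_prob(self, x):  # x: (N, 1)
        x = x.clone().requires_grad_(True)
        lo = torch.full_like(x, self.A)
        hi = torch.full_like(x, self.B)
        cdf = (self.F(x) - self.F(lo)) / (self.F(hi) - self.F(lo))    # Eq. 3
        dens, = torch.autograd.grad(cdf.sum(), x, create_graph=True)  # Eq. 4
        return torch.log(dens + 1e-12)
```

Conditioning `F` on the preceding features \(x^{(<i)}\), as in the paper, turns the product of such factors into an autoregressive density.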
Specifically, we denote by \(\mathcal{P}_{D}\) the set of permutation matrices of size \(D\). We learn an ensemble of regularized estimators, each minimizing the objective in Eq. 1 under a different random realization of the feature permutation \(\Pi_{\ell}\in\mathcal{P}_{D}\). We denote by \(S(x)=\log\hat{p}_{\theta}(x)\) the estimated log-likelihood of \(x\). Next, we compute the score for each permutation; namely, \(S_{\ell}(x)\) is the score computed based on the permutation matrix \(\Pi_{\ell}\), \(\ell=1,\ldots,N_{perm}\). Finally, we use a spectral ensemble approach [27], originally proposed for aggregating multiple classifiers without using labels. The idea is to compute the \(N_{perm}\times N_{perm}\) sample covariance matrix of the log-likelihood estimates, \[\Sigma_{ij}=\mathbb{E}[(S_{i}(x)-\mu_{i})(S_{j}(x)-\mu_{j})],\] with \(\mu_{i}=\mathbb{E}(S_{i}(x))\). The leading eigenvector of \(\Sigma\), denoted \(v\), then defines the weights of the ensemble: the log-likelihood predictions from each model are multiplied by the elements of \(v\). Specifically, the spectral ensemble is defined as \[\bar{S}(x)=\sum_{\ell=1}^{N_{perm}}S_{\ell}(x)v[\ell]. \tag{6}\] The intuition is that if we assume the estimation errors of the different estimators are independent, then, off its diagonal, \(\Sigma\) is approximately rank-one [27]. We refer the reader to [27], which presents the derivation of the spectral ensemble. Section 4.5 demonstrates that the spectral ensemble works relatively well even for small values of \(N_{perm}\).

## 4 Experiments

### Implementation Details

All experiments were conducted using 3 different seeds. Each seed uses 3 ensemble models, each with a different random feature permutation. We used a learning rate of 1e-4 and a dropout of 0.1 for all datasets. The batch size scales with the dataset size (\(N/10\)) and is clipped to minimum and maximum values of 16 and 8096, respectively. Experiments were run on an NVIDIA A100 GPU with 80GB of memory.

### Synthetic Evaluation

First, we use synthetic data to demonstrate the advantage of our variance regularization for anomaly detection. We generate simple two-dimensional data following [9]. The normal data is generated by drawing \(300\) samples from three Gaussians with a standard deviation of 1 and means at \((0,0)\), \((-5,-5)\), and \((5,5)\). We then generate anomalies by drawing \(40\) samples from two skewed Gaussians centered at \((-5,5)\) and \((5,-5)\). We train our proposed autoregressive density estimator on \(150\) randomly selected normal samples, with and without the variance regularizer. In Figure 3, we present the scaled log-likelihood obtained by both models. As indicated by this figure, without regularization, the log-likelihood estimate tends to attain high values in a small vicinity surrounding the normal points observed during training. In contrast, the regularized model learns a lower-variance, more uniform distribution around normal points. In this example, the average AUC over \(5\) runs of the regularized model is 98.3, while for the unregularized model it is 79.8. This example sheds some light on the potential benefit of our regularization for anomaly detection. In the following section, we provide more empirical real-world evidence supporting this claim.

### Real Data

Experiments were conducted on various tabular datasets collected from [22, 40, 44].
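Before moving to the dataset statistics, the spectral aggregation in Eq. (6) above can be made concrete. A minimal sketch, assuming `scores` holds one row of log-likelihood scores per permutation and using the absolute value of the leading eigenvector (as noted in Section 4.5):

```python
import numpy as np

def spectral_ensemble(scores):
    # scores: (N_perm, n_samples) log-likelihoods, one row per permutation.
    cov = np.cov(scores)               # N_perm x N_perm sample covariance
    _, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, -1])         # leading eigenvector; abs fixes sign
    return v @ scores                  # Eq. (6): weighted sum of the scores
```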
The datasets exhibit variability in sample size (80-619,326 samples), number of features (3-1555), as well as anomaly ratio (from 0.03% to 39.91%). We evaluate all models using the well-known area under the Receiver Operating Characteristic curve (AUC). We follow the same data splitting scheme as in ICL [5, 42, 47], where no anomalous data is seen during training. The normal samples are split 50/50 between the training and testing sets.

**Baseline methods.** We compare our method to density-based methods like HBOS [21] and COPOD [31], geometric methods such as KNN [3] and IForest [34], and recent neural-network-based methods like ICL [47], NTL [42], and GOAD [5]. Following [47], we evaluate the KNN [3] method with \(K=5\). For GOAD [5], we use the KDD configuration, which specifies all of the hyperparameters, since it was found to be the best configuration in previous work [47].

Figure 3: Synthetic example demonstrating the effect of density stabilization. White dots represent normal samples \(x_{n}\in X_{N}\), while yellow represents anomalies \(x_{a}\in X_{A}\). Left: scaled unregularized log-likelihood estimate. Right: the proposed scaled regularized log-likelihood estimate.

**Results.** In Table 1, we present the AUC of our method and all baselines evaluated on 52 different tabular anomaly detection datasets. Our method outperforms previous state-of-the-art schemes by a large margin (in both mean and median AUC). Specifically, we obtain 86.0 and 92.4 mean and median AUC, which are better than the second-best method (ICL [47]) by 1.2 and 2.2 AUC points, respectively. We also achieve an average rank of 2.73 over all datasets, which surpasses the second- and third-best average ranks of 3.08 and 3.97, obtained by ICL [47] and KNN [3], respectively. This shows an impressive average result and indicates that our method is stable compared to the other methods tested. We perform another analysis using Dolan-More performance profiles [18] on the AUC scores; the resulting evaluation of the baselines applied to all 52 datasets is presented in Fig. 4. Based on the observed curves, our method performs best on a larger portion of the datasets for any \(\theta\). The \(x\)-axis value \(\theta\in[0,1]\) is a scalar factor multiplying the AUC obtained by the best method. For example, on all datasets, our method is never worse than \(0.82\) times the highest AUC obtained by any method (as indicated by the intersection of our curve with the line \(y=1\)). The Dolan-More curve is further explained in the caption of this figure.

### Ablation Study

We conduct an ablation study to evaluate all components of the proposed scheme.

**Variance regularization.** In the first ablation, we evaluate the properties of the proposed regularization. We conduct an ablation with 25 datasets and compare the AUC of our model to a version that does not include regularization. As indicated by Table 2, there is a significant performance drop once the regularizer is removed; specifically, the average AUC drops by more than 10 points.

**Ensemble of feature permutations.** We conduct an additional experiment with \(25\) datasets to evaluate the importance of our permutation spectral ensemble. We compare the proposed approach to a variant that relies on a mean ensemble and one that relies on a spectral ensemble with no feature permutation. The results presented in Table 2 demonstrate that both the feature permutations and the spectral ensemble are useful for learning a reliable density estimate for anomaly detection.
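As defined in the Fig. 4 caption below, the performance profile is the fraction of datasets on which a method attains at least \(\theta\) times the best AUC. A minimal sketch (array layout and function name are illustrative):

```python
import numpy as np

def dolan_more_profile(auc, thetas):
    # auc: (n_methods, n_datasets). For each method and theta, the fraction
    # of datasets j with AUC_j >= theta * (best AUC on dataset j).
    best = auc.max(axis=0)
    return np.array([(auc >= t * best).mean(axis=1) for t in thetas]).T
```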
Figure 4: A Dolan-More performance profile [18] comparing AUC scores for all algorithms and datasets presented in this paper. For each method and each value of \(\theta\) (\(x\)-axis) we calculate the fraction of datasets on which the method performs better than or equal to \(\theta\) multiplied by the best AUC for the corresponding dataset. Specifically, for a given method we calculate \(\frac{1}{N_{data}}\sum_{j}\mathbb{1}\left[\text{AUC}_{j}\geq\theta\cdot\text{AUC}_{j}^{best}\right]\), where \(\text{AUC}_{j}^{best}\) is the best AUC for dataset \(j\) and \(N_{data}\) is the number of datasets. The ideal algorithm would achieve the best score on all datasets and thus reach the top left corner of the plot for \(\theta=1\). Our algorithm yields better results than all baselines, surpassing ICL for values between \(\theta=0.95\) and \(\theta=0.82\). Furthermore, our method covers all datasets (ratio equals 1) for \(\theta=0.82\) and outperforms the second best, ICL [47], which achieves the same at \(\theta=0.69\). This suggests that using our method on all datasets will never be worse than the leading method by more than 18%.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & KNN [3] & GOAD(KDD) [5] & HBOS [21] & IForest [34] & COPOD [31] & ICL [47] & NTL [41, 42] & Ours \\ \hline ALOI & 51.5\(\pm\)0.2 & 50.2\(\pm\)0.2 & 52.3\(\pm\)0.0 & 50.8\(\pm\)0.4 & 49.5\(\pm\)0.0 & 54.2\(\pm\)0.8 & 52.0\(\pm\)0.0 & 60.5\(\pm\)0.3 \\ Annthyroid & 71.5\(\pm\)0.7 & 93.2\(\pm\)0.9 & 69.1\(\pm\)0.0 & 91.7\(\pm\)0.2 & 76.8\(\pm\)0.1 & 80.5\(\pm\)1.3 & 85.2\(\pm\)0.0 & 94.3\(\pm\)0.5 \\ Backdoor & 94.6\(\pm\)0.4 & 89.3\(\pm\)0.5 & 72.6\(\pm\)0.2 & 74.8\(\pm\)2.9 & 79.5\(\pm\)0.3 & 92.2\(\pm\)0.1 & 93.5\(\pm\)0.1 & 98.8\(\pm\)0.2 \\ Breast & 99.6\(\pm\)2.1 & 97.7\(\pm\)0.8 & 99.6\(\pm\)0.6 & 99.8\(\pm\)1.2 & 99.8\(\pm\)0.3 & 99.1\(\pm\)0.3 & 96.3\(\pm\)0.3 & 99.3\(\pm\)0.1 \\ Campaign & 74.1\(\pm\)0.5 & 49.0\(\pm\)1.9 & 80.3\(\pm\)0.1 & 72.9\(\pm\)0.1 & 78.2\(\pm\)0.2 & 74.7\(\pm\)0.8 & 76.0\(\pm\)0.0 & 81.3\(\pm\)0.7 \\ Cardio & 90.5\(\pm\)5.2 & 84.6\(\pm\)3.0 & 81.2\(\pm\)1.2 & 94.2\(\pm\)1.0 & 93.0\(\pm\)0.4 & 92.7\(\pm\)0.8 & 83.2\(\pm\)0.1 & 93.7\(\pm\)0.3 \\ Cardiotocography & 71.8\(\pm\)2.5 & 49.1\(\pm\)1.0 & 46.8\(\pm\)0.1 & 73.8\(\pm\)0.2 & 66.3\(\pm\)0.1 & 78.0\(\pm\)3.2 & 76.3\(\pm\)0.0 & 75.0\(\pm\)0.6 \\ Celeba & 63.1\(\pm\)2.9 & 28.4\(\pm\)0.8 & 76.8\(\pm\)1.5 & 70.5\(\pm\)0.7 & 75.1\(\pm\)0.9 & 80.3\(\pm\)1.5 & 68.8\(\pm\)0.2 & 71.7\(\pm\)5.7 \\ Census & 67.5\(\pm\)0.6 & 71.6\(\pm\)1.0 & 65.8\(\pm\)2.5 & 62.9\(\pm\)0.1 & 67.5\(\pm\)1.9 & 60.3\(\pm\)0.8 & 53.5\(\pm\)1.6 & 66.4\(\pm\)1.1 \\ Cover & 88.0\(\pm\)5.3 & 76.0\(\pm\)5.3 & 60.6\(\pm\)0.2 & 71.3\(\pm\)2.3 & 86.2\(\pm\)0.1 & 96.2\(\pm\)0.6 & 98.6\(\pm\)0.3 & 99.0\(\pm\)0.2 \\ Donors & 100.0\(\pm\)9.8 & 99.5\(\pm\)0.1 & 78.7\(\pm\)0.2 & 91.3\(\pm\)0.2 & 81.5\(\pm\)0.5 & 99.2\(\pm\)0.8 & 85.0\(\pm\)0.4 & 95.8\(\pm\)2.8 \\ Fault & 58.8\(\pm\)0.9 & 65.4\(\pm\)1.6 & 53.0\(\pm\)0.1 & 57.6\(\pm\)0.4 & 49.1\(\pm\)0.1 & 78.7\(\pm\)0.7 & 58.0\(\pm\)0.2 & 78.1\(\pm\)0.2 \\ Fraud & 93.1\(\pm\)6.4 & 86.6\(\pm\)0.1 & 94.5\(\pm\)1.0 & 93.6\(\pm\)0.3 & 94.0\(\pm\)0.0 & 95.2\(\pm\)0.4 & 87.5\(\pm\)0.3 & 95.3\(\pm\)0.0 \\ Glass & 82.3\(\pm\)2.2 & 82.1\(\pm\)6.3 & 80.3\(\pm\)0.5 & 74.9\(\pm\)1.3 & 72.5\(\pm\)0.4 & 88.1\(\pm\)5.0 & 72.5\(\pm\)0.2 & 88.4\(\pm\)1.2 \\ Hepatitis & 48.3\(\pm\)6.4 & 32.4\(\pm\)6.1 & 78.0\(\pm\)5.0 & 75.6\(\pm\)2.7 & 74.9\(\pm\)0.3 & 73.0\(\pm\)5.1 & 54.0\(\pm\)0.7 & 74.2\(\pm\)1.6 \\ Http & 99.8\(\pm\)0.0 & 50.4\(\pm\)0.1 &
99.7\(\pm\)1.0 & 99.0\(\pm\)0.1 & 98.8\(\pm\)0.7 & 99.5\(\pm\)0.0 & 100.0\(\pm\)0.5 & 99.9\(\pm\)0.0 \\ InternetAds & 73.7\(\pm\)0.9 & 66.4\(\pm\)3.0 & 53.1\(\pm\)3.9 & 45.6\(\pm\)14.4 & 65.9\(\pm\)5.5 & 84.1\(\pm\)1.4 & 76.0\(\pm\)2.7 & 86.0\(\pm\)0.1 \\ Ionosphere & 91.7\(\pm\)3.0 & 96.5\(\pm\)1.1 & 62.4\(\pm\)0.6 & 84.6\(\pm\)1.3 & 77.2\(\pm\)0.3 & 98.1\(\pm\)0.4 & 97.9\(\pm\)0.6 & 96.4\(\pm\)0.2 \\ Landsat & 68.4\(\pm\)0.8 & 58.6\(\pm\)1.6 & 73.2\(\pm\)6.3 & 60.1\(\pm\)0.1 & 49.3\(\pm\)0.9 & 74.9\(\pm\)0.4 & 66.5\(\pm\)2.1 & 70.7\(\pm\)0.4 \\ Letter & 36.6\(\pm\)2.9 & 87.6\(\pm\)0.9 & 35.2\(\pm\)1.1 & 33.0\(\pm\)4.1 & 40.9\(\pm\)0.2 & 92.8\(\pm\)0.9 & 84.8\(\pm\)0.3 & 95.2\(\pm\)0.3 \\ Lympho & 99.5\(\pm\)20.5 & 59.9\(\pm\)14.2 & 97.9\(\pm\)3.7 & 99.8\(\pm\)1.0 & 99.3\(\pm\)3.0 & 99.5\(\pm\)0.3 & 97.1\(\pm\)2.1 & 99.7\(\pm\)0.1 \\ Magic.gamma & 84.3\(\pm\)0.9 & 77.3\(\pm\)0.2 & 74.3\(\pm\)0.6 & 76.8\(\pm\)4.0 & 68.0\(\pm\)0.3 & 80.9\(\pm\)0.1 & 82.0\(\pm\)0.7 & 85.9\(\pm\)0.1 \\ Mammography & 87.2\(\pm\)2.4 & 54.5\(\pm\)2.3 & 85.6\(\pm\)0.3 & 88.4\(\pm\)0.9 & 90.5\(\pm\)0.1 & 81.1\(\pm\)2.0 & 82.5\(\pm\)0.2 & 87.9\(\pm\)0.4 \\ Mnist & 93.4\(\pm\)0.1 & 87.7\(\pm\)1.0 & 74.5\(\pm\)0.1 & 87.2\(\pm\)1.3 & 77.7\(\pm\)0.1 & 98.2\(\pm\)0.0 & 98.0\(\pm\)0.0 & 92.9\(\pm\)0.0 \\ Musk & 99.7\(\pm\)2.9 & 100.0\(\pm\)0.0 & 96.4\(\pm\)0.0 & 90.5\(\pm\)0.9 & 99.7\(\pm\)0.0 & 100.0\(\pm\)0.0 & 100.0\(\pm\)0.1 & 100.0\(\pm\)0.0 \\ Optdigits & 99.5\(\pm\)7.9 & 93.1\(\pm\)1.9 & 89.2\(\pm\)3.6 & 81.5\(\pm\)1.0 & 69.3\(\pm\)3.2 & 97.5\(\pm\)1.5 & 84.7\(\pm\)0.1 & 87.0\(\pm\)0.3 \\ PageBlocks & 58.1\(\pm\)1.2 & 90 \\ \hline \hline \end{tabular} \end{table} Table 1: AUC of the proposed method and all baselines on the tabular anomaly detection datasets.

### Stability Analysis

Here, we evaluate the stability of our approach to different values of \(\lambda\) and different numbers of feature permutations \(N_{perm}\).

**Regularization parameter.** To demonstrate that our method is relatively stable to the choice of \(\lambda\), we apply our framework to multiple datasets, with values of \(\lambda\) in the range of \([0,10]\). As indicated by the heatmap presented in Figure 5, adding the regularization helps improve the AUC in most datasets. Moreover, we observe that our performance is relatively stable in the range \([1,10]\); we use \(\lambda=3.33\) in our experiments, which worked well across many datasets.

**Feature permutation.** To evaluate the influence of the number of feature permutations on the performance of our spectral ensemble, we run our model on several datasets with values of \(N_{perm}=\{1,2,3,4,5\}\). In Figure 6, we present the AUC of our ensemble for \(N_{perm}>1\) relative to the performance of a single model, with no ensemble (\(N_{perm}=1\)). This heatmap indicates that our ensemble improves performance, and \(N_{perm}=3\) is sufficient to obtain a robust spectral ensemble. Therefore, we use \(N_{perm}=3\) across our experimental evaluation. Furthermore, for the spectral ensemble, we use the absolute value of \(v\) to remove the arbitrary sign of this eigenvector (Eq. 6).

Figure 5: Stability analysis for the regularization parameter \(\lambda\) that balances the likelihood and the variance loss. \(\lambda=0\) indicates that no variance loss is applied. The numbers present the ratio between the AUC and the AUC obtained without regularization (\(\lambda=0\)). This heatmap indicates the advantage of the proposed regularization for anomaly detection. Furthermore, observe the stability of the AUC for different values of \(\lambda\).

Figure 6: Stability analysis of the number of permutations.
\(N_{perm}=1\) indicates that no permutations are applied, while \(N_{perm}=5\) is the result of a spectral ensemble of \(5\) permutations. The numbers present the ratio between the AUC of a single model and that of the ensemble of \(N_{perm}\) permuted estimators.

## 5 Conclusion

We present a simple modified density estimator that is highly effective for detecting anomalous samples. Our method assumes normal samples are approximately uniformly distributed in some compact domain. This property can be measured by estimating the variance of the probability density function of normal samples. We first conduct an empirical evaluation on 52 datasets demonstrating that, in most of them, the variance of the density function around normal samples is indeed small compared with that around anomalies. Then, we implement a probabilistic normalized network to model the distribution by minimizing a regularized negative log-likelihood loss. To improve the robustness of our estimate, we present a spectral ensemble of multiple likelihood estimates, each based on a different permutation of the features. Finally, we perform an extensive benchmark demonstrating that our method pushes the performance boundary of anomaly detection.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & No Variance loss & No feature permutation & Mean ensemble & Ours \\ \hline Abalone & 95.6 & 85.7 & 90.6 & 93.7 \\ Annthyroid & 88.2 & 87.1 & 92 & 94.3 \\ Arrhythmia & 77.3 & 78.6 & 78.2 & 78.6 \\ Breastw & 94.1 & 99.2 & 98.5 & 99.3 \\ Cardio & 59.6 & 92.0 & 92.1 & 93.7 \\ Ecoli & 89.0 & 87.0 & 90.4 & 91.9 \\ Forest & 58.9 & 98.8 & 97.6 & 99.0 \\ Glass & 77.0 & 89.0 & 87.9 & 88.4 \\ Ionosphere & 96.2 & 96.4 & 96.0 & 96.4 \\ Letter & 71.4 & 94.2 & 93.6 & 95.2 \\ Lympho & 99.8 & 99.9 & 99.2 & 99.7 \\ Mammography & 87.0 & 88.0 & 86.5 & 87.9 \\ Musk & 99.7 & 100.0 & 100.0 & 100.0 \\ Optdigits & 66.3 & 88.4 & 85.5 & 87.0 \\ Pendigits & 69.2 & 99.7 & 99.4 & 99.7 \\ Pima & 70.5 & 64.8 & 65.9 & 68.2 \\ Satellite & 68.1 & 84.3 & 82.9 & 83.3 \\ Satimage-2 & 73.3 & 99.0 & 99.3 & 99.5 \\ Shuttle & 99.6 & 99.0 & 99.5 & 99.5 \\ Speech & 52.1 & 52.9 & 52.9 & 52.9 \\ Thyroid & 97.4 & 94.5 & 93 & 95.4 \\ Vertebral & 52.8 & 52.9 & 56.4 & 58.8 \\ Vowels & 72.1 & 97.8 & 98.1 & 99.0 \\ Wbc & 76.7 & 95.8 & 94.6 & 96.3 \\ Wine & 93.2 & 94.1 & 90.6 & 93.3 \\ \hline Mean & 79.4 & 88.7 & 88.8 & 90.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study. Shown are the AUC results on the AD datasets.

## 6 Limitations

Our work focuses on tabular datasets and does not explore other potential domains like image data or temporal signals; however, extending our models to these domains is straightforward. Our work also assumes that no anomalies are present during training. Thus, in case anomalies are present in the training data, schemes like [41] can be used to robustify our density estimation.
2302.12471
Cubic singularities in binary linear electromechanical oscillators
Singularities arise in diverse disciplines and play a key role in both exploring fundamental laws of physics and making highly-sensitive sensors. Higher-order (>3) singularities, with further improved performance, however, usually require exquisite tuning of multiple (>3) coupled degrees of freedom or nonlinear control, thus severely limiting their applications in practice. Here we propose theoretically and confirm using mechanics experiments that, cubic singularities can be realized in a coupled binary system without any nonlinearity, only by observing the phase tomography of the driven response. By steering the cubic phase-tomographic singularities in an electrostatically-tunable micromechanical system, enhanced cubic-root response to frequency perturbation and voltage-controlled nonreciprocity are demonstrated. Our work opens up a new phase-tomographic method for interacted-system research and sheds new light on building and engineering advanced singular devices with simple and well-controllable elements, with a wide range of applications including precision metrology, portable nonreciprocal devices, and on-chip mechanical computing.
Xin Zhou, Hui Jing, Xingjing Ren, Jianqi Zhang, Ran Huang, Zhipeng Li, Xiaopeng Sun, Xuezhong Wu, Cheng-Wei Qiu, Franco Nori, Dingbang Xiao
2023-02-24T06:01:37Z
http://arxiv.org/abs/2302.12471v1
# Cubic singularities in binary linear electromechanical oscillators

###### Abstract

Singularities arise in diverse disciplines and play a key role in both exploring fundamental laws of physics and making highly-sensitive sensors [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Higher-order (\(\geq 3\)) singularities, with further improved performance [15; 16; 17; 18; 19], however, usually require exquisite tuning of multiple (\(\geq 3\)) coupled degrees of freedom [15; 16; 17] or nonlinear control [18; 19], thus severely limiting their applications in practice. Here we propose theoretically and confirm using mechanics experiments that cubic singularities can be realized in a coupled binary system without any nonlinearity, only by observing the phase tomography of the driven response. By steering the cubic phase-tomographic singularities in an electrostatically-tunable micromechanical system, enhanced cubic-root response to frequency perturbation and voltage-controlled nonreciprocity are demonstrated. Our work opens up a new phase-tomographic method for interacted-system research, and sheds new light on building and engineering advanced singular devices with simple and well-controllable elements, with a wide range of applications including precision metrology, portable nonreciprocal devices, and on-chip mechanical computing.

Singularities, sometimes referred to as catastrophes, arise in diverse disciplines and play an essential role in describing how properties of an object that depend on certain controlling parameters change qualitatively even if the controlling parameters vary minimally [1]. The unusual landscapes near the singularities are very useful for enhancing detection sensitivities [2; 3; 4; 5; 6; 7; 8] as well as for generating nonreciprocity [7; 8; 9; 10; 11; 12; 13; 14]. Recently, higher-order singularities have increasingly attracted attention, as they have the potential to provide higher performance and engender richer physics [15; 16; 17; 18; 19]. However, what has prevented higher-order singularities from being well explored or exploited thus far is the difficulty of their practical realization and control. Usually, higher-order singularities call for multiple (\(\geq 3\)) degrees of freedom [15; 16; 17] and are very difficult to construct and adjust. Lately, it was theoretically predicted [18] and experimentally observed [19] that introducing nonlinearity into binary non-Hermitian systems may also realize higher-order singularities. These studies point out the possibility of exploring higher-dimensional (\(\geq 3\)) physics using binary systems. However, nonlinearity is still a stringent condition that brings intrinsic power-consumption and reliability limits. Micro- and nanomechanical resonators, with broad applications [20; 21; 22; 23; 24; 25], excellent in-situ controllability [26; 27; 28; 29], and rich interactive phenomena [30; 31; 32; 33; 34], provide an ideal platform for exploring and exploiting singularities. Here we demonstrate, both theoretically and experimentally, that cubic singularity arcs and a singularity nexus can be realized in a binary linear micromechanical system, by observing the phase-locked-loop (PLL) enabled tomographic dynamics of the coherent-coupling phase responses. The cubic singularity arcs are formed by a series of stability boundaries on a partially folded geometry traced out by the closed-loop oscillation frequency of a pair of coherently coupled modes.
The intersection of two singularity arcs makes a singularity nexus. Behind the singularities lies interesting polarization dynamics. In an electrically tunable micromechanical system, we experimentally observe the cubic singularities. We confirm that the singularity nexus can provide improved detection sensitivity with a cubic-root response, surpassing binary singularities. Moreover, we demonstrate a single-parameter-controlled nonreciprocity by traversing the projected singularity arcs. Using the electrostatic tuning method, the nonreciprocity is steered electrically.

## Results

_Concept:_ We consider a pair of coherently coupled mechanical modes with tunable natural frequencies \(\omega_{1,2}\) and an identical dissipation rate \(\gamma\), as shown in Fig. 1a. In this study, the coherent coupling is produced by the rotation-induced Coriolis effect [34] (see Supplementary note 2), giving a coupling strength \(g=2\kappa\Omega\) controlled by the angular velocity \(\Omega\), where \(\kappa\approx 0.85\) is the Coriolis coupling coefficient. Mode 1 is linearly driven by an external sinusoidal force with frequency \(\omega_{\text{d}}\), while mode 2 is set free. The linear displacement responses of the two modes are written as \(q_{1,2}=|q_{1,2}|\cos(\omega_{\text{d}}t+\theta_{1,2})\), where \(|q_{1,2}|\) (\(\theta_{1,2}\)) are the amplitude (phase) responses. If the two modes are degenerate (\(\Delta\omega\equiv\omega_{2}-\omega_{1}=0\)), the open-loop amplitude-frequency response \(|q_{1}|\) of mode 1 as a function of the coupling strength \(g\) displays normal-mode splitting [26] (see Supplementary Fig. 4 or 5). The accompanying phase response \(\theta_{1}\) of mode 1 is shown by the colored surface in Fig. 1b. Here, we consider the tomography of the driven-mode phase response at a constant oscillation phase, which is \(-\pi/2\) for ideal oscillators. The phase tomography is realized by imposing a PLL, which produces a stable closed-loop oscillation (Fig. 1a). As shown by the red contour in Fig. 1b, the controlled closed-loop frequency (denoted by \(\omega_{\text{d}}^{*}\), which fulfills the phase-tomographic condition \(\theta_{1}=-\pi/2\)) as a function of coupling strength shows a "pitchfork" bifurcation. It is noteworthy that this bifurcation is only related to the landscape of the linear phase response \(\theta_{1}\), and is different from its counterparts in nonlinear dynamics [35]. The degenerate bifurcation point is exactly the threshold between weak and strong coupling, \(g=\gamma\). The middle branch of the bifurcation (dotted curves) is unstable because the corresponding amplitude response is in an antiresonance valley (see Supplementary Fig. 4 or 5), which is unlikely to be detected by the PLL. The other two branches (solid curves) are stable because the corresponding amplitude responses are close to the resonant peaks. The "pitchfork" bifurcation of \(\omega_{\rm d}^{*}\) changes as the degeneracy condition \(\Delta\omega\) varies. As a function of the coupling strength \(g\) and degeneracy condition \(\Delta\omega\), \(\omega_{\rm d}^{*}\) makes a 3D surface (Fig. 1c), which is described by the cubic equation (see Supplementary note 3) \[(\omega_{\rm d}^{*}-\omega_{1})(\omega_{\rm d}^{*}-\omega_{2}+\frac{{\rm i}}{ 2}\gamma)(\omega_{\rm d}^{*}-\omega_{2}-\frac{{\rm i}}{2}\gamma)-\frac{1}{4}g ^{2}(\omega_{\rm d}^{*}-\omega_{2})=0. \tag{1}\] The inflectional part of the folded \(\omega_{\rm d}^{*}\) surface is unstable.
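Since Eq. (1) is a cubic in \(\omega_{\rm d}^{*}\) with real coefficients, the branch structure can be explored numerically. A minimal sketch (the function name and unit conventions are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import Polynomial

def closed_loop_frequencies(w1, w2, gamma, g):
    # Expand Eq. (1) with x = omega_d*:
    # (x - w1) * ((x - w2)^2 + gamma^2/4) - (g^2/4) * (x - w2) = 0.
    # Real roots correspond to the bifurcation branches.
    xm_w1 = Polynomial([-w1, 1.0])
    xm_w2 = Polynomial([-w2, 1.0])
    p = xm_w1 * (xm_w2**2 + gamma**2 / 4.0) - (g**2 / 4.0) * xm_w2
    roots = p.roots()
    return np.sort(roots[np.isclose(roots.imag, 0.0)].real)
```

At degeneracy (\(\omega_{1}=\omega_{2}\)), Eq. (1) factorizes into \(x=\omega_{1}\) and \(x=\omega_{1}\pm\frac{1}{2}\sqrt{g^{2}-\gamma^{2}}\), so three real branches appear exactly above the threshold \(g=\gamma\), matching the "pitchfork" in Fig. 1b.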
If the stability boundaries (magenta curves) are crossed by changing the control parameters \(g\) and \(\Delta\omega\) in an adiabatic manner, catastrophic jumps of the oscillation state take place. The stability boundaries constitute singularity arcs. The degenerate bifurcation point (white star) is actually a singularity nexus that connects two singularity arcs, giving a severely twisted \(\omega_{\rm d}^{*}\) geometry. The singularity arcs and nexus, determined by the discriminant of the cubic Eq. (1), construct cubic cusp catastrophes, whose projection onto the \(\Delta\omega\)-\(g\) parameter plane forms two cusp-connected parabola loci [1; 35] (see Supplementary note 4). The phase-tomographic closed-loop oscillation of the system is described by a state vector \(|\psi\rangle=\cos\frac{\phi}{2}|1\rangle+{\rm e}^{{\rm i}\vartheta}\sin\frac{ \phi}{2}|2\rangle\), where \(\{|1\rangle,|2\rangle\}\) is the orthonormal basis of modes 1 and 2, \(\phi=2\arctan(|q_{2}|/|q_{1}|)\) is the polar angle, and \(\vartheta\equiv\theta_{2}-\theta_{1}\) is the relative phase between the coupled modes, which is revealed by the colors of the surface in Fig. 1c. The state vectors can be projected onto a classical Bloch sphere with polar and azimuthal angles given by \(\phi\) and \(\vartheta\), respectively (see Supplementary note 5). The red trajectories on the Bloch sphere in Fig. 1d illustrate the dynamics that underlies the degenerate "pitchfork" bifurcation in Fig. 1b. The arrows indicate the \(g\)-increasing direction. There is a corresponding polarization pattern in the \(q_{1}\)-\(q_{2}\) plane for every state vector on the Bloch sphere. At the initial state \(|1\rangle\) without coupling, \(q_{2}=0\) makes a horizontal linear polarization. When \(\Omega\rightarrow\infty\), the states corresponding to the upper or lower stable branches of \(\omega_{\text{d}}^{*}\) approach the counter-clockwise circular polarization state \(|\text{CCW}\rangle=(|1\rangle-i|2\rangle)/\sqrt{2}\), or the clockwise circular polarization state \(|\text{CW}\rangle=(|1\rangle+i|2\rangle)/\sqrt{2}\), respectively, while the state corresponding to the middle unstable branch approaches \(|2\rangle\) with vertical linear polarization, because the antiresonance valley makes \(q_{1}\to 0\). At the singularity nexus, \(|q_{1}|=|q_{2}|\) and \(\vartheta=0\) make a \(45^{\circ}\) linear polarization.

Figure 1: **Cubic singularity in phase-tomographic dynamics: Concept.** **a** Setup for the phase-tomographic singularity. Mode 1 is driven and coherently coupled to mode 2. A PLL is used to obtain a phase-tomographic closed-loop oscillation. In this study, coherent coupling is produced by rotation \(\vec{\Omega}\), giving a coupling strength of \(g=2\kappa\Omega\) with \(\kappa=0.85\). **b** Open-loop phase-frequency response (\(\theta_{1}\), colored surface) of the driven mode 1 as a function of coupling strength. The PLL adjusts the drive frequency to track the phase \(\theta_{1}=-\pi/2\). The phase tomography shows a "pitchfork" bifurcation. **c** \(\omega_{\rm d}^{*}\) as a function of degeneracy condition \(\Delta\omega\) and coupling strength \(g\). Singularity arcs connected by a singularity nexus are formed by the stability boundaries. Colors on the surface represent the relative phase of the coupled modes. **d** Polarization dynamics of the balanced "pitchfork" bifurcation on the classical Bloch sphere. **e** Stabilities and singularities on the Bloch sphere.
The singularity arcs are marked by the magenta curves on the Bloch sphere in Fig. 1e. The unstable, bistable, and monostable regions on the Bloch sphere are filled with light red, light green, and grey, respectively. The front (back) hemisphere corresponds to oscillations with positive (negative) angular velocities. The binary polarization dynamics is a projection of a cubic singular dynamics.

_Realization:_ To experimentally demonstrate the cubic singularities, we realize the scheme in Fig. 1a using a capacitive microelectromechanical system [28] (see Supplementary note 1). As shown in Fig. 2a, two micromechanical modes with near-degenerate natural frequencies \(\omega_{1,2}/2\pi\approx 3.85\) kHz and equal dissipation rates \(\gamma=2\pi\times 55.8\) mHz are coherently coupled by the Coriolis effect. The degeneracy condition \(\Delta\omega\) can be adjusted electrostatically by a tuning voltage \(V_{\text{t}}\), as shown in Fig. 2b (see Methods). Mode 1 is driven externally, and the linear displacement responses of the two modes are detected using the homodyne method (see Methods). A stable \(\theta_{1}=-\pi/2\) phase-tomographic closed-loop oscillation is maintained by applying a PLL to mode 1. First, we adjust \(V_{\text{t}}\) to make \(\Delta\omega\approx 0\) and adiabatically sweep \(\Omega\) from zero to \(40^{\circ}/\text{s}\) to observe the singularity nexus. The calculated and measured phase-tomographic closed-loop frequency \(\omega_{\text{d}}^{*}\) is shown in Fig. 2c. If the angular velocity is below the strong-coupling threshold, \(\Omega<\gamma/(2\kappa)\), \(\omega_{\text{d}}^{*}\) is latched to \(\omega_{1}\). At the threshold (singularity nexus), \(\Omega_{0}=\gamma/(2\kappa)=11.83^{\circ}/\text{s}\), the \(\omega_{\text{d}}^{*}\) transfers to one of the two stable bifurcation branches randomly, revealing a spontaneous breaking of chiral symmetry. To better describe the chirality, we re-expand the state vector in the \(\{|\text{CCW}\rangle,|\text{CW}\rangle\}\) basis, \(|\psi\rangle=c_{\text{ccw}}|\text{CCW}\rangle+c_{\text{cw}}|\text{CW}\rangle\), and define the relative population of the \(|\text{CCW}\rangle\) and \(|\text{CW}\rangle\) states as the order parameter \(\mathcal{N}=(|c_{\text{ccw}}|^{2}-|c_{\text{cw}}|^{2})/(|c_{\text{ccw}}|^{2}+|c_{\text{cw}}|^{2})=-\sin\phi\sin\vartheta\) (see Supplementary note 6). The order-parameter evolution in the symmetry-breaking process of Fig. 2c is plotted in Fig. 2d, showing a second-order phase transition. Below the singularity nexus, the oscillation is in the linear-polarization phase. At the nexus, the oscillation transfers to one of the degenerate \(|\text{CCW}\rangle\)- and \(|\text{CW}\rangle\)-dominant phases randomly. The oscillation frequency of the \(|\text{CCW}\rangle\) (\(|\text{CW}\rangle\)) dominant phase increases (decreases) if \(\Omega\) is further increased from \(\Omega_{0}\), because of the rotational Doppler effect [36; 37]. The experimental data in Fig. 2c and d indicate that we experimentally observed a linear-polarization-to-\(|\text{CCW}\rangle\) phase transition. Next, we change \(\Delta\omega\) by tailoring \(V_{\text{t}}\) adiabatically while keeping \(\Omega\) unchanged at specific values, to observe the singularity arcs.

Figure 2: **Experimental realization of the phase-tomographic singularity.** **a** Micromechanical resonator with degenerate modes that are coherently coupled by the Coriolis effect. **b** Mode natural frequencies \(\omega_{1,2}\) and their difference \(\Delta\omega\) vs tuning voltage \(V_{\text{t}}\).
**c** Spontaneous chiral symmetry breaking and **d** the corresponding order parameter \(\mathcal{N}\) illustrating a second-order phase transition. Error bars are the standard deviation. **e** Measured \(\omega_{\text{d}}^{*}\) if the tuning voltage \(V_{\text{t}}\) is swept adiabatically at constant angular velocities. The blue dashed (red solid) curves indicate the upward (downward) sweeps. Singularities and hysteresis are shown above \(\Omega_{0}\). The gray surface is a simulation. **f** Cubic singularities projected onto the \(V_{\text{t}}\)-\(\Omega\) plane. White-faced points (black curves) are experimental (theoretical) data. The red and blue curves are the corresponding \(\mathcal{N}\) data in **e**, revealing first-order phase transitions.

The measured \(\omega_{\rm d}^{*}\) is shown in Fig. 2e. The blue dashed (red solid) curves illustrate the experimental results for upward (downward) \(V_{\rm t}\) sweeps. If \(\Omega>\Omega_{0}\), the sweeping curves encounter discontinuous jumps at specific values of \(V_{\rm t}\), referred to as catastrophes or singularities [1]. The upward and downward curves at identical \(\Omega\) form a hysteresis loop. The area of the hysteresis loop decreases when reducing \(\Omega\), and vanishes if \(\Omega\leq\Omega_{0}\). The experimentally detected singularities, mapped to the \(V_{\rm t}\)-\(\Omega\) parameter plane, trace out the cusp-connected parabola loci predicted in Fig. 1c, as shown in Fig. 2f (see Supplementary note 4). The order parameters \(\mathcal{N}\) corresponding to the upward (downward) sweeps in Fig. 2e are shown by the blue dashed (red solid) curves in Fig. 2f, which indicate first-order transitions from the \(|\text{CCW}\rangle\) (\(|\text{CW}\rangle\)) dominant phase to the \(|\text{CW}\rangle\) (\(|\text{CCW}\rangle\)) dominant phase.

_Cubic-root sensitivity:_ We now demonstrate the enhanced cubic-root sensitivity of the singularity nexus. As shown in Fig. 3a, at the nexus (\(\Omega=\Omega_{0}\) and \(\Delta\omega=0\)), the closed-loop oscillation frequency \(\omega_{\rm d}^{*}(\Omega_{0})\) is latched to the driven-mode natural frequency \(\omega_{1}\). Otherwise, if the degeneracy is broken, \(\Delta\omega\neq 0\), \(\omega_{\rm d}^{*}(\Omega_{0})\) suddenly but continuously deviates from \(\omega_{1}\). The deviation \(\delta\omega_{\rm X}=\omega_{\rm d}^{*}(\Omega_{0})-\omega_{1}\) changes sharply as \(\Delta\omega\) shifts away from the nexus (Fig. 3b). Here, we consider a perturbation \(\epsilon\) that affects the degeneracy condition, \(\epsilon\sim\Delta\omega\), and regard \(\delta\omega_{\rm X}\) in the vicinity of the nexus as the sensing output for \(\epsilon\), as shown by the red curve in Fig. 3c (see Supplementary note 7). When plotted on a logarithmic scale, it gives a cubic-root response near the nexus, \(\delta\omega_{\rm X}\sim\epsilon^{1/3}\) (Fig. 3d), confirming the cubic nature of the singularity nexus. To experimentally verify the cubic-root behavior, we maintain an \(\Omega_{0}\) rotation and introduce a fine-tuning voltage \(V_{\rm t}\) to sweep across the nexus. By transforming \(V_{\rm t}\) to \(\epsilon\), the experimental input-output data are shown by the red circles in Fig. 3c and d, which coincide well with the cubic-root simulation.
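The polarization quantities used above (the Bloch angles and the order parameter \(\mathcal{N}=-\sin\phi\sin\vartheta\)) follow directly from the two demodulated mode responses. A minimal sketch, assuming `q1` and `q2` are the complex phasors \(|q_{j}|e^{i\theta_{j}}\) of the two modes:

```python
import numpy as np

def order_parameter(q1, q2):
    # Bloch-sphere angles from the demodulated phasors of modes 1 and 2:
    # polar angle phi = 2*arctan(|q2|/|q1|), relative phase theta2 - theta1.
    phi = 2.0 * np.arctan2(np.abs(q2), np.abs(q1))
    vartheta = np.angle(q2) - np.angle(q1)
    # Relative CCW/CW population, N = -sin(phi) * sin(vartheta).
    return -np.sin(phi) * np.sin(vartheta)
```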
We compare the sensitivities produced by the singularity nexus and a binary exceptional-point (EP) singularity that is generated by a passive parity-time-symmetric system whose damping difference is chosen to be equal to the dissipation of our system (see Supplementary note 7). The blue dashed curves in Fig. 3c and d demonstrate an \(\epsilon^{1/2}\) dependency of the eigenfrequency split \(\delta\omega_{\rm EP}\) near the EP. The sensitivity produced by the singularity nexus is greater than that of the binary EP [2; 3; 4; 5] and is on par with that of the third-order EP [15]. When compared to the standard output \(\delta\omega_{\rm DP}\sim\epsilon\) of the diabolic-point (DP) system, shown by the black dot-dashed curves in Fig. 3c and d, both \(\delta\omega_{\rm EP}\) and \(\delta\omega_{\rm X}\) are much improved.

Figure 3: **High sensitivity near the singularity nexus.** **a** Phase-tomographic frequency \(\omega_{\rm d}^{*}\) as a function of angular velocity \(\Omega\) and degeneracy condition \(\Delta\omega\). The contours of \(\Omega=\Omega_{0}\) (blue curve) and \(\Delta\omega=0\) (green curves) portray the sharp variation of \(\omega_{\rm d}^{*}\) at the singularity nexus. **b** Phase-tomographic frequency at the nexus angular velocity, \(\omega_{\rm d}^{*}(\Omega_{0})\), and its shift from \(\omega_{1}\), \(\delta\omega_{\rm X}=\omega_{\rm d}^{*}(\Omega_{0})-\omega_{1}\), as functions of \(\Delta\omega\). \(\omega_{0}\) denotes \(\omega_{1}\) at \(\Delta\omega=0\). In the range \(-0.25\gamma\leq\Delta\omega\leq 0.25\gamma\), \(\delta\omega_{\rm X}\) decreases monotonically with \(\Delta\omega\). **c** Frequency output \(\delta\omega_{\rm X}\) near the singularity nexus versus the natural-frequency perturbation \(\epsilon=\Delta\omega\) from simulation (red solid curve) and experiment (points). Eigenfrequency splits near an EP (blue dashed curve) and a DP (black dot-dashed curve) are simulated as well. Error bars are the standard deviation. **d** Logarithmic plot of the absolute values of the data in **c**. The singularity nexus has a cubic-root output, which provides higher sensitivity than the EP and DP.

_Voltage-controlled nonreciprocity:_ Lastly, we show that the phase-tomographic cubic singularity can produce a voltage-controlled nonreciprocity. We consider a closed 1D trajectory along the tuning-voltage \(V_{\rm t}\) direction in the parameter plane, which starts and ends in the bistable region and crosses both projected singularity loci. If the trajectory is traversed in the down-up-down (up-down-up) direction, as shown in Fig. 4a (b), the system ends at the low (high) branch regardless of the branch from which it starts, as shown in Fig. 4c (d). Even if two processes start at an identical location, opposite traversal directions lead to different ending branches, as shown in Fig. 4c or d. The ending location depends only on the traversal direction, not on the starting place. In the experiments shown in Fig. 4, the tuning voltage \(V_{\text{t}}\) is the only steering knob of the nonreciprocal state transfer, while the angular velocity is set to a constant value, \(\Omega=60^{\circ}/\text{s}\). In fact, the \(\Omega\) value of the \(V_{\text{t}}\)-controlled nonreciprocal process can be changed almost arbitrarily, as long as the start/end points are located in the bistable region and both projected singularity loci are crossed. This single-parameter-controlled nonreciprocity is more desirable than that of the binary EP singularity, which is guaranteed by two-parameter encircling [9; 10; 11; 12; 13; 14; 38].
Benefiting from the electrostatic tunability of our device, the phase-tomographic cubic singularity provides a voltage-controlled nonreciprocity.

Figure 4: **Voltage-controlled nonreciprocity.** A closed 1D parameter trajectory that crosses both projected singularity loci in the down-up-down (**a**) and up-down-up (**b**) \(V_{\text{t}}\) traversing processes. The numbers indicate the traversing order. **c** Adiabatic evolutions starting at the high (upper panel) or low (lower panel) bistable branch following the down-up-down \(V_{\text{t}}\) traversing process. **d** Same as **c** for the up-down-up traversing process. Two processes that start at an identical location and travel the same path in opposite directions will reach distinct destinations. The nonreciprocity is ensured by crossing both sides of the projected singularity loci.

## Discussion

Although we experimentally demonstrate the phase-tomographic closed-loop cubic singularity based on the Coriolis coupling, in principle, it can also be realized using ordinary linear coherent coupling (see Supplementary discussion). This study may open up a new tomographic dimension for studying phase-related interactive dynamics, which is applicable to a wide range of disciplines such as optics, optomechanics, or hybrid quantum systems. Our discovery makes it possible to construct advanced singularities using highly controllable elements. It also enhances the understanding of the closed-loop oscillation dynamics, and extends coherent control into the singularity region. Potential applications of the closed-loop singularity include precise sensing, deep-sub-linewidth mode matching, rapid mode switching, and generating portable nonreciprocity. Moreover, the hysteresis in the closed-loop oscillation is promising for mechanical computing [39]. The bit abstraction of the closed-loop oscillations is independent of vibration amplitudes, which may provide potential advantages in power consumption and lifetime. Future studies can also investigate phase-tomographic singularities originating from different kinds of coupling, the interplay with other kinds of singularities, and the phase-tomographic dynamics in many-body systems with more degrees of freedom [40; 17; 41].

## Methods

_Electrostatic frequency tuning:_ The tuning voltage \(V_{\text{t}}\) introduces electrostatic negative stiffness to both modes 1 and 2, \(\omega_{1,2}^{2}(V_{\text{t}})={\omega^{\prime}}_{1,2}^{2}(0)-T_{1,2}(V_{0}-V_{\text{t}})^{2}\), where \(\omega^{\prime}_{1,2}(0)\) denotes the natural frequencies of the bare mechanical modes. The electrostatic tuning factors \(T_{1,2}\) are proportional to the capacitive area, the inverse of the modal mass, and the inverse of the cube of the capacitive gap. The stiffness perturbation induced by \(V_{0}\) exists even if the tuning voltage \(V_{\text{t}}\) is absent, so it can be included in the intrinsic natural frequencies. By defining \(\omega_{1,2}^{2}(0)={\omega^{\prime}}_{1,2}^{2}(0)-T_{1,2}V_{0}^{2}\), and assuming that the electrostatic stiffness perturbation is small relative to the intrinsic stiffness, we have \[\omega_{1,2}(V_{\text{t}}) =\sqrt{\omega_{1,2}^{2}(0)+T_{1,2}(2V_{0}V_{\text{t}}-V_{\text{t}}^{2})}\] \[\approx\omega_{1,2}(0)+K_{1,2}(2V_{0}V_{\text{t}}-V_{\text{t}}^{2}), \tag{2}\] where the tuning coefficients are defined by \(K_{1,2}=T_{1,2}/[2\omega_{1,2}(0)]\).
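As a minimal numerical illustration of the tuning model in Eqs. (2) and (3), the Python sketch below evaluates \(\omega_{1,2}(V_{\text{t}})\) and \(\Delta\omega(V_{\text{t}})\) using the fitted parameter values quoted in the next paragraph; the voltage grid and the helper names are our own choices, not part of the original analysis.

```python
import numpy as np

# Fitted parameters quoted below (inset of Fig. 2b); angular frequencies in rad/s.
omega1_0 = 2 * np.pi * 3852.92           # omega_1(0)
omega2_0 = 2 * np.pi * 3856.43           # omega_2(0)
K1, K2 = 1.29e-2, 6.40e-2                # tuning coefficients, rad s^-1 V^-2
V0 = 2.5                                 # bias voltage, V

def omega(Vt, omega0, K):
    """Tuned natural frequency, Eq. (2): omega(Vt) = omega(0) + K (2 V0 Vt - Vt^2)."""
    return omega0 + K * (2 * V0 * Vt - Vt**2)

def delta_omega(Vt):
    """Degeneracy condition, Eq. (3): Delta omega(Vt) = omega_2(Vt) - omega_1(Vt)."""
    return omega(Vt, omega2_0, K2) - omega(Vt, omega1_0, K1)

# With these fitted values, Delta omega changes sign within a few tens of volts.
for Vt in np.linspace(0.0, 25.0, 6):
    print(f"V_t = {Vt:5.1f} V  ->  Delta omega / 2 pi = "
          f"{delta_omega(Vt) / (2 * np.pi):+.3f} Hz")
```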
The experimentally measured natural frequencies \(\omega_{1,2}\) as functions of \(V_{\text{t}}\) are shown by the red and blue points in the inset of Fig. 2b, which are fitted (curves) to the model (2) with parameters \(\omega_{1}(0)=2\pi\times 3,852.92\) Hz, \(\omega_{2}(0)=2\pi\times 3,856.43\) Hz, and \(V_{0}=2.5\) V. The tuning coefficients are fitted to be \(K_{1}=1.29\times 10^{-2}\) rad s\({}^{-1}\) V\({}^{-2}\) and \(K_{2}=6.40\times 10^{-2}\) rad s\({}^{-1}\) V\({}^{-2}\). The relationship between the difference of natural frequencies (degeneracy condition) \(\Delta\omega=\omega_{2}-\omega_{1}\) and the tuning voltage \(V_{\text{t}}\) is further given by \[\Delta\omega(V_{\text{t}})=\Delta\omega(0)+(K_{2}-K_{1})(2V_{0}V_{\text{t}}-V_{\text{t}}^{2}), \tag{3}\] where \(\Delta\omega(0)=\omega_{2}(0)-\omega_{1}(0)\). The experimentally measured \(\Delta\omega\) data (green points in Fig. 2b) coincide well with the tuning model (3) (green curve in Fig. 2b).

_Homodyne measurement:_ The antinodal displacements of the two micromechanical modes \(q_{1,2}=|q_{1,2}|\cos(\omega_{\text{d}}t+\theta_{1,2})\) are picked up by capacitive transducers, transformed into voltage signals by two charge amplifiers integrated into a printed circuit board, and then recorded by a two-channel lock-in amplifier (Zurich Instruments HF2LI). The amplitudes \(|q_{1,2}|\) and phases \(\theta_{1,2}\) relative to the driving signal are obtained by dual-phase demodulation techniques. In this process, \(q_{j}(\omega_{\text{d}},t)\) is split and separately mixed with the driving reference signal \(\cos\omega_{\text{d}}t\) and a \(\pi/2\)-phase-shifted copy of it, \[|q_{j}|\cos(\omega_{\text{d}}t+\theta_{j})\times\cos\omega_{\text{d}}t = \frac{|q_{j}|}{2}\left[\cos(2\omega_{\text{d}}t+\theta_{j})+\cos(\theta_{j})\right],\] \[|q_{j}|\cos(\omega_{\text{d}}t+\theta_{j})\times\cos\left(\omega_{\text{d}}t+\frac{\pi}{2}\right) = \frac{|q_{j}|}{2}\left[-\sin(2\omega_{\text{d}}t+\theta_{j})+\sin(\theta_{j})\right].\] The high-harmonic components of the mixed signals are removed using low-pass filters, and the remaining in-phase component \(X_{j}=\frac{|q_{j}|}{2}\cos(\theta_{j})\) and quadrature component \(Y_{j}=\frac{|q_{j}|}{2}\sin(\theta_{j})\) are obtained. Transforming to polar coordinates, we obtain the amplitude and phase, \[|q_{j}| =2\sqrt{X_{j}^{2}+Y_{j}^{2}},\] \[\theta_{j} =\arctan\frac{Y_{j}}{X_{j}}.\]

## Data availability

Data relevant to the figures and conclusions of this manuscript are available at [https://doi.org/10.6084/m9.figshare.19609350](https://doi.org/10.6084/m9.figshare.19609350).
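To make the dual-phase demodulation described in Methods concrete, the following self-contained sketch demodulates a synthetic single-tone signal; the sampling rate, drive frequency, amplitude, and the crude time-average low-pass filter are illustrative assumptions, not the experimental settings.

```python
import numpy as np

fs, fd = 200_000.0, 3_854.0           # sampling rate and drive frequency (Hz); illustrative
amp, phase = 1.3e-9, 0.7              # synthetic |q_j| and theta_j
t = np.arange(0.0, 0.5, 1.0 / fs)
q = amp * np.cos(2 * np.pi * fd * t + phase)          # q_j(t)

# Mix with the driving reference and a pi/2-phase-shifted copy of it.
mix_I = q * np.cos(2 * np.pi * fd * t)
mix_Q = q * np.cos(2 * np.pi * fd * t + np.pi / 2)

# A plain time average stands in for the low-pass filter: it removes the
# 2*omega_d terms, leaving X_j = |q_j|/2 cos(theta_j), Y_j = |q_j|/2 sin(theta_j).
X, Y = mix_I.mean(), mix_Q.mean()

print(2 * np.sqrt(X**2 + Y**2))       # recovered |q_j|, ~ 1.3e-9
print(np.arctan2(Y, X))               # recovered theta_j, ~ 0.7
```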
2303.05224
EFT, decoupling, Higgs boson mixing, and higher dimensional operators
The effective field theory (EFT) framework is a precise approximation procedure when the inherent assumptions of a large-scale separation between the Standard Model (SM) and new interactions alongside perturbativity are realised. Constraints from available data might not automatically guarantee these circumstances when contrasted with UV scenarios that the EFT analysis wishes to inform. From an EFT perspective, achieving sufficient precision in navigating the alignment or decoupling limits of beyond-the-SM scenarios can necessitate moving beyond the SM's leading, dimension six EFT deformation. Using the example of Higgs boson mixing, we demonstrate the importance of higher-dimensional terms in the EFT expansion. We analyse the relevance of virtual EFT corrections and dimension eight contributions for well-determined electroweak precision observables. We find that when moving away from the decoupling limit, additional terms in the EFT expansion quickly become relevant. This demonstrates the necessity of moving beyond dimension six interactions for any scenario that contains Higgs boson mixing.
Upalaparna Banerjee, Joydeep Chakrabortty, Christoph Englert, Wrishik Naskar, Shakeel Ur Rahaman, Michael Spannowsky
2023-03-09T13:01:34Z
http://arxiv.org/abs/2303.05224v2
# EFT, Decoupling, Higgs Mixing and All That Jazz

###### Abstract

The effective field theory (EFT) framework is a precise approximation procedure when the inherent assumptions of a large-scale separation between the Standard Model (SM) and new interactions alongside perturbativity are realised. Constraints from available data might not automatically guarantee these circumstances when contrasted with UV scenarios that the EFT analysis wishes to inform. From an EFT perspective, achieving sufficient precision in navigating the alignment or decoupling limits of beyond-the-SM scenarios can necessitate moving beyond the SM's leading, dimension six EFT deformation. Using the example of Higgs boson mixing, we demonstrate the importance of higher-dimensional terms in the EFT expansion. We analyse the relevance of virtual EFT corrections and dimension eight contributions for well-determined electroweak precision observables. We find that when moving away from the decoupling limit, additional terms in the EFT expansion quickly become relevant. This demonstrates the necessity of moving beyond dimension six interactions for any scenario that contains Higgs boson mixing.

## 1 Introduction

Effective field theory (EFT) [1] is a formidable tool for communicating sensitivity to beyond-the-Standard-Model (BSM) physics in times when particle physics data seemingly points towards a large-scale separation of new states relative to the Standard Model (SM) degrees of freedom. The extension of the SM by effective interactions relevant to the high-energy frontier of, e.g., the Large Hadron Collider (LHC), i.e. the Standard Model Effective Field Theory (SMEFT) at dimension six [2], has received a lot of theoretical attention and improvement alongside its application in a series of experimental investigations. Matching calculations [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] that coarse-grain ultra-violet (UV) BSM scenarios into EFT provide the technical framework to marry concrete scenarios of new interactions with the generic EFT analysis of particle data. The latter is typically plagued with considerable uncertainties, both experimentally and theoretically. Even optimistic extrapolations of specific processes to the LHC's high luminosity (HL) phase can imply a significant tension with the intrinsic viability criteria that underpin the EFT limit setting when trying to inform the UV scenarios' parameter spaces: EFT cut-offs need to be lowered into domains that can be directly resolved at the LHC. This can be at odds with the perturbativity of the obtained constraints (and hence limits the reliability of the fixed-order matching). The obvious way out of this conundrum is to include higher-dimensional terms in the EFT expansion. Dimension eight interactions have increasingly moved into the focus of the theory community [15; 16; 17]. From a practical point of view, this prompts the question of when we can be confident about reaching the point where phenomenologically-minded practitioners can stop. Unfortunately, an answer to this question is as process- and model-dependent as matching a UV-ignorant EFT to a concrete UV scenario. Therefore, the phenomenological task is developing theory-guided intuition using representative scenarios that transparently capture key issues. The purpose of this note is to contribute to this evolving discussion using (custodial iso-singlet) Higgs boson mixing as an example.
This scenario has seen much attention from the EFT perspective as the number of degrees of freedom and free parameters is relatively small, thus enabling a transparent connection of EFT and UV theory beyond the leading order of the EFT approach (see, e.g., Refs. [3; 18; 19]). Higgs mixing also arises in many BSM theories. We focus on electroweak precision observables as these are well-constrained by collider data, thus enabling us to navigate cut-offs and Wilson coefficients of the effective theory under experimental circumstances where precise predictions and matching are very relevant. This work is structured as follows: In Sec. 2, we first discuss the oblique corrections and their relation to the polarisation functions to make this work self-contained; Sec. 2.1 gives a quick discussion of the oblique corrections in the singlet scenario (see also [20; 21; 22; 23; 24; 25]) with formulae provided in the appendix. We then focus on the oblique parameters for this case in dimension six and dimension eight SMEFT in Sec. 2.2. We detail the comparison in Sec. 3 with a view towards perturbative unitarity. Finally, we provide conclusions in Sec. 4.

## 2 Electroweak Precision Observables

Extensions of the SM with modified Higgs sectors can be constrained through electroweak precision measurements. A famous subset of these, which was instrumental in discovering the Higgs boson, are the so-called oblique corrections parametrized by the Peskin-Takeuchi parameters [26] (see also [27]). These \(\widehat{S},\widehat{T},\widehat{U}\) are chiefly extracted from Drell-Yan-like production during the LEP era using global fits, e.g., Refs. [28; 29]. We define the off-shell two-point gauge boson functions for the SM gauge bosons as \[\Pi^{\mu\nu}_{VV^{\prime}}(p^{2})=\Pi_{VV^{\prime}}(p^{2})g^{\mu\nu}+\Sigma_{VV^{\prime}}(p^{2})p^{\mu}p^{\nu}, \tag{1}\] with \(V,V^{\prime}=W,Z,\gamma\). The Peskin-Takeuchi parameters can then be written as \[\alpha\widehat{S}= \left(\frac{4s_{W}^{2}c_{W}^{2}}{M_{Z}^{2}}\right)\left[\Pi_{ZZ}(M_{Z}^{2})-\Pi_{ZZ}(0)-\Pi_{\gamma\gamma}(M_{Z}^{2})-\frac{c_{W}^{2}-s_{W}^{2}}{c_{W}s_{W}}\left(\Pi_{\gamma Z}(M_{Z}^{2})-\Pi_{\gamma Z}(0)\right)\right],\] \[\alpha\widehat{T}= \frac{\Pi_{WW}(0)}{M_{W}^{2}}-\frac{\Pi_{ZZ}(0)}{M_{Z}^{2}}-\frac{2s_{W}}{c_{W}}\frac{\Pi_{\gamma Z}(0)}{M_{Z}^{2}},\] \[\alpha\widehat{U}= 4s_{W}^{2}\left[\frac{\Pi_{WW}(M_{W}^{2})-\Pi_{WW}(0)}{M_{W}^{2}}-c_{W}^{2}\left(\frac{\Pi_{ZZ}(M_{Z}^{2})-\Pi_{ZZ}(0)}{M_{Z}^{2}}\right)\right.\] \[\left.-2s_{W}c_{W}\left(\frac{\Pi_{\gamma Z}(M_{Z}^{2})-\Pi_{\gamma Z}(0)}{M_{Z}^{2}}\right)-s_{W}^{2}\frac{\Pi_{\gamma\gamma}(M_{Z}^{2})}{M_{Z}^{2}}\right]\,. \tag{2}\] \(s_{W},c_{W}\) denote the sine and cosine of the Weinberg angle, \(\alpha\) is the fine structure constant, and \(M_{i}\) stands for the gauge boson masses.1 Constraints on new physics can then be formulated by examining the difference of these parameters from the best SM fit point. To draw a comparison between the full theory and the EFT, we restrict our analysis to one-loop order. In the next subsection, we provide the contributions to the oblique parameters from the full theory.

Footnote 1: Note that we employ the normalization of Peskin and Takeuchi, although the hatted quantities are typically defined in the normalization of [30]. This is to avoid confusion between the oblique corrections and the singlet field introduced below.
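To make these definitions concrete, the short Python sketch below (our own illustration; the dictionary keys and toy inputs are assumptions) evaluates \(\widehat{S}\), \(\widehat{T}\) and \(\widehat{U}\) from user-supplied polarisation functions \(\Pi_{VV^{\prime}}(p^{2})\).

```python
import numpy as np

MZ, MW = 91.19, 80.38        # gauge boson masses (GeV); illustrative inputs
sw2 = 0.231                  # sin^2 of the Weinberg angle
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
alpha = 1.0 / 128.0          # fine structure constant near the EW scale

def oblique(Pi):
    """Hatted S, T, U from polarisation functions, as defined in Eq. (2).

    Pi maps the keys 'WW', 'ZZ', 'yy' (photon-photon) and 'yZ' to callables
    Pi_VV'(p^2); all momenta and two-point functions are in GeV^2."""
    S = (4 * sw2 * cw**2 / MZ**2) / alpha * (
        Pi['ZZ'](MZ**2) - Pi['ZZ'](0.0) - Pi['yy'](MZ**2)
        - (cw**2 - sw2) / (cw * sw) * (Pi['yZ'](MZ**2) - Pi['yZ'](0.0)))
    T = (Pi['WW'](0.0) / MW**2 - Pi['ZZ'](0.0) / MZ**2
         - 2 * sw / cw * Pi['yZ'](0.0) / MZ**2) / alpha
    U = 4 * sw2 / alpha * (
        (Pi['WW'](MW**2) - Pi['WW'](0.0)) / MW**2
        - cw**2 * (Pi['ZZ'](MZ**2) - Pi['ZZ'](0.0)) / MZ**2
        - 2 * sw * cw * (Pi['yZ'](MZ**2) - Pi['yZ'](0.0)) / MZ**2
        - sw2 * Pi['yy'](MZ**2) / MZ**2)
    return S, T, U

# Sanity check: vanishing polarisation functions give S = T = U = 0.
zero = {k: (lambda p2: 0.0) for k in ('WW', 'ZZ', 'yy', 'yZ')}
print(oblique(zero))
```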
### SM extended by a real singlet scalar

The most general scalar potential for the SM Higgs sector extended by a real singlet scalar field (\(S\)) is (ignoring tadpoles fixed through minimizing conditions) \[V(H,S)=-\mu^{2}H^{\dagger}H+\frac{1}{2}m_{S}^{2}S^{2}+\eta_{S}(H^{\dagger}H)S+k_{S}(H^{\dagger}H)S^{2}+\lambda_{h}(H^{\dagger}H)^{2}+\frac{1}{4!}\lambda_{S}S^{4}, \tag{3}\] with \(H\) being the SM Higgs doublet, which gets a vacuum expectation value (vev) \(v\simeq 246\) GeV. \(H\) can then be expanded around the vev: \[H=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{2}\,G^{+}\\ v+h+i\,\eta\end{pmatrix}. \tag{4}\] The presence of the mixing terms in the potential given in Eq. (3) results in mass eigenstates that are mixtures of \(h\), the neutral component of \(H\), and \(S\), related by the mixing angle \(\theta\): \[\begin{pmatrix}\tilde{h}\\ \tilde{s}\end{pmatrix}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}h\\ S\end{pmatrix}. \tag{5}\] Here, \(\cos\theta\) can be written in terms of the Lagrangian parameters as \[\cos^{2}\theta=\frac{1}{2}\left[1+\left(1+\frac{4\,v^{2}\eta_{S}^{2}}{(M_{\mathcal{S}}^{2}-M_{h}^{2})^{2}-4v^{2}\eta_{S}^{2}}\,\right)^{-1/2}\right]\\ =1-\left(\frac{v\,\eta_{S}}{M_{\mathcal{S}}^{2}-M_{h}^{2}}\right)^{2}-\left(\frac{v\,\eta_{S}}{M_{\mathcal{S}}^{2}-M_{h}^{2}}\right)^{4}+...\,. \tag{6}\] The eigenvalues corresponding to the mass eigenstates shown in Eq. (5) are the masses of the scalars in the theory, i.e., \(M_{h}=125\) GeV and a free parameter \(M_{\mathcal{S}}\), respectively. These mass eigenvalues can be expressed in terms of the Lagrangian parameters and the Higgs vev (\(v\)) as \[M_{h}^{2},M_{\mathcal{S}}^{2}=\frac{1}{2}\left(m_{S}^{2}+m_{h}^{2}\mp\sqrt{4v^{2}\eta_{S}^{2}+(m_{S}^{2}-m_{h}^{2})^{2}}\right)\,, \tag{7}\] where \(m_{h}^{2}=2\lambda_{h}v^{2}\). For the computation of the oblique parameters, we only consider the radiative corrections from the scalar-involved diagrams shown in Fig. 1, since the other diagrams give the same contribution in the BSM and SM theory and therefore drop out of the deviation. The explicit expressions are given in appendix A.1.

Figure 1: Relevant Feynman diagrams with scalars in the loop that have been considered to compute the oblique corrections. Here \(\phi\in(\tilde{h},\tilde{s})\) when the one-loop correction in the full theory is computed.

Equation (5) clearly shows that the light (heavy) scalar couplings to the SM particles are suppressed by a factor of \(\cos\theta\) (\(\sin\theta\)). Therefore, the contributions to the gauge boson self-energies get modified by a factor of \(\cos^{2}\theta\) or \(\sin^{2}\theta\), depending on which neutral scalar is coupled to. We then express the mixing angle in terms of the BSM parameters in the potential. Limits are then imposed on the independent BSM parameters (in our case, just \(\eta_{S}\)) and the mass of the heavy scalar (\(M_{\mathcal{S}}\)) using the constraints of GFitter data [31], as shown in Fig. 2.

Figure 2: 95% and 68% confidence interval bounds on the BSM trilinear coupling \(\eta_{S}\) and the mass of the heavy scalar (\(M_{\mathcal{S}}\)) obtained from the full-theory calculation setting constraints from GFitter [31]. An additional unitarity bound is imposed, which sets the limit on \(\eta_{S}\) as \(|\eta_{S}|\lesssim M_{\mathcal{S}}\) (see appendix B).
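The diagonalisation behind Eqs. (5)-(7) is straightforward to verify numerically. The sketch below (our own illustration, with arbitrary benchmark values for \(v\), \(\eta_{S}\), \(\lambda_{h}\) and the singlet mass parameter) checks Eq. (7) against the eigenvalues of the \(2\times 2\) scalar mass matrix and compares the exact \(\cos^{2}\theta\) of Eq. (6) with its expansion.

```python
import numpy as np

v = 246.0                          # Higgs vev (GeV)
eta_S = 200.0                      # trilinear coupling in Eq. (3); illustrative
lam_h = 0.13                       # lambda_h; illustrative
mh2 = 2 * lam_h * v**2             # m_h^2 = 2 lambda_h v^2
mS2 = 1.0e6                        # m_S^2 for a ~1 TeV singlet; illustrative

# Eq. (7): eigenvalues of the mass matrix [[m_h^2, v eta_S], [v eta_S, m_S^2]].
M2 = np.array([[mh2, v * eta_S], [v * eta_S, mS2]])
Mh2, MS2 = np.linalg.eigvalsh(M2)
formula = 0.5 * (mS2 + mh2 + np.array([-1.0, 1.0])
                 * np.sqrt(4 * v**2 * eta_S**2 + (mS2 - mh2)**2))
print(Mh2, MS2, formula)           # eigenvalues agree with Eq. (7)

# Eq. (6): exact cos^2(theta) versus its expansion 1 - r^2 - r^4 + ...
cos2_exact = 0.5 * (1 + (1 + 4 * v**2 * eta_S**2
                         / ((MS2 - Mh2)**2 - 4 * v**2 * eta_S**2)) ** -0.5)
r = v * eta_S / (MS2 - Mh2)
print(cos2_exact, 1 - r**2 - r**4) # close agreement in the decoupling regime
```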
### Real Singlet Model from SMEFT perspective

To investigate how well the effective theory replicates the minute signatures of the singlet extension of the SM described in Sec. 2.1, or, in turn, to adjudge the significance of the higher-order effective corrections, we extend the effective series with the relevant operator structures up to dimension eight: \[\mathcal{L}=\mathcal{L}_{\text{SM}}+\sum_{i}\frac{\mathcal{C}_{i}^{(6)}}{\Lambda^{2}}\,\mathcal{O}_{i}^{(6)}+\sum_{j}\frac{\mathcal{C}_{j}^{(8)}}{\Lambda^{4}}\,\mathcal{O}_{j}^{(8)}. \tag{8}\] Here, the Wilson coefficients \(\mathcal{C}_{i}\) parametrize the strengths of the operators \(\mathcal{O}_{i}\) that are produced after integrating out the heavy real singlet scalar (for a complete matching of such operators at dimension six, see Ref. [17]). We have chosen the cut-off scale \(\Lambda\) to be \(m_{S}=M_{\mathcal{S}}\); since we will be working with small mixing, the parameter space of interest satisfies \(M_{\mathcal{S}}\simeq m_{S}\), which validates the EFT treatment. In particular, this implies that, in the effective theory, there may be tree-level electroweak corrections to Eq. (1), as shown in Fig. 3, from effective operators that may emerge in the process of integrating out heavy fields from the UV diagram and/or through the renormalization group running of effective operators generated by integrating out the tree-level heavy propagator at the cut-off scale. These contributions depend on the renormalization scale and play an essential role in our further computation, see also [3].

Figure 3: Tree-level correction to the gauge boson propagators due to the presence of effective operators.

Depending on whether the operators that contribute to the dominant tree-level correction of Fig. 3 (dominant when considered in a model-independent way) are themselves generated at one loop, the contributions from the tree-level-generated operators, which modify the interactions at one loop, can become significant. We categorically list the effective corrections to \(\widehat{S},\widehat{T}\) up to one loop:

* **Tree-level correction**: Expanding the Lagrangian with dimension six and dimension eight operators can induce corrections to the transverse tree-level vector boson propagators (\(\Pi_{VV^{\prime}}\)) themselves, which in turn modify the \(\widehat{S}\), \(\widehat{T}\) parameters [15] \[\widehat{S}_{\text{eff,tree}} =\frac{4\,c_{W}\,s_{W}\,v^{2}}{\alpha}\,\mathcal{C}_{HWB}^{(6)}\,+\frac{2\,c_{W}\,s_{W}\,v^{4}}{\alpha}\,\mathcal{C}_{HWB}^{(8)},\] \[\widehat{T}_{\text{eff,tree}} =-\frac{v^{2}}{2\,\alpha}\,\mathcal{C}_{H\mathcal{D}}^{(6)}\,-\,\frac{v^{4}}{2\,\alpha}\,\mathcal{C}_{H\mathcal{D},2}^{(8)}. \tag{9}\] The expressions for the modifications of the individual \(\Pi_{VV^{\prime}}\) functions are given in appendix A.2. The dimension six operators contributing to Eq. (9) are generated at one loop while integrating out the heavy field. The matching expressions for these coefficients are given in Tab. 1. We have also computed the one-loop matching for the dimension eight operators involved in Eq. (9) and noticed that these do not receive any correction from integrating out complete heavy loop diagrams. On the other hand, these coefficients receive contributions from removing the redundant structures at dimension six, as discussed in Ref. [17]. Since the latter corresponds to a two-loop-suppressed sub-leading contribution, we neglect the associated effects in our analysis.
* **One-loop insertion of operators**: One-loop corrections to the oblique parameters are essential for the tree-level generated operators, since in a model-dependent analysis they provide a contribution similar to that of the operators produced at one loop and contributing to the tree-level propagator corrections shown in Eq. (9). In our case, such a contribution arises from the operators \(\mathcal{O}^{(6)}_{H\square}\) and \(\mathcal{O}^{(8)}_{H\mathcal{D},1}\). The explicit forms of their structures are given in Tabs. 1 and 2, respectively. These operators modify the canonical form of the kinetic term for the Higgs field \[\mathcal{L}_{h,\text{kin}}=\frac{1}{2}\,\left(1-2\,v^{2}\mathcal{C}^{(6)}_{H\square}+\frac{v^{4}}{4}\mathcal{C}^{(8)}_{H\mathcal{D},1}\right)(\partial_{\mu}h)^{2}, \tag{10}\] which can be removed by redefining the field \(h^{\prime}\to Z_{h}\,h\) with \[Z_{h}=\left(1-v^{2}\,\mathcal{C}^{(6)}_{H\square}+\frac{v^{4}}{8}\,\mathcal{C}^{(8)}_{H\mathcal{D},1}\right). \tag{11}\] This implies that while computing the higher order corrections for the EFT, we need to recall that \(\phi=h^{\prime}\to Z_{h}\,h\) in Fig. 1. This also accounts for suitable modifications in the vertices involving the Higgs and Goldstone bosons in Fig. 1. This correction, up to \(\mathcal{O}(1/\Lambda^{4})\), capturing the effects from both (dimension six)\({}^{2}\) and dimension eight terms, is incorporated by replacing \(\cos^{2}\theta\) with \(Z_{h}^{2}\) and setting \(\sin^{2}\theta\) to zero in the expressions shown in appendix A.1.

* **RGE improved correction**: It is important to include the running effects on the Wilson coefficients that arise at tree level. \(\widehat{T}_{\text{eff,tree}}\) in Eq. (9) at dimension six receives such an additional contribution from the operator \(\mathcal{O}^{(6)}_{H\mathcal{D}}\). Contributions arising from the running of the coefficient of the operator \(\mathcal{O}^{(6)}_{H\square}\) read [32; 33] \[16\pi^{2}\,\frac{d\,\mathcal{C}^{(6)}_{H\mathcal{D}}\,(\mu)}{d\ln\mu}=\frac{20}{3}\,g_{Y}^{2}\,\mathcal{C}^{(6)}_{H\square},\] \[\implies\mathcal{C}^{(6)}_{H\mathcal{D}}\,|_{\text{RGE}}=-\frac{5\,g_{Y}^{2}\eta_{S}^{2}}{24\,\pi^{2}\,M_{\mathcal{S}}^{4}}\,\log\left[\frac{M_{Z}}{M_{\mathcal{S}}}\right]. \tag{12}\] So the total contribution to \(\mathcal{C}^{(6)}_{H\mathcal{D}}\) at the EW scale is \[\mathcal{C}^{(6)}_{H\mathcal{D}}(M_{Z})=-\frac{7g_{Y}^{2}\eta_{S}^{2}}{288\,M_{\mathcal{S}}^{4}\,\pi^{2}}-\frac{5\,g_{Y}^{2}\eta_{S}^{2}}{24\,\pi^{2}\,M_{\mathcal{S}}^{4}}\,\log\left[\frac{M_{Z}}{M_{\mathcal{S}}}\right]. \tag{13}\]

\begin{table} \begin{tabular}{|c|c|c|} \hline Operator & Op. Structure & Wilson coeffs. \\ \hline \(\mathcal{O}^{(6)}_{H\square}\) & \((H^{\dagger}H)\square(H^{\dagger}H)\) & \(-\frac{\eta_{S}^{2}}{2\,M_{\mathcal{S}}^{4}}\) \\ \hline \(\mathcal{O}^{(6)}_{HWB}\) & \((H^{\dagger}\tau^{I}H)W^{I}_{\mu\nu}B^{\mu\nu}\) & \(\frac{g_{W}\,g_{Y}\,\eta_{S}^{2}}{128\,M_{\mathcal{S}}^{4}\,\pi^{2}}\) \\ \hline \(\mathcal{O}^{(6)}_{HD}\) & \((H^{\dagger}\mathcal{D}_{\mu}H)^{*}(H^{\dagger}\mathcal{D}^{\mu}H)\) & \(-\frac{7\,g_{Y}^{2}\,\eta_{S}^{2}}{288\,M_{\mathcal{S}}^{4}\,\pi^{2}}\) \\ \hline \end{tabular} \end{table} Table 1: Relevant operators that produce tree and one-loop corrections to the gauge boson self-energy. The structures in blue first appear at tree-level correction, whereas the rest of the operators contribute at one-loop first.
The parts of the beta functions (cf. Eq. (12)) for the dimension eight Wilson coefficients \(\mathcal{C}^{(8)}_{H\mathcal{D},2}\) and \(\mathcal{C}^{(8)}_{HWB}\) stem from \[16\pi^{2}\,\frac{d\,\mathcal{C}^{(8)}_{H\mathcal{D},2}\,(\mu)}{d\ln\mu}=\frac{40}{3}\,g_{Y}^{2}\,(\mathcal{C}^{(6)}_{H\square})^{2}+\frac{10}{3}g_{Y}^{2}\,\mathcal{C}^{(8)}_{H\mathcal{D},1}+\mathcal{C}^{(8)}_{H^{4}D^{4},3}\left(-\frac{11}{24}g_{Y}^{4}-\frac{79}{48}g_{Y}^{2}g_{W}^{2}+3g_{Y}^{2}\lambda_{h}\right)\] \[\implies\mathcal{C}^{(8)}_{H\mathcal{D},2}\,|_{\rm RGE}=\frac{1}{16\pi^{2}}\Bigg[\frac{10\,g_{Y}^{2}\,\eta_{S}^{4}}{3\,M_{\mathcal{S}}^{8}}+\frac{10}{3}g_{Y}^{2}\left(\frac{4\eta_{S}^{2}k_{S}}{M_{\mathcal{S}}^{6}}-\frac{8\lambda_{S}\eta_{S}^{2}k_{S}}{M_{\mathcal{S}}^{6}}\right) \tag{14}\] \[\qquad+\frac{2\eta_{S}^{2}}{M_{\mathcal{S}}^{6}}\left(-\frac{11}{24}g_{Y}^{4}-\frac{79}{48}g_{Y}^{2}g_{W}^{2}+3g_{Y}^{2}\lambda_{h}\right)\Bigg]\log\left[\frac{M_{Z}}{M_{\mathcal{S}}}\right],\] \[16\pi^{2}\,\frac{d\,\mathcal{C}^{(8)}_{HWB}\,(\mu)}{d\ln\mu}=\mathcal{C}^{(8)}_{H^{4}D^{4},3}\left(-\frac{11}{48}g_{Y}^{3}g_{W}-\frac{29}{24}g_{Y}g_{W}^{3}+g_{Y}g_{W}\lambda_{h}\right)\] \[\implies\mathcal{C}^{(8)}_{HWB}\,|_{\rm RGE}=\frac{\eta_{S}^{2}}{8\pi^{2}M_{\mathcal{S}}^{6}}\left(-\frac{11}{48}g_{Y}^{3}g_{W}-\frac{29}{24}g_{Y}g_{W}^{3}+g_{Y}g_{W}\lambda_{h}\right)\log\left[\frac{M_{Z}}{M_{\mathcal{S}}}\right], \tag{15}\] with \(g_{Y}=e/c_{W},g_{W}=e/s_{W}\). In addition to contributions from dimension six effective operators, we also compute the contribution to dimension eight operators from the equations of motion of the dimension six operators and the RGE-improved corrections due to dimension six operators, see further Refs. [15; 16; 34; 35]. We note that the corrections to \(\Delta\widehat{T}\) and \(\Delta\widehat{S}\) due to the dimension eight inclusion are of the order of the deviations. Thus, dimension eight interactions may be crucial to bringing EFT predictions close to the full theory for given measured constraints.

## 3 Full theory vs EFT

In this section, we compare a full theory and its effective version captured in SMEFT. We carefully investigate how the inclusion of higher mass dimensional operators, suppressed by the mass of the heavy integrated-out field, in the EFT expansion affects the computation of our chosen observables \((\widehat{T},\widehat{S})\). For this, we categorize the EFT contribution into three parts. To start with, we discuss the dimension six (\(d_{6}\)) part, which contributes at \(\mathcal{O}(M_{\mathcal{S}}^{-2})\) and contains linear dimension six Wilson coefficients (WCs). Here, we include the cumulative effects of field redefinition and radiative corrections on the oblique parameters. Then, we consider the corrections from the dimension eight (\(d_{8}\)) operators that are linear functions of dimension eight WCs at \(\mathcal{O}(M_{\mathcal{S}}^{-4})\). We also include the dimension eight equivalent contributions (referred to as \((d_{6})^{2}\)), at \(\mathcal{O}(M_{\mathcal{S}}^{-4})\), from dimension six operators, which are quadratic functions of dimension six WCs. This takes care of the radiative generation of dimension eight operators from dimension six ones, see Eq. (14) and the expansion of \(Z_{h}^{2}\).
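Before comparing the two theories, the following sketch (our own, with illustrative electroweak inputs) evaluates the \(d_{6}\) piece just described: the RGE-improved coefficient \(\mathcal{C}^{(6)}_{H\mathcal{D}}(M_{Z})\) of Eq. (13) and the resulting tree-level shift \(\widehat{T}_{\text{eff,tree}}\) of Eq. (9).

```python
import numpy as np

MZ, v = 91.19, 246.0                 # GeV; illustrative electroweak inputs
alpha = 1.0 / 128.0
sw2 = 0.231
gY2 = 4 * np.pi * alpha / (1 - sw2)  # g_Y^2 = e^2 / c_W^2

def C6_HD(eta_S, MS):
    """C_HD^(6)(MZ) of Eq. (13): matching piece plus RGE running (GeV^-2)."""
    match = -7 * gY2 * eta_S**2 / (288 * MS**4 * np.pi**2)
    run = -5 * gY2 * eta_S**2 / (24 * np.pi**2 * MS**4) * np.log(MZ / MS)
    return match + run

def That_tree_d6(eta_S, MS):
    """Dimension six part of T_eff,tree in Eq. (9)."""
    return -v**2 / (2 * alpha) * C6_HD(eta_S, MS)

# Benchmarks with eta_S = M_S, i.e., maximal allowed mixing (cf. the figures below).
for MS in (700.0, 1000.0, 5000.0):
    print(f"M_S = {MS:6.0f} GeV:  T_hat(d6, tree) = {That_tree_d6(MS, MS):+.2e}")
```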
We list all the operators that contribute at the different orders:

* \(d_{6}\): \(\mathcal{C}^{(6)}_{HD},\,\mathcal{C}^{(6)}_{HWB},\,\mathcal{C}^{(6)}_{H\square}\,\);
* \(d_{8}\): \(\mathcal{C}^{(8)}_{H\mathcal{D},1},\,\mathcal{C}^{(8)}_{H\mathcal{D},2},\,\mathcal{C}^{(8)}_{H^{4}D^{4},3},\,\mathcal{C}^{(8)}_{HWB}\,\);
* \((d_{6})^{2}\): \((\mathcal{C}^{(6)}_{H\square})^{2}\,\).

\begin{table} \begin{tabular}{|c|c|c|} \hline Operator & Op. Structure & Wilson coeffs. \\ \hline \(\mathcal{O}^{(8)}_{HD,1}\) & \((H^{\dagger}H)^{2}\left(\mathcal{D}_{\mu}H^{\dagger}\mathcal{D}^{\mu}H\right)\) & \(\frac{4\eta_{S}^{2}k_{S}}{M_{\mathcal{S}}^{6}}-\frac{8\lambda_{S}\eta_{S}^{2}k_{S}}{M_{\mathcal{S}}^{6}}\) \\ \hline \(\mathcal{O}^{(8)}_{H^{4}D^{4},3}\) & \((\mathcal{D}_{\mu}H^{\dagger}\mathcal{D}^{\mu}H)(\mathcal{D}_{\nu}H^{\dagger}\mathcal{D}^{\nu}H)\) & \(\frac{2\eta_{S}^{2}}{M_{\mathcal{S}}^{6}}\) \\ \hline \end{tabular} \end{table} Table 2: Relevant operators that produce tree and one-loop corrections to the gauge boson self-energy. The structures in blue first appear at tree-level correction, whereas the rest of the operators contribute at one-loop first.

We investigate the departure of the truncated-EFT computation at dimension six from the full theory calculations and the role that the Higgs mixing plays in matching these two. The mixing can be expressed as a function of the trilinear coupling \(\eta_{S}\) and the heavy cut-off scale \(M_{\mathcal{S}}\); for allowed \(\eta_{S}\) values, the decoupling can be quantified through the difference of the two theories. In Fig. 4, we show the lines of constant mixing angle that allow a single \(\eta_{S}\) value for each choice of the cut-off. We also impose the constraint from perturbative unitarity, which rules out a specific region in the \(\eta_{S}\)-\(M_{\mathcal{S}}\) plane and in turn puts a lower bound on the mixing for each value of the cut-off \(M_{\mathcal{S}}\), as can be seen in Fig. 4.

Figure 4: (a) shows the variation of the trilinear coupling with respect to the heavy scalar mass. The orange and yellow dots denote \(\eta_{\mathcal{S}}\) values that reproduce the respective \(\cos^{2}\theta\) choices. In (b) we show how the mixing angle is a function of the heavy mass for fixed values of the trilinear coupling \(\eta_{\mathcal{S}}\). In both plots, the gray-shaded region respects perturbative unitarity (see appendix B).

Intuitively, adding higher and higher order terms in the EFT expansion would take the EFT closer to the full theory. This concept is illustrated through the \(\widehat{T}\) parameter in Fig. 5. Here, we consider three different types of contributions: firstly, the leading order terms in the expansion, i.e., the \(d_{6}\) ones; then, we add the \(d_{8}\) contributions and, finally, the \((d_{6})^{2}\) ones.

Figure 5: Impact of individual contributions on \(\Delta\widehat{T}=|\widehat{T}_{\text{Full}}-\widehat{T}_{\text{eff}}|\) for three benchmark choices of \(\eta_{S}=M_{\mathcal{S}}\), having maximal allowed mixing. \(\Delta\widehat{T}\), computed up to \(\mathcal{O}(M_{\mathcal{S}}^{-2})\), i.e., truncating the effective Lagrangian at mass dimension six, receives a positive contribution from dimension eight operators. Though, the total contribution up to order \(\mathcal{O}(M_{\mathcal{S}}^{-4})\) reduces \(\Delta\widehat{T}\), signifying that the inclusion of dimension eight operators brings the effective theory prediction relatively closer to that from the full theory.

In passing, we want to highlight that though the \(d_{8}\) term adds positively to the difference between the full theory and EFT, the further addition of the \((d_{6})^{2}\) contributions, the equivalent of the \(d_{8}\) ones, allows us to capture the complete contribution at \(\mathcal{O}(M_{\mathcal{S}}^{-4})\).
We draw a similar conclusion as the previous one, and that makes our conclusion more generic. In Fig. 7, we have calculated the difference between full theory and EFT in calculating the \(\widehat{T}\) parameter for three different heavy mass scales. In each subfigure, we have shown Figure 5: Impact of individual contributions on \(\Delta\widehat{T}=|\widehat{T}_{\text{Full}}-\widehat{T}_{\text{eff}}|\) for three benchmark choices of \(\eta_{s}=M_{S}\), having maximal allowed mixing. \(\Delta\widehat{T}\), computed upto \(\mathcal{O}(M_{S}^{-2})\), i.e., truncating effective Lagrangian at mass dimension six, receives a positive contribution from dimension eight operators. Though, the total contributions upto order \(\mathcal{O}(M_{S}^{-2})\) reduces \(\Delta\widehat{T}\) signifying inclusion of dimension eight operators brings the effective theory prediction relatively closer to that from the full theory. Figure 6: Impact of individual contributions on \(\Delta\widehat{S}=|\widehat{S}_{\text{Full}}-\widehat{S}_{\text{eff}}|\) for three benchmark choices of \(\eta_{s}=M_{S}\). We note that adding the RG evolution contribution from dimension eight operators improves the effective theory to replicate the full theory. that if we lower the value of \(\eta_{S}\) for a fixed mass, the value of \(\cos\theta\) increases. As the \(\cos\theta\) reaches unity, the full theory and EFT are in excellent agreement, which is expected as the new physics contribution vanishes. It is also evident that for a fixed \(\eta_{S}\), once we go for higher masses the difference also decreases. This illustrates the interplay among the coupling \(\eta_{S}\), the heavy mass scale \(M_{\mathcal{S}}\), and mixing parameter \(\cos\theta\). One can tune the value of these parameters so that EFT can be a good explanation for the full theory. Doing the same kind of investigation for the \(\widehat{S}\) parameter in Fig. 8 further emphasises the idea. ## 4 Summary and Conclusions Effective Field Theory is a powerful tool to look for deviations from the SM expectation in a theoretically well-motivated way. In a modern sense, it enables us to extend good quantum field theoretic properties to generic departures from the SM interactions, with potential relevance for UV complete scenarios depending on the accuracy with which constraints can be formulated. Along these lines, a set of particularly well-motivated observables are the oblique corrections as a subset of relevant electroweak corrections. In this work, we have analysed these observables from their dimension six and eight points of view with a critical perspective on how accurately EFT methods describe the full theory in the singlet extension scenarios where decoupling and alignment limits are exceptionally transparent. As expected, EFT approximates the full theory well in regions where it is valid. However, moving away from the alignment/decoupling limit, the relevance of the higher-dimensional terms in the Figure 7: Difference between full theory and the EFT computation for \(\widehat{T}\) parameter at different heavy field mass scales. The mass scales are chosen to be (a) 700 GeV, (b) 1 TeV and (c) 5 TeV. The values for \(\eta_{S}\) are chosen such that they satisfy the unitarity bounds. EFT expansion quickly becomes relevant. Although these ranges are currently not probed by the experiments, it demonstrates the need to include higher-dimensional corrections (chiefly squared dimension six terms) to well-approximate the full theory. 
Furthermore, as Higgs boson mixing is a feature in almost all BSM theories with a non-minimal Higgs sector, this shows the necessity to go beyond dimension six interactions when data is very precise or when we want to inform a potential UV scenario accurately.

## Acknowledgements

C.E. is supported by the STFC under grant ST/T000945/1, by the Leverhulme Trust under grant RPG-2021-031, and the IPPP Associateship Scheme. M.S. is supported by the STFC under grant ST/P001246/1. W.N. is funded by a University of Glasgow College of Science and Engineering Scholarship.

## Appendix A Gauge boson two-point functions

### Modification due to a singlet scalar extension

We note down the modifications to the gauge boson two-point functions due to the presence of a new heavier scalar degree of freedom. Here, only the contributions from the scalar-involved diagrams are presented. The BSM contributions to the two-point functions (in Feynman gauge) are then [36] \[\Pi_{ZZ}(p^{2})= \left[-\frac{M_{W}^{4}B_{0}\left(p^{2},M_{h}^{2},M_{Z}^{2}\right)}{4\pi^{2}c_{W}^{4}v^{2}}+\frac{M_{W}^{2}B_{00}\left(p^{2},M_{h}^{2},M_{Z}^{2}\right)}{4\pi^{2}c_{W}^{2}v^{2}}-\frac{M_{h}^{2}M_{W}^{2}\left(1-\log\left(\frac{M_{h}^{2}}{\mu^{2}}\right)\right)}{16\pi^{2}c_{W}^{2}v^{2}}\right]\cos^{2}\theta\] \[+\left[-\frac{M_{W}^{4}B_{0}\left(p^{2},M_{S}^{2},M_{Z}^{2}\right)}{4\pi^{2}c_{W}^{4}v^{2}}+\frac{M_{W}^{2}B_{00}\left(p^{2},M_{S}^{2},M_{Z}^{2}\right)}{4\pi^{2}c_{W}^{2}v^{2}}-\frac{M_{S}^{2}M_{W}^{2}\left(1-\log\left(\frac{M_{S}^{2}}{\mu^{2}}\right)\right)}{16\pi^{2}c_{W}^{2}v^{2}}\right]\sin^{2}\theta, \tag{A.1}\] \[\Pi_{WW}(p^{2})= \left[-\frac{M_{W}^{4}B_{0}\left(p^{2},M_{h}^{2},M_{W}^{2}\right)}{4\pi^{2}v^{2}}+\frac{M_{W}^{2}B_{00}\left(p^{2},M_{h}^{2},M_{W}^{2}\right)}{4\pi^{2}v^{2}}-\frac{M_{h}^{2}M_{W}^{2}\left(1-\log\left(\frac{M_{h}^{2}}{\mu^{2}}\right)\right)}{16\pi^{2}v^{2}}\right]\cos^{2}\theta\] \[+\left[-\frac{M_{W}^{4}B_{0}\left(p^{2},M_{S}^{2},M_{W}^{2}\right)}{4\pi^{2}v^{2}}+\frac{M_{W}^{2}B_{00}\left(p^{2},M_{S}^{2},M_{W}^{2}\right)}{4\pi^{2}v^{2}}-\frac{M_{S}^{2}M_{W}^{2}\left(1-\log\left(\frac{M_{S}^{2}}{\mu^{2}}\right)\right)}{16\pi^{2}v^{2}}\right]\sin^{2}\theta, \tag{A.2}\] \[\Pi_{\gamma\gamma}(p^{2})=\Pi_{\gamma Z}(p^{2})=0\,, \tag{A.3}\] where the Passarino-Veltman functions [37] (see also [38; 39]) \(A_{0}\), \(B_{0}\) and \(B_{00}\) capture the scalar one-loop dynamics (the vev is fixed via \(v=2M_{W}s_{W}/e\)). We have cross-checked these results numerically against previous results [22; 23].

### Modification due to the corresponding EFT at tree-level

We note down the tree-level corrections to the gauge boson propagators, as shown in Fig. 3, due to the presence of effective operators.
\[\Pi_{WW}^{(\text{EFT})}(p^{2})=\frac{g_{W}^{2}\,v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}-\frac{g_{W}^{2}\,v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+p^{2}v^{4}\,\mathcal{C}_{HW}^{(8)}+2p^{2}v^{2}\mathcal{C}_{HW}^{(6)}, \tag{A.4}\] \[\Pi_{ZZ}^{(\text{EFT})}(p^{2})= \frac{c_{W}^{2}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{c_{W}^{2}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{c_{W}^{2}g_{W}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}+c_{W}^{2}p^{2}v^{4}\mathcal{C}_{HW}^{(8)}+2c_{W}^{2}p^{2}v^{2}\mathcal{C}_{HW}^{(6)}\] \[+\frac{c_{W}s_{W}g_{W}g_{Y}v^{6}}{8}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{c_{W}s_{W}g_{W}g_{Y}v^{6}}{8}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{c_{W}s_{W}g_{W}g_{Y}v^{4}}{4}\mathcal{C}_{H\mathcal{D}}^{(6)}+p^{2}c_{W}s_{W}v^{4}\,\mathcal{C}_{HWB}^{(8)}+2p^{2}c_{W}s_{W}v^{2}\mathcal{C}_{HWB}^{(6)}\] \[+\frac{s_{W}^{2}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{s_{W}^{2}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{s_{W}^{2}g_{Y}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}+p^{2}s_{W}^{2}v^{4}\mathcal{C}_{HB}^{(8)}+2p^{2}s_{W}^{2}v^{2}\mathcal{C}_{HB}^{(6)}, \tag{A.5}\] \[\Pi_{\gamma\gamma}^{(\text{EFT})}(p^{2})= \frac{c_{W}^{2}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{c_{W}^{2}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{c_{W}^{2}g_{Y}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}+p^{2}c_{W}^{2}v^{4}\mathcal{C}_{HB}^{(8)}+2p^{2}c_{W}^{2}v^{2}\mathcal{C}_{HB}^{(6)}\] \[-\frac{c_{W}s_{W}g_{W}g_{Y}v^{6}}{8}\mathcal{C}_{H\mathcal{D},1}^{(8)}-\frac{c_{W}s_{W}g_{W}g_{Y}v^{6}}{8}\mathcal{C}_{H\mathcal{D},2}^{(8)}-\frac{c_{W}s_{W}g_{W}g_{Y}v^{4}}{4}\mathcal{C}_{H\mathcal{D}}^{(6)}-p^{2}c_{W}s_{W}v^{4}\mathcal{C}_{HWB}^{(8)}-2p^{2}c_{W}s_{W}v^{2}\mathcal{C}_{HWB}^{(6)}\] \[+\frac{s_{W}^{2}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{s_{W}^{2}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{s_{W}^{2}g_{W}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}+p^{2}s_{W}^{2}v^{4}\mathcal{C}_{HW}^{(8)}+2p^{2}s_{W}^{2}v^{2}\mathcal{C}_{HW}^{(6)},\] \[\Pi_{\gamma Z}^{(\text{EFT})}(p^{2})= -\frac{c_{W}^{2}g_{W}g_{Y}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}-\frac{c_{W}^{2}g_{W}g_{Y}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}-\frac{c_{W}^{2}g_{W}g_{Y}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}-\frac{p^{2}c_{W}^{2}v^{4}}{2}\mathcal{C}_{HWB}^{(8)}-p^{2}c_{W}^{2}v^{2}\mathcal{C}_{HWB}^{(6)}\] \[+\frac{c_{W}s_{W}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{c_{W}s_{W}g_{W}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{c_{W}s_{W}g_{W}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}-\frac{c_{W}s_{W}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}-\frac{c_{W}s_{W}g_{Y}^{2}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}\] \[-\frac{c_{W}s_{W}g_{Y}^{2}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}-p^{2}c_{W}s_{W}v^{4}\mathcal{C}_{HB}^{(8)}+p^{2}c_{W}s_{W}v^{4}\mathcal{C}_{HW}^{(8)}-2p^{2}c_{W}s_{W}v^{2}\mathcal{C}_{HB}^{(6)}+2p^{2}c_{W}s_{W}v^{2}\mathcal{C}_{HW}^{(6)}\] \[+\frac{s_{W}^{2}g_{W}g_{Y}v^{6}}{16}\mathcal{C}_{H\mathcal{D},1}^{(8)}+\frac{s_{W}^{2}g_{W}g_{Y}v^{6}}{16}\mathcal{C}_{H\mathcal{D},2}^{(8)}+\frac{s_{W}^{2}g_{W}g_{Y}v^{4}}{8}\mathcal{C}_{H\mathcal{D}}^{(6)}+\frac{p^{2}s_{W}^{2}v^{4}}{2}\mathcal{C}_{HWB}^{(8)}+p^{2}s_{W}^{2}v^{2}\mathcal{C}_{HWB}^{(6)}. \tag{A.6}\] The couplings are given by \(g_{W}=e/s_{W},g_{Y}=e/c_{W}\).

## Appendix B Unitarity Constraints

Unitarity provides a suitable tool to gauge whether the matching is indeed performed for perturbative choices of the UV model parameters. Perturbativity, in one way or another, is implicitly assumed in analysing any collider data, and this extends to the electroweak precision constraints as well.
To this end, we consider the partial wave constraints that can be derived from longitudinal gauge boson scattering to identify the regions of validity this way. The zeroth partial wave relevant for this is given for the scattering \(i_{1}\,i_{2}\to f_{1}\,f_{2}\) by (see Ref. [40]) \[a^{0}_{fi}=\frac{\beta^{1/4}(s,m_{i,1}^{2},m_{i,2}^{2})\,\beta^{1/4}(s,m_{f,1}^{2},m_{f,2}^{2})}{32\pi s}\int_{-1}^{1}{\rm d}\cos\theta\ {\cal M}(\sqrt{s},\cos\theta)\,, \tag{B.1}\] suppressing factors of \(1/\sqrt{2}\) for identical particles in the initial state \(i\) or final state \(f\). \(\sqrt{s}\) denotes the centre-of-mass energy, and \(\theta\) is the scattering angle in this frame for the \(2\to 2\) scattering process described by the amplitude \({\cal M}\). Furthermore, \[\beta(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2xz\,, \tag{B.2}\] such that \(\lim_{s\to\infty}\beta^{1/2}/s=1\). Unitarity of the \(S\) matrix then translates, for \(f=i\), into the conditions \[|{\rm Re}\,a^{0}_{ii}|\leq\frac{1}{2}\quad{\rm and}\quad|{\rm Im}\,a^{0}_{ii}|\leq 1\,, \tag{B.3}\] of which we use the first one to obtain the constraints in Sec. 2. The presence of a large \(\eta_{S}\sim M_{\mathcal{S}}\) leads to unitarity violation through \({\mathcal{S}}\) contributions to \(hh\to hh\) via the \(s,t,u\) channels, as well as for large values of \(M_{\mathcal{S}}\) with \(\cos^{2}\theta\neq 1\) in longitudinal gauge boson scattering [41]. Numerical investigation shows that, for our choices close to the alignment limit, longitudinal unitarity constraints are not as relevant as the \(hh\) scattering constraints. Assuming perturbative unitarity up to a cut-off scale \(\sim M_{\mathcal{S}}\) requires \(\eta_{S}\lesssim M_{\mathcal{S}}\). This reflects the fact that when a dimensionful coupling (i.e. a mass scale) such as \(\eta_{S}\) becomes comparable to a UV cut-off (\(M_{\mathcal{S}}\) in the EFT description), we enter strong coupling. This is also visible from the expansion of phenomenologically relevant quantities such as Eq. (6), which scales \(\sim v\eta_{S}/M_{\mathcal{S}}^{2}\) in the EFT regime \(M_{\mathcal{S}}\gg M_{h}\).
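A minimal numerical companion to the partial-wave machinery above (our own illustration): it implements \(\beta\) from Eq. (B.2), verifies the high-energy normalisation, and applies the \(|{\rm Re}\,a^{0}_{ii}|\leq 1/2\) criterion of Eq. (B.3) to a toy, angle-independent elastic amplitude.

```python
import numpy as np

def beta(x, y, z):
    """Kinematic function of Eq. (B.2)."""
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*x*z

def a0_const(M_amp, s, m_i, m_f):
    """Zeroth partial wave, Eq. (B.1), for a cos(theta)-independent amplitude;
    the angular integral then simply yields 2 * M_amp."""
    pref = (beta(s, m_i[0]**2, m_i[1]**2) * beta(s, m_f[0]**2, m_f[1]**2)) ** 0.25
    return pref / (32 * np.pi * s) * 2 * M_amp

s, m = 1.0e8, (125.0, 125.0)                   # toy hh -> hh kinematics (GeV^2, GeV)
print(np.sqrt(beta(s, m[0]**2, m[1]**2)) / s)  # -> 1 as s grows, as stated

a0 = a0_const(M_amp=8 * np.pi, s=s, m_i=m, m_f=m)
print(abs(a0) <= 0.5)                          # unitarity criterion of Eq. (B.3)
```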
2310.02473
Prompting-based Temporal Domain Generalization
Machine learning traditionally assumes that the training and testing data are distributed independently and identically. However, in many real-world settings, the data distribution can shift over time, leading to poor generalization of trained models in future time periods. This paper presents a novel prompting-based approach to temporal domain generalization that is parameter-efficient, time-efficient, and does not require access to future data during training. Our method adapts a trained model to temporal drift by learning global prompts, domain-specific prompts, and drift-aware prompts that capture underlying temporal dynamics. Experiments on classification, regression, and time series forecasting tasks demonstrate the generality of the proposed approach. The code repository will be publicly shared.
Sepidehsadat Hosseini, Mengyao Zhai, Hossein Hajimirsadegh, Frederick Tung
2023-10-03T22:40:56Z
http://arxiv.org/abs/2310.02473v2
# Prompting-based Efficient Temporal Domain Generalization ###### Abstract Machine learning traditionally assumes that training and testing data are distributed independently and identically. However, in many real-world settings, the data distribution can shift over time, leading to poor generalization of trained models in future time periods. Our paper presents a novel prompting-based approach to temporal domain generalization that is parameter-efficient, time-efficient, and does not require access to the target domain data (i.e., unseen future time periods) during training. Our method adapts a target pre-trained model to temporal drift by learning global prompts, domain-specific prompts, and drift-aware prompts that capture underlying temporal dynamics. It is compatible across diverse tasks, such as classification, regression, and time series forecasting, and sets a new state-of-the-art benchmark in temporal domain generalization. The code repository will be publicly shared. ## 1 Introduction Machine learning has achieved great success in many applications in recent years, and most machine learning algorithms rely on the assumption that the training (i.e. source) and test (i.e. target) data are independently and identically distributed (i.i.d.). However, in reality, distribution shift and concept drift are often observed, and these non-i.i.d. problems are more challenging to tackle. In domain adaptation (DA), extensive research has been conducted on adapting models to the target domain by modelling the domain relations between the source and the target Courty et al. (2016); Gong et al. (2012); Hoffman et al. (2014); Jimenez et al. (2019); Lao et al. (2020); Wang et al. (2020); Yang and Hospedales (2016). However, such models assume that target domain data is available, which may not always hold in real-world settings. Domain generalization (DG) methods tackle the scenario where models are directly generalized to the target domain without the presence of the target data (labelled or unlabelled) Yue et al. (2019); Prakash et al. (2019); Shankar et al. (2018); Volpi et al. (2018); Hu et al. (2021); Triantafillou et al. (2021); Kim et al. (2021); Wang et al. (2021). DG traditionally focuses on generalization among categorical-indexed domains with categorical tasks Wang et al. (2021); Chen et al. (2022). In contrast, _temporal DG_ addresses the continuously time-evolving distribution shift (namely concept drift) problem Bai et al. (2023); Nasery et al. (2021). For example, suppose we would like to predict house prices given information about the property's physical characteristics, such as square footage, number of bedrooms, number of bathrooms, and location. Since house prices are influenced by macroeconomic conditions and demographic trends that change over time, a regression model trained on data collected from the past few years could have poor predictive power next year Yin et al. (2022). However, suppose the macroeconomic and demographic factors change gradually over time. In that case, we can extrapolate their influence into the short-term future, and adapt the regression model to make more accurate predictions. Such cases are where temporal domain generalization can be applied. For example, suppose we know that the population in a particular country has been steadily aging over the past several years, which reduces the overall demand for many-bedroom houses.
A temporal DG algorithm can anticipate that the demand will continue to fall for many-bedroom houses and adapt the price predictions for these houses accordingly: given the same features, a many-bedroom house next year will be priced some amount less than this year. Note that in the temporal DG setting, we do not get to see the "test domain", i.e., next year's house prices, during training. Therefore, temporal DG methods that model the continuously time-evolving data dynamics and generalize well to the future are needed. Most standard DG methods cannot be directly applied to temporal DG. Different from standard DG problems, which aim to discover general representations among different domains and learn domain-invariant features, capturing the temporal dynamics of domain data changing over time is crucial for temporal DG. Learning domain-invariant features, namely time-invariant representations in the temporal DG case, no longer works. Only a few methods have studied the temporal DG problem Nasery et al. (2021); Bai et al. (2023), and they are inefficient and complex to apply to large datasets and large models. Moreover, all the prior works only showed their effectiveness on classification and/or regression tasks, while missing demonstrations on other applications, such as time series forecasting. Therefore, a more efficient temporal DG framework enabling more diverse tasks is valuable. Prompting is well-known for efficiently adapting a trained network to different tasks without retraining Lester et al. (2021); Vu et al. (2021); Gu et al. (2021); Li and Liang (2021); Asai et al. (2022); Wang et al. (2023). Most prior works Jia and Zhang (2022); Zhang et al. (2021); Li et al. (2022); Dunlap et al. (2022); Shu et al. (2023) adopting prompting for DG are applicable only to CLIP Radford et al. (2021) and cannot be applied to other architectures or tasks. PADA Ben-David et al. (2022) is a recent work proposed for DG. It first generates example-specific prompts, and then the generated prompts are applied to T5 for classification tasks. However, PADA is applicable only to classification tasks, and it can only generate word tokens as prompts. Moreover, none of these prior works can generate time-sensitive prompts that capture temporal dynamics. In this paper, we propose a parameter-efficient and time-efficient prompting-based temporal DG method. To capture temporal dynamics, domain-specific prompts are first generated on each domain. Then, our method learns time-sensitive prompts by modelling the temporal changes from domain-specific prompts and forecasts future prompts for unseen future domains. Our method also learns global prompts shared across all domains to learn generic representations. The prompts are generated in vector space and can be applied to a wide range of network architectures. To sum up, our contributions are: (1) We propose the first prompting-based temporal DG method for addressing data distribution shifts over time. (2) Our method is parameter-efficient and time-efficient. In contrast to the state-of-the-art approach (Bai et al., 2023), which generates a full network for each domain, including the target domain, only a few parameters shared across all domains are allocated for prompt generation, and no additional parameters are needed for the target domain. (3) Our method is general and can be applied to many applications, including classification, regression, and time series forecasting.
## 2 Related Work **Domain generalization and adaptation** are research fields that have garnered significant attention in recent years due to their practical significance in real-world applications Ganin and Lempitsky (2015); Tzeng et al. (2017); Tremblay et al. (2018); Shankar et al. (2018); Volpi et al. (2018); Zhou et al. (2020). The primary goal of domain adaptation (DA) is to tailor models to specific target domains, using the similarities that exist between these domains Ben-David et al. (2010); Wang and Deng (2018). Continuous domain adaptation, a subset of DA, addresses the adaptation to domains characterized by continuous variables Hoffman et al. (2014); Jimenez et al. (2019); Lao et al. (2020); Wang et al. (2020); Yang and Hospedales (2016). This may include temporal domain adaptation, which deals with domains that evolve over time. For instance, Courty et al. (2016); Gong et al. (2012) adapted their training loss to account for future data derived from prior domains. Similarly, the method proposed by Mancini et al. (2019); Shabani et al. (2022) involves time-sensitive deep neural network parameters to control their evolution over time. Their network possesses domain-specific and domain-generic parameters, with the former integrating an added constraint that considers the similarity between domains. Meanwhile, other approaches like Wang et al. (2020); Ganin et al. (2016) focus on learning time-invariant representations using adversarial methods. Domain generalization (DG) methods build upon the insights from domain adaptation and aim to enhance the generalization capability of models across unseen (target) domains, where the data distribution may differ significantly from the source domain. These methods are crucial when adaptation approaches like DA are not feasible due to unavailable target domain data or other possible limitations in adapting the base model. DG techniques encompass a range of strategies, as outlined in Wang et al. (2021). DG methods can be categorized into three groups based on their focus. First, data manipulation methods, which include data augmentation by manipulating input data through domain randomization Yue et al. (2019); Prakash et al. (2019), adversarial data augmentation Shankar et al. (2018); Volpi et al. (2018); Nazari and Kovashka (2020); Khirodkar et al. (2019) and data generation Qiao et al. (2020); Liu et al. (2018); Zhao et al. (2021); Garg et al. (2021). Second, representation learning by either applying domain-invariant representation learning techniques Deshmukh et al. (2019); Qi et al. (2021); Fan et al. (2021); Mitrovic et al. (2021) or feature disentanglement techniques Hu et al. (2021); Triantafillou et al. (2021); Nam et al. (2021); Sun et al. (2021) to improve generalization. Third, learning strategy methods exploit various learning strategies like ensemble learning Wu and Gong (2021); Dubey et al. (2021), meta-learning Kim et al. (2021); Wang et al. (2021), and gradient operations Tian et al. (2022); Rame et al. (2021) to enhance the overall generalization capability. DG is essential for scenarios where domain adaptation falls short, and models must excel across unseen domains with diverse data distributions. However, most existing DG methods target categorical-indexed domains for categorical tasks. Temporal Domain Generalization (DG) is a lesser-explored area that deals with the ongoing changes in distribution, referred to as concept drift.
Standard DG techniques are not easily adapted to handle temporal DG scenarios. Unlike regular DG, which aims for generalized representations across different domains, temporal DG focuses more on capturing the domain data's temporal dynamics. The GI method of Nasery et al. (2021) uses adversarial training to generalize over time, altering the leaky ReLU activation for time dependence. However, its adversarial nature limits its efficiency with larger datasets or models. DRAIN Bai et al. (2023), a recent temporal DG approach, generates future model weights based on previous domains' data but is inefficient in terms of parameters. Generating weights for state-of-the-art network architectures, like transformers, becomes challenging. Most existing works demonstrate efficacy only in classification and regression, neglecting other applications, underscoring the need for a more versatile temporal DG framework. **Prompting Mechanism**: The concept of prompt-based learning has gained significant traction in the field of natural language processing (NLP) for adapting pre-trained language models (PLMs) to various downstream tasks. This framework involves conditioning the model with additional instructions to perform specific tasks. ELMo (Peters et al. (2018)), BERT (Devlin et al. (2018)), and Brown et al. (2020) introduced the approach of fine-tuning PLMs for downstream tasks through fixed prompting functions. This technique has succeeded particularly in few-shot classification tasks like sentiment analysis and natural language inference (Gao et al. (2021); Liu et al. (2021)), where manually designed prompts were employed. However, formulating such a prompting function is challenging and often demands heuristic knowledge. In response to this challenge, recent efforts such as soft prompts (Lester et al. (2021); Vu et al. (2021); Gu et al. (2021)), P-tuning V2 (Liu et al. (2021)), and prefix tuning (Li and Liang (2021)) have been made to treat prompts as adaptable parameters. It is worth noting that prompts encapsulate task-specific supervision with notably fewer supplementary parameters than competing techniques, such as Adapters (Wang et al. (2020); Pfeiffer et al. (2020)) and LoRA (Hu et al. (2021)). A different yet related angle to this topic is the casting of language modelling as a sequence-to-sequence task. This approach employs full transformer models, like the encoder-decoder paradigm, to autoregressively generate masked or altered token spans from input sequences (Raffel et al. (2020); Lewis et al. (2020)). The T5 model, introduced by Raffel et al. (2020), exemplifies this concept by treating every task as generative, where tasks are prefixed with a specific phrase to denote the operation. This approach has sparked exploration across numerous areas, from adapting language models for diverse utilities (Brown et al. (2020)), extracting sentiment or theme-centric details (Jiang et al. (2020); Sun and Lai (2020); Shin et al. (2020); Haviv et al. (2021)), and enhancing fine-tuning efficiencies (Li and Liang (2021); Scao & Rush (2021)), to functioning as few-shot learning techniques (Gao et al. (2021); Schick & Schutze (2021)). Moreover, researchers have studied the transferability of prompts (Wang et al. (2021); Vu et al. (2021); Su et al. (2021)), seeking to enhance the efficacy of prompt tuning across various tasks. Methods such as SPoT (Vu et al. (2021)) choose a prompt based on a similarity metric, whereas ATTEMPT (Asai et al.
(Asai et al., 2022) incorporates an attention mechanism over source prompts to initialize the prompt for its designated task. Wang et al. (2023) obtained a universal prompt by decomposing and distilling knowledge from source prompts. However, none of these approaches considers temporal drift, and none is designed for DG, where the target domain is unseen. This paper introduces a new prompting-based approach that is both parameter-efficient and time-efficient, designed for temporal DG. It creates domain-specific prompts to capture temporal dynamics, models time-sensitive changes, and anticipates prompts for future unseen domains.

## 3 Method

We address the problem of adapting a pre-trained model to future time periods under a realistic setting where data distributions evolve over time. Denote a set of temporal domains by \(\mathcal{D}=\{D_{t}\}\), where \(\{D_{t}\,|\,1\leq t\leq\tau\}\) are the source domains and \(\{D_{t}\,|\,t>\tau\}\) are the target domains. For example, each temporal domain may contain the data points for one year. Data points from target domains are only observed at test time. Our goal is to learn the temporal dynamics within the sequence of source domains so that they generalize directly to future unseen target domains. Our solution utilizes two types of learnable prompts: domain-specific prompts (\(P_{S(t)}\)) and temporal prompts (\(P_{T(t)}\)). The domain-specific prompts estimate the distribution \(\mathcal{P}(Y_{t}|X_{t})\) for each domain \(t\), where \(Y_{t}\) are outputs and \(X_{t}\) are inputs. The temporal prompts aim to capture the dynamics associated with temporal drift and are generated from the domain-specific prompts. In Figure 1, the left and middle subfigures illustrate the training procedure, and the right one depicts the inference step.

### Backbone Network Pre-Training

We start with a transformer-based network \(f_{\theta}\) as the model backbone. This network is pre-trained on the combined datasets from all source domains; the goal is to train \(f_{\theta}\) to maximize the likelihood \(\mathcal{P}_{\theta}(Y_{1:\tau}|X_{1:\tau})\). After pre-training, the weights of \(f_{\theta}\) are frozen in all later steps.

Figure 1: Overview of the proposed method. A set of source domains \(D_{1},D_{2},D_{3}\) and a target domain \(D_{4}\) are given. First, a backbone network is trained on the combined source domains in a pre-training phase. Then, domain-specific prompts \(P_{S1},P_{S2},P_{S3}\) are learned independently on each source domain (while keeping the backbone network frozen) to capture the characteristics of each indexed domain separately. Next, a temporal prompt generator is trained to transform the domain-specific prompts into temporal prompts \((P_{T2},P_{T3},P_{T4})\), which capture the temporal dynamics and concept drifts within the sequence of domains. Finally, to capture the general knowledge shared across all domains, the general prompt \(P_{G}\) is learned. For inference, the combination \([P_{T4};P_{G};X]\) is fed to the frozen backbone to perform the task on the target domain \(D_{4}\).

### Domain-specific Prompt Learning

The backbone network of Section 3.1 was pre-trained on the data aggregated across all source domains, without considering the differences between individual domains. Intuitively, the pre-trained network captures "average" or "general" knowledge and can fail to learn details that reflect particular domains.
Therefore, we adopt prompts to capture domain-specific information. For each domain \(t\), we prepend the input \(X\) with a prompt \(P_{S(t)}\), a set of learnable parameters. The combined sequence, \([P_{S(t)};X]\), is then processed by the frozen backbone network \(f_{\theta}\), which was pre-trained across all source domains. To learn the prompt \(P_{S(t)}\), the model is trained to maximize the likelihood \(\mathcal{P}_{\theta}(Y_{t}|[P_{S(t)};X_{t}])\) while the pre-trained parameters \(\theta\) remain frozen. Learning on each domain independently, we derive domain-specific prompts \(P_{S1},P_{S2},\ldots,P_{S(\tau)}\), effectively condensing the knowledge of each domain into a concise set of parameters. Formally, for an input sequence \(X\), the domain-specific prompt is represented as \(P_{S}\in\mathbb{R}^{n}\).

### Temporal Prompt Learning

To capture concept drift over time, we employ a temporal prompt generator that encodes the temporal dynamics into temporal prompts. This module takes in domain-specific prompts from the source domains and produces future temporal prompts. We use a single-layer transformer encoder, denoted by \(g_{\omega}\), as the temporal prompt generator. To incorporate information from the preceding domains, we apply sequential training: starting from domain \(t=2\), for each domain \(t\) the generator \(g_{\omega}\) receives the domain-specific prompts \(P_{S1},P_{S2},\ldots,P_{S(t-1)}\) as input tokens and uses them to generate the temporal prompts \(P_{T2},P_{T3},\ldots,P_{T(t)}\). Specifically, as shown in Equation 1, it generates the temporal prompt \(P_{T(t)}\) for domain \(t\) from the previous domain-specific prompts:

\[P_{T(t)}=g_{\omega}(P_{S1:(t-1)}),\quad t=2,\ldots,\tau+1 \tag{1}\]

Moreover, to capture generic information shared across all domains, we learn a general prompt \(P_{G}\in\mathbb{R}^{n}\). Finally, the input \(X\) from domain \(t\) is prepended with the general prompt \(P_{G}\) and the temporal prompt \(P_{T(t)}\in\mathbb{R}^{n}\). The result, \([P_{T(t)};P_{G};X]\), is fed into the frozen backbone network \(f_{\theta}\), pre-trained on the combined source domains as described in Section 3.1. Both \(P_{G}\) and the temporal prompt generator \(g_{\omega}\) are trained to maximize the likelihood \(\mathcal{P}_{\theta}(Y_{t}|[P_{T(t)};P_{G};X_{t}])\), while keeping the backbone network \(f_{\theta}\) frozen. The temporal prompts \(P_{T2},P_{T3},\ldots,P_{T(\tau+1)}\) capture temporal drift, help the pre-trained network adapt to changes in the data distribution over time, and anticipate future changes by modeling temporal trends.

### Inference time

During inference, the model utilizes the domain-specific prompts \(P_{S1},P_{S2},\ldots,P_{S(\tau)}\) to generate the temporal prompts \(P_{T2},P_{T3},\ldots,P_{T(\tau+1)}\). To perform the target-domain task, the frozen backbone receives the input \([P_{T(\tau+1)};P_{G};X]\) and predicts the output.

## 4 Experiments

### Implementation details

We utilize the Adam optimizer (Kingma & Ba, 2014) and set the learning rate to \(10^{-4}\) across all datasets. Our system is implemented in PyTorch and runs on a workstation powered by a 2.10 GHz Intel Xeon Gold 6230 CPU with 20 cores, paired with an NVIDIA RTX 5000 GPU. For each dataset, we tune the hyperparameters based on the suggestions from Bai et al. (2023).
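To make the prompting pipeline of Section 3 concrete, the following is a minimal PyTorch-style sketch; it is not the authors' code. The prompt length, the embedding-level concatenation, and the choice of reading the last encoder token off as the next temporal prompt are all our assumptions, and the training loop is schematic.

```python
import torch
import torch.nn as nn

class PromptedBackbone(nn.Module):
    """Frozen backbone f_theta consuming [prompts; X] along the sequence axis."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # theta stays frozen after pre-training

    def forward(self, prompts: torch.Tensor, x_emb: torch.Tensor) -> torch.Tensor:
        # prompts: (batch, n_prompt, d); x_emb: (batch, seq_len, d)
        return self.backbone(torch.cat([prompts, x_emb], dim=1))

class TemporalPromptGenerator(nn.Module):
    """g_omega: a single-layer transformer encoder over P_S1..P_S(t-1)."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, domain_prompts: torch.Tensor) -> torch.Tensor:
        # domain_prompts: (1, t-1, d) stacking P_S1..P_S(t-1), cf. Eq. (1).
        # Reading the last output token off as P_T(t) is our assumption.
        return self.encoder(domain_prompts)[:, -1:, :]

# Learning a domain-specific prompt P_S(t) with the backbone frozen
# (only the prompt receives gradients):
d_model, n_prompt = 128, 8                       # hypothetical sizes
P_S_t = nn.Parameter(0.02 * torch.randn(1, n_prompt, d_model))
optimizer = torch.optim.Adam([P_S_t], lr=1e-4)   # lr as in Section 4.1
# for x_emb, y in domain_t_loader:               # hypothetical dataloader
#     out = model(P_S_t.expand(x_emb.size(0), -1, -1), x_emb)
#     loss = task_loss(out, y)                   # task-specific loss
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```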
Additional experiment settings and results (e.g., network architectures and additional ablation results) are provided in the appendix.

### Competing Methods

We compare our model with several state-of-the-art methods, including the temporal domain generalization methods DRAIN (Bai et al., 2023) and GI (Nasery et al., 2021), the continuous domain adaptation methods CDOT (Ortiz-Jimenez et al., 2019) and CIDA (Wang et al., 2020), and the prompting method ATTEMPT (Asai et al., 2022), to validate the effectiveness of our temporal prompts. It is important to highlight that the original DRAIN employs two fully connected layers (DRAIN-2FC) in both its encoding and decoding functions to transform the latent representations between LSTM units. To potentially boost DRAIN's performance, we also explored using three and four linear layers in both functions; we call these models DRAIN-3FC and DRAIN-4FC, respectively. DRAIN-Best refers to the configuration achieving the highest performance among these encoding/decoding variants. We also compare against several baseline methods that do not consider temporal drift: 1) Vanilla-MLP, the MLP-based backbone network from DRAIN (Bai et al., 2023), trained on the combined source domains; and 2) Vanilla-Transformer, the transformer-based backbone network of our method, trained on the combination of all source domains.

### Synthetic Data

To comprehensively evaluate the proposed framework, we constructed four synthetic datasets. The first two derive from the Mackey-Glass equations (Mackey & Glass, 1977), as shown in Equation 2; the remaining two are based on cosine waves, defined in Equation 3. To introduce temporal shift, we employed two strategies: altering the data-generating parameters directly, or adding a cosine wave with varying phase and frequency across domains.

\[x(t+1)=x(t)+\beta\frac{x(t-\sigma)}{1+x^{n}(t-\sigma)}-\gamma x(t),\quad\left\{\begin{array}{l}\beta=0.2\\ \gamma=0.1\\ n=15\\ \sigma=18\\ t_{\text{max}}=2600\end{array}\right.,\quad x(t)=0.1\ \text{if}\ t<18 \tag{2}\]

\[x(t)=\cos\left(a+\frac{\pi h}{\alpha}t\right)+\cos\left(b+\frac{\pi}{\beta}t\right),\quad\left\{\begin{array}{l}\alpha=100\\ \beta=13\\ a=40\\ b=10\\ h=1\end{array}\right.,\quad 0<t<2600 \tag{3}\]

**Data alteration**: For the Mackey-Glass data, we induce temporal shift by setting \(\sigma=8+2i\) for each domain \(i\). For the cosine waves, we induce temporal shift by setting \(a=i\) and \(h=i+1\) for each domain \(i\).

**Adding a variable cosine wave**: For the Mackey-Glass time series, we add Equation 4 to the base Equation 2 for each domain \(i\) (see examples in Figure 5). For the cosine waves, we go one step further and add the same wave after the data alteration (see examples in Figure 6). More visualizations of the synthetic datasets are shown in the appendix.

\[0.5\times\cos\left(100i+\frac{\pi(i+1)}{300}t\right) \tag{4}\]

Results on the four synthetic datasets are summarized in Table 1; we also qualitatively visualize the results on the cosine waves in Figure 3. Our proposed framework consistently outperforms the Vanilla-Transformer, DRAIN, and ATTEMPT models on synthetic data. Quantitatively, our model achieves the lowest MSE on both the Mackey-Glass and Sum-of-Cosine-Waves datasets under either type of temporal drift; qualitatively, it also demonstrates superior adaptability and accuracy.
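For reference, here is a small NumPy sketch of how the synthetic domains above can be generated from Equations (2)-(4). The function names and array-handling details are our own; only the recursions and parameter values come from the text.

```python
import numpy as np

def mackey_glass(sigma=18, beta=0.2, gamma=0.1, n=15, t_max=2600):
    """Discrete Mackey-Glass recursion of Eq. (2), with x(t) = 0.1 for t < sigma."""
    x = np.full(t_max, 0.1)
    for t in range(sigma, t_max - 1):
        x[t + 1] = x[t] + beta * x[t - sigma] / (1.0 + x[t - sigma] ** n) - gamma * x[t]
    return x

def cosine_wave(a=40, b=10, h=1, alpha=100, beta=13, t_max=2600):
    """Sum of cosine waves of Eq. (3); data alteration sets a = i, h = i + 1."""
    t = np.arange(t_max)
    return np.cos(a + np.pi * h / alpha * t) + np.cos(b + np.pi / beta * t)

def mg_domain(i, drift="alteration", t_max=2600):
    """Domain i of the Mackey-Glass datasets under either drift strategy."""
    if drift == "alteration":                    # sigma = 8 + 2*i per domain
        return mackey_glass(sigma=8 + 2 * i, t_max=t_max)
    t = np.arange(t_max)                         # added wave of Eq. (4)
    return mackey_glass(t_max=t_max) + 0.5 * np.cos(100 * i + np.pi * (i + 1) / 300.0 * t)
```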
**Datasets**: One time series dataset: Crypto (Arik et al., 2022); three classification datasets: Rotated Moons (2-Moons) (Nasery et al., 2021), Online News Popularity (ONP) (Ben-David et al., 2010), and Electrical Demand (Elec2) (Nasery et al., 2021); and two regression datasets: House Prices (House) (Nasery et al., 2021) and Appliances Energy Prediction (Appliance) (Bai et al., 2023). For the classification and regression datasets (2-Moons, ONP, Elec2, House, and Appliance), we followed the procedure outlined in Bai et al. (2023) to partition each dataset into distinct temporal domains. The Crypto dataset contains 8 features on historical trades (e.g., open and close prices) for 14 cryptocurrencies, spanning 2018 to 2021. Our goal is to generate 15-step predictions of the 15-minute relative future returns (the target), with each step representing a 1-minute increment from the previous one. We consider each month as one domain. We used the initial 90% of entries from each month in 2018, 2019, and 2020 for training (across 36 domains), reserving the remaining 10% of entries for _in-domain_ testing. The data from the first month of 2021 was designated for validation, with the subsequent three months of 2021 allocated for testing.

#### 4.4.1 Experimental Results

Table 2 summarizes the results in comparison to other state-of-the-art methods. The experiments are conducted 10 times for each method on every dataset, with both the mean and the standard deviation reported. Our proposed method yields lower errors in all instances except on the 2-Moons dataset, where it significantly outperforms the baselines but falls short of the two recent domain generalization methods DRAIN (Bai et al., 2023) and GI (Nasery et al., 2021). This may be attributed to the low dimensionality of the 2-Moons dataset (only 2 dimensions), which leads to less generalizable backbones for prompt-based approaches (as evidenced by the poor performance of ATTEMPT as well). Table 3 shows the time series forecasting results on the Crypto dataset. To ensure a fair comparison, DRAIN, ATTEMPT, and our method all adopt the same backbone network, Vanilla-Transformer.

Table 1: Comparison of our proposed framework, the Vanilla-Transformer, and other state-of-the-art methods on synthetic data generated from the Mackey-Glass equations or cosine waves.

| Base Data | Method | Data Alter. [MSE ↓] | Adding cosine wave [MSE ↓] |
|---|---|---|---|
| Mackey Glass | DRAIN-Best | 0.1140 | 0.2164 |
| Mackey Glass | Vanilla Transformer | 0.1315 | 0.2511 |
| Mackey Glass | ATTEMPT | 0.1278 | 0.2199 |
| Mackey Glass | Ours | **0.0982** | **0.1975** |
| Sum of cosine waves | DRAIN-Best | 0.0085 | 0.2937 |
| Sum of cosine waves | Vanilla Transformer | 0.0119 | 0.3708 |
| Sum of cosine waves | ATTEMPT | 0.0091 | 0.2974 |
| Sum of cosine waves | Ours | **0.0068** | **0.2489** |
Table 2: Performance comparison of all methods in terms of classification error (%) for classification tasks and mean absolute error (MAE) for regression tasks (lower is better for both). Results of comparison methods on all datasets are reported from Bai et al. (2023). "-" denotes that the method could not converge on the specific dataset.

| Method | 2-Moons [% error ↓] | ONP [% error ↓] | Elec2 [% error ↓] | House [MAE ↓] | Appliance [MAE ↓] |
|---|---|---|---|---|---|
| Vanilla-MLP | 22.4 ± 4.6 | 33.8 ± 0.6 | 23.0 ± 3.1 | 11.0 ± 0.36 | 10.2 ± 1.1 |
| CDOT | 9.3 ± 1.0 | 34.1 ± 0.0 | 17.8 ± 0.6 | - | - |
| CIDA | 10.8 ± 1.6 | 34.7 ± 0.6 | 14.1 ± 0.2 | 9.7 ± 0.06 | 8.7 ± 0.2 |
| GI | 3.5 ± 1.4 | 36.4 ± 0.8 | 16.9 ± 0.7 | 9.6 ± 0.02 | 8.2 ± 0.6 |
| DRAIN (Bai et al., 2023) | **3.2 ± 1.2** | 38.3 ± 1.2 | 12.7 ± 0.8 | 9.3 ± 0.14 | 6.4 ± 0.4 |
| Vanilla-Transformer | 25.2 ± 0.9 | 33.6 ± 0.5 | 22.5 ± 0.6 | 11.8 ± 0.3 | 5.6 ± 0.4 |
| ATTEMPT (Asai et al., 2022) | 21.15 ± 1.1 | 34.10 ± 0.6 | 12.26 ± 0.8 | 9.0 ± 0.4 | 4.9 ± 0.5 |
| Ours | 8.1 ± 1.0 | **32.7 ± 0.7** | **10.6 ± 0.9** | **8.9 ± 0.20** | **4.7 ± 0.3** |

We explored two settings: one with fixed-length input sequences and the other with variable-length input sequences. Our model is notably more accurate under both settings (with a lower RMSE) than DRAIN, Vanilla-Transformer, and ATTEMPT. Further, our method is significantly more parameter- and time-efficient than the current state-of-the-art temporal domain generalization method, DRAIN. While ATTEMPT, also a prompt-based approach, matches our efficiency in terms of parameters and time, it falls short in performance due to its inability to model temporal drift.

### Ablation studies

First, we conduct ablation studies on the Crypto and Elec2 datasets to assess the impact of the proposed prompts. Table 4 shows that both prompting mechanisms, \(P_{T}\) and \(P_{G}\), contribute to better performance. Next, to study the impact of the number of training domains on model performance, we conduct another ablation study on the Mackey-Glass synthetic data (MG) with varying numbers of training domains, as shown in Table 5. We observe that our model's performance improves as the number of source domains increases, as a greater number of observed source domains makes the temporal patterns more evident.

Table 4: Ablation of the effect of \(P_{G}\) and \(P_{T}\) on the Crypto and Elec2 datasets. ✓ indicates the prompt being used.

| \(P_{G}\) | \(P_{T}\) | Crypto \(D_{t1}\) [RMSE ×10³ ↓] | Crypto \(D_{t2}\) | Crypto \(D_{t3}\) | Elec2 \(D_{t}\) [MAE ↓] |
|---|---|---|---|---|---|
| ✓ |  | 3.57 | 6.66 | 6.84 | 14.9 |
|  | ✓ | 3.53 | 6.71 | 6.80 | 14.7 |
| ✓ | ✓ | 3.53 | 6.61 | 6.74 | 10.6 |
Table 3: Time series forecasting results on the Crypto dataset under fixed- and variable-length input settings (columns: Method, #Parameters, Training time (s), In-domain, \(D_{t1}\), \(D_{t2}\), \(D_{t3}\)); the body of this table was not recoverable.

## 5 Conclusion

The efficacy of machine learning often rests on the assumption that training and testing data are independently and identically distributed, an assumption that is challenged by distribution shifts and concept drifts. This paper studied scenarios where the data distribution evolves over time; such temporal drift emphasizes the need for temporal domain generalization (DG). We proposed a parameter- and time-efficient prompting-based temporal DG method that adeptly adapts pre-trained models to unforeseen future domains across various tasks, encompassing classification, regression, and time series forecasting. This represents a significant stride toward anticipating and adapting models to future domains using information from previous domains.
2303.15037
New trends in the general relativistic Poynting-Robertson effect modeling
The general relativistic Poynting-Robertson (PR) effect is an important dissipative phenomenon occurring in high-energy astrophysics. Recently, a new model has been proposed that upgrades the two-dimensional (2D) description to the three-dimensional (3D) case in Kerr spacetime. The radiation field is taken to consist of photons emitted from a rigidly rotating spherical source around the compact object. This dynamical system admits the existence of a critical hypersurface, a region where the gravitational and radiation forces balance and on which the matter ends its motion. Selected test particle orbits are displayed. We show how to prove the stability of these critical hypersurfaces within Lyapunov theory. We then present how to study this effect under the Lagrangian formalism, explaining how to analytically derive the Rayleigh potential for the radiation force. In conclusion, further developments and future projects are discussed.
Vittorio De Falco
2023-03-27T09:34:45Z
http://arxiv.org/abs/2303.15037v1
# New trends in the general relativistic Poynting-Robertson effect modeling

###### Abstract

The general relativistic Poynting-Robertson (PR) effect is an important dissipative phenomenon occurring in high-energy astrophysics. Recently, a new model has been proposed that upgrades the two-dimensional (2D) description to the three-dimensional (3D) case in Kerr spacetime. The radiation field is taken to consist of photons emitted from a rigidly rotating spherical source around the compact object. This dynamical system admits the existence of a critical hypersurface, a region where the gravitational and radiation forces balance and on which the matter ends its motion. Selected test particle orbits are displayed. We show how to prove the stability of these critical hypersurfaces within Lyapunov theory. We then present how to study this effect under the Lagrangian formalism, explaining how to analytically derive the Rayleigh potential for the radiation force. In conclusion, further developments and future projects are discussed.

## 1 Introduction

The revolutionary discoveries of the last four years, namely the detection of gravitational waves first from a binary black hole (BH) system [1] and then from a binary neutron star (NS) system [2], together with the first imaging of the matter motion around the supermassive BH in the M87 galaxy [3], constitute a strong motivation to improve current theoretical models in order to validate Einstein's theory, or possible extensions of it, when benchmarked against observations.

The motion of relatively small-sized test particles, like dust grains, gas clouds, meteors, or accretion disk matter elements, around radiating sources located outside massive compact objects is strongly affected by gravitational and radiation fields, and an important effect to be taken into account is the general relativistic PR effect [4, 5]. This phenomenon occurs whenever the radiation field invests the test particle, raising its temperature; by the Stefan-Boltzmann law, the particle then starts re-emitting radiation. This process of absorption and re-emission of radiation generates a recoil force opposite to the test body's orbital motion. Such a mechanism very efficiently removes angular momentum and energy from the test particle, forcing it to spiral inward or outward depending on the radiation field intensity. This effect has been extensively studied in Newtonian gravity within classical mechanics [4] and special relativity [5], and then applied in the Solar system [6]. Only in 2009-2011 was this model formulated in General Relativity (GR) by Bini and collaborators, within the equatorial plane of the Kerr spacetime [7, 8]. Recently, it has been extended to the 3D space in the Kerr metric [9, 10, 11]. One of the most evident implications of this effect is the formation of stable structures, termed critical hypersurfaces, around the compact object [12]. The phenomenon has also been analysed under a Lagrangian formulation [13, 14, 15]. The novelty of this approach consists in the introduction of new techniques to deal with the non-linearities of gravity, based on two fundamental aspects: (1) the use of an integrating factor to obtain closed differential forms [13]; (2) the development of a new method, termed the _energy formalism_, which permits the analytic determination of the Rayleigh potential associated with the radiation force [14, 15].

The article is structured as follows: in Sec. 2 the 3D model and its properties are described; in Sec. 3 the stability of the critical hypersurfaces is discussed within Lyapunov theory; in Sec. 4 we analytically determine the Rayleigh dissipation function using the energy formalism; finally, in Sec. 5 the conclusions are drawn.
## 2 General relativistic 3D PR effect model

We consider a rotating compact object, whose geometry is described by the Kerr metric. Using the signature \((-,+,+,+)\) and geometrical units (\(c=G=1\)), the metric line element, \(ds^{2}=g_{\alpha\beta}dx^{\alpha}dx^{\beta}\), in Boyer-Lindquist coordinates, parameterized by mass \(M\) and spin \(a\), reads as [16]

\[\mathrm{d}s^{2}=\left(\frac{2Mr}{\Sigma}-1\right)\mathrm{d}t^{2}-\frac{4Mra\sin^{2}\theta}{\Sigma}\mathrm{d}t\mathrm{d}\varphi+\frac{\Sigma}{\Delta}\mathrm{d}r^{2}+\Sigma\mathrm{d}\theta^{2}+\rho\sin^{2}\theta\mathrm{d}\varphi^{2}, \tag{2.1}\]

where \(\Sigma\equiv r^{2}+a^{2}\cos^{2}\theta\), \(\Delta\equiv r^{2}-2Mr+a^{2}\), and \(\rho\equiv r^{2}+a^{2}+2Ma^{2}r\sin^{2}\theta/\Sigma\). The determinant of the metric is \(g=-\Sigma^{2}\sin^{2}\theta\). The orthonormal frame adapted to the zero angular momentum observers (ZAMOs) is [9, 10]

\[\mathbf{e}_{\hat{t}}\equiv\mathbf{n}=\frac{(\mathbf{\partial}_{t}-N^{\varphi}\mathbf{\partial}_{\varphi})}{N},\quad\mathbf{e}_{\hat{r}}=\frac{\mathbf{\partial}_{r}}{\sqrt{g_{rr}}},\quad\mathbf{e}_{\hat{\theta}}=\frac{\mathbf{\partial}_{\theta}}{\sqrt{g_{\theta\theta}}},\quad\mathbf{e}_{\hat{\varphi}}=\frac{\mathbf{\partial}_{\varphi}}{\sqrt{g_{\varphi\varphi}}}, \tag{2.2}\]

where \(N=(-g^{tt})^{-1/2}\) is the lapse function and \(N^{\varphi}=g_{t\varphi}/g_{\varphi\varphi}\) the shift. The nonzero ZAMO kinematical quantities in the decomposition of the ZAMO congruence are the acceleration \(\mathbf{a}(n)=\nabla_{\mathbf{n}}\mathbf{n}\), the expansion tensor along the \(\hat{\varphi}\)-direction \(\mathbf{\theta}_{\hat{\varphi}}(n)\), and the relative Lie curvature vector \(\mathbf{k}_{\rm(Lie)}(n)\) (see Table 1 in [9] for their explicit expressions).

The radiation field is modeled as a coherent flux of photons traveling along null geodesics of the Kerr metric. The related stress-energy tensor is [9, 10]

\[T^{\mu\nu}=\mathcal{I}^{2}k^{\mu}k^{\nu}\,,\qquad k^{\mu}k_{\mu}=0,\qquad k^{\mu}\nabla_{\mu}k^{\nu}=0, \tag{2.3}\]

where \(\mathcal{I}\) is a parameter linked to the radiation field intensity and \(\mathbf{k}\) is the photon four-momentum field. Splitting \(\mathbf{k}\) with respect to the ZAMO frame, we obtain [10]

\[\mathbf{k}=E(n)[\mathbf{n}+\hat{\mathbf{\nu}}(k,n)], \tag{2.4}\]

\[\hat{\mathbf{\nu}}(k,n)=\sin\zeta\sin\beta\ \mathbf{e}_{\hat{r}}+\cos\zeta\ \mathbf{e}_{\hat{\theta}}+\sin\zeta\cos\beta\ \mathbf{e}_{\hat{\varphi}}, \tag{2.5}\]

where \(E(n)\) is the photon energy measured in the ZAMO frame, \(\hat{\mathbf{\nu}}(k,n)\) is the photon spatial unit relative velocity with respect to the ZAMOs, and \(\beta\) and \(\zeta\) are the two angles measured in the ZAMO frame in the azimuthal and polar directions, respectively. The radiation field is governed by the two impact parameters \((b,q)\), associated respectively with the two emission angles \((\beta,\zeta)\). The radiation field photons are emitted from a rigid spherical surface of radius \(R_{\star}\), centered at the origin of the Boyer-Lindquist coordinates and rotating with angular velocity \(\Omega_{\star}\).
The photon impact parameters are [10]

\[b=-\left[\frac{g_{t\varphi}+g_{\varphi\varphi}\Omega_{\star}}{g_{tt}+g_{t\varphi}\Omega_{\star}}\right]_{r=R_{\star}},\quad q=\left[b^{2}\cot^{2}\theta-a^{2}\cos^{2}\theta\right]_{r=R_{\star}}. \tag{2.6}\]

The related photon angles in the ZAMO frame are [10]

\[\cos\beta=\frac{bN}{\sqrt{g_{\varphi\varphi}}(1+bN^{\varphi})},\qquad\zeta=\pi/2. \tag{2.7}\]

The parameter \(\mathcal{I}\) has the following expression [10]

\[\mathcal{I}^{2}=\frac{\mathcal{I}_{0}^{2}}{\sqrt{\left(r^{2}+a^{2}-ab\right)^{2}-\Delta\left[q+\left(b-a\right)^{2}\right]}}, \tag{2.8}\]

where \(\mathcal{I}_{0}\) is \(\mathcal{I}\) evaluated at the emitting surface.

A test particle moves with a timelike four-velocity \(\mathbf{U}\) and a spatial three-velocity with respect to the ZAMO frames, \(\mathbf{\nu}(U,n)\), which read as [10]

\[\mathbf{U}=\gamma(U,n)[\mathbf{n}+\mathbf{\nu}(U,n)], \tag{2.9}\]

\[\mathbf{\nu}=\nu(\sin\psi\sin\alpha\,\mathbf{e}_{\hat{r}}+\cos\psi\,\mathbf{e}_{\hat{\theta}}+\sin\psi\cos\alpha\,\mathbf{e}_{\hat{\varphi}}), \tag{2.10}\]

where \(\gamma(U,n)\equiv\gamma=1/\sqrt{1-||\mathbf{\nu}(U,n)||^{2}}\) is the Lorentz factor and \(\nu=||\mathbf{\nu}(U,n)||\). Here \(\nu\) represents the magnitude of the test particle spatial velocity \(\mathbf{\nu}(U,n)\), \(\alpha\) is the azimuthal angle of the vector \(\mathbf{\nu}(U,n)\), measured clockwise from the positive \(\hat{\varphi}\) direction in the \(\hat{r}\)-\(\hat{\varphi}\) tangent plane in the ZAMO frame, and \(\psi\) is the polar angle of the vector \(\mathbf{\nu}(U,n)\), measured from the axis orthogonal to the \(\hat{r}\)-\(\hat{\varphi}\) tangent plane in the ZAMO frame.

We assume that the radiation-test particle interaction occurs through Thomson scattering, characterized by a constant momentum-transfer cross section \(\sigma\), independent of the direction and frequency of the radiation field. We can split the photon four-momentum (2.4) in terms of the velocity \(\mathbf{U}\) as [10]

\[\mathbf{k}=E(U)[\mathbf{U}+\hat{\mathbf{\mathcal{V}}}(k,U)], \tag{2.11}\]

where \(E(U)\) is the photon energy measured by the test particle. The radiation force can be written as [10]

\[\mathcal{F}_{\rm(rad)}(U)^{\hat{\alpha}}\equiv-\tilde{\sigma}\mathcal{I}^{2}({T^{\hat{\alpha}}}_{\hat{\beta}}U^{\hat{\beta}}+U^{\hat{\alpha}}{T^{\hat{\mu}}}_{\hat{\beta}}U_{\hat{\mu}}U^{\hat{\beta}})=\tilde{\sigma}\left[\mathcal{I}E(U)\right]^{2}\hat{\mathcal{V}}(k,U)^{\hat{\alpha}}, \tag{2.12}\]

where \(\tilde{\sigma}=\sigma/m\), with \(m\) the test particle mass, and the term \(\tilde{\sigma}[\mathcal{I}E(U)]^{2}\) reads as [10]

\[\tilde{\sigma}[\mathcal{I}E(U)]^{2}=\frac{A\,\gamma^{2}(1+bN^{\varphi})^{2}[1-\nu\sin\psi\cos(\alpha-\beta)]^{2}}{N^{2}\sqrt{\left(r^{2}+a^{2}-ab\right)^{2}-\Delta\left[q+\left(b-a\right)^{2}\right]}}, \tag{2.13}\]

with \(A=\tilde{\sigma}[\mathcal{I}_{0}E_{p}]^{2}\) the luminosity parameter, which can be equivalently written as \(A/M=L/L_{\rm EDD}\in[0,1]\), with \(L\) the emitted luminosity at infinity and \(L_{\rm EDD}\) the Eddington luminosity, and \(E_{p}=-k_{t}\) the conserved photon energy along the test particle trajectory.
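As a side illustration, Eqs. (2.6)-(2.7) can be evaluated numerically from the metric components of Eq. (2.1). The following is a minimal NumPy sketch with our own function names, not code from the original works; units are geometrical (\(G=c=1\)) and \(\cos\beta\) equals the critical-hypersurface velocity \(\nu\) of Eq. (2.22) below.

```python
import numpy as np

def kerr_metric(r, theta, M=1.0, a=0.0):
    """Covariant t-phi block of the Kerr metric, Eq. (2.1)."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    rho = r**2 + a**2 + 2.0 * M * a**2 * r * np.sin(theta)**2 / Sigma
    g_tt = 2.0 * M * r / Sigma - 1.0
    g_tphi = -2.0 * M * r * a * np.sin(theta)**2 / Sigma
    g_phiphi = rho * np.sin(theta)**2
    return g_tt, g_tphi, g_phiphi

def impact_parameter_b(R_star, Omega_star, M=1.0, a=0.0, theta=np.pi/2):
    """Photon impact parameter b of Eq. (2.6), evaluated at r = R_star."""
    g_tt, g_tphi, g_phiphi = kerr_metric(R_star, theta, M, a)
    return -(g_tphi + g_phiphi * Omega_star) / (g_tt + g_tphi * Omega_star)

def cos_beta(r, b, M=1.0, a=0.0, theta=np.pi/2):
    """cos(beta) of Eq. (2.7); on the critical hypersurface this equals nu."""
    g_tt, g_tphi, g_phiphi = kerr_metric(r, theta, M, a)
    N = np.sqrt(-(g_tt - g_tphi**2 / g_phiphi))  # lapse, N = (-g^tt)^(-1/2)
    N_phi = g_tphi / g_phiphi                    # shift, as in the text
    return b * N / (np.sqrt(g_phiphi) * (1.0 + b * N_phi))

# Example with the NS parameters of Fig. 1 (right panel):
# b = impact_parameter_b(R_star=6.0, Omega_star=0.031, a=0.41)
# nu = cos_beta(r=10.0, b=b, a=0.41)   # candidate radius r = 10 M
```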
The terms \(\hat{\mathcal{V}}(k,U)^{\hat{\alpha}}\) are the radiation field components, whose expressions are [10]

\[\hat{\mathcal{V}}^{\hat{r}}=\frac{\sin\beta}{\gamma[1-\nu\sin\psi\cos(\alpha-\beta)]}-\gamma\nu\sin\psi\sin\alpha,\quad\hat{\mathcal{V}}^{\hat{\theta}}=-\gamma\nu\cos\psi, \tag{2.14}\]
\[\hat{\mathcal{V}}^{\hat{\varphi}}=\frac{\cos\beta}{\gamma[1-\nu\sin\psi\cos(\alpha-\beta)]}-\gamma\nu\sin\psi\cos\alpha,\quad\hat{\mathcal{V}}^{\hat{t}}=\gamma\nu\left[\frac{\sin\psi\cos(\alpha-\beta)-\nu}{1-\nu\sin\psi\cos(\alpha-\beta)}\right].\]

Collecting all this information, it is possible to derive the resulting equations of motion for a test particle moving in the 3D space, which are [10]

\[\begin{split}\frac{d\nu}{d\tau}&=-\frac{1}{\gamma}\left\{\sin\alpha\sin\psi\left[a(n)^{\hat{r}}+2\nu\cos\alpha\sin\psi\,\theta(n)^{\hat{r}}{}_{\hat{\varphi}}\right]\right.\\ &\quad\left.+\cos\psi\left[a(n)^{\hat{\theta}}+2\nu\cos\alpha\sin\psi\,\theta(n)^{\hat{\theta}}{}_{\hat{\varphi}}\right]\right\}+\frac{\tilde{\sigma}[\mathcal{I}E(U)]^{2}}{\gamma^{3}\nu}\hat{\mathcal{V}}^{\hat{t}},\end{split} \tag{2.15}\]

\[\begin{split}\frac{d\psi}{d\tau}&=\frac{\gamma}{\nu}\left\{\sin\psi\left[a(n)^{\hat{\theta}}+k_{\rm(Lie)}(n)^{\hat{\theta}}\,\nu^{2}\cos^{2}\alpha+2\nu\cos\alpha\sin^{2}\psi\,\theta(n)^{\hat{\theta}}{}_{\hat{\varphi}}\right]\right.\\ &\quad\left.-\sin\alpha\cos\psi\left[a(n)^{\hat{r}}+k_{\rm(Lie)}(n)^{\hat{r}}\,\nu^{2}+2\nu\cos\alpha\sin\psi\,\theta(n)^{\hat{r}}{}_{\hat{\varphi}}\right]\right\}\\ &\quad+\frac{\tilde{\sigma}[\mathcal{I}E(U)]^{2}}{\gamma\nu^{2}\sin\psi}\left[\hat{\mathcal{V}}^{\hat{t}}\cos\psi-\hat{\mathcal{V}}^{\hat{\theta}}\nu\right],\end{split} \tag{2.16}\]

\[\begin{split}\frac{d\alpha}{d\tau}&=-\frac{\gamma\cos\alpha}{\nu\sin\psi}\left[a(n)^{\hat{r}}+2\theta(n)^{\hat{r}}{}_{\hat{\varphi}}\,\nu\cos\alpha\sin\psi+k_{\rm(Lie)}(n)^{\hat{r}}\,\nu^{2}\right.\\ &\quad\left.+k_{\rm(Lie)}(n)^{\hat{\theta}}\,\nu^{2}\cos^{2}\psi\sin\alpha\right]+\frac{\tilde{\sigma}[\mathcal{I}E(U)]^{2}\cos\alpha}{\gamma\nu\sin\psi}\left[\hat{\mathcal{V}}^{\hat{r}}-\hat{\mathcal{V}}^{\hat{\varphi}}\tan\alpha\right],\end{split} \tag{2.17}\]

\[U^{\hat{r}}\equiv\frac{dr}{d\tau}=\frac{\gamma\nu\sin\alpha\sin\psi}{\sqrt{g_{rr}}}, \tag{2.18}\]
\[U^{\hat{\theta}}\equiv\frac{d\theta}{d\tau}=\frac{\gamma\nu\cos\psi}{\sqrt{g_{\theta\theta}}}, \tag{2.19}\]
\[U^{\hat{\varphi}}\equiv\frac{d\varphi}{d\tau}=\frac{\gamma\nu\cos\alpha\sin\psi}{\sqrt{g_{\varphi\varphi}}}-\frac{\gamma N^{\varphi}}{N}, \tag{2.20}\]
\[U^{\hat{t}}\equiv\frac{dt}{d\tau}=\frac{\gamma}{N}, \tag{2.21}\]

where \(\tau\) is the affine parameter along the test particle trajectory.

### Critical hypersurfaces

The dynamical system defined by Eqs. (2.15)-(2.20) exhibits a critical hypersurface around the compact object, where the gravitational and radiation forces balance; see Fig. 1. On this region the test particle moves on purely circular orbits with constant velocity (\(\nu={\rm const}\)) with respect to the ZAMO frame (\(\alpha=0,\pi\)), with the polar axis orthogonal to the critical hypersurface (\(\psi=\pm\pi/2\)).
These requirements entail \(d\nu/d\tau=d\alpha/d\tau=0\), from which we have [10]

\[\nu=\cos\beta, \tag{2.22}\]

\[a(n)^{\hat{r}}+2\theta(n)^{\hat{r}}{}_{\hat{\varphi}}\nu+k_{\rm(Lie)}(n)^{\hat{r}}\nu^{2}=\frac{A(1+bN^{\varphi})^{2}\sin^{3}\beta}{N^{2}\gamma\sqrt{\left(r_{\rm(crit)}^{2}+a^{2}-ab\right)^{2}-\Delta_{\rm(crit)}\left[q+(b-a)^{2}\right]}}, \tag{2.23}\]

where the first condition means that the test particle moves on the critical hypersurface with constant velocity equal to the azimuthal photon velocity, while the second condition determines the critical radius \(r_{\rm(crit)}\) as a function of the polar angle through an implicit equation, once \(A,a,R_{\star},\Omega_{\star}\) are assigned. In general we have \(d\psi/d\tau\neq 0\), because the angle \(\psi\) changes during the test particle motion on the critical hypersurface, giving rise to the so-called _latitudinal drift_. This effect, which occurs through the interplay of the gravitational and radiation actions in the polar direction, eventually brings the test particle to the equatorial plane [9, 10]. Only for \(\psi=\theta=\pi/2\) do we have \(d\psi/d\tau=0\), corresponding to the equatorial ring. However, we can also have \(d\psi/d\tau=0\) for some \(\theta=\bar{\theta}\neq\pi/2\), giving rise to the so-called _suspended orbits_. The condition for this last configuration for \(b\neq 0\) reads as [10]

\[\begin{split}& a(n)^{\hat{\theta}}+k_{\rm(Lie)}(n)^{\hat{\theta}}\,\nu^{2}+2\nu\sin^{2}\psi\ \theta(n)^{\hat{\theta}}{}_{\hat{\varphi}}\\ &+\frac{A(1+bN^{\varphi})^{2}(1-\cos^{2}\beta\sin\psi)\cos\beta}{\gamma N^{2}\sqrt{\left(r_{\rm(crit)}^{2}+a^{2}-ab\right)^{2}-\Delta_{\rm(crit)}\left[q+\left(b-a\right)^{2}\right]}\tan\psi}=0,\end{split} \tag{2.24}\]

which can be solved in terms of \(\psi\). Instead, for \(b=0\) we obtain \(\psi=\pm\pi/2\) [9]. The critical points are either the suspended orbits or the equatorial ring, where the test particle ends its motion. In Fig. 2 we display some selected test particle trajectories to give an idea of how the PR effect alters the motion of matter surrounding a radiation source around a compact object [10].

Figure 1: Left panel: Critical hypersurfaces for \(\Omega_{\star}=0\) and the luminosity parameters \(A=0.5,\,0.7,\,0.8,\,0.85,\,0.87,\,0.9\) at a constant spin \(a=0.9995\). The respective critical radii in the equatorial plane are \(r_{\rm(crit)}^{\rm eq}\sim 2.71M,4.01M,5.52M,7.04M,7.99M,10.16M\), while at the poles they are \(r_{\rm(crit)}^{\rm pole}\sim 2.97M,4.65M,6.56M,8.38M,9.48M,11.9M\). Right panel: Critical hypersurfaces for a NS (grey sphere) with \(\Omega_{\star}=0.031\), \(R_{\star}=6M\), and luminosity parameters \(A=0.75,\,0.78,\,0.8,\,0.85,\,0.88\) at a constant spin \(a=0.41\). The respective critical radii in the equatorial plane are \(r_{\rm(crit)}^{\rm eq}\sim 8.88M,\,10.61M,\,12.05M,\,17.26M,\,22.43M\), while at the poles they are \(r_{\rm(crit)}^{\rm pole}\sim 4.73M,\,5.28M,\,5.74M,\,7.43M,\,9.11M\). The red arrow is the polar axis.

## 3 Stability of the critical hypersurfaces

To prove the stability of the critical hypersurfaces, we consider only those initial configurations where the test particle ends its motion on them without escaping to infinity. Once stability has been proven, it immediately follows that the critical equatorial ring is a stable attractor (the region toward which the test particle is attracted at the end of its motion), and the whole critical hypersurface is a basin of attraction [12].
Bini and collaborators proved this only in the Schwarzschild case, within linear stability theory (see the Appendix of Ref. [8]). This method consists in linearizing the dynamical system around the critical points of the critical hypersurface and then inspecting its eigenvalues. Theoretically this method is simple, but in practice it entails lengthy calculations (especially in the Kerr case). There is a simpler and more physical approach, based on Lyapunov theory.

Figure 2: Left panel: Test particle trajectories around a NS of spin \(a=0.41\), radius \(R_{\star}=6M\), angular velocity \(\Omega_{\star}=0.031\), and luminosity parameter \(A=0.8\), starting at the position \((r_{0},\theta_{0})=(15M,10^{\circ})\) with the initial velocity \(\nu_{0}=0.01\) oriented in the azimuthal corotating direction (orange) and oriented radially towards the emitting surface (red). Right panel: Test particle trajectories around a NS of spin \(a=0.07\), radius \(R_{\star}=6M\), angular velocity \(\Omega_{\star}=0.005\), and luminosity parameter \(A=0.85\), starting at the position \((r_{0},\theta_{0})=(15M,10^{\circ})\) with the initial velocity \(\nu_{0}=0.01\) oriented in the azimuthal corotating direction (orange) and oriented radially towards the emitting surface (red). The black sphere corresponds to the emitting surface of the NS. The blue-gray surface denotes the critical hypersurface.

The dynamical system (2.15)-(2.20), \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\), is defined on a domain \(\mathcal{D}\), while the critical hypersurface is denoted by \(\mathcal{H}\). Let \(\Lambda=\Lambda(\mathbf{x})\) be a real-valued function, continuously differentiable at all points of \(\mathcal{D}\); then \(\Lambda\) is a Lyapunov function for \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) if it fulfills the following conditions:

\[\mathrm{(I)}\quad\Lambda(\mathbf{x})>0,\quad\forall\mathbf{x}\in\mathcal{D}\setminus\mathcal{H}; \tag{3.1}\]
\[\mathrm{(II)}\quad\Lambda(\mathbf{x_{0}})=0,\quad\forall\mathbf{x_{0}}\in\mathcal{H}; \tag{3.2}\]
\[\mathrm{(III)}\quad\dot{\Lambda}(\mathbf{x})\equiv\nabla\Lambda(\mathbf{x})\cdot\mathbf{f}(\mathbf{x})\leq 0,\quad\forall\mathbf{x}\in\mathcal{D}. \tag{3.3}\]

Once a Lyapunov function \(\Lambda\) has been found for all points belonging to the critical hypersurface \(\mathcal{H}\), a theorem due to Lyapunov assures that \(\mathcal{H}\) is stable [12]. The advantage of this approach lies in easily studying the behavior of a dynamical system without knowing its analytical solution. The Lyapunov function is not unique, and there are no fixed rules to determine it; indeed, one is often guided by physical intuition. For the general relativistic PR effect, three different Lyapunov functions have been determined. The proof that they are Lyapunov functions is based on expanding all the kinematic terms with respect to the radius, thereby estimating their magnitude (see Ref. [12] for further details).

* _The relative mechanical energy_ of the test particle with respect to the critical hypersurface, measured in the ZAMO frame, is
\[\mathbb{K}=\frac{m}{2}\left|\nu^{2}-\nu_{\mathrm{crit}}^{2}\right|+(A-M)\left(\frac{1}{r}-\frac{1}{r_{\mathrm{crit}}}\right), \tag{3.4}\]
where \(\nu_{\mathrm{crit}}(\theta)=[\cos\beta]_{r=r_{\mathrm{crit}}(\theta)}\), which includes as a particular case the velocity \(\nu_{\mathrm{eq}}=[\cos\beta]_{r=r_{\mathrm{crit}}(\pi/2)}\) in the equatorial ring.
Its derivative is
\[\dot{\mathbb{K}}=m\,\mathrm{sgn}\left(\nu^{2}-\cos^{2}\beta\right)\left[\nu\frac{d\nu}{d\tau}-\cos\beta\frac{d(\cos\beta)}{d\tau}\right]-\frac{A-M}{r^{2}}\dot{r}, \tag{3.5}\]
where \(\mathrm{sgn}(x)\) is the signum function.

* _The angular momentum_ of the test particle measured in the ZAMO frame is
\[\mathbb{L}=m(r\nu\sin\psi\cos\alpha-r_{\mathrm{crit}}\nu_{\mathrm{crit}}). \tag{3.6}\]
Its derivative is given by
\[\begin{split}\dot{\mathbb{L}}=m\left[-\dot{r}_{\mathrm{crit}}\nu_{\mathrm{crit}}-r_{\mathrm{crit}}\frac{d(\nu_{\mathrm{crit}})}{d\tau}+r\frac{d\nu}{d\tau}\cos\alpha\sin\psi+\nu(\dot{r}\cos\alpha\sin\psi\right.\\ \left.-r\sin\alpha\sin\psi\,\dot{\alpha}+r\cos\alpha\cos\psi\,\dot{\psi})\right].\end{split} \tag{3.7}\]

* _The Rayleigh dissipation function_ is (see Sec. 4 for its derivation and meaning)
\[\mathbb{F}=\tilde{\sigma}\mathcal{I}^{2}\left[\lg\left(\frac{\mathbb{E}_{\mathrm{crit}}}{E_{p}}\right)-\lg\left(\frac{\mathbb{E}}{E_{p}}\right)\right], \tag{3.8}\]
where \(E_{p}\) is the photon energy and \(\mathbb{E}\equiv E(U)\) is defined as
\[\mathbb{E}\equiv-k_{\alpha}U^{\alpha}=\gamma\frac{E_{p}}{N}(1+bN^{\varphi})[1-\nu\sin\psi\cos(\alpha-\beta)]. \tag{3.9}\]
\(\mathbb{E}_{\mathrm{crit}}\) is the energy \(\mathbb{E}\) evaluated on the critical hypersurface, given by
\[\mathbb{E}_{\mathrm{crit}}=[\mathbb{E}]_{r=r_{\mathrm{crit}},\alpha=0,\pi,\psi=\pm\pi/2,\nu=\nu_{\mathrm{crit}}}=\frac{E_{p}|(\sin\beta)_{\mathrm{crit}}|}{N_{\mathrm{crit}}}(1+bN_{\mathrm{crit}}^{\varphi}). \tag{3.10}\]
Its derivative is
\[\dot{\mathbb{F}}=\tilde{\sigma}(\dot{\mathcal{I}}^{2})\left[\lg\left(\frac{\mathbb{E}_{\mathrm{crit}}}{E_{p}}\right)-\lg\left(\frac{\mathbb{E}}{E_{p}}\right)\right]+\tilde{\sigma}\mathcal{I}^{2}\left[\frac{\dot{\mathbb{E}}_{\mathrm{crit}}}{\mathbb{E}_{\mathrm{crit}}}-\frac{\dot{\mathbb{E}}}{\mathbb{E}}\right]. \tag{3.11}\]

In Fig. 3 we show a test particle orbit in the equatorial plane reaching the critical hypersurface; in the other panels we show the three proposed functions (i.e., \(\mathbb{K}\), \(\mathbb{L}\), \(\mathbb{F}\)) together with their derivatives (i.e., \(\dot{\mathbb{K}}\), \(\dot{\mathbb{L}}\), \(\dot{\mathbb{F}}\)), to verify graphically that they satisfy the three properties required of Lyapunov functions. It is important to note that the first two Lyapunov functions (energy and angular momentum) are written using the classical definitions, not the general relativistic expressions, as was instead done for the third Lyapunov function. This is not in contradiction with the definition of a Lyapunov function; rather, such choices are very useful for carrying out the calculations more easily. Indeed, even a mathematical function with no direct physical connection to the system under study, but verifying the conditions of a Lyapunov function, is a good candidate for proving the stability of the critical hypersurfaces.
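Before attempting an analytic proof, conditions (3.1)-(3.3) can also be spot-checked numerically along sampled states. The sketch below is a generic check of our own devising, not part of the original analysis: `Lam`, `f`, and the critical-set test are user-supplied callables, and the finite-difference step and tolerances are arbitrary implementation choices.

```python
import numpy as np

def orbital_derivative(Lam, f, x, eps=1e-6):
    """Central-difference estimate of grad(Lambda) . f at state x."""
    grad = np.array([(Lam(x + eps * e) - Lam(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])
    return float(grad @ f(x))

def check_lyapunov(Lam, f, samples, on_critical, tol=1e-8):
    """Spot-check conditions (3.1)-(3.3) on a finite set of states."""
    for x in samples:
        if on_critical(x):
            if abs(Lam(x)) > tol:                 # condition (3.2)
                return False
        elif Lam(x) <= 0:                         # condition (3.1)
            return False
        if orbital_derivative(Lam, f, x) > tol:   # condition (3.3)
            return False
    return True
```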
## 4 Analytical form of the Rayleigh dissipation function

We now describe the energy formalism, the method used to derive the Rayleigh potential [15]. The motion of the test particle occurs in \(\mathcal{M}\), a simply connected domain (the region outside the compact object, including the event horizon). We denote by \(T\mathcal{M}\) the tangent bundle of \(\mathcal{M}\), whereas \(T^{*}\mathcal{M}\) stands for the cotangent bundle over \(\mathcal{M}\). Let \(\boldsymbol{\omega}:T\mathcal{M}\to T^{*}\mathcal{M}\) be a smooth differential semi-basic one-form. Defining \(\boldsymbol{X}=(t,r,\theta,\varphi)\) and \(\boldsymbol{U}=(U^{t},U^{r},U^{\theta},U^{\varphi})\), the radiation force components (2.12) are the components of the differential semi-basic one-form \(\boldsymbol{\omega}(\boldsymbol{X},\boldsymbol{U})=F_{\rm(rad)}(\boldsymbol{X},\boldsymbol{U})^{\alpha}\mathbf{d}X_{\alpha}\). The one-form \(\boldsymbol{\omega}\) is closed under the vertical exterior derivative \(\mathbf{d}^{\mathbf{V}}\) if \(\mathbf{d}^{\mathbf{V}}\boldsymbol{\omega}=0\). The local expression of this operator is

\[\mathbf{d}^{\mathbf{V}}F=\frac{\partial F}{\partial U_{\alpha}}\mathbf{d}X_{\alpha}. \tag{4.1}\]

By the Poincaré lemma (generalized to vertical differentiation), the closure condition together with the simply connected domain \(\mathcal{M}\) guarantees that \(\boldsymbol{\omega}\) is exact; there then exists a 0-form \(V(\boldsymbol{X},\boldsymbol{U})\in\mathcal{C}^{\infty}(T\mathcal{M},\mathbb{R})\), called a primitive, such that \(-\mathbf{d}^{\mathbf{V}}V=\boldsymbol{\omega}\). However, due to the non-linear dependence of the radiation force on the test particle velocity field, the semi-basic one-form turns out to be not exact [13]. The PR phenomenon nevertheless exhibits the peculiar property that \(\boldsymbol{\omega}(\boldsymbol{X},\boldsymbol{U})\) becomes exact through the introduction of the integrating factor \(\mu=\left(E_{\rm p}/\mathbb{E}\right)^{2}\) [13]. Considering the energy \(\mathbb{E}=-k_{\beta}U^{\beta}\) and substituting all occurrences of \(\mathbb{E}\) in \(F_{\rm(rad)}(\boldsymbol{X},\boldsymbol{U})^{\alpha}\), see Eq. (2.12), we obtain [14, 15]

\[\mathbb{F}_{\rm(rad)}(\boldsymbol{X},\boldsymbol{U})^{\alpha}=-k^{\alpha}\mathbb{E}(\boldsymbol{X},\boldsymbol{U})+\mathbb{E}(\boldsymbol{X},\boldsymbol{U})^{2}U^{\alpha}. \tag{4.2}\]

Using the chain rule to pass from the velocity to the energy derivative operator, we have

\[\frac{\partial\ (\ \cdot\ )}{\partial U_{\alpha}}=-k^{\alpha}\ \frac{\partial\ (\ \cdot\ )}{\partial\mathbb{E}}. \tag{4.3}\]

Figure 3: A test particle orbit and the related three Lyapunov functions. _Upper left panel:_ test particle moving around a rotating compact object with mass \(M=1\), spin \(a=0.3\), luminosity parameter \(A=0.2\), and photon impact parameter \(b=0\). The test particle starts its motion at the position \((r_{0},\varphi_{0})=(30M,0)\) with velocity \((\nu_{0},\alpha_{0})=(\sqrt{M/r_{0}},0)\). The critical hypersurface is a circle with radius \(r_{(\rm crit)}=2.07M\). The energy (see Eqs. (3.4) and (3.5), _upper right panel_), the angular momentum (see Eqs. (3.6) and (3.7), _lower left panel_), and the Rayleigh potential (see Eqs. (3.9) and (3.11), _lower right panel_), together with their \(\tau\)-derivatives, are all expressed in terms of the proper time \(\tau\). The dashed blue lines in all plots represent the proper time \(T_{\rm touch}\) at which the test particle reaches the critical hypersurface; it amounts to \(T_{\rm touch}=2915M\).

The Rayleigh potential (4.5) is a valuable tool to investigate the properties of the general relativistic PR effect and, more generally, radiation processes in high-energy astrophysics. This potential contains a logarithm of the energy, which is physically interpreted as the energy absorbed by the test particle. It therefore represents a new class of functions, not previously explored in the literature, used to describe radiation absorption processes.
Another important implication of the Rayleigh potential lies in the direct connection between theory and observations. In Fig. 4 we show in panel a) the test particle trajectory (what we can observe) and in panels b)-f) the Rayleigh potential in terms of the coordinates \(r,\varphi,t,\dot{r},\dot{\varphi}\), respectively (what comes from the theory). Therefore, by observing the test particle motion, it is possible to reconstruct the Rayleigh function theoretically; vice versa, new Rayleigh functions can be proposed in order to study the dynamics and see what we should observe (see Ref. [15] for details).

Figure 4: Test particle trajectory with the Rayleigh potential \(V\) for mass \(M=1\), spin \(a=0.1\), luminosity parameter \(A=0.1\), and photon impact parameter \(b=1\). The test particle moves in the spatial equatorial plane with initial position \((r_{0},\varphi_{0})=(10M,0)\) and velocity \((\nu_{0},\alpha_{0})=(\sqrt{1/10M},0)\). a) Test particle trajectory spiralling towards the BH and stopping on the critical radius (red dashed line) \(r_{(\rm crit)}=2.02M\). The continuous green line is the event horizon radius \(r_{\rm(EH)}^{+}=1.99M\). Rayleigh potential versus b) radial coordinate, c) azimuthal coordinate, d) time coordinate, e) radial velocity, and f) azimuthal velocity. The blue dashed line in panel e) marks the minimum value attained by the radial velocity, corresponding to \(\dot{r}=-0.13\).

## 5 Conclusions

In this work, we have presented the fully general relativistic treatment of the 3D PR effect in the Kerr geometry, which extends previous works framed in the 2D equatorial plane of relativistic compact objects. The radiation field comes from a rigidly rotating spherical source around the compact object. The emitted photons are parametrized by two impact parameters \((b,q)\), where \(b\) can be varied and \(q\) depends on the value assumed by \(b\) and on the polar angle \(\theta\), the position occupied by the test particle in the 3D space. The resulting equations of motion form a system of six coupled, highly nonlinear ordinary differential equations of first order. The motion of test particles is strongly affected by the PR effect together with general relativistic effects. This dynamical system admits the existence of critical hypersurfaces, regions where the gravitational attraction is balanced by the radiation forces.

We have presented the method to prove the stability of the critical hypersurfaces by employing Lyapunov functions. This strategy simplifies the calculations and captures important physical aspects of the PR effect. Three different Lyapunov functions have been proposed, each with a different and precise meaning. The first two are deduced from the defining property of the PR effect, namely that it removes energy and angular momentum from the test particle. The third example is less intuitive, because it is based on the Rayleigh dissipation function, determined through the use of an integrating factor and the introduction of the energy formalism. This method proved to be very useful for two reasons: (1) a substantial reduction of the calculations from the four variables (i.e., the velocities \(\boldsymbol{U}\)) to only one (i.e., the energy \(\mathbb{E}\)); (2) the obtained expression of the potential \(V\) as a function of \(\mathbb{E}\) suffices for the description of the dynamics, which is very important whenever the evaluation of \(f(\boldsymbol{X},\boldsymbol{U})\) turns out to be too laborious.
In this way we have obtained, for the first time, an analytical expression of the Rayleigh potential in GR, and we have discovered a new class of functions, represented by logarithms, which physically describe absorption processes in high-energy astrophysics. As future projects, we plan to improve the current theoretical treatment of the radiation field in some of its naive aspects: the momentum-transfer cross section will no longer be constant, but will depend on the angle and frequency of the incoming radiation field, and the radiation field will no longer be emitted by a point-like source, but by a finite extended source. We would also like to apply this theoretical model to astrophysical situations in accretion physics, such as accretion disk models, type-I X-ray bursts, and photospheric radius expansion. The new method to prove the stability of the critical hypersurfaces through Lyapunov functions can easily be applied, with the due modifications, to any extension of the general relativistic PR effect model. The energy formalism, in turn, opens up new frontiers in the study of dissipative systems in metric theories of gravity and, more broadly, in other mathematical and physical research fields, thanks to its general structure and versatile applicability. It makes it possible to acquire more information on the mathematical structure and the physical meaning of the problem under study because, as discussed in Fig. 4, the profound connection between observations and theory becomes strikingly evident.
2305.09790
Molecule-Morphology Contrastive Pretraining for Transferable Molecular Representation
Image-based profiling techniques have become increasingly popular over the past decade for their applications in target identification, mechanism-of-action inference, and assay development. These techniques have generated large datasets of cellular morphologies, which are typically used to investigate the effects of small molecule perturbagens. In this work, we extend the impact of such datasets to improving quantitative structure-activity relationship (QSAR) models by introducing Molecule-Morphology Contrastive Pretraining (MoCoP), a framework for learning multi-modal representations of molecular graphs and cellular morphologies. We scale MoCoP to approximately 100K molecules and 600K morphological profiles using data from the JUMP-CP Consortium and show that MoCoP consistently improves the performance of graph neural networks (GNNs) on molecular property prediction tasks in ChEMBL20 across all dataset sizes. The pretrained GNNs are also evaluated on internal GSK pharmacokinetic data and show an average improvement of 2.6% and 6.3% in AUPRC for the full and low data regimes, respectively. Our findings suggest that integrating cellular morphologies with molecular graphs using MoCoP can significantly improve the performance of QSAR models, ultimately expanding the deep learning toolbox available for QSAR applications.
Cuong Q. Nguyen, Dante Pertusi, Kim M. Branson
2023-04-27T02:01:41Z
http://arxiv.org/abs/2305.09790v2
# Molecule-Morphology Contrastive Pretraining for Transferable Molecular Representation

###### Abstract

Image-based profiling techniques have become increasingly popular over the past decade for their applications in target identification, mechanism-of-action inference, and assay development. These techniques have generated large datasets of cellular morphologies, which are typically used to investigate the effects of small molecule perturbagens. In this work, we extend the impact of such datasets to improving quantitative structure-activity relationship (QSAR) models by introducing Molecule-Morphology Contrastive Pretraining (MoCoP), a framework for learning multi-modal representations of molecular graphs and cellular morphologies. We scale MoCoP to approximately 100K molecules and 600K morphological profiles using data from the JUMP-CP Consortium and show that MoCoP consistently improves the performance of graph neural networks (GNNs) on molecular property prediction tasks in ChEMBL20 across all dataset sizes. The pretrained GNNs are also evaluated on internal GSK pharmacokinetic data and show an average improvement of 2.6% and 6.3% in AUPRC for the full and low data regimes, respectively. Our findings suggest that integrating cellular morphologies with molecular graphs using MoCoP can significantly improve the performance of QSAR models, ultimately expanding the deep learning toolbox available for QSAR applications.

## 1 Introduction

Quantitative structure-activity relationship (QSAR) modeling is a critical step for virtual screening in drug discovery, helping researchers prioritize modifications to chemical structures that shift modeled properties in a favorable direction. Since the Merck Molecular Activity Challenge, applying deep learning techniques to QSAR modeling has gained significant attention due to their ability to extract complex nonlinear relationships between chemical structures and their associated activities. Typically, QSAR models are trained to predict the activity of a molecule based on its in silico representation, which can have varying levels of complexity, ranging from computed chemical properties and 2- and 3-D descriptors (Rogers and Hahn, 2010; Sheridan et al., 1996; Carhart et al., 1985; Nilakantan et al., 1987; Schaller et al., 2020) to molecular graphs (Kearnes et al., 2016; Yang et al., 2019). However, the performance of QSAR models is limited by the amount of available data, especially when assays are low-throughput, expensive to run, or only commissioned at the later stages of the drug discovery process. To overcome this limitation, methods such as active learning (Reker and Schneider, 2015; Smith et al., 2018), large-scale multitask learning (Xu et al., 2017; Ramsundar et al., 2015; Kearnes et al., 2017), pretraining (Hu et al., 2020), and few-shot learning approaches (Altae-Tran et al., 2017; Nguyen et al., 2020) have been shown to improve model performance in the low data regime.

Improving the in silico representation of molecules can also enhance the performance of QSAR models. Recent trends in small-molecule drug discovery have shifted toward high-content screening approaches, with cellular imaging emerging as a relatively high-throughput (Kurita and Linington, 2015; Kraus et al., 2017; Chandrasekaran et al., 2021) method for profiling small molecules in relevant biological systems.
The Cell Painting assay (Bray et al., 2016) - an unbiased and scalable approach for capturing images of cells - has made large and reusable repositories of paired molecule and cell images possible (Bray et al., 2017; Fay et al., 2023; Chandrasekaran et al., 2023). These images contain cellular morphologies induced by small molecule perturbagens and can be used as an alternative in silico representation of these molecules (Kraus et al., 2017; Godinez et al., 2018; Hofmarcher et al., 2019; Stirling et al., 2021). Convolutional neural network-based approaches have been shown to improve the predictivity of QSAR models across a wide range of assays (Hofmarcher et al., 2019), leading to increased hit rates and optimization of compounds to elicit a desired phenotype (Cuccarese et al., 2020). However, the use of such models is limited by two factors: (1) cellular images are commonly plagued by batch effects, requiring extensive engineering efforts to learn domain-agnostic representations (Ando et al., 2017; Sypetkowski et al., 2023), and (2) only molecules that have paired cellular images can be used as input during inference, restricting the application of these models in virtual screening scenarios where such images are not available for the majority of molecules.

In parallel, contrastive learning has been shown to be effective for learning representations of multi-modal data. ConVIRT (Zhang et al., 2020) uses a modified InfoNCE objective (Oord et al., 2019) to learn a joint embedding space of medical images and human annotations. CLIP (Radford et al., 2021) scales up this approach to 400M (image, text) pairs, enabling zero-shot transfer to downstream image classification tasks. Recently, CLOOME (Sanchez-Fernandez et al., 2022) uses the InfoLOOB objective (Furst et al., 2022) to jointly learn a molecule encoder and a morphology encoder for a molecular retrieval task using the dataset introduced by Bray et al. (2017). Using the same dataset, Zheng et al. (2022) extend this approach by including a masked-graph modeling objective for pretraining graph neural networks (GNNs), showing improved performance on downstream tasks in the Open Graph Benchmark (Hu et al., 2021).

In this work, we further demonstrate the scaling of GNN-based **M**olecule-morphology **C**ontrastive **P**retraining - referred to as **MoCoP** - from 30K molecules and 120K images in Bray et al. (2017) to approximately 100K molecules and 600K images in JUMP-CP (Chandrasekaran et al., 2023). Using the modified InfoNCE objective (Zhang et al., 2020; Radford et al., 2021) and a gated graph neural network (GGNN) molecule encoder, we first show the effects of pretraining dataset sizes on morphology retrieval tasks. Transfer learning performance of the GGNN molecule encoder pretrained with MoCoP is benchmarked on QSAR modeling tasks with varying training set sizes using the ChEMBL20 dataset (Gaulton et al., 2012). Finally, we demonstrate positive transfer of pretrained GGNNs on internal GSK pharmacokinetic data consisting of four different in vitro clearance assays.

## 2 Background

**Learning multi-modal molecule and morphology representation with contrastive learning.** Contrastive learning is a member of the metric learning family, which aims to learn an embedding space that pulls similar data together and pushes dissimilar data apart. Contrastive learning has experienced a resurgence of interest due to major advances in self-supervised learning.
More recently, it has been increasingly employed to learn multi-modal data representations (Zhang et al., 2020; Desai and Johnson, 2021; Radford et al., 2021). For MoCoP, we employ a symmetric variant of the InfoNCE loss for pretraining, following prior works (Zhang et al., 2020; Radford et al., 2021). Intuitively, we aim to simultaneously learn a molecular encoder \(f^{mol}\) and a morphology encoder \(f^{morph}\) by minimizing the modified InfoNCE loss. Figure 1: Molecule-morphology contrastive learning workflow. We first jointly learn a molecule encoder and morphology encoder using contrastive learning on paired (molecule, morphology) data available in the JUMP-CP dataset (left). Transfer learning is then performed by fine-tuning the pretrained molecule encoder on specific downstream tasks (right). Specifically, the pretraining dataset consists of \(N\) molecule-morphology pairs, defined as \(\{(\mathbf{x}_{i}^{mol},\mathbf{x}_{i}^{morph})\,|\,i\in\{1,...,N\}\}\). The \(i\)-th molecule-morphology pair \(\mathbf{x}_{i}^{mol}\) and \(\mathbf{x}_{i}^{morph}\) are first encoded by their corresponding encoders \(f^{mol}\) and \(f^{morph}\) to produce their respective representations \[\mathbf{h}_{i}^{mol}=f^{mol}(\mathbf{x}_{i}^{mol})\] \[\mathbf{h}_{i}^{morph}=f^{morph}(\mathbf{x}_{i}^{morph})\] where \(\mathbf{h}_{i}^{mol}\in\mathbb{R}^{d^{mol}}\) and \(\mathbf{h}_{i}^{morph}\in\mathbb{R}^{d^{morph}}\) are the encoded representations of \(\mathbf{x}_{i}^{mol}\) and \(\mathbf{x}_{i}^{morph}\). Each encoder representation is transformed using projection functions \(g\) following \[\mathbf{u}_{i}^{mol}=g^{mol}(\mathbf{h}_{i}^{mol})\] \[\mathbf{u}_{i}^{morph}=g^{morph}(\mathbf{h}_{i}^{morph})\] where \(\mathbf{u}_{i}^{mol}\in\mathbb{R}^{d^{proj}}\) and \(\mathbf{u}_{i}^{morph}\in\mathbb{R}^{d^{proj}}\) are vectors in the multi-modal embedding space. During training, \(f^{mol}\), \(f^{morph}\), \(g^{mol}\), and \(g^{morph}\) are jointly optimized to minimize the loss function \[\mathcal{L}=\alpha\cdot\mathcal{L}_{mol\to morph}+(1-\alpha)\cdot \mathcal{L}_{morph\to mol}\] where \(\alpha\) is a weighting term and \(\mathcal{L}_{mol\to morph}\) and \(\mathcal{L}_{morph\to mol}\) are molecule- and morphology-specific InfoNCE losses, defined as \[\mathcal{L}_{mol\to morph} =-\frac{1}{N}\sum_{i=1}^{N}\text{log}\frac{e^{\langle\mathbf{u}_{ i}^{mol},\mathbf{u}_{i}^{morph}\rangle/\tau}}{\sum_{k=1}^{N}e^{\langle \mathbf{u}_{i}^{mol},\mathbf{u}_{k}^{morph}\rangle/\tau}}\] \[\mathcal{L}_{morph\to mol} =-\frac{1}{N}\sum_{i=1}^{N}\text{log}\frac{e^{\langle\mathbf{u}_{ i}^{mol},\mathbf{u}_{i}^{morph}\rangle/\tau}}{\sum_{k=1}^{N}e^{\langle \mathbf{u}_{k}^{mol},\mathbf{u}_{i}^{morph}\rangle/\tau}}\] with \(\langle\mathbf{u},\mathbf{v}\rangle\) denoting the cosine similarity between vectors \(\mathbf{u}\) and \(\mathbf{v}\), and \(\tau\) denoting a temperature scaling parameter. Minimizing \(\mathcal{L}\) produces encoders \(f^{mol}\) and \(f^{morph}\) that maximally preserve the mutual information between representations \(\mathbf{h}_{i}^{mol}\) and \(\mathbf{h}_{i}^{morph}\). The resulting \(f^{mol}\) is then fine-tuned on downstream tasks for transfer learning.
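To make the symmetric objective above concrete, here is a minimal NumPy sketch of the loss; the function name, batch construction, and default temperature are illustrative stand-ins rather than the actual MoCoP implementation:

```python
import numpy as np
from scipy.special import logsumexp

def symmetric_info_nce(u_mol, u_morph, tau=0.1, alpha=0.5):
    """Symmetric InfoNCE loss over a batch of N paired embeddings.

    u_mol, u_morph: (N, d) arrays of projected embeddings; rows with the
    same index are positive pairs, all other rows act as negatives.
    """
    # cosine similarity matrix: sim[i, k] = <u_i^mol, u_k^morph> / tau
    u_mol = u_mol / np.linalg.norm(u_mol, axis=1, keepdims=True)
    u_morph = u_morph / np.linalg.norm(u_morph, axis=1, keepdims=True)
    sim = (u_mol @ u_morph.T) / tau

    # row-wise log-softmax: molecule -> morphology retrieval
    log_p_mol = sim - logsumexp(sim, axis=1, keepdims=True)
    # column-wise log-softmax: morphology -> molecule retrieval
    log_p_morph = sim - logsumexp(sim, axis=0, keepdims=True)

    loss_mol = -np.mean(np.diag(log_p_mol))
    loss_morph = -np.mean(np.diag(log_p_morph))
    return alpha * loss_mol + (1 - alpha) * loss_morph
```

In practice the encoders and projections would be neural networks trained by backpropagation; this sketch only shows how the two retrieval directions are combined with the weighting term \(\alpha\).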
## 3 Methods **JUMP-CP dataset.** We use a subset of the dataset _cpg0016-jump_, available from the Cell Painting Gallery on the Registry of Open Data on AWS ([https://registry.opendata.aws/cellpainting-gallery/](https://registry.opendata.aws/cellpainting-gallery/)) as part of the JUMP-CP Consortium (Chandrasekaran et al., 2023). This subset (as of February 2023) contains approximately 700K morphological profiles of 120K compounds in U2OS cells collected across 12 data-generating centers. Throughout our experiments, we use the precomputed well-level profiles provided with JUMP-CP. Each feature in a well-level profile is scaled independently using median and interquartile range statistics of the plate that the well belongs to. More concretely, the \(i\)-th feature of profile \(x\in\mathbb{R}^{d}\) belonging to plate \(p\) - denoted as \(x_{i,p}\) - is preprocessed as follows \[x_{i,p}^{processed}=\frac{x_{i,p}^{raw}-med(X_{i,p})}{IQR(X_{i,p})}\] where \(x_{i,p}^{raw}\) denotes the raw feature value, \(X_{i,p}\) denotes the vector of all \(i\)-th features in plate \(p\), and \(med\) and \(IQR\) denote the median and interquartile range. We follow Way et al. (2021) and remove features with low variance, features with extreme outlier values, and any blacklisted CellProfiler features that are known to be noisy or unreliable (Way, 2019). This results in the final set of 3,475 features.
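A compact sketch of this per-plate robust scaling; the DataFrame layout and the `plate` column name are illustrative assumptions, not the JUMP-CP schema:

```python
import pandas as pd

def robust_scale_by_plate(profiles: pd.DataFrame, plate_col: str = "plate") -> pd.DataFrame:
    """Scale each feature with the median/IQR statistics of its plate.

    profiles: one row per well, feature columns plus a plate identifier.
    """
    features = profiles.columns.drop(plate_col)
    grouped = profiles.groupby(plate_col)[features]
    med = grouped.transform("median")
    iqr = grouped.transform(lambda x: x.quantile(0.75) - x.quantile(0.25))
    out = profiles.copy()
    out[features] = (profiles[features] - med) / iqr
    return out
```

Feature filtering (low variance, extreme outliers, blacklisted features) would then be applied on top of the scaled profiles.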
**ChEMBL20 dataset.** We use the ChEMBL20 dataset processed by Mayr et al. (2018) to evaluate transfer learning. The dataset has been used extensively to evaluate and benchmark machine learning approaches for QSAR modeling (Wu et al., 2018; Yang et al., 2019; Nguyen et al., 2020). In short, the dataset consists of approximately 450K compounds, each with sparse annotations of 1,310 binary downstream tasks spanning ADME, toxicity, physicochemical, binding, and functional endpoints. **Internal GSK pharmacokinetic dataset.** Internal rodent in vitro metabolism data were collated from four different intrinsic clearance assay protocols: rat liver microsomes (\(CL_{int}^{RLM}\)), mouse liver microsomes (\(CL_{int}^{MLM}\)), rat hepatocytes (\(CL_{int}^{RH}\)), and mouse hepatocytes (\(CL_{int}^{MH}\)). We convert all readouts to intrinsic clearance based on percent hepatic blood flow (PHBF) and aggregate replicate experiments for the same compound and protocol by taking the median reported PHBF. This yielded a dataset of 105,172 unique compounds with available data across all four endpoints. Finally, the data is binarized based on the median PHBF value per endpoint. **Contrastive pretraining procedure.** Following notations from Section 2, \(f^{mol}\) and \(f^{morph}\) are a GGNN and a feedforward neural network (FFNN), respectively, while both \(g^{mol}\) and \(g^{morph}\) are single feedforward layers. Following Zhang et al. (2020), \(g^{mol}\) and \(g^{morph}\) are non-linear transformations utilizing ReLU as the activation function. The model is trained for 1,000 epochs - approximately 400,000 steps - with a batch size of 256 on approximately 100K of the 120K compounds and 600K of the 700K morphological profiles. We follow the protocol proposed by CLIP (Radford et al., 2021) and OpenCLIP (Cherti et al., 2022) and use the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of \(10^{-3}\) and a cosine annealing learning rate scheduler with 50 warm-up epochs. MoCoP hyperparameters are further detailed in Appendix B.1. **Transfer learning.** We explore two transfer learning strategies for MoCoP: linear probing and fine-tuning the whole model, which we refer to as MoCoP-LP and MoCoP-FT, respectively. We use the Adam optimizer (Kingma and Ba, 2017) with a learning rate of \(5\times 10^{-5}\) and a batch size of 128 for both strategies. **Baselines.** We include two baselines: training from scratch and fine-tuning from GGNNs pretrained with multitask supervised learning, which we refer to as FS and Multitask-FT, respectively. Hyperparameter optimization is performed to ensure the FS baseline is competitive. Specifically, we use ChEMBL 5% and the down-sampled GSK pharmacokinetic datasets to carry out a random search consisting of 50 parallel trials spanning the search space described in Appendix A to maximize validation performance. The down-sampling procedure is detailed in Section 4. For Multitask-FT, we first pretrain GGNNs to directly predict morphological profiles in a multi-task regression setting. Pretraining hyperparameters are optimized using a random search consisting of 20 trials, while fine-tuning hyperparameters are hand-tuned for performance on the validation set of ChEMBL 5%. **Code availability.** The source code for MoCoP is available at [https://github.com/GSK-AI/mocop](https://github.com/GSK-AI/mocop). ## 4 Experimental Results and Discussion **Scaling MoCoP to JUMP-CP.** We first evaluate whether MoCoP is feasible with the JUMP-CP dataset, following the procedure detailed in Section 3. Similar approaches have previously been carried out on smaller datasets collected at a single site (Sanchez-Fernandez et al., 2022; Zheng et al., 2022), and the aim here is to test scalability on a larger, multi-site dataset. To evaluate pretraining performance, the accuracy of molecule and morphology retrieval is measured. Specifically, the average top-\(k\) accuracy - where \(k\) can be 1, 5, or 10 - of retrieving the molecule given the morphology, and vice versa, is reported. The positive-to-negative sampling ratio is set to 1:100 and 1:1000. As shown in Figure 2, pretraining performance improves as more compounds are included in the training process. The trend continues up to the maximum of 101K compounds, indicating that pretraining could further benefit from obtaining more data. This observation highlights the importance of large public repositories of cellular imaging data. Figure 2: Molecule and morphology retrieval performance at positive-to-negative sampling ratios of 1:100 (top) and 1:1000 (bottom) using MoCoP trained with an increasing number of compounds in JUMP-CP. Average top-\(k\) accuracy of retrieving molecule given morphology and vice versa is reported for \(k\in\{1,5,10\}\) for each sampling ratio. Additionally, we present training and validation curves in Appendix B.2, which demonstrate a stable and convergent training process. Moreover, we have not extensively explored preprocessing pipelines for morphological profiles, and we anticipate that employing more advanced approaches to mitigate batch effects could improve performance.
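A sketch of how such a retrieval metric can be computed; the 1:`n_negatives` candidate construction here is a simplified stand-in, since the exact evaluation protocol is not fully specified in the text:

```python
import numpy as np

def topk_retrieval_accuracy(u_query, u_gallery, k=5, n_negatives=100, seed=0):
    """Average top-k accuracy of retrieving the paired item.

    For each query i, the candidate set is its true partner plus
    n_negatives partners of other rows (a 1:n_negatives ratio).
    Embeddings are assumed L2-normalized, so dot product = cosine sim.
    """
    rng = np.random.default_rng(seed)
    n, hits = len(u_query), 0
    for i in range(n):
        negatives = rng.choice(np.delete(np.arange(n), i), size=n_negatives, replace=False)
        candidates = np.concatenate(([i], negatives))
        sims = u_gallery[candidates] @ u_query[i]
        # position 0 holds the true partner; is it within the top k?
        if 0 in np.argsort(-sims)[:k]:
            hits += 1
    return hits / n
```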
**Transfer learning performances on ChEMBL20.** We aim to evaluate the quality of the pretrained GGNN molecule encoder by using ChEMBL20 as the downstream task. Random splits based on compounds are carried out at an 80/10/10 ratio for training, validation, and test sets. For each split, we further subsample 1%, 5%, 10%, 25%, and 50% of the training set to simulate an increasingly sparse data regime. Table 1 shows transfer learning performance on ChEMBL20. We report performance averaged across all tasks, following existing works utilizing this dataset (Mayr et al., 2018; Wu et al., 2018; Yang et al., 2019). Our results indicate that fine-tuning GGNNs pretrained with MoCoP (MoCoP-FT) consistently outperformed the training-from-scratch (FS) baseline across all data regimes. This improvement is also observed by simply applying a linear probe on the frozen molecule encoder (MoCoP-LP). We also observe that MoCoP-LP outperforms MoCoP-FT in lower data regimes. Notably, we encounter challenges with Multitask-FT, in which GGNNs are first trained to directly predict morphological features in a multi-task regression setting. This approach fails to produce any improvement over the FS baseline. Our finding is consistent with previous research that highlights the superior learning efficiency of contrastive objectives over predictive objectives (Chen et al., 2020; Tian et al., 2020; Radford et al., 2021). **Transfer learning performances on internal GSK pharmacokinetic data.** The quality of pretrained GGNNs is further evaluated using a subset of GSK internal pharmacokinetic data as downstream tasks. This dataset consists of 4 tasks, as detailed in Section 3. Unlike the previous experiment with ChEMBL20, here we employ scaffold splitting, which has been shown to provide better estimates of model performance on QSAR tasks (Kearnes et al., 2017; Wu et al., 2018). The compounds are first clustered using the Butina algorithm implemented in RDKit with a Euclidean distance function and a distance cutoff of 0.6. The clusters are ordered by size, and for every six clusters, four are assigned to the training set, one to the validation set, and one to the test set (see the sketch below). The procedure is repeated with random cluster ordering to create two additional splits. For each split, a down-sampled version is created by randomly selecting a single compound from each cluster to uniformly sample the chemical space in our dataset.
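The cluster-to-split assignment can be sketched as follows; the clustering itself is assumed to have been done already, and the function and variable names are illustrative:

```python
import numpy as np

def assign_splits(clusters, seed=None):
    """Assign Butina clusters to train/valid/test in a 4:1:1 pattern.

    clusters: list of lists of compound indices, one list per cluster.
    Clusters are ordered by size (or shuffled for repeat splits); of
    every six clusters, four go to train, one to valid, one to test.
    """
    order = sorted(range(len(clusters)), key=lambda c: -len(clusters[c]))
    if seed is not None:
        np.random.default_rng(seed).shuffle(order)
    splits = {"train": [], "valid": [], "test": []}
    pattern = ["train", "train", "train", "train", "valid", "test"]
    for rank, c in enumerate(order):
        splits[pattern[rank % 6]].extend(clusters[c])
    return splits
```

The down-sampled variant would instead keep a single randomly chosen compound per cluster before the same assignment.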
\begin{table} \begin{tabular}{l l c c c c} \hline \hline **Metric** & **Dataset** & **FS** & **Multitask-FT** & **MoCoP-LP** & **MoCoP-FT** \\ \hline \hline \multirow{8}{*}{AUROC} & ChEMBL20 - 1\% & \(0.511\pm 0.008\) & \(0.508\pm 0.007\) & \(\mathbf{0.545\pm 0.017}\) & \(0.542\pm 0.010\) \\ & ChEMBL20 - 5\% & \(0.571\pm 0.010\) & \(0.574\pm 0.004\) & \(\mathbf{0.624\pm 0.018}\) & \(0.621\pm 0.022\) \\ \cline{1-1} & ChEMBL20 - 10\% & \(0.597\pm 0.014\) & \(0.588\pm 0.009\) & \(0.638\pm 0.017\) & \(\mathbf{0.646\pm 0.021}\) \\ \cline{1-1} & ChEMBL20 - 25\% & \(0.648\pm 0.017\) & \(0.643\pm 0.020\) & \(0.678\pm 0.015\) & \(\mathbf{0.689\pm 0.018}\) \\ \cline{1-1} & ChEMBL20 - 50\% & \(0.669\pm 0.016\) & — & — & \(\mathbf{0.693\pm 0.030}\) \\ \cline{1-1} & ChEMBL20 - 100\% & \(0.706\pm 0.022\) & — & — & \(\mathbf{0.721\pm 0.020}\) \\ \hline \multirow{8}{*}{AUPRC} & ChEMBL20 - 1\% & \(0.487\pm 0.013\) & \(0.482\pm 0.015\) & \(\mathbf{0.511\pm 0.024}\) & \(0.510\pm 0.016\) \\ \cline{1-1} & ChEMBL20 - 5\% & \(0.528\pm 0.010\) & \(0.525\pm 0.013\) & \(\mathbf{0.576\pm 0.026}\) & \(0.569\pm 0.023\) \\ \cline{1-1} & ChEMBL20 - 10\% & \(0.550\pm 0.022\) & \(0.539\pm 0.023\) & \(0.588\pm 0.032\) & \(\mathbf{0.597\pm 0.036}\) \\ \cline{1-1} & ChEMBL20 - 25\% & \(0.600\pm 0.028\) & \(0.595\pm 0.026\) & \(0.623\pm 0.027\) & \(\mathbf{0.640\pm 0.031}\) \\ \cline{1-1} & ChEMBL20 - 50\% & \(0.623\pm 0.026\) & — & — & \(\mathbf{0.654\pm 0.037}\) \\ \cline{1-1} & ChEMBL20 - 100\% & \(0.662\pm 0.033\) & — & — & \(\mathbf{0.681\pm 0.033}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on held-out test sets of different subsets of ChEMBL20 averaged across all tasks. FS: GGNNs trained from scratch; Multitask-FT: GGNNs pretrained using multitask supervised learning and then fine-tuned; MoCoP-LP: Linear probe on GGNNs pretrained with MoCoP; MoCoP-FT: Fine-tuning GGNNs pretrained with MoCoP. Mean and standard deviation are obtained from 9 repeats from 3 splits and 3 seeds (see Section 3 for details). The best and second best values are in bold and regular text, respectively. \begin{table} \begin{tabular}{l l c c} \hline \hline **Metric** & **Dataset** & **FS** & **MoCoP-FT** \\ \hline \hline \multirow{8}{*}{AUROC} & \(CL_{int}^{RH}\) & \(0.762\pm 0.008\) & \(\mathbf{0.788\pm 0.014}\) \\ & \(CL_{int}^{MH}\) & \(0.763\pm 0.031\) & \(\mathbf{0.791\pm 0.026}\) \\ \cline{1-1} & \(CL_{int}^{RLM}\) & \(0.845\pm 0.011\) & \(\mathbf{0.864\pm 0.013}\) \\ \cline{1-1} & \(CL_{int}^{MLM}\) & \(0.839\pm 0.018\) & \(\mathbf{0.852\pm 0.024}\) \\ \cline{1-1} & Average & \(0.802\pm 0.013\) & \(\mathbf{0.824\pm 0.014}\) \\ \hline \multirow{8}{*}{AUPRC} & \(CL_{int}^{RH}\) & \(0.760\pm 0.023\) & \(\mathbf{0.790\pm 0.030}\) \\ \cline{1-1} & \(CL_{int}^{MH}\) & \(0.775\pm 0.030\) & \(\mathbf{0.795\pm 0.031}\) \\ \cline{1-1} & \(CL_{int}^{RLM}\) & \(0.851\pm 0.006\) & \(\mathbf{0.870\pm 0.004}\) \\ \cline{1-1} & \(CL_{int}^{MLM}\) & \(0.831\pm 0.009\) & \(\mathbf{0.845\pm 0.014}\) \\ \cline{1-1} & Average & \(0.804\pm 0.011\) & \(\mathbf{0.825\pm 0.014}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on held-out test sets of GSK internal pharmacokinetic data. Mean and standard deviation are obtained from 9 repeats from 3 splits and 3 seeds (see Section 3 for details). The best values are in bold text. \begin{table} \begin{tabular}{l l c c} \hline \hline **Metric** & **Dataset** & **FS** & **MoCoP-FT** \\ \hline \hline \multirow{8}{*}{AUROC} & \(CL_{int}^{RH}\) & \(0.716\pm 0.046\) & \(\mathbf{0.763\pm 0.057}\) \\ & \(CL_{int}^{MH}\) & \(0.716\pm 0.056\) & \(\mathbf{0.805\pm 0.049}\) \\ \cline{1-1} & \(CL_{int}^{RLM}\) & \(0.800\pm 0.011\) & \(\mathbf{0.824\pm 0.018}\) \\ \cline{1-1} & \(CL_{int}^{MLM}\) & \(0.779\pm 0.015\) & \(\mathbf{0.805\pm 0.023}\) \\ \cline{1-1} & Average & \(0.752\pm 0.028\) & \(\mathbf{0.799\pm 0.033}\) \\ \hline \multirow{8}{*}{AUPRC} & \(CL_{int}^{RH}\) & \(0.715\pm 0.053\) & \(\mathbf{0.768\pm 0.049}\) \\ \cline{1-1} & \(CL_{int}^{MH}\) & \(0.710\pm 0.044\) & \(\mathbf{0.799\pm 0.046}\) \\ \cline{1-1} & \(CL_{int}^{RLM}\) & \(0.820\pm 0.011\) & \(\mathbf{0.842\pm 0.018}\) \\ \cline{1-1} & \(CL_{int}^{MLM}\) & \(0.818\pm 0.019\) & \(\mathbf{0.846\pm 0.027}\) \\ \cline{1-1} & Average & \(0.766\pm 0.025\) & \(\mathbf{0.814\pm 0.031}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Performance on held-out test sets of GSK internal pharmacokinetic data with down-sampled training data. Mean and standard deviation are obtained from 9 repeats from 3 splits and 3 seeds (see Section 3 for details). The best values are in bold text. Using results from the previous experiment, we benchmark the most performant approach, MoCoP-FT, in which each model is repeated 9 times with 3 splits and 3 seeds. We again observe that MoCoP-FT consistently outperforms the FS baseline across both the full and down-sampled datasets, shown in Tables 2 and 3, respectively. On the full dataset, pretrained GGNNs show an average improvement of 2.6% in AUPRC across the 4 individual tasks. This improvement increases to 6.3% in AUPRC when less data is available for training.
We expect that performance can be further improved by using related endpoints as descriptors, as demonstrated by Broccatelli et al. (2022). This result offers a glimpse of the potential of using datasets not directly related to the learning task at hand to improve QSAR models. While the results in this study are limited to a single publicly available high-content imaging dataset, other high-dimensional readouts such as transcriptomics and proteomics could be used to augment QSAR modeling in a similar manner. Further investigation of routine re-use of high-dimensional data in standard QSAR workflows is warranted in future work. ## 5 Conclusion In this study, we explore MoCoP as a means to improve the performance of QSAR models. We scale MoCoP to approximately 100K molecules and 600K morphological profiles, and evaluate the pretrained GGNN molecule encoder on both public and internal downstream tasks. Our results demonstrate that MoCoP consistently improves the performance of GGNNs on QSAR tasks, especially in low-data regimes, when compared to training-from-scratch and multitask supervised pretraining baselines. We observe this trend on both the ChEMBL20 dataset and GSK internal pharmacokinetic data, indicating that the approach is applicable across a range of datasets and tasks. Our work also suggests that data from unbiased high-dimensional assays, beyond cellular imaging, can improve QSAR models via contrastive pretraining. Future work will further explore this approach with other data sources such as transcriptomics and proteomics. Overall, we believe our work can be combined with existing methods to improve model performance and expand the deep learning toolbox available for QSAR applications.
2305.18845
How Generative Models Improve LOS Estimation in 6G Non-Terrestrial Networks
With the advent of 5G and the anticipated arrival of 6G, there has been a growing research interest in combining mobile networks with Non-Terrestrial Network platforms such as low earth orbit satellites and Geosynchronous Equatorial Orbit satellites to provide broader coverage for a wide range of applications. However, integrating these platforms is challenging because Line-Of-Sight (LOS) estimation is required for both inter-satellite and satellite-to-terrestrial segment links. Machine Learning (ML) techniques have shown promise in channel modeling and LOS estimation, but they require large datasets for model training, which can be difficult to obtain. In addition, network operators may be reluctant to disclose their network data due to privacy concerns. Therefore, alternative data collection techniques are needed. In this paper, a framework is proposed that uses generative models to generate synthetic data for LOS estimation in non-terrestrial 6G networks. Specifically, the authors show that generative models can be trained with a small available dataset to generate large datasets that can be used to train ML models for LOS estimation. Furthermore, since the generated synthetic data does not contain identifying information of the original dataset, it can be made publicly available without violating privacy.
Saira Bano, Achilles Machumilane, Pietro Cassarà, Alberto Gotta
2023-05-30T08:36:43Z
http://arxiv.org/abs/2305.18845v1
# How Generative Models Improve LOS Estimation in 6G Non-Terrestrial Networks ###### Abstract With the advent of 5G and the anticipated arrival of 6G, there has been a growing research interest in combining mobile networks with Non-Terrestrial Network platforms such as low earth orbit satellites and Geosynchronous Equatorial Orbit satellites to provide broader coverage for a wide range of applications. However, integrating these platforms is challenging because Line-Of-Sight (LOS) estimation is required for both inter-satellite and satellite-to-terrestrial segment links. Machine Learning (ML) techniques have shown promise in channel modeling and LOS estimation, but they require large datasets for model training, which can be difficult to obtain. In addition, network operators may be reluctant to disclose their network data due to privacy concerns. Therefore, alternative data collection techniques are needed. In this paper, a framework is proposed that uses generative models to generate synthetic data for LOS estimation in non-terrestrial 6G networks. Specifically, the authors show that generative models can be trained with a small available dataset to generate large datasets that can be used to train ML models for LOS estimation. Furthermore, since the generated synthetic data does not contain identifying information of the original dataset, it can be made publicly available without violating privacy. NTNs, Satellites, Channel Modeling, Generative Models. ## 1 Introduction In recent years, the Third Generation Partnership Project (3GPP) has envisioned the integration of 5G and 6G mobile networks with Non-Terrestrial Network (NTN) technologies such as Low Earth Orbit (LEO) satellites, Unmanned Aerial Systems (UASs), and High Altitude Platforms (HAPs) as a promising solution for providing ubiquitous coverage in inaccessible areas [1]. This integration will significantly improve network connectivity, accessibility, and data rates, and will also support a wide range of applications and services, including rescue missions, remote monitoring, and goods delivery [1]. The main communications challenge with this integration is modeling the Line-of-Sight (LOS) availability of the link between the satellite and terrestrial segments, since satellite communications require a clear LOS that can be blocked by obstacles such as buildings and vegetation, resulting in signal blockage, diffraction, and reflection. The elevation angle of the satellite also affects the LOS, with lower angles less likely to result in a LOS, yielding a weak or no signal. Existing 3GPP and International Telecommunication Union (ITU) models define channel parameters based on elevation angle, frequency, and deployment scenarios [2]. However, certain critical parameters, such as LOS probability, have no temporal correlation and therefore do not account for satellite motion. Consequently, changes in LOS/Non-Line-of-Sight (NLOS) states may be inaccurately represented, which could misrepresent the impact of radio mobility on 5G-based satellite networks. Therefore, it is important to model these changes more accurately, considering the impact of satellite and user mobility. Statistical modeling is one potential solution for predicting LOS and transmitting data only when conditions are favorable. However, this approach can be tedious and time-consuming. Recently, researchers have explored the use of Machine Learning (ML) techniques for LOS estimation.
Although these methods have shown promising results, they typically require large amounts of data to train the ML models, which can be difficult and expensive to obtain. In addition, network operators' privacy concerns about sensitive data have made it difficult to obtain large datasets for ML model training. To address this problem, generative models such as Generative Neural Networks (GNNs) have been proposed to generate synthetic datasets using small available real datasets. These networks have numerous applications in various fields, including communications, manufacturing, and healthcare. They are mainly used to create new images, music, text, and videos. The two most commonly used GNNs are the Generative Adversarial Network (GAN) and the Variational Autoencoder (VAE), which are used to generate synthetic data that mimics the quality and statistical distribution of real-world data. This study aims to demonstrate the effectiveness of generative models in generating new satellite channel data for LOS estimation. To accomplish this, the study utilizes the channel models provided by the ITU and creates datasets of LOS and NLOS traces for various satellite elevation angles that can account for satellite mobility. These datasets are then used to train the generative models, namely GAN and VAE. By using these models, this work shows that synthetic data can be generated that closely resembles the original data and retains its statistical distribution, thereby providing a solution to the limited availability of training data in ML models for LOS estimation. Furthermore, the generated data would be free of legal, privacy, and security issues; they can be used for academic and research purposes, mitigating the difficulties of obtaining real-world data. We comprehensively evaluate, compare, and analyze the performance levels of the proposed generative models using various statistical parameters. The experimental results show that the proposed models are robust for LOS estimation using the generated dataset. ## 2 Related Work This section reviews various techniques proposed in the literature for LOS estimation and synthetic data generation. For LOS estimation, the authors of [3] provide a theoretical model that calculates the likelihood of having LOS when there are no clouds. They showed that the LOS availability depends on the height of the ground station and the satellite elevation angle. An ML-based method for NLOS identification is proposed in [4], which uses two ML algorithms, Support Vector Machine (SVM) and Logistic Regression (LR), to predict the NLOS using data from the global navigation system. Authors in [5] use time-varying angular information of a channel to train ML algorithms for LOS identification in Vehicle to Vehicle (V2V) communication. The problem with these ML-based models is that they require a huge amount of measurement data, which may be unavailable or too small to train a model. Moreover, the measurement campaigns to obtain real channel data may be costly and time-consuming. Our work proposes a solution by showing that it is possible to use synthetic data instead of real channel data for LOS estimation. The synthetic data can be obtained using generative models trained on a small amount of available real channel data, or from publicly available synthetic data of the channel characteristics. Numerous data-driven methods for generating synthetic time series data are also studied in the literature.
One example is the approach proposed by [6], which uses a GAN to generate energy consumption data. A GAN uses a discriminator to indirectly train the generator network, enabling it to generate synthetic data. In [7], a general modeling approach is presented based on training a generative neural network with data. The proposed generative model includes two stages: prediction of the link state (LOS, NLOS, or no available link) and subsequent use of a conditional VAE to generate path losses, delays, and arrival and departure angles for all propagation paths given the previously predicted link state. In [8], the authors used a GAN for path loss prediction for satellite images using a dataset from ray-tracing simulations. ## 3 System Model ### _Channel Model_ The 3GPP has defined two main architectures for integrating cellular networks and NTN [1]: the transparent and regenerative architecture. In the transparent architecture, the satellite acts like a Radio Frequency (RF) repeater, transparently forwarding traffic between the UE and the New Generation (NG)-Radio Access Network (RAN) of the mobile network. In the regenerative architecture, on the other hand, the satellite has gNB capabilities with an onboard gNB-Distributed Unit (DU) and the gNB-Centralized Unit (CU) deployed in the ground segment of the mobile network, as shown in Figure 1. Fig. 1: Reference Scenario. In this work, we use the regenerative architecture. As explained earlier, the link between the satellite and the UE can be in LOS or NLOS. In this work, we investigate the generation of synthetic data that can be used to estimate the LOS probability for the link between the satellite and the UE. We use the channel model provided by the ITU [9] and modify it to a simplified Lutz model [10], [3]. According to these models, a channel between a satellite and any land mobile terminal can be in either a good (G) or bad (B) state, as the signal power varies as a result of shadowing and multipath caused by signal obstructions and reflections from obstacles such as buildings, vegetation, and the ground. We assume that a channel is in a good state if there is a LOS, and in a bad state otherwise. The ITU [9] recommendations provide several statistical parameters that can be used to calculate the average duration of each of the two states in different environments, including the type of terrain (urban, suburban, and rural), elevation angles, and frequencies. These parameters were collected in a city in France and are shown in Table 1. They include each state's mean, standard deviation, and minimum state lengths in meters. For this study, we utilized the parameters for a dense urban environment at 2.2 GHz and computed the transition probabilities, which denote the likelihood of transitioning from one state to another given an initial state. Table 1 also lists the transition probabilities we derived, and we modeled the state transitions using a two-state Markov process. To train our generative models, we generated LOS-NLOS traces from the statistical model. The generative models learned the latent space and distributions of the training data, allowing them to produce synthetic traces with similar statistical characteristics to the original dataset. Alternatively, without access to a channel model, the training data could be obtained from traces acquired through real-time transmissions using feedback mechanisms, as demonstrated in [11].
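A minimal sketch of generating such a trace from the two-state Markov chain, assuming per-step transition probabilities b = P(G→B) and g = P(B→G) as given in Table 1:

```python
import numpy as np

def generate_los_trace(n_steps, p_good_to_bad, p_bad_to_good, seed=None):
    """Generate a LOS/NLOS trace from a two-state Markov chain.

    Returns an array of +1 (LOS / good state) and -1 (NLOS / bad state),
    matching the +-1 coding used for the training data.
    """
    rng = np.random.default_rng(seed)
    trace = np.empty(n_steps, dtype=int)
    state = 1                      # start in the good (LOS) state
    for t in range(n_steps):
        trace[t] = state
        u = rng.random()
        if state == 1 and u < p_good_to_bad:
            state = -1
        elif state == -1 and u < p_bad_to_good:
            state = 1
    return trace

# e.g. a 100,000-sample trace at 70 degrees elevation (values from Table 1)
trace_70 = generate_los_trace(100_000, 2.76683e-6, 0.00052923)
```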
### _Generative Neural Networks_ #### 3.2.1 Generative Adversarial Networks (GANs) In [12], Ian Goodfellow introduced the GAN, a type of ML algorithm that has gained popularity because of its ability to generate high-quality synthetic images, videos, and even sounds that closely resemble real-world data. GANs use an unsupervised learning approach to detect patterns in the input data and generate new samples with the same distribution as the original data. GANs train two neural networks, the generator network and the discriminator. The generator network generates the fake data by taking samples from a random distribution and converting them into data that resembles real data. In contrast, the discriminator network tries to distinguish between the real data and the fake data generated by the generator. The generator does not have direct access to real samples and learns only through interaction with the discriminator, which has access to synthetic and real samples. Training GANs amounts to finding the parameters of a discriminator that maximize its classification accuracy and finding the parameters of a generator that maximally confuse the discriminator. As the generator network improves, more realistic data is generated, making it increasingly difficult for the discriminator network to distinguish between real and fake data. This work uses GANs to generate LOS and NLOS traces for 6G-NTN channels. #### 3.2.2 VAE - Variational Auto-Encoders Variational autoencoders are a powerful type of Deep Learning (DL) algorithm that can be used for unsupervised learning and generative modeling. They are particularly useful for learning a compressed dataset representation that captures the data's underlying structure in a low-dimensional space. VAEs have many practical applications, such as image and speech recognition and natural language processing. VAEs are capable of generating new data that is similar to the training data. VAEs use a two-part architecture consisting of an encoder and a decoder. The encoder maps the input data to a distribution in latent space, while the decoder maps points in latent space back to the original data space. By varying the sampled points in the latent space, new data that is similar to, but not identical to, the training data can be generated. The VAE architecture includes an encoder that transforms input data into a Gaussian distribution over the latent space using convolutional or dense layers. The decoder takes samples from the latent space and maps them back to the original data space using similar layers. VAE training aims to minimize the difference between the input data and the decoder's output while regularizing the parameters of the Gaussian distribution that defines the latent space.
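A minimal PyTorch sketch of this encoder-decoder structure (layer sizes and the plain mean-squared reconstruction term are illustrative; the actual TVAE architecture differs):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: encoder -> Gaussian latent -> decoder."""
    def __init__(self, d_in, d_latent=8, d_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.logvar = nn.Linear(d_hidden, d_latent)
        self.decoder = nn.Sequential(
            nn.Linear(d_latent, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_in))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        x_hat = self.decoder(z)
        # negative ELBO = reconstruction error + KL to the unit Gaussian prior
        recon = ((x - x_hat) ** 2).sum(dim=1).mean()
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
        return x_hat, recon + kl
```

New traces are then sampled by drawing latent points from the unit Gaussian and passing them through the decoder.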
#### 3.2.3 Conditional Tabular GAN (CTGAN) and Tabular VAE (TVAE) This work uses the Conditional Tabular GAN and the Tabular VAE, both presented by Lei Xu et al. in [13], which are part of the Synthetic Data Vault (SDV) package in TensorFlow. These models were chosen because of their ability to process tabular data and allow training of a single model that can generate synthetic data for any available channel between the UE and the satellite. CTGAN and TVAE are capable of capturing the distribution of each column in the tabular data. In our case, each column contains LOS-NLOS traces for each channel or elevation angle. We consider three elevation angles: \(70^{\circ}\), \(60^{\circ}\), and \(45^{\circ}\). Knowing the distribution of each column, the models used in this work can generate synthetic data for all columns simultaneously, based on the distribution of each column, which saves time compared to training a separate model for each channel. However, since the LOS probability changes with the elevation angle of the satellite, different channel models are needed for each angle. For LEO satellites, this means that a different model is required for each elevation angle. Since satellites are visible from certain locations at certain angles, the terminal must switch to different satellites as they become visible. However, the proposed approach uses a single model that can generate synthetic data for all channels or satellite elevation angles at which the UE connects to the satellite. ## 4 Performance Evaluation In this section, we evaluate and compare the accuracy of the generative models in generating synthetic data that closely resembles real data. First, we train the models to generate synthetic data for the LOS and NLOS traces. Then we evaluate the trained models and compare the similarity between the generated data and the real data. We use the Wasserstein distance, Kolmogorov-Smirnov (KS) test, and Kullback-Leibler (KL) divergence as measures for comparison and evaluation. These measures allow us to determine how well the probability distribution of the synthetic data matches that of the real data. ### _Model Training_ As explained previously, we trained our models using the training dataset consisting of the LOS-NLOS traces obtained using the state transition probabilities in Table 1. For this work, we used the probabilities at three elevation angles: \(45^{\circ}\), \(60^{\circ}\), \(70^{\circ}\). We assume the satellite is visible from our reference UE at these three elevation angles. The trained models should produce synthetic data that can estimate or predict the presence of the LOS at these three angles. The training dataset is a table with 100,000 rows of LOS and NLOS traces in three columns, with the traces of each angle in each column. We trained each model for 100 epochs with a batch size of 50 and a learning rate of \(2\times 10^{-4}\). \begin{table} \begin{tabular}{c c c c c c} \hline \hline Elevation & \(\mu_{G,B}\) & \(\sigma_{G,B}\) & \({\it dur}_{minG,B}\) & \(P(B\to G)\) (g) & \(P(G\to B)\) (b) \\ \hline \(20^{\circ}\) & 2.0042, 3.6890 & 1.2049, 0.9796 & 3.9889, 10.3114 & 0.00014310 & 0.00047466 \\ \(30^{\circ}\) & 2.7332, 2.7582 & 1.1030, 1.2210 & 7.3174, 5.7276 & 0.00024460 & 0.00027570 \\ \(45^{\circ}\) & 3.0639, 2.9108 & 1.6980, 1.2602 & 10.0, 6.0 & 0.00020318 & 0.00007556 \\ \(60^{\circ}\) & 2.8135, 2.0211 & 1.9595, 0.6568 & 10.0, 1.9126 & 0.00105161 & 0.00010797 \\ \(70^{\circ}\) & 4.2919, 2.1012 & 2.4703, 1.0341 & 118.3312, 4.8569 & 0.00052923 & \(2.76683\cdot 10^{-6}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Satellite link parameters and transition probabilities for the dense urban environment in France at 2.2 GHz. ### _Performance Metrics_ We used the following performance metrics to evaluate the effectiveness of our generative models for LOS estimation (a computational sketch follows the list): * The Wasserstein distance is a metric for measuring the distance between two probability distributions, that is, for measuring how similar the probability distribution of the synthetic data is to that of the real data.
The smaller the Wasserstein distance between the synthetic and real data, the more similar the probability distributions, which means that the generative model has produced synthetic data of high quality. * The KS test is a statistical measure that computes the distance between two empirical Cumulative Distribution Functions (CDFs), a popular non-parametric measure used in statistics. We use the KS test to measure how far the CDF of the synthetic data is from that of the real data. It is usually presented as a complementary measure, i.e., one minus the CDF difference. Thus, the higher the value, the more similar the synthetic data is to the real data. * The KL divergence measures how much two probability distributions differ from each other. The KL divergence between two probability distributions, P and Q, is calculated as the sum, over all values, of the log difference between the probabilities of each value in P and Q, multiplied by the probability of that value in P. The KL divergence is always non-negative, and equal to zero if and only if the two distributions are identical. A low KL divergence indicates that the generative model has produced synthetic data of high quality that is similar to the real data.
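A sketch of these three metrics using SciPy; for the KL divergence we compare the empirical distributions over the two trace values, with a small constant added to avoid division by zero:

```python
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp, entropy

def compare_traces(real, synthetic, eps=1e-12):
    """Compare a real and a synthetic +-1 trace with the three metrics."""
    w = wasserstein_distance(real, synthetic)
    ks = 1.0 - ks_2samp(real, synthetic).statistic   # complementary KS measure
    # empirical distributions over the two states (-1 and +1)
    p = np.array([np.mean(real == -1), np.mean(real == 1)]) + eps
    q = np.array([np.mean(synthetic == -1), np.mean(synthetic == 1)]) + eps
    kl = entropy(p, q)                               # KL(P || Q)
    return {"wasserstein": w, "ks_complement": ks, "kl": kl}
```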
### _Results_ #### 4.3.1 Mean distance between real and synthetic data In Tables II and III, we show the mean and variance of the distances between synthetic and real data at different elevation angles using the selected evaluation metrics. To determine the mean distance, we first created a test dataset with 100,000 LOS-NLOS samples, generated in the same way as the training dataset. Then we used the trained CTGAN and TVAE models to create synthetic datasets of the same size. We repeated this procedure 50 times and calculated the KS test, Wasserstein distance, and KL divergence between each synthetic dataset and the real data, obtaining a set of fifty distances for each metric and each model. We then calculated the mean and variance of these distances, and the results are shown in Table II for CTGAN and Table III for TVAE. As can be seen from the tables, both the Wasserstein distance and the KL divergence are very low for both models at all three elevation angles. Similarly, the values of the KS test are very high for both models, ranging from 0.9570 to 0.9763, indicating that our models produce high-quality synthetic data with distributions that are very close to the real data. #### 4.3.2 Stability of the models The results presented in Tables II and III show low variances, as low as \(1.42\times 10^{-9}\), which means that the distances between the different synthetic datasets generated at different instances vary little. This shows that our models are stable, robust, and reliable in generating LOS/NLOS estimates at different instances and different elevation angles. #### 4.3.3 Distributions of real data and synthetic data In Figure 2 we show the distribution of real and synthetic data for CTGAN and TVAE at different satellite elevation angles. Fig. 2: Distribution of real vs synthetic data for CTGAN and TVAE at different satellite elevation angles. In generating our dataset, LOS and NLOS were coded as 1 and -1, respectively. As expected, the LOS probability decreases with decreasing elevation angle and is highest at \(70^{\circ}\) and lowest at \(45^{\circ}\). The results show that this variation is similar for real and synthetic data, implying that our models can correctly estimate the LOS and NLOS probabilities at different elevation angles. #### 4.3.4 Comparative Analysis The following briefly compares the generative models used in this paper. As mentioned earlier, both GANs and VAEs are DL models used to generate synthetic data. However, they differ in the way they generate it. GANs use a generator that learns to produce synthetic data that is indistinguishable from real data. In contrast, VAEs use a probabilistic encoder and decoder network, mapping the real data to a low-dimensional latent space and then mapping the latent space back to the original space. In general, GANs are known to be more difficult to train than VAEs and may suffer from "mode collapse", where the generator produces only a limited variety of synthetic data. The results in Figure 3 show that VAEs reach stability faster, with fewer passes over the training dataset, while GANs require longer training periods to achieve stability. The figure shows the KL divergence and Wasserstein distance between the test dataset and the generated dataset for an elevation angle of \(70^{\circ}\), indicating that the data distribution generated by the TVAE is very similar to that of the real dataset, even after only a few passes over the training dataset. Thus, the TVAE performs better than the GAN for LOS estimation on the given dataset. We also consider the training times of CTGAN and TVAE. CTGAN requires more training time than TVAE. For example, in our simulations, CTGAN required 1.18 hours in the given training environment, while TVAE required 47 minutes. This shows that for the given problem of LOS estimation with the given dataset, TVAE is the best option in terms of both performance and training time. ## 5 Conclusion In this study, a DL technique was used to generate synthetic data for LEO satellites operating in non-terrestrial 6G networks, considering both LOS and NLOS scenarios. Two generative network variants, CTGAN and TVAE, were used because they are well suited for tabular data. The simulation results showed that the generative models mimicked the real dataset very well and successfully estimated the LOS probabilities over multiple satellite channels. The statistical metrics used to measure the performance of the models showed that both models were comparable. However, TVAE outperformed CTGAN in terms of training time, while CTGAN exhibited some oscillations in training due to the min-max adversarial objective.
2306.10791
Active Ising Models of Flocking: A Field-Theoretic Approach
Using an approach based on Doi-Peliti field theory, we study several different Active Ising Models (AIMs), in each of which collective motion (flocking) of self-propelled particles arises from the spontaneous breaking of a discrete symmetry. We test the predictive power of our field theories by deriving the hydrodynamic equations for the different microscopic choices of aligning processes that define our various models. At deterministic level, the resulting equations largely confirm known results, but our approach has the advantage of allowing systematic generalization to include noise terms. Study of the resulting hydrodynamics allows us to confirm that the various AIMs share the same phenomenology of a first order transition from isotropic to flocked states whenever the self propulsion speed is nonzero, with an important exception for the case where particles align only pairwise locally. Remarkably, this variant fails entirely to give flocking -- an outcome that was foreseen in previous work, but is confirmed here and explained in terms of the scalings of various terms in the hydrodynamic limit. Finally, we discuss our AIMs in the limit of zero self-propulsion where the ordering transition is continuous. In this limit, each model is still out of equilibrium because the dynamical rules continue to break detailed balance, yet it has been argued that an equilibrium universality class (Model C) prevails. We study field-theoretically the connection between our AIMs and Model C, arguing that these particular models (though not AIMs in general) lie outside the Model C class. We link this to the fact that in our AIMs without self propulsion, detailed balance is not merely still broken, but replaced by a different dynamical symmetry in which the dynamics of the particle density is independent of the spin state.
Mattia Scandolo, Johannes Pausch, Michael E. Cates
2023-06-19T09:08:03Z
http://arxiv.org/abs/2306.10791v1
# Active Ising Models of Flocking: A Field-Theoretic Approach ###### Abstract Using an approach based on Doi-Peliti field theory, we study several different Active Ising Models (AIMs), in each of which collective motion (flocking) of self-propelled particles arises from the spontaneous breaking of a discrete symmetry. We test the predictive power of our field theories by deriving the hydrodynamic equations for the different microscopic choices of aligning processes that define our various models. At deterministic level, the resulting equations largely confirm known results, but our approach has the advantage of allowing systematic generalization to include noise terms. Study of the resulting hydrodynamics allows us to confirm that the various AIMs share the same phenomenology of a first order transition from isotropic to flocked states whenever the self propulsion speed is nonzero, with an important exception for the case where particles align only pairwise locally. Remarkably, this variant fails entirely to give flocking - an outcome that was foreseen in previous work, but is confirmed here and explained in terms of the scalings of various terms in the hydrodynamic limit. Finally, we discuss our AIMs in the limit of zero self-propulsion where the ordering transition is continuous. In this limit, each model is still out of equilibrium because the dynamical rules continue to break detailed balance, yet it has been argued that an equilibrium universality class (Model C) prevails. We study field-theoretically the connection between our AIMs and Model C, arguing that these particular models (though not AIMs in general) lie outside the Model C class. We link this to the fact that in our AIMs without self propulsion, detailed balance is not merely still broken, but replaced by a different dynamical symmetry in which the dynamics of the particle density is independent of the spin state. **Keywords: Active Ising Model, Flocking, Field Theory, Phase Transition** ## 1 Introduction Flocking, in which a group of self-propelled particles align and move in the same direction, is displayed by a wide variety of biological and soft-matter systems [1]. The alignment effect creates many similarities between flocking and ferromagnetism, but flocking exhibits a richer phenomenology. Indeed, because the alignment interaction is among particle velocities rather than spatially fixed spin variables, flocking systems are inherently active, and driven far from thermal equilibrium. Much recent research has addressed collective behaviour and phase transitions in this and other active matter systems, with many open questions remaining [2]. In this paper we address some questions concerning minimal models of active matter (specifically, flocking), exploiting similarities to magnetism (specifically, the Ising model). In doing so we follow a path exemplified decades ago by Fyl Pincus, who was among the first to properly explore the similarities between solid-state magnetism and liquid-crystal ordering in soft matter systems (e.g. [3, 4]). In their seminal paper on flocking - also known as collective motion - Vicsek _et al._[5] studied a system of active particles with fixed speed that align their velocities through a ferromagnetic interaction. The Vicsek model is thus an active version of the XY or Heisenberg model, in \(2d\) and \(3d\) respectively. Crucially, activity breaks the precepts of the Mermin-Wagner theorem, stabilizing the ordered phase even in two dimensions [6].
Moreover, the phase transition from disorder to the ordered (flocked) phase exhibits clear evidence of being first-order [7, 8, 9], in contrast to the usual second-order nature of the ordering transition in equilibrium ferromagnets. To help understand the transition, Solon and Tailleur introduced the Active Ising Model [10, 11]. Here flocking can emerge only along one privileged axis (the \(x\)-axis, say), just as the magnetization axis is pre-determined in the equilibrium Ising model. Although _the_ Active Ising Model was introduced with a specific choice of spin-alignment dynamics, below we will refer to any model in which velocities locally tend to align along a fixed axis as _an_ Active Ising Model (or AIM). In the present work, we study the behaviour of various AIMs through a field-theoretical approach. As in equilibrium, a main advantage of using field theory is that any large-scale, collective behaviours that do not depend on the specific microscopic details can be simpler to study and understand. This happens because, once the emergent (hydrodynamic) variables are identified, an expansion in small, slowly varying fluctuations is typically possible - either directly, or by re-expressing as an expansion in dimensionality via the Renormalisation Group (RG). Below, starting from the Master Equation for AIM systems, we derive a field theory through a coherent-state path-integral representation. This approach, known as Doi-Peliti field theory, offers an exact mapping between the coefficients of the microscopic model and the bare couplings of a field-theoretic action. It not only enables standard field-theoretical approximations including RG, but also gives the exact deterministic hydrodynamic equations, together with their lowest order fluctuation corrections [12]. Using our field theory, we will present several new results, some in line with prior expectations based on less formal analyses, but others contradicting them. Our results add significantly to what is known about Active Ising Models, although many questions lie beyond our scope and must remain unanswered here. The rest of this paper is structured as follows: in Sec 2 we define the various Active Ising Models studied, and review their main phenomenology. In Sec 3 we briefly review the derivation of the Doi-Peliti field theory, connecting it to physical variables through the so-called Cole-Hopf transformation. In Sec 4 we derive the hydrodynamic equations, together with lowest order noise terms, and give a linear stability analysis of homogeneous states. In Sec 5 we focus on a specific AIM in which spins align only pairwise, finding this unable to sustain flocking at any finite noise level, in agreement with previous arguments [13]. In Sec 6 we address the continuous ordering transition of our Active Ising Models that arises in the limit of zero self-propulsion. As previously noted [10, 11], such models remain active (i.e., out of equilibrium) because the remaining combination of unbiased spin hopping and alignment already breaks detailed balance. We discuss whether this transition shares a universality class with equilibrium models as previously argued [11] (a result also seen in other active models of Ising symmetry [14]).
We establish a connection between our stochastic hydrodynamic equations and Model C (which describes an equilibrium Ising dynamics coupled to a conserved scalar density) but argue that AIMs may nonetheless inhabit a new, nonequilibrium universality class - a view supported by explicit RG calculations that we will publish elsewhere. Finally, in Sec 7 we offer some concluding remarks. ## 2 Active Ising Models An Active Ising Model (AIM) is a minimal description of a system in which individuals align their directions of motion. Contrary to the Vicsek Model [5], where collective motion may occur in any possible direction in space, in an AIM, individuals prefer to move parallel to a given axis, which we identify _wlog_ as the \(x\) axis. The _state_ of each particle is thus defined by its lattice position and a spin variable \(\pm 1\) that tells which direction \(\pm\hat{x}\) it prefers to move in. The particles reside on a \(d\)-dimensional square lattice _without any occupation number constraint_ and move through space by hopping onto neighbouring sites. In the \(x\) direction (only) the hopping rates are actively biased: particles with positive (negative) spin will hop preferentially towards more positive (negative) \(x\) values. In all directions other than \(x\), particles undergo unbiased, diffusive hopping. Finally, imitative behaviour among individuals, effectively encoded in a ferromagnetic spin alignment interaction among particles on the same site, creates a tendency towards mutual alignment and hence collective motion. Thus an AIM represents a minimal, Ising-like model of flocking, with a discrete symmetry replacing the full rotational symmetry of the Vicsek Model. Two crucial differences between an AIM and the equilibrium Ising Model must be borne in mind: (i) an AIM has no occupancy constraint on each lattice site, and (ii) the alignment interaction occurs only between same-site particles instead of between particles on nearest neighbour sites. The former means that particles are never blocked from hopping by excluded volume, allowing a simpler treatment of the bias. The latter choice is likewise made for simplicity in the hope that same-site interactions are sufficient to describe emergent properties; in most cases one expects diffusion to mix particles enough that on-site and nearest neighbour alignment interactions are equivalent. The state of the \(k\)-th particle is defined by its position on the lattice \(\mathbf{i}^{(k)}=\left(i_{1}^{(k)},\ldots,i_{d}^{(k)}\right)\) and its spin \(s_{k}=\pm 1\). The state of the whole system can then be identified via the number of \(+1\) and \(-1\) spin particles on each site \(\mathbf{i}\), respectively \(n_{\mathbf{i}}^{+}\) and \(n_{\mathbf{i}}^{-}\), or equivalently via the local density \(\rho_{\mathbf{i}}=n_{\mathbf{i}}^{+}+n_{\mathbf{i}}^{-}\), and magnetisation \(m_{\mathbf{i}}=n_{\mathbf{i}}^{+}-n_{\mathbf{i}}^{-}\). With no occupancy constraint, \(\rho_{\mathbf{i}}\) has no upper bound, but the magnetisation \(m_{\mathbf{i}}\) is bounded by \(\rho_{\mathbf{i}}\), since \(-\rho_{\mathbf{i}}\leq m_{\mathbf{i}}\leq\rho_{\mathbf{i}}\). ### Description using reactions Due to its on-lattice definition, the dynamics of an AIM can be described as a set of reactions between two particle species \(A_{\mathbf{i}}\) and \(B_{\mathbf{i}}\), representing respectively particles at site \(\mathbf{i}\) having \(+1\) and \(-1\) spin. The model is completely defined once the following two processes are specified: 1.
how particles move in space, namely with what rates they undergo hopping reactions \[A_{\mathbf{i}}\longrightarrow A_{\mathbf{j}}\qquad B_{\mathbf{i}}\longrightarrow B_{\mathbf{j}}\] (1) 2. how particles change direction, namely with what rate they undergo the spin-flip reactions \[A_{\mathbf{i}}\longrightarrow B_{\mathbf{i}}\qquad B_{\mathbf{i}}\longrightarrow A_{\mathbf{i}}\] (2) We next address these processes in turn. #### 2.1.1 Hopping In an AIM, particles are assumed to hop with a fixed rate \(D\) in all spatial directions, except for the \(\hat{x}\) direction where there is a preferred motion set by the spin variable. We thus introduce biased hopping reactions in the \(\hat{x}\) direction as \[A_{\mathbf{i}}\longrightarrow A_{\mathbf{i}\pm\hat{x}}\qquad\text{rate: }D(1\pm\epsilon) \tag{3}\] \[B_{\mathbf{i}}\longrightarrow B_{\mathbf{i}\pm\hat{x}}\qquad\text{rate: }D(1\mp\epsilon) \tag{4}\] In all other directions \(\hat{y}\neq\hat{x}\), the hopping is instead unbiased, and hence \[A_{\mathbf{i}}\longrightarrow A_{\mathbf{i}\pm\hat{y}}\qquad\text{rate: }D \tag{5}\] \[B_{\mathbf{i}}\longrightarrow B_{\mathbf{i}\pm\hat{y}}\qquad\text{rate: }D\] Here the bias parameter \(0\leq\epsilon\leq 1\) quantifies self-propulsive activity. The hopping reactions are not influenced by the presence of other particles, and hence are independent of particle concentration. #### 2.1.2 Spin-flipping In this work we address three different types of AIM (AIM0, AIM1 and AIM2, the latter with several sub-variants), which are distinguished by different choices of spin-flip reaction rates. **AIM0: Original Ising flip rates** In the original formulation of the AIM, as introduced in [10], the rates for a spin-flipping event took inspiration from the equilibrium dynamics of a fully-connected Ising model in the canonical ensemble. This means that, in the absence of any hopping, each _site_ behaves as a fully-connected Ising model. In terms of reactions between \(A\) and \(B\) particles, this choice of rates leads to \[\begin{split} A_{\mathbf{i}}\longrightarrow B_{\mathbf{i}}& \text{rate: }\gamma\exp\left(-\beta\frac{m_{\mathbf{i}}}{\rho_{\mathbf{i}}}\right)\\ B_{\mathbf{i}}\longrightarrow A_{\mathbf{i}}&\text{rate: }\gamma\exp\left(\beta\frac{m_{\mathbf{i}}}{\rho_{\mathbf{i}}}\right)\end{split} \tag{6}\] which we shall refer to as AIM0. Here \(\gamma\) is the rate of particle flipping in the \(m=0\) case, while \(\beta\) plays the role of an inverse temperature. This choice of flip rates is, however, infeasible to implement in a Doi-Peliti framework: although we were able to formally derive a field-theoretical action for this choice, we could not express it in terms of simple functions but only as an infinite series. Given that the choice of rates in [10] is somewhat arbitrary, we are at liberty to make others for which the field theory is simpler. **AIM1: Alternative Ising-like flip rates** From a technical point of view, what makes it difficult to study the rates of (6) is the presence of the \(\rho_{\mathbf{i}}\) in the denominator of the exponential argument. Hence, a choice of reactions which still mimics equilibrium dynamics of Ising spins is given by [15] \[\begin{split} A_{\mathbf{i}}\longrightarrow B_{\mathbf{i}}& \text{rate: }\gamma\exp\left(-\beta m_{\mathbf{i}}\right)\\ B_{\mathbf{i}}\longrightarrow A_{\mathbf{i}}&\text{rate: } \gamma\exp\left(\beta m_{\mathbf{i}}\right)\end{split} \tag{7}\] We shall refer to this model as AIM1.
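As an illustration of these rules, here is a minimal discrete-time Monte Carlo sketch of AIM1 on a 1d ring; this is our own illustrative implementation, valid only for time steps \(dt\) small enough that all per-particle event probabilities stay well below one, and the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def aim1_step(n_plus, n_minus, D=1.0, eps=0.5, gamma=0.1, beta=1.5, dt=0.01):
    """One small-dt Monte Carlo update of AIM1 on a 1d ring.

    n_plus, n_minus: integer arrays; occupations of +1 and -1 spins per site.
    Spin flips use the AIM1 rates gamma*exp(-+ beta*m); hopping uses D(1 +- eps).
    """
    m = n_plus - n_minus
    # spin flips (probabilities clipped at 1 in case dt is too large)
    f_plus = rng.binomial(n_plus, np.minimum(1.0, gamma * np.exp(-beta * m) * dt))
    f_minus = rng.binomial(n_minus, np.minimum(1.0, gamma * np.exp(beta * m) * dt))
    n_plus += f_minus - f_plus
    n_minus += f_plus - f_minus
    # biased hopping: +1 spins drift to the right, -1 spins to the left
    for n, bias in ((n_plus, eps), (n_minus, -eps)):
        right = rng.binomial(n, D * (1 + bias) * dt)
        left = rng.binomial(n - right, D * (1 - bias) * dt)  # small-dt approximation
        n += np.roll(right, 1) + np.roll(left, -1) - right - left
    return n_plus, n_minus
```

Iterating this update from a disordered initial condition at large \(\beta\) displays the spontaneous symmetry breaking and travelling bands characteristic of the AIM phenomenology described above.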
The two sets of reactions (6) and (7) are expected to give qualitatively similar phase diagrams, but quantitative agreement is not expected. In particular, strong differences are expected to emerge in the zero and infinite density limits, where the absence of a normalization of \(m_{\mathbf{i}}\) by \(\rho_{\mathbf{i}}\) might lead to drastic consequences. However, we will later show how, at finite densities, the behaviour near the ordering transition is extremely similar. **AIM2: Collisional flip rates** In the context of off-equilibrium systems such as active matter, we have no particular reason to argue that the flip dynamics should mimic that of any equilibrium spin system. The rates introduced here are instead inspired by the process of multiple-particle collisions, involving a finite and fixed number of particles (chosen at random from the same site), in contrast with the equilibrium-inspired rates, where all particles on the same site interact to set the rates. We consider the following three reaction processes: AIM2.1: One-body collisional flip rate \[\begin{split} A_{\mathbf{i}}\longrightarrow B_{\mathbf{i}}&\text{rate: }\gamma\\ B_{\mathbf{i}}\longrightarrow A_{\mathbf{i}}&\text{rate: }\gamma\end{split} \tag{8}\] AIM2.2: Two-body collisional flip rate \[\begin{split} A_{\mathbf{i}}+B_{\mathbf{i}}\longrightarrow 2\,B_{\mathbf{i}}&\text{rate: }\lambda\\ A_{\mathbf{i}}+B_{\mathbf{i}}\longrightarrow 2\,A_{\mathbf{i}}&\text{rate: }\lambda\end{split} \tag{9}\] AIM2.3: Three-body collisional flip rate \[\begin{split} 2\,A_{\mathbf{i}}+B_{\mathbf{i}}\longrightarrow 3\,A_{\mathbf{i}}&\text{rate: }\tau\\ A_{\mathbf{i}}+2\,B_{\mathbf{i}}\longrightarrow 3\,B_{\mathbf{i}}&\text{rate: }\tau\end{split} \tag{10}\] The one-body (or random) spin-flipping (8) introduces a random error in the alignment process, not dissimilar to thermal noise. In fact, AIM2.1 is exactly equivalent to the infinite-temperature limit \(\beta\to 0\) of both AIM0 (6) and AIM1 (7). It amounts to a random interconversion of \(A\) and \(B\) particles, and there is no phase transition. On the other hand, the two- (9) and three-body (10) collisional terms favour alignment. For both cases, in the absence of any additional random spin-flipping, the two fully ordered states (all \(A\) or all \(B\) particles) are absorbing states: once the system reaches them, it will remain there forever. We might therefore expect AIM2.2 and AIM2.3 to give rise to a phenomenology similar to the original AIM0, at least qualitatively, with spontaneous symmetry breaking leading to a strongly flocked state of positive or negative spins. However, in Sec. 5, we will show how this expectation fails for AIM2.2: the two-body flip reaction cannot create ordering in the presence of any random (one-body) spin-flipping rate, no matter how small. Therefore a three-body interaction (AIM2.3) will be needed below to get an ordering transition. With this term present, one can add back two- and one-body collisional flips without qualitatively altering the outcome; we use the inclusive nomenclature 'AIM2' for this most general case. In the current work we restrict attention to AIM1 and AIM2 as described above. For these (like AIM0) the hopping and spin-flip rules do not obey detailed balance even in the propulsion-free limit (\(\epsilon\to 0\)) [10, 11]. A different AIM variant was recently constructed specifically to restore detailed balance in this limit [16], but we do not address it here.
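For concreteness, the following minimal sketch (ours, not the simulations of [10] or [15]; all parameter values are illustrative assumptions) implements the AIM2 dynamics defined by the hopping reactions (3)-(5) and the flip reactions (8)-(10) on a one-dimensional ring, using a discrete-time parallel update with a small time step as an approximation to the continuous-time process:

```python
# Minimal AIM2 sketch on a 1d ring: biased hopping, Eqs. (3)-(4), plus one-,
# two- and three-body on-site flips, Eqs. (8)-(10). A discrete-time parallel
# update with step dt approximates the continuous-time dynamics; all numbers
# here are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
Lx, N = 200, 2000                    # sites and particles (rho0 = 10)
D, eps = 1.0, 0.5                    # hopping rate and propulsion bias
gamma, lam, tau = 0.5, 0.0, 0.1      # flip rates; mean-field threshold (Sec 4.4):
                                     # rho_c = sqrt(8*gamma/tau) ~ 6.3 < rho0
dt = 0.01                            # keep every event probability << 1

pos = rng.integers(0, Lx, N)
spin = rng.choice(np.array([-1, 1]), N)

for step in range(20_000):
    # hopping: right with prob D*(1 + s*eps)*dt, left with D*(1 - s*eps)*dt
    u = rng.random(N)
    pR = D * (1 + spin * eps) * dt
    pL = D * (1 - spin * eps) * dt
    hop_r = (u < pR).astype(int)
    hop_l = ((u >= pR) & (u < pR + pL)).astype(int)
    pos = (pos + hop_r - hop_l) % Lx
    # flips: a particle joins the opposite species with rate
    # gamma + lam*n_opp + (tau/2)*n_opp*(n_opp - 1),
    # where n_opp counts same-site particles of the species it would join
    nplus = np.bincount(pos[spin == 1], minlength=Lx)
    nminus = np.bincount(pos[spin == -1], minlength=Lx)
    n_opp = np.where(spin == 1, nminus[pos], nplus[pos])
    p_flip = (gamma + lam * n_opp + 0.5 * tau * n_opp * (n_opp - 1)) * dt
    spin = np.where(rng.random(N) < p_flip, -spin, spin)

# roughly +-sqrt(1 - (rho_c/rho0)**2) deep in the ordered phase (mean field)
print("magnetisation per particle:", spin.sum() / N)
```

Because the update is parallel, the site counts used for the flip probabilities are those at the start of each step; for small \(dt\) this is a standard approximation to the random-sequential (continuous-time) dynamics.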
### Master Equation Having specified the hopping and flip rates, we can study the behaviour of the model via a Master Equation \(\partial_{t}P=\mathcal{L}\left[P\right]\) for the probability distribution \(P(\mathbf{n}^{+},\mathbf{n}^{-};t)\) in configuration space. The Master Equation is linear in \(P\), and each different process gives an independent contribution to \(\mathcal{L}\): \[\partial_{t}P=\mathcal{L}_{D}\left[P\right]+\mathcal{L}_{\epsilon}\left[P\right]+\mathcal{L}_{\text{flip}}\left[P\right] \tag{11}\] where \(\mathcal{L}_{\text{flip}}\) is the contribution of the alignment process, \(\mathcal{L}_{D}\) arises from the unbiased hopping dynamics, while \(\mathcal{L}_{\epsilon}\) takes into account the hopping bias and is linear in the bias parameter \(\epsilon\). In the case of AIM2, the alignment contribution can be further written as \(\mathcal{L}_{\text{flip}}=\mathcal{L}_{\gamma}+\mathcal{L}_{\lambda}+\mathcal{L}_{\tau}\), with terms stemming from reactions (8), (9) and (10) respectively. The explicit form of all the evolution operators is given in Appendix A. ## 3 The Doi-Peliti field theory The Master Equation is _exactly_ represented by a field-theoretic action [12, 17], constructed through a coherent-state path integral representation of the evolution operator \(\mathcal{L}\), following the second-quantisation formalism for reaction-diffusion processes introduced by Doi [18, 19] and Peliti [20]. ### Building the action Briefly, the Doi-Peliti construction is as follows. For each particle species, _creation_ and _annihilation_ fields are introduced. The Master Equation is first written in a second-quantisation formalism, such that the _state_ of the system - namely the probability generating function - evolves via an imaginary-time Schrödinger equation with an evolution operator \(\hat{H}\) derived from \(\mathcal{L}\). The action for the creation and annihilation fields is obtained by computing the matrix elements of \(\hat{H}\) in the basis of the eigenvectors of the creation and annihilation operators, of which our fields are the associated eigenvalues. Operationally, one first writes \(\hat{H}\) in normal-ordered form, and then replaces annihilation and creation operators with their corresponding fields. See Appendix B for more details. ### Building the operators The main drawback of the Doi-Peliti formalism is that the fields it describes are difficult to interpret physically. In fact, not only does the evolution operator have to be written in a second-quantised formalism, but so do the observables of the theory. For example, consider a model with a single species of particles on a lattice. The number of particles on a given site \(\mathbf{i}\) can then be expressed as \(n_{\mathbf{i}}=a_{\mathbf{i}}^{\dagger}a_{\mathbf{i}}\), where \(a^{\dagger}\) and \(a\) are creation and annihilation operators. Say we wanted to compute the expectation value of some observable containing products of \(n_{\mathbf{i}}\) at different sites and times. The rule to construct the corresponding field-theoretical operator is very similar to that needed to build the action. First, particle numbers are written in terms of creation and annihilation operators; these operators are then normal ordered, and finally the operators are replaced by the corresponding fields. A simplifying feature of the Doi-Peliti theory is that any creation operators appearing at the last of the chosen times can then be dropped.
The underlying reason is causality: the event of a particle _created_ after all the measurements should not affect the averages we are computing. Accordingly, the field-theoretical operator whose average is equal to the expected value of \(n_{\mathbf{i}}\) at time \(t\) is constructed as follows: \(n_{\mathbf{i}}(t)=a_{\mathbf{i}}^{\dagger}(t)a_{\mathbf{i}}(t)\to a_{\mathbf{i}}(t)\to\phi_{\mathbf{i}}(t)\), with \(\phi\) the annihilation field. (The creation operator at time \(t\) can be dropped as stated above.) Therefore, the following relation for the expected value of \(n\) holds \[\mathbb{E}\left[n_{\mathbf{i}}(t)\right]=\langle\phi_{\mathbf{i}}(t)\rangle \tag{12}\] where \(\mathbb{E}\left[\cdot\right]\) denotes expected values for the microscopic stochastic process, while \(\langle\cdot\rangle\) indicates the average over the field-theoretic measure. This case is simple, but more complicated operators are not always so intuitive. For example, the correlation between \(n_{\mathbf{i}}\) at time \(t\) and \(n_{\mathbf{j}}\) at time \(t^{\prime}<t\) is \[\mathbb{E}\left[n_{\mathbf{i}}(t)\,n_{\mathbf{j}}(t^{\prime})\right]=\langle\phi_{\mathbf{i}}(t)\phi_{\mathbf{j}}^{*}(t^{\prime})\phi_{\mathbf{j}}(t^{\prime})\rangle \tag{13}\] where \(\phi^{*}\) is the creation field. Meanwhile the equal-time, equal-position correlator obeys \[\mathbb{E}\left[n_{\mathbf{i}}(t)^{2}\right]=\langle\phi_{\mathbf{i}}(t)^{2}+\phi_{\mathbf{i}}(t)\rangle \tag{14}\] This follows from normal ordering, whereby \[n_{\mathbf{i}}^{2}\rightarrow\left(a_{\mathbf{i}}^{\dagger}a_{\mathbf{i}}\right)^{2}=a_{\mathbf{i}}^{\dagger}a_{\mathbf{i}}^{\dagger}a_{\mathbf{i}}\,a_{\mathbf{i}}+a_{\mathbf{i}}^{\dagger}a_{\mathbf{i}}\rightarrow\phi_{\mathbf{i}}^{2}+\phi_{\mathbf{i}}\,.\] ### The Doi-Peliti action for AIMs Active Ising Models have two distinct particle types \(A,B\) corresponding to spins \(\pm 1\) respectively, so alongside the annihilation and creation fields \(\phi\), \(\phi^{*}\) for species \(A\) we need counterparts \(\psi\) and \(\psi^{*}\) for \(B\). Just as for the Master Equation, the space-time action \(S\) of the field theory is additive over the various hopping and flip processes, and also over spatial (site) and temporal variables. Thus \(S=\sum_{\mathbf{i}}\int dt\,\mathcal{S}\) with the action density \[\mathcal{S}=\phi_{\mathbf{i}}^{*}(t)\partial_{t}\phi_{\mathbf{i}}(t)+\psi_{\mathbf{i}}^{*}(t)\partial_{t}\psi_{\mathbf{i}}(t)+\mathcal{S}_{D}+\mathcal{S}_{\epsilon}+\mathcal{S}_{\text{flip}} \tag{15}\] Since the spin-flip dynamics involves only same-site particles, \(\mathcal{S}_{\text{flip}}\) is fully local in both space and time, while the diffusive \(\mathcal{S}_{D}\) and propulsive \(\mathcal{S}_{\epsilon}\) hopping contributions connect neighbouring sites. The explicit form of these various contributions for the different AIMs is given in Appendix C. ### The Cole-Hopf transformation The Cole-Hopf transformation [12, 21] connects the somewhat abstract Doi-Peliti fields to physical observables, namely number-density fields for \(A\) and \(B\) particles. For the one-species example of Sec 3.2, the transformed fields \(\rho\) and \(\tilde{\rho}\) obey \[\phi^{*}=e^{\tilde{\rho}}\,,\qquad\phi=e^{-\tilde{\rho}}\rho \tag{16}\] Thus the density field \(\rho=\phi^{*}\phi\) is the analogue of the second-quantised number operator \(\hat{n}=a^{\dagger}a\), while the correlation function of Eq.
(13) now takes the more intuitive form \[\mathbb{E}\left[n_{\mathbf{i}}(t)\,n_{\mathbf{j}}(t^{\prime})\right]=\langle\rho_{\mathbf{i}}(t)\rho_{\mathbf{j}}(t^{\prime})\rangle \tag{17}\] More generally, for all density correlators evaluated at different times and/or different sites, one can now replace the expectation value by the average over the field-theoretical measure, and replace the particle number operators by the corresponding \(\rho\) fields. However, to compute correlation functions on the same site at the same time, subtleties remain, because the corresponding number operators must first be normal ordered. Thus the correlator given in Eq. (14) obeys \(\mathbb{E}\left[n_{\mathbf{i}}(t)^{2}\right]=\langle\rho_{\mathbf{i}}(t)^{2}+\rho_{\mathbf{i}}(t)\rangle\). This non-intuitive result is the unavoidable price for building an exact theory in terms of (almost!) physical density fields. Below we therefore pay careful attention when computing equal-time correlators. ## 4 The hydrodynamic limit Here we derive hydrodynamic-level equations for the various Active Ising Models proposed in Sec 2. (We exclude AIM0 because, as mentioned there, its Doi-Peliti action is intractable.) The derivation is lengthy, but offers important insights. The strategy is as follows: starting from the Master Equation we derive the Doi-Peliti field theory following Sec 3. Converting to physical fields via Cole-Hopf (as in Sec 3.4), we use a reverse Martin-Siggia-Rose procedure (see Appendix D) to derive, from the field-theory action, equations of motion for the density fields. This programme can be followed exactly up to the last stage, at which point the non-Gaussian noise that emerges at the exact level (see Appendix D) can be either Gaussianised (to give the Langevin equations) or suppressed (to give deterministic hydrodynamics). The last stage is achieved by sending the linear size of the system \(L\rightarrow\infty\) while keeping the density of particles fixed. In this limit, exact hydrodynamic PDEs emerge, describing the behaviour of hydrodynamic variables on scales comparable with \(L\), while the leading-order stochastic corrections give the Gaussian (Langevin) noises. ### Preliminaries The Cole-Hopf transformed action density reads \[\mathcal{S}=\tilde{\rho}_{\mathbf{i}}^{+}\partial_{t}\rho_{\mathbf{i}}^{+}+\tilde{\rho}_{\mathbf{i}}^{-}\partial_{t}\rho_{\mathbf{i}}^{-}+\mathcal{S}_{D}^{CH}+\mathcal{S}_{\epsilon}^{CH}+\mathcal{S}_{\text{flip}}^{CH} \tag{18}\] where \(\tilde{\rho}^{+}\) and \(\rho^{+}\) have replaced \(\phi\) and \(\phi^{*}\), and \(\tilde{\rho}^{-}\) and \(\rho^{-}\) have replaced \(\psi\) and \(\psi^{*}\). The fields \(\rho^{+}\) and \(\rho^{-}\) approach the physical densities for \(A\) and \(B\) particles respectively. The contributions \(\mathcal{S}_{D}^{CH}\), \(\mathcal{S}_{\epsilon}^{CH}\) and \(\mathcal{S}_{\text{flip}}^{CH}\) are found via the change of variables (16); their forms are given as needed, below. We first set (_wlog_) the lattice spacing to \(h=1\), and then consider the system at diffusive hydrodynamic scales, achieved by a further rescaling of spatial coordinates, \(\tilde{\mathbf{x}}=\mathbf{i}/L\), and of time, \(\tilde{t}=t/L^{2}\). This choice of rescaling follows from requiring diffusion to be the process that fixes the hydrodynamic time-scale.
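As a brief aside before taking that limit, the equal-time subtlety of Sec 3.4 has a simple numerical illustration. The Doi-Peliti construction is built on coherent states, whose occupation statistics are Poissonian; for a Poisson variable \(n\) of mean \(\lambda\), \(\mathbb{E}[n^{2}]=\lambda^{2}+\lambda\), which is exactly \(\langle\rho^{2}+\rho\rangle\) with \(\rho\to\lambda\). A minimal check (ours; the mean is an arbitrary illustrative value):

```python
# Sanity check (not from the paper) of E[n^2] = <rho^2 + rho> for Poissonian
# (coherent-state) occupation statistics; lam is an arbitrary illustrative mean.
import numpy as np

rng = np.random.default_rng(1)
lam = 3.7
n = rng.poisson(lam, size=1_000_000)
print(np.mean(n**2), lam**2 + lam)        # both ~ 17.39
print(np.mean(n * (n - 1)), lam**2)       # normal-ordered (factorial) moment ~ 13.69
```

We now return to the hydrodynamic rescaling of the action.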
Under these rescalings, we have \(\sum_{\mathbf{i}}=L^{d}\int d\tilde{\mathbf{x}}\,;\,\int dt=L^{2}\int d\tilde{t}\), and \(S=\int d\tilde{\mathbf{x}}d\tilde{t}\,\tilde{\mathcal{S}}\), where \(\tilde{\mathcal{S}}\) is the hydrodynamic action density, which absorbs all the powers of \(L\) coming from the space-time rescaling. This action density can be expanded in powers of \(L^{-1}\), dropping subleading terms as \(L\to\infty\). We continue to split \(\tilde{\mathcal{S}}\) into contributions from spin-flip, diffusive and biased hopping processes, whose rates must however be rescaled such that all three contribute in the hydrodynamic limit. Finally, the conjugate fields must also be rescaled as \(\tilde{\rho}\to L^{-d}\tilde{\rho}\). ### Hydrodynamics for AIM1 For AIM1, with spin-flip rates given by (7), the deterministic hydrodynamic equations are known from Ref. [15], offering an important cross check on our methods. At leading order in \(L^{-1}\), the action terms (dropping the \(CH\) superscript) are: \[\begin{split}\tilde{\mathcal{S}}_{D}&=-L^{d}D\tilde{\rho}^{+}\tilde{\nabla}^{2}\rho^{+}-L^{d}D\tilde{\rho}^{-}\tilde{\nabla}^{2}\rho^{-}-\\ &-L^{d}D\rho^{+}\left(\tilde{\mathbf{\nabla}}\tilde{\rho}^{+}\right)^{2}-L^{d}D\rho^{-}\left(\tilde{\mathbf{\nabla}}\tilde{\rho}^{-}\right)^{2}\end{split} \tag{19}\] \[\tilde{\mathcal{S}}_{\epsilon}=L^{d+1}v\,\tilde{\rho}^{+}\partial_{\tilde{x}}\rho^{+}-L^{d+1}v\,\tilde{\rho}^{-}\partial_{\tilde{x}}\rho^{-} \tag{20}\] \[\begin{split}\tilde{\mathcal{S}}_{\text{flip}}&=L^{d+2}\gamma\,e^{-\beta}\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)\times\\ &\quad\times\left(e^{-\tilde{\rho}^{+}}\rho^{+}e^{\left(e^{\beta}-1\right)\rho^{-}+\left(e^{-\beta}-1\right)\rho^{+}}-\right.\\ &\quad-\left.e^{-\tilde{\rho}^{-}}\rho^{-}e^{\left(e^{-\beta}-1\right)\rho^{-}+\left(e^{\beta}-1\right)\rho^{+}}\right)\end{split} \tag{21}\] For all three processes to contribute in the hydrodynamic limit, as previously discussed, if \(D\) is fixed of order unity we must choose \(\gamma\sim L^{-2}\) in (21) and \(v\sim L^{-1}\) in (20), and we now redefine these parameters to absorb such factors. These choices ensure that the number of spin flips is of order one in the time \(\sim L^{2}/D\) needed for a particle to diffuse a distance \(L\), and that propulsion likewise competes with both flipping and diffusion at this hydrodynamic scale. After the final rescaling mentioned above, \(\tilde{\rho}\to L^{-d}\tilde{\rho}\), all terms in \(\tilde{\mathcal{S}}\) (including the time-derivative terms) scale as \(L^{0}\), as required for the \(L\to\infty\) limit to now be taken.
We finally get to the hydrodynamic action density \[\begin{split}\tilde{\mathcal{S}}&=\tilde{\rho}^{+}\left(\partial_{\tilde{t}}-D\tilde{\nabla}^{2}+v\partial_{\tilde{x}}\right)\rho^{+}+\\ &+\tilde{\rho}^{-}\left(\partial_{\tilde{t}}-D\tilde{\nabla}^{2}-v\partial_{\tilde{x}}\right)\rho^{-}+\\ &+\gamma\,e^{-\beta}\left(\tilde{\rho}^{+}-\tilde{\rho}^{-}\right)\times\\ &\times\left(\rho^{+}e^{\left(e^{\beta}-1\right)\rho^{-}+\left(e^{-\beta}-1\right)\rho^{+}}-\right.\\ &\quad-\left.\rho^{-}e^{\left(e^{-\beta}-1\right)\rho^{-}+\left(e^{\beta}-1\right)\rho^{+}}\right)\end{split} \tag{22}\] The absence of higher powers of the \(\tilde{\rho}\) fields finally allows us to map this field theory, via the inverse Martin-Siggia-Rose procedure outlined in Appendix D, onto the noiseless limit of a set of stochastic PDEs (the noisy version is given in Sec 4.2.1 below). The hydrodynamic equations governing \(\rho^{+}\) and \(\rho^{-}\) are thereby found as \[\partial_{t}\rho^{+}=D\nabla^{2}\rho^{+}-v\partial_{x}\rho^{+}-F(\rho^{+},\rho^{-}) \tag{23}\] \[\partial_{t}\rho^{-}=D\nabla^{2}\rho^{-}+v\partial_{x}\rho^{-}+F(\rho^{+},\rho^{-}) \tag{24}\] where \[F(\rho^{+},\rho^{-})=\gamma\,e^{-\beta}\left(\rho^{+}e^{\left(e^{\beta}-1\right)\rho^{-}+\left(e^{-\beta}-1\right)\rho^{+}}-\rho^{-}e^{\left(e^{-\beta}-1\right)\rho^{-}+\left(e^{\beta}-1\right)\rho^{+}}\right)\] When written in terms of the magnetisation \(m=\rho^{+}-\rho^{-}\) and the total particle density \(\rho=\rho^{+}+\rho^{-}\), these equations become \[\partial_{t}m=D\nabla^{2}m-v\partial_{x}\rho-2F(m,\rho) \tag{25}\] \[\partial_{t}\rho=D\nabla^{2}\rho-v\partial_{x}m \tag{26}\] where \[F\left(m,\rho\right)=\gamma\,e^{-\beta-\rho+\rho\cosh\beta}\left(m\cosh\left[m\sinh\beta\right]-\rho\sinh\left[m\sinh\beta\right]\right) \tag{27}\] Notably, the 'aligning force' \(F\) is exactly as found in Ref. [15]. There the hydrodynamic equations were derived directly by averaging the microscopic process over a local Poisson measure. Although the derivation is quite different, the Doi-Peliti formalism ultimately gives an equivalent result because it is constructed from coherent states that also correspond to a Poisson distribution [12]. #### 4.2.1 Fluctuating hydrodynamics An advantage of our Doi-Peliti field theory is that it provides a systematic way to address _fluctuating_ hydrodynamics. This can be done by keeping the next order in \(L^{-d}\) beyond the action (22). This captures, for finite-size systems, the leading-order (small, Gaussian) fluctuations around Equations (25), (26), by adding to them Langevin noises scaling as \(L^{-d/2}\). With these terms added to the action (22), the equations for \(m\) and \(\rho\) become \[\partial_{t}m=D\nabla^{2}m-v\partial_{x}\rho-2F(m,\rho)+\frac{1}{\sqrt{L^{d}}}\theta \tag{28}\] \[\partial_{t}\rho=D\nabla^{2}\rho-v\partial_{x}m+\frac{1}{\sqrt{L^{d}}}\boldsymbol{\nabla}\cdot\boldsymbol{\zeta} \tag{29}\] where \(F(m,\rho)\) is still given by (27), but now we have the noise contributions \(\theta\) and \(\boldsymbol{\zeta}\). The noise \(\theta\) can be further split into two contributions, \(\theta=\eta+\boldsymbol{\nabla}\cdot\boldsymbol{\xi}\), where the latter arises from diffusion and thus conserves the total magnetisation.
The statistics of these Gaussian noises is fully determined by a covariance matrix comprising \[\begin{split}\langle\eta(\boldsymbol{x},t)\eta(\boldsymbol{y},s)\rangle&=4\,\gamma\,e^{-\beta-\rho+\rho\cosh(\beta)}\times\\ &\times\left(\rho\cosh\left[m\sinh(\beta)\right]-m\sinh\left[m\sinh(\beta)\right]\right)\times\\ &\times\delta\left(\boldsymbol{x}-\boldsymbol{y}\right)\delta\left(t-s\right)\end{split}\] with other noise covariances being zero except for \[\langle\xi_{i}(\boldsymbol{x},t)\xi_{j}(\boldsymbol{y},s)\rangle=2\,D\,\rho\,\delta_{i,j}\delta\left(\boldsymbol{x}-\boldsymbol{y}\right)\delta\left(t-s\right)\] \[\langle\zeta_{i}(\boldsymbol{x},t)\zeta_{j}(\boldsymbol{y},s)\rangle=2\,D\,\rho\,\delta_{i,j}\delta\left(\boldsymbol{x}-\boldsymbol{y}\right)\delta\left(t-s\right)\] \[\langle\xi_{i}(\boldsymbol{x},t)\zeta_{j}(\boldsymbol{y},s)\rangle=2\,D\,m\,\delta_{i,j}\delta\left(\boldsymbol{x}-\boldsymbol{y}\right)\delta\left(t-s\right)\] Note that \(\boldsymbol{\xi}\) and \(\boldsymbol{\zeta}\), namely the conservative noises, are Gaussian even beyond the large-\(L\) limit. This can be seen from the fact that they arise from the action terms (19) and (20), where no term is more than quadratic in \(\tilde{\rho}^{\pm}\). The non-conservative noise \(\eta\), on the other hand, has non-Gaussian statistics (higher powers of \(\tilde{\rho}^{\pm}\) in (21)) which becomes Gaussian only at large \(L\) by virtue of the central limit theorem. ### Hydrodynamics for AIM2 The same procedure as used above for flip rates obeying (7) can be applied to the many-body rates (8)-(10). Spatial hopping is not affected, so all the contributions proportional to \(D\) and \(\epsilon\) will remain unchanged. But the contribution \(\tilde{\mathcal{S}}_{\text{flip}}\) to the hydrodynamic action now takes the form (before rescaling parameters) \[\begin{split}\tilde{\mathcal{S}}_{\text{flip}}=& L^{d+2}\gamma\,\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)\times\\ &\times\left(e^{-\tilde{\rho}^{+}}\rho^{+}-e^{-\tilde{\rho}^{-}}\rho^{-}\right)-\\ -& L^{d+2}\,\lambda\,\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)^{2}e^{-\tilde{\rho}^{+}-\tilde{\rho}^{-}}\rho^{+}\,\rho^{-}-\\ -& L^{d+2}\,\frac{\tau}{2}\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)e^{-\tilde{\rho}^{+}-\tilde{\rho}^{-}}\times\\ &\times\left(e^{\tilde{\rho}^{+}}\rho^{+}-e^{\tilde{\rho}^{-}}\rho^{-}\right)\rho^{+}\,\rho^{-}\end{split} \tag{30}\] As done previously, we now rescale the rates \(\gamma\), \(\lambda\) and \(\tau\) by \(L^{-2}\) such that each type of flip competes with diffusion (and propulsion). Finally rescaling again \(\tilde{\rho}\to L^{-d}\tilde{\rho}\) and taking \(L\to\infty\), the resulting hydrodynamic action becomes equivalent to the same partial differential equations (23), (24), but with a different choice of \(F(\rho^{+},\rho^{-})\).
Again rewriting this in terms of the magnetisation \(m=\rho^{+}-\rho^{-}\) and particle density \(\rho=\rho^{+}+\rho^{-}\), we recover (25) and (26), with (27) replaced by \[F\left(m,\rho\right)=m\left(\gamma+\tau\frac{m^{2}-\rho^{2}}{8}\right) \tag{31}\] Just as in Sec 4.2.1, we can compute leading-order fluctuation corrections, recovering (28) and (29), in which \(F(m,\rho)\) obeys (31) and the noise correlator of \(\eta\) is given by \[\begin{split}\langle\eta(\boldsymbol{x},t)\eta(\boldsymbol{y},s)\rangle&=\delta\left(\boldsymbol{x}-\boldsymbol{y}\right)\delta\left(t-s\right)\times\\ &\times\left[2\gamma\rho+\left(\lambda+\rho\frac{\tau}{4}\right)\left(\rho^{2}-m^{2}\right)\right]\end{split} \tag{32}\] while all other correlators remain the same. ### Homogeneous solutions Spatially homogeneous but time-dependent solutions of the noiseless hydrodynamic equations are found by assuming \(m\left(\boldsymbol{x},t\right)=m(t)\) and \(\rho\left(\boldsymbol{x},t\right)=\rho(t)\) in (25,26), which become \[\partial_{t}m=-2F(m,\rho) \tag{33}\] \[\partial_{t}\rho=0 \tag{34}\] The second of these expresses particle conservation: \(\rho(t)=\rho_{0}\), the initial density. In contrast, \(m\) relaxes via the spin-flip dynamics, with an asymptotic solution \(\lim_{t\to\infty}m(t)=m_{0}\) obeying \(F(m_{0},\rho_{0})=0\). For both choices of \(F\) considered above in (27), (31), \(m_{0}=0\) is always a solution but is unstable if \(\partial_{m}F(m_{0},\rho_{0})<0\), giving a magnetised phase. For definiteness we focus on AIM2 here (though AIM1 is similar [15]), for which the force \(F(m,\rho)\) obeys (31) so that \[\partial_{m}F=\left(\gamma-\frac{\tau}{8}\rho^{2}\right)+\frac{3\tau}{8}m^{2} \tag{35}\] The state \(m_{0}=0\) is thus stable for \(\rho_{0}\leq\rho_{c}=(8\gamma/\tau)^{1/2}\), and unstable for \(\rho_{0}>\rho_{c}\), where one has a symmetric pair of stable, magnetised states \(m_{0}=\pm\bar{m}\) with \(\bar{m}^{2}=\rho_{0}^{2}-\rho_{c}^{2}\). This resembles standard, Ising-like spontaneous symmetry breaking, in which two states of vanishing magnetisation merge at the critical point \(\rho_{0}=\rho_{c}\). However, in the passive Ising model, for all \(\rho_{0}>\rho_{c}\) the two solutions \(m=\pm m_{0}=\pm\bar{m}\) remain stable against _inhomogeneous perturbations_. For AIMs this is not the case: there is a region of parameter space where no homogeneous solution is stable. The AIM transition is thus better understood as a first-order transition, akin to a liquid-gas transition [11].
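Before turning to the linear stability analysis, this breakdown of homogeneous order can be seen by integrating the noiseless equations (25), (26) directly. The sketch below (ours; all parameter values are illustrative assumptions) uses the AIM2 force (31) on a one-dimensional ring, with mean density slightly above \(\rho_{c}\), inside the window where the analysis of the next subsection predicts that no uniform state is stable:

```python
# Explicit Euler / central-difference sketch of the deterministic hydrodynamics
# (25)-(26) with the AIM2 force (31) on a 1d ring. Parameters are illustrative
# assumptions, chosen with rho0 just above rho_c so uniform order destabilises.
import numpy as np

Nx, Lsys = 256, 100.0
dx = Lsys / Nx
D, v, gamma, tau = 1.0, 0.5, 1.0, 0.1
rho_c = np.sqrt(8 * gamma / tau)             # ~ 8.94
rho0 = rho_c + 0.2
dt = 0.2 * dx**2 / D                         # well below the diffusive stability limit

rng = np.random.default_rng(2)
rho = rho0 + 0.01 * rng.standard_normal(Nx)  # small perturbation of the uniform state
m = 0.01 * rng.standard_normal(Nx)

def d1(f):                                   # centred first derivative, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2(f):                                   # centred second derivative, periodic
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def F(m, rho):                               # Eq. (31)
    return m * (gamma + tau * (m**2 - rho**2) / 8)

for step in range(200_000):
    m, rho = (m + dt * (D * d2(m) - v * d1(rho) - 2 * F(m, rho)),
              rho + dt * (D * d2(rho) - v * d1(m)))

# an O(1) density contrast signals a phase-separated (banded) profile
print("max(rho) - min(rho) =", rho.max() - rho.min())
```

For \(v=0\) the same script relaxes to a uniform magnetised state, consistent with the mean-field picture above.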
#### 4.4.1 Linear stability of uniform states To check the linear stability of homogeneous solutions \(m=m_{0}\), \(\rho=\rho_{0}\), we linearise the equations of motion and examine small perturbations \(\delta m\) and \(\delta\rho\), which then obey: \[\partial_{t}\delta m=D\,\nabla^{2}\delta m-v\,\partial_{x}\delta\rho-2\,\alpha\left(\rho_{0}\right)\delta m-2\,g\left(\rho_{0}\right)\delta\rho \tag{36}\] \[\partial_{t}\delta\rho=D\,\nabla^{2}\delta\rho-v\,\partial_{x}\delta m \tag{37}\] where \[\alpha\left(\rho_{0}\right)=\partial_{m}F\left(m_{0},\rho_{0}\right) \tag{38}\] \[g\left(\rho_{0}\right)=\partial_{\rho}F\left(m_{0},\rho_{0}\right) \tag{39}\] In Fourier space (\(f\left(\boldsymbol{k},t\right)=\int d^{d}x\,f\left(\boldsymbol{x},t\right)e^{-i\boldsymbol{k}\cdot\boldsymbol{x}}\)), the linearised dynamics becomes \[\partial_{t}\left(\begin{array}{c}\delta m\\ \delta\rho\end{array}\right)=M\left(\boldsymbol{k}\right)\left(\begin{array}{c}\delta m\\ \delta\rho\end{array}\right) \tag{40}\] Here \[M(\boldsymbol{k})=\left(\begin{array}{cc}-D\,k^{2}-2\,\alpha(\rho_{0})&-i\,v\,k_{x}-2\,g(\rho_{0})\\ -i\,v\,k_{x}&-D\,k^{2}\end{array}\right) \tag{41}\] and stability against perturbations at wavevector \(\boldsymbol{k}\) requires both eigenvalues of \(M(\boldsymbol{k})\) to have a nonpositive real part. These eigenvalues are \[\lambda_{1}\left(\boldsymbol{k}\right)=-\sqrt{\alpha(\rho_{0})^{2}+2i\,v\,g(\rho_{0})k_{x}-v^{2}k_{x}^{2}}-\alpha(\rho_{0})-Dk^{2} \tag{42}\] \[\lambda_{2}\left(\boldsymbol{k}\right)=\sqrt{\alpha(\rho_{0})^{2}+2i\,v\,g(\rho_{0})k_{x}-v^{2}k_{x}^{2}}-\alpha(\rho_{0})-Dk^{2} \tag{43}\] Studying the eigenvalues at \(k=0\) (where \(\lambda_{1}=-2\alpha(\rho_{0})\) and \(\lambda_{2}=0\)) we confirm the analysis made above concerning stability within the subspace of homogeneous (mean-field) solutions. What happens if instead we perturb the system, not with a homogeneous perturbation, but with a slowly varying one? For a system with finite positive \(\alpha(\rho_{0})\) (hence stable against uniform perturbations), continuity in \(\boldsymbol{k}\) requires \(\Re\left(\lambda_{1}(\boldsymbol{k})\right)<0\) at small \(\boldsymbol{k}\). In contrast, \(\lambda_{2}\) at small \(\boldsymbol{k}\) takes the form \[\lambda_{2}\left(\boldsymbol{k}\right)=-Dk^{2}+v^{2}\frac{g\left(\rho_{0}\right)^{2}-\alpha\left(\rho_{0}\right)^{2}}{2\,\alpha\left(\rho_{0}\right)^{3}}k_{x}^{2}+i\frac{v\,g\left(\rho_{0}\right)}{\alpha\left(\rho_{0}\right)}k_{x}+O(k^{3}) \tag{44}\] We distinguish the cases \(\rho_{0}<\rho_{c}\) and \(\rho_{0}>\rho_{c}\): 1. At \(\rho_{0}<\rho_{c}\), the only solution is \(m_{0}=0\), for which \(g\left(\rho_{0}\right)=0\) and \(\alpha\left(\rho_{0}\right)=\frac{\tau}{8}\left(\rho_{c}^{2}-\rho_{0}^{2}\right)>0\). Hence, the eigenvalue \(\lambda_{2}\) becomes \[\lambda_{2}\left(\boldsymbol{k}\right)=-Dk^{2}-\frac{v^{2}}{2\alpha(\rho_{0})}k_{x}^{2}+O(k^{3})\] (45) indicating stability of the uniform, nonmagnetic solution for all \(\rho_{0}<\rho_{c}\), in agreement with the predictions of mean-field theory. 2. At \(\rho_{0}>\rho_{c}\) the homogeneous solutions that appear stable from a mean-field argument have \(m_{0}^{2}=\rho_{0}^{2}-\rho_{c}^{2}\neq 0\). In this case, we have that \(\alpha(\rho_{0})=\frac{\tau}{4}\left(\rho_{0}^{2}-\rho_{c}^{2}\right)>0\) and \(g(\rho_{0})=\mp\frac{\tau\rho_{0}}{4}\left(\rho_{0}^{2}-\rho_{c}^{2}\right)^{1/2}\).
Therefore, \[\lambda_{2}\left(\boldsymbol{k}\right)=-Dk^{2}+v^{2}\frac{2\rho_{c}^{2}}{\tau\left(\rho_{0}^{2}-\rho_{c}^{2}\right)^{2}}k_{x}^{2}\mp i\frac{\rho_{0}\,v}{\left(\rho_{0}^{2}-\rho_{c}^{2}\right)^{1/2}}k_{x}+O(k^{3}) \tag{46}\] The linear part (in \(\boldsymbol{k}\)) of \(\lambda_{2}\left(\boldsymbol{k}\right)\) is always imaginary, and hence does not affect the stability analysis. The quadratic part may, however, become positive for values of \(\rho_{0}\) close to \(\rho_{c}\). In particular, this happens when \[\rho_{0}^{2}-\rho_{c}^{2}<\sqrt{\frac{2}{\tau D}}\,\rho_{c}\,v\Rightarrow \tag{47}\] \[\rho_{0}<\rho_{l}:=\sqrt{\rho_{c}^{2}+\sqrt{\frac{2}{\tau D}}\,\rho_{c}\,v}=\rho_{c}+\frac{v}{\sqrt{2\tau D}}+\mathcal{O}(v^{2}) \tag{48}\] In this second scenario, which arises for nonzero propulsion \(v\), the homogeneous magnetic phase becomes unstable with respect to long-wavelength perturbations. Only for \(v=0\) is the passive-Ising-like second-order transition recovered; for all \(v\neq 0\) there is a range of densities, \(\rho_{c}(\gamma,\tau)<\rho_{0}<\rho_{l}(\gamma,\tau,D,v)\), in which no homogeneous solution is stable. In this range the system is therefore driven towards a spatiotemporal pattern. Although we will not reproduce here the full calculation, note that the same qualitative behaviour arises for AIM1, in which the force \(F\) in (33) is replaced by (27): here it is again possible to show that for \(v\neq 0\) there is a finite range of densities \(\rho_{c}<\rho_{0}<\rho_{l}\) in which the ordered homogeneous solution is linearly unstable with respect to long-wavelength spatial perturbations. Hence, the transition is not second-order, but is better understood as a liquid-gas phase transition as in [11]. In both cases, in the zero-propulsion limit \(v\to 0\) we find \(\rho_{l}\to\rho_{c}\), so that the homogeneous ordered and disordered phases are linearly stable on either side of \(\rho_{c}\), and we predict a second-order transition in that limit. ## 5 Role of two-body collisions In the previous section we analysed the hydrodynamic behaviour of AIM2, where the spin-flipping process was given by the set of reactions (8)-(10). Strikingly, the critical density \(\rho_{c}=(8\gamma/\tau)^{1/2}\) depends on the one-body (random) spin-flip rate \(\gamma\) and the three-body rate \(\tau\), but not on the two-body rate \(\lambda\). This means that, contrary to naive expectation, two-body collisional alignment cannot by itself lead to ordering, no matter how large the rate \(\lambda\) at which this occurs. A physical interpretation of the relevant process is that two close enough particles, _i.e._ sharing the same lattice site, bump into each other with some rate \(\lambda\). When such a collision occurs, if the particles have opposite spin, they align (randomly choosing which of the two orientations to share). Since the spin sets the preferred direction of motion of the particle, the two colliding particles move in the same direction after the collision. This seems to capture a basic and intuitive mechanism through which flocking might occur, yet we find no ordered phase. Something closer to a 'majority rule' (which gets encoded in the three-body collision rate \(\tau\)) is instead required. Intriguingly, several recent studies have proposed that two-body interactions are indeed not enough to sustain global alignment [13, 22, 23]. Our work confirms this prediction, which we believe has not been given enough emphasis in the community.
The advantage of our field-theoretical approach is that our exact analysis can cleanly and unambiguously rule out any ordered state induced by the two-body collision term in the hydrodynamic limit addressed here. Specifically, if we retain only the one-body (randomizing) and two-body terms by setting \(\tau=0\) in AIM2, we obtain (25,26) with a force term \(F(m,\rho)=\gamma m\). The homogeneous solution at zero magnetisation, \(m_{0}=0\), is then stable for all \(\gamma>0\), regardless of the global density \(\rho_{0}\). Therefore, for any finite amount of random spin flipping, the two-body collision process described by the reaction (9) is not sufficient to induce collective motion. At \(\gamma=0\), things look slightly different. Without the two-body term (\(\lambda=0\)), all solutions can be written as a superposition of waves which travel in the \(\pm x\) direction with speed \(v\) and damping \(Dk^{2}\). These solutions not only conserve the total density, but also the total magnetisation; accordingly a state of uniform magnetisation cannot emerge from an unmagnetised initial state. Remarkably, this result is sustained, at the hydrodynamic level, even when the two-particle interaction (9) is switched on. This result seems counter-intuitive. Indeed, in the absence of random spin-flipping but with two-body collisions (\(\gamma=0,\lambda>0\)) the system has two absorbing states: whenever particles are all either of the \(A\) or \(B\) kind, no further spin flipping can occur. Either state would represent a permanently stable flock. As we have seen, this physics does not emerge in the hydrodynamic limit; we now ask why. A key factor will be that absorbing states are reached in a finite time only in finite-size systems. We must therefore switch attention to the fluctuating hydrodynamics of this system arising at finite \(L\). The finite-size behaviour of the two-particle interaction model, at large \(L\), is given by \[\partial_{t}m=D\nabla^{2}m-v\partial_{x}\rho+\frac{1}{\sqrt{L^{d}}}\left(\eta+\boldsymbol{\nabla}\cdot\boldsymbol{\xi}\right) \tag{49}\] \[\partial_{t}\rho=D\nabla^{2}\rho-v\partial_{x}m+\frac{1}{\sqrt{L^{d}}}\boldsymbol{\nabla}\cdot\boldsymbol{\zeta} \tag{50}\] We have already set \(\gamma=0\), so this is 'pure' AIM2.2 as defined by (9). As in the previous models, \(\eta\), \(\boldsymbol{\xi}\) and \(\boldsymbol{\zeta}\) are Gaussian noises whose correlators are found by setting \(\gamma=\tau=0\) in the more general results given already for AIM2 in Sec 4.3. The noises \(\boldsymbol{\xi}\) and \(\boldsymbol{\zeta}\) arise from the diffusive motion of particles, and hence conserve the total magnetisation. Flocking, were it to emerge, would have to stem from the \(\eta\) noise term. But, as seen from the covariance results in Sec 4.3, specifically (32), the noise \(\eta\) is larger the smaller the magnetisation. When \(m\sim 0\), this noise therefore pushes the system towards magnetised states with \(m\neq 0\). The noise then weakens, so it is less likely for the system to return to \(m\sim 0\). When eventually the system reaches the absorbing state \(m=\pm\rho\), all particles flock in the same direction forever after. The \(\eta\) term therefore does push the system towards a flocking state; but it is the only term that does so. This means that for AIM2.2 any collective motion arises by a purely stochastic mechanism, not a deterministic drift, a fact also clear from the shape of \(F(m,\rho)\) when \(\tau=0\).
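This purely stochastic mechanism is easy to see in a zero-dimensional toy version of AIM2.2 (a sketch of ours, not the simulations of [10, 11]): on a single site with only the pair reactions (9), both flip directions have equal propensity, so the embedded jump chain of the Gillespie dynamics is a symmetric random walk in \(n_{A}\) that is eventually captured by an absorbing state:

```python
# Single-site AIM2.2 toy (assumptions: gamma = tau = 0, pair flips only).
# Both reactions A+B->2A and A+B->2B have propensity lam*nA*nB, so each
# Gillespie event changes nA by +-1 with probability 1/2; the waiting times
# do not affect which absorbing state is reached.
import numpy as np

rng = np.random.default_rng(3)

def final_magnetisation(N):
    nA = N // 2
    while 0 < nA < N:
        nA += 1 if rng.random() < 0.5 else -1
    return 2 * nA - N   # m = nA - nB = +-N at absorption

for N in (10, 40, 160):
    ms = [final_magnetisation(N) for _ in range(2000)]
    print(N, np.mean(ms))   # ~0 for every N
```

Because \(n_{A}\) is a martingale, each realisation ends fully ordered at \(m=\pm N\) with equal probability, so the mean magnetisation over realisations stays at zero, consistent with the deterministic conservation found above.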
As previously discussed, stochasticity, and hence the probability of achieving this flocked state, vanishes in the hydrodynamic limit \(L\to\infty\). Therefore, the exact conservation of the total magnetisation at the deterministic level is not because spin-flipping processes are absent altogether, but because the probability of having a fluctuation that macroscopically changes \(m\) vanishes when \(L\to\infty\). This peculiar scenario is of course radically changed by the three-body collisional coupling term \(\tau\), which restores a deterministic drift towards flocking that wins out above the critical density \(\rho_{c}\). ## 6 The AIM critical point The linear stability analysis performed in Sec 4.4.1 shows that AIMs generically undergo a first-order transition, with a continuous transition recovered in the limit of unbiased hopping rates \(\epsilon\to 0\) (equivalently \(v\to 0\)): this accordingly defines the AIM critical point. An important question concerns the universality class of this critical transition. The answer would be obvious if this limit recovered a reversible model, which would surely lie in the kinetic Ising class known as Model C [24], as discussed further below. Indeed, numerical simulations of AIM0 in 2 dimensions give results compatible with this prediction [11]. However, this outcome is not guaranteed because, as also shown in [11], the dynamics of AIM0 violates detailed balance even at \(v=0\), making the system out of equilibrium even in the absence of self-propulsion. This is equally true of AIM1 and AIM2, and given their shared symmetries one can expect all these models to lie in a single universality class (that may or may not be that of equilibrium Model C). A major advantage of our field-theoretic approach is that it creates a clear and unambiguous foundation for resolving this issue via a full renormalization group (RG) analysis. Such an analysis lies beyond our present scope and will be presented elsewhere [25]. Here we derive a suitable starting point for RG calculations, compare it with the corresponding Model C equations, and review what is known about the two cases. Our starting point is AIM2 where spin-flipping is given by the reactions (8) and (10). We set the two-body collision term (9) to zero, but have checked that the results below are unchanged by this, and also checked that they hold for AIM1 with rates (7). ### Relevant and irrelevant terms The hydrodynamic methods used in Sec 4 generally identify a limit in which noiseless, mean-field critical behaviour is recovered; this approach does not capture all terms that are relevant for RG purposes. To identify these, we start instead from a coarse-grained continuous version of the microscopic theory, describing the system on mesoscopic scales (much larger than \(h\), the lattice spacing, and much smaller than \(L\), the system size). We hence no longer assume the scaling with \(L\) of the coefficients investigated in Sec. 4. We will instead take a continuum limit by sending the lattice spacing \(h\to 0\). The continuum limit therefore represents a way to investigate the dynamics on scales much larger than \(h\), yet much smaller than \(L\). To take the continuum limit \(h\to 0\), we must then appropriately rescale the hopping and flipping rates and also the particle density fields; see Appendix E.
After these rescalings, the action becomes \(S=\int\left(\mathcal{S}_{D}+\mathcal{S}_{\text{flip}}\right)\,d\mathbf{x}dt\) where \[\begin{split}\mathcal{S}_{D}&=-D\tilde{\rho}^{+}\nabla^{2}\rho^{+}-D\tilde{\rho}^{-}\nabla^{2}\rho^{-}-\\ &-D\rho^{+}\left(\boldsymbol{\nabla}\tilde{\rho}^{+}\right)^{2}-D\rho^{-}\left(\boldsymbol{\nabla}\tilde{\rho}^{-}\right)^{2}\end{split} \tag{51}\] \[\begin{split}\mathcal{S}_{\text{flip}}&=\gamma\,\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)\left(e^{-\tilde{\rho}^{+}}\rho^{+}-e^{-\tilde{\rho}^{-}}\rho^{-}\right)-\\ &-\frac{\tau}{2}\left(e^{\tilde{\rho}^{+}}-e^{\tilde{\rho}^{-}}\right)e^{-\tilde{\rho}^{+}-\tilde{\rho}^{-}}\times\\ &\times\left(e^{\tilde{\rho}^{+}}\rho^{+}-e^{\tilde{\rho}^{-}}\rho^{-}\right)\rho^{+}\,\rho^{-}\end{split} \tag{52}\] We now want to change variables from \(\rho^{+}\) and \(\rho^{-}\) to \(m\) and \(\rho\). To do this in the field theory, we must also transform the \(\tilde{\rho}\) fields. It is sufficient for RG purposes to work as usual in a Landau-Ginzburg expansion in fluctuations around the homogeneous disordered state at \(m=0,\,\rho=\rho_{0}\). Hence we shall write \(\rho=\rho_{0}+\delta\rho\), and expand in powers of \(m\) and \(\delta\rho\). The resulting action contains an infinite set of nonlinear terms of which only the first few are relevant, in the RG sense, near 4 dimensions. Retaining only these terms, the result is the sum of a Gaussian action density \(\mathcal{S}_{0}\) and a non-Gaussian interaction part \(\mathcal{S}_{I}\) \[\begin{split}\mathcal{S}_{0}=&\tilde{m}\left(\partial_{t}-D\,\nabla^{2}+a\right)m-\tilde{\lambda}\,\tilde{m}^{2}+\\ &+\tilde{\rho}\left(\partial_{t}-D\,\nabla^{2}\right)\delta\rho-\tilde{D}\left(\boldsymbol{\nabla}\tilde{\rho}\right)^{2}\end{split} \tag{53}\] \[\mathcal{S}_{I}=b\,\tilde{m}\,m^{3}+g\,\tilde{m}\,m\,\delta\rho+\text{irrelevant} \tag{54}\] with coefficients derived from microscopic parameters as follows: \[\begin{split} a&=\frac{1}{4}\left(8\gamma-\tau\rho_{0}^{2}\right)\,,\qquad b=\frac{\tau}{4}\,,\qquad g=-\frac{\tau\,\rho_{0}}{2}\\ \tilde{\lambda}&=\frac{\rho_{0}}{4}\left(8\gamma+\rho_{0}^{2}\tau\right)\,,\quad\tilde{D}=\rho_{0}\,D\end{split}\] This action can be cast in more familiar form as a pair of Langevin equations, which read \[\partial_{t}m=D\nabla^{2}m-a\,m-b\,m^{3}-g\,\delta\rho\,m+\sqrt{2\tilde{\lambda}}\,\eta \tag{55}\] \[\partial_{t}\delta\rho=-\boldsymbol{\nabla}\cdot\boldsymbol{J}\,;\quad\boldsymbol{J}=-D\,\boldsymbol{\nabla}\delta\rho+\sqrt{2\tilde{D}}\,\boldsymbol{\zeta} \tag{56}\] with \(\eta\) and \(\zeta_{i}\) independent Gaussian white noises of unit variance. Note that any nonlinearity of the form \(\boldsymbol{\nabla}(m^{2})\) in the current \(\boldsymbol{J}\) of (56), or equivalently a term \(\tilde{\rho}\,\nabla^{2}(m^{2})\) in the action (54), if present, would also be relevant in \(d<4\). However, since it is absent in the bare theory and there are no other non-Gaussian terms linear in \(\tilde{\rho}\), it will not be generated during an RG transformation. More generally one expects any relevant term, even if absent in the original action, to be generated during the RG flow, unless its absence is protected by some kind of symmetry or conservation law.
The physics that prevents the generation of this term in our case is as follows: _when \(v=0\), the dynamics of the mass density \(\rho\) is independent of the state of magnetisation \(m\)._ Such a condition clearly survives coarse-graining, and can arguably be viewed as a symmetry between \(A\) and \(B\) particles (or up- and down-spins) at the microscopic level, stating that the diffusive jump rates of a particle are independent of its spin state. The symmetry is however absent in a model with detailed balance, where the hopping rates _must_ depend on the energy change caused by the hop, which does depend on the spin state. Since it is possible to construct an AIM that recovers detailed balance at \(v=0\) [16], one cannot view the symmetry found here as fundamental to all AIMs, but it remains a defining feature of all the AIMs studied in this paper (including AIM0). ### Connection with Model C The stochastic dynamics of Model C is [24] \[\partial_{t}m=\lambda\nabla^{2}m-\lambda r\,m-\lambda u\,m^{3}-\lambda\gamma\,\delta\rho\,m+\sqrt{2\lambda}\,\eta\] \[\partial_{t}\delta\rho=-\boldsymbol{\nabla}\cdot\boldsymbol{J}\,;\quad\boldsymbol{J}=-D\,\boldsymbol{\nabla}\delta\rho-\frac{D\gamma}{2}\boldsymbol{\nabla}(m^{2})+\sqrt{2D}\,\boldsymbol{\zeta}\] Here \(\lambda\) is a mobility parameter (unrelated to previous use of the same symbol in this paper), while \(r,u,\gamma\) are coefficients in the free energy functional \(\mathcal{F}=\int d^{d}x\,\left[\frac{1}{2}\left(\boldsymbol{\nabla}m\right)^{2}+\frac{r}{2}m^{2}+\frac{u}{4}m^{4}+\frac{1}{2}\delta\rho^{2}+\frac{\gamma}{2}m^{2}\,\delta\rho\right]\) that underlies the model. Model C obeys detailed balance with respect to this \(\mathcal{F}\). The noise terms are just as in (55,56). Strikingly, the _only difference_ between (55,56) for the AIMs under study and Model C is the absence in the AIM case of the term \(\boldsymbol{\nabla}(m^{2})\) in the current \(\boldsymbol{J}\). As already discussed, this term is relevant but structurally absent in our chosen AIMs, while in contrast it is structurally present, with a coefficient fixed by detailed balance, in Model C. The difference between these two cases need not be accessible via any approach that attempts to perturbatively deform one model into the other, for instance by considering small departures from detailed balance. The change in parameters is not small, and moreover replaces one symmetry (time-reversal) with a different and unrelated one (spin-independent density dynamics). Interestingly, a generalized model that includes both AIM and Model C as special cases has previously been introduced and studied using RG methods [26].
The model is defined by \[\partial_{t}m=\lambda\,\nabla^{2}m-a\,m-b\,m^{3}-g_{m}\,m\,\delta\rho+\sqrt{2\tilde{\lambda}}\,\eta \tag{57}\] \[\partial_{t}\delta\rho=-\boldsymbol{\nabla}\cdot\boldsymbol{J}\,;\quad\boldsymbol{J}=-D\,\boldsymbol{\nabla}\delta\rho-\frac{g_{\rho}}{2}\boldsymbol{\nabla}m^{2}+\sqrt{2\tilde{D}}\,\boldsymbol{\zeta} \tag{58}\] The AIM2 dynamics of (55,56) is recovered as \[g_{\rho}=0\qquad\quad g_{m}=g\qquad\quad\lambda=D \tag{59}\] while equilibrium Model C corresponds to \[a=\lambda r\qquad\lambda=\tilde{\lambda}\qquad g_{m}=\lambda\gamma \tag{60}\] \[b=\lambda u\qquad D=\tilde{D}\qquad g_{\rho}=D\gamma \tag{61}\] #### 6.2.1 RG flows In the present paper we do not review in detail the comprehensive perturbative RG study of this class of models offered by Akkineni and Täuber in [26] (which in fact addresses a much larger class, spanning Heisenberg as well as Ising symmetry, and Model D as well as Model C dynamics). Briefly, for the model governed by (57,58), various fixed points of potential relevance to AIMs are considered in [26]. A Gaussian fixed point, stable for \(d>4\), becomes unstable for \(\epsilon=4-d>0\). In the absence of \(g_{m}\), the unstable flow is towards a Model A fixed point, at which the \(m\) dynamics is decoupled from \(\rho\), which is then ignorable. For nonzero \(g_{m}\), however, the Model A fixed point is unstable towards an equilibrium-like Model C fixed point where detailed balance is restored. This is perturbatively stable against detailed-balance violations; its basin of attraction should include all models in which such violation is weak. Beyond this basin, in addition to the \(g_{m}=0\) manifold where Model A behaviour is recovered, lies a further unstable manifold at \(g_{\rho}=0\). The strongly nonequilibrium dynamics on this manifold describes situations, like the AIMs studied here, in which it is the dynamics of \(\rho\) that decouples from \(m\). On this unstable manifold, a further fixed point was found, whose strongly nonequilibrium dynamics describes a situation in which \(m\) relaxes much faster than \(\rho\) at large scales. This fixed point is however unstable also within the \(g_{\rho}=0\) manifold. Interestingly, Akkineni and Täuber also found another nonequilibrium fixed point at \(g_{\rho}=0\) for which the coupling \(g_{m}\) seemingly flows to infinity for \(d<4\). The latter caused them to conclude that no true nonequilibrium fixed point is accessible at order \(\epsilon\) [26]. Elsewhere [25], we calculate the RG flow on the submanifold where \(g_{\rho}=0\) to which, as we have explained, the AIMs studied in this paper are confined; we argue that, despite the conclusions of [26], a nonequilibrium fixed point describing the AIM critical point in these strongly nonequilibrium models can be found within a perturbative RG approach. More importantly for the present discussion, the \(g_{\rho}=0\) submanifold does not contain the Model C critical point. This can be seen directly from the following argument. As previously explained, the Model C fixed point splits off from the Gaussian one below \(d=4\). Here the coupling term involving \(g_{\rho}\) is relevant. Only if it were irrelevant could the fixed-point value of this coupling constant become zero at the Model C fixed point. Therefore, this fixed point cannot lie on the \(g_{\rho}=0\) manifold to which our AIMs are confined.
This strongly suggests that, whether or not the AIM critical point is perturbatively accessible to order \(\epsilon\) [25, 26], it should indeed lie in a different universality class from Model C. This suggestion differs from the one made concerning AIM0 in [11]. The situation is however delicate because, as previously stated, our result depends on a symmetry of all the AIMs considered here (including AIM0 of [11]) which might nonetheless be broken in more general models. Specifically, we know it _must_ be broken in any AIM that restores detailed balance by construction at the critical point (e.g. [16]), in which case there can be little doubt that the Model C universality class prevails. We also note that numerical evidence favours equilibrium Ising exponents for AIMs in \(d=2\) [11], which we have also confirmed for ourselves numerically. It is unusual for universality classes to actually merge on reducing dimensionality, so this could indicate that, while the Model C and AIM classes retain distinct exponents, these are hard to distinguish numerically in two (and therefore possibly three) dimensions. ## 7 Conclusion We have considered a Doi-Peliti field-theoretical formalism, and exploited it to derive an exact field theory able to describe the behaviour of a class of Active Ising Models (AIMs) allowing different choices of the spin-alignment interactions. We showed how field theory provides, as it so often does, a powerful framework to understand collective behaviour in active systems. We were able first to derive several previously known results within this framework. These include the deterministic hydrodynamic equations [15]; the peculiar behaviour of the two-body collisional interaction, which cannot sustain flocking in the presence of noise [13]; and the linear instability of the homogeneous ordered phase close to the transition, leading to phase-separated profiles and a first-order scenario [7, 8, 9]. Thereafter we showed how the Doi-Peliti framework can take us far beyond these results. For example, we used it to go beyond the deterministic hydrodynamic equations, complementing them with sub-leading fluctuation terms needed to describe the system on finite scales. Developing the same field theory in a different manner allowed us to address the AIM critical point, defined as the second-order alignment transition arising when the self-propulsion term is turned off. We defer to a separate paper a full analysis of the resulting RG flow [25]. Even without this, we could elucidate the relationship between the critical point of the AIMs studied here and Model C. The latter has the same combination of a nonconserved magnetisation with Ising symmetry, coupled to a conserved density, but unlike our AIMs also respects detailed balance. Based on this comparison, we argued that the AIM critical points studied here, contrary to expectation [11], are _not_ governed by the Model C universality class. However, this conclusion stems from a 'symmetry' of these particular models whereby diffusive jump rates are not affected by the spin state of a particle. This symmetry need not hold for more general Active Ising Models, and specifically _cannot hold_ in AIMs constructed so that detailed balance gets restored in the zero self-propulsion limit, such as that of [16], which can then behave like Model C at criticality. **Acknowledgments.** MEC thanks Fyl Pincus for inspirational discussions on soft matter physics spanning the past 40 years.
We thank Rosalba Garcia-Millan for fruitful discussions and Tal Agranov, Robert Jack and Etienne Fodor for a critical reading of the manuscript. MS also thanks Andrea Cavagna and Luca Di Carlo for discussions. This work was funded in part by the European Research Council (ERC) under the EU's Horizon 2020 Programme, Grant agreements No. 740269 and No. 785932.
2303.03655
Extended Lagrangian Born-Oppenheimer Molecular Dynamics with DFT+U
Extended Lagrangian Born-Oppenheimer molecular dynamics (XL-BOMD) [Phys. Rev. Lett. vol. 100, 123004 (2008)] is combined with Kohn-Sham density functional theory (DFT) using a DFT+U correction based on the Hubbard model. This combined XL-BOMD and DFT+U approach allows efficient Born-Oppenheimer molecular dynamics simulations with orbital-dependent corrections beyond regular Kohn-Sham density functional theory. The extended Lagrangian formulation eliminates the need for the iterative self-consistent-field optimization of the electronic ground state prior to the force evaluations, which is required in regular direct Born-Oppenheimer molecular dynamics simulations. This method provides accurate and stable molecular trajectories, while reducing the computational cost per time step. The combined XL-BOMD and DFT+U approach is demonstrated with molecular dynamics simulations of a nitromethane molecular liquid and a system of solid nuclear fuel, UO$_2$, using self-consistent-charge density functional based tight-binding theory.
Yu Zhang, Marc J. Cawkwell, Christian F. A. Negre, Oscar Grånäs, Anders M. N. Niklasson
2023-03-07T05:23:16Z
http://arxiv.org/abs/2303.03655v1
# Extended Lagrangian Born-Oppenheimer Molecular Dynamics with DFT+U ###### Abstract Extended Lagrangian Born-Oppenheimer molecular dynamics (XL-BOMD) [Phys. Rev. Lett. vol. 100, 123004 (2008)] is combined with Kohn-Sham density functional theory (DFT) using a DFT+U correction based on the Hubbard model. This combined XL-BOMD and DFT+U approach allows efficient Born-Oppenheimer molecular dynamics simulations with orbital-dependent corrections beyond regular Kohn-Sham density functional theory. The extended Lagrangian formulation eliminates the need for the iterative self-consistent-field optimization of the electronic ground state prior to the force evaluations, which is required in regular direct Born-Oppenheimer molecular dynamics simulations. This method provides accurate and stable molecular trajectories, while reducing the computational cost per time step. The combined XL-BOMD and DFT+U approach is demonstrated with molecular dynamics simulations of a nitromethane molecular liquid and a system of solid nuclear fuel, \(\text{U}\text{O}_{2}\), using self-consistent-charge density functional based tight-binding theory. + Footnote †: preprint: LA-UR-23-22052 ## I Introduction Quantum-mechanical Born-Oppenheimer molecular dynamics (QMD) simulations based on Kohn-Sham density functional theory (KS-DFT) and the local density or generalized gradient approximation [1; 2; 3; 4; 5] are widely considered a gold standard for molecular dynamics simulations [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. However, QMD simulations based on KS-DFT have limitations in capturing the behavior of systems with strong electron correlation. They may fail to accurately predict properties such as the existence of a band gap or the number of valence electrons, leading to incorrect characterization of a material's physical nature and its response properties. Furthermore, QMD simulations based on first principles KS-DFT have a high computational cost, as the fully relaxed electronic ground state must be determined prior to the force evaluation for each new atomic configuration. This involves constrained iterative charge optimization of the nonlinear Kohn-Sham energy functional. This process limits the accessible simulation time and size of systems that can be studied. The nonlinearities of the KS-DFT functional can also cause instabilities with non-conservative forces and a drift in the total energy. This limitation is of particular significance in QMD simulations using reduced complexity solvers that are needed to study large systems [24; 25; 26; 27; 28; 22] or for QMD simulations using specialized AI hardware with low-precision floating-point operations [29]. In all these cases the effect of numerical approximations can be magnified by the non-linearities of the Kohn-Sham functional and the associated iterative charge optimization. Strong electron correlation and a high computational cost are often interrelated problems. Materials with heavy elements and narrow bands pose computational challenges due to their large number of electrons per atom and difficulties in finding the relaxed electronic ground state solution. Also, these materials often require a theory level beyond regular KS-DFT to account for their strong electron correlation. Despite their close connection, these two problems have mainly been treated separately. One approach to address the problem of strong electron correlation is to incorporate DFT+U correction terms based on the Hubbard model [30; 31; 32; 33; 34].
This method approximates the effects of electron correlation through a semi-empirical and tunable correction term added to the Kohn-Sham energy functional. To tackle the issue of high computational cost for QMD simulations, a framework for extended Lagrangian Born-Oppenheimer molecular dynamics (XL-BOMD) was recently introduced [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. This approach, inspired by Car-Parrinello molecular dynamics [50; 51; 52], includes extended electronic degrees of freedom alongside the nuclear degrees of freedom as dynamical variables. When combined with an approximate _shadow_ Born-Oppenheimer potential energy surface, XL-BOMD can avoid the computational overhead of the iterative electronic ground state optimization and the stability problems caused by non-conservative forces, providing physically accurate trajectories at only a fraction of the cost of regular direct Born-Oppenheimer molecular dynamics simulations [53]. The two methods, DFT+U and XL-BOMD, have so far only been used separately. The main purpose of this article is to present a framework for QMD simulations that combines DFT+U and XL-BOMD. In this way we can reduce the computational cost of QMD simulations even for some materials with electron correlation effects beyond the reach of regular KS-DFT. The construction of the combined framework for DFT+U and XL-BOMD presents an example of a fairly general approach that can be applied also to other corrections of the Kohn-Sham functional besides the DFT+U term, for example, self-interaction corrections (SIC) [54; 55; 56; 57]. The DFT+U term may also serve as a tunable correction that could be used in machine learning approaches to adjust, for example, the polarizability of molecular systems in atomistic simulations using approximate DFT or Hartree-Fock methods [58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. This can be achieved with a much lower computational overhead with the combined DFT+U and XL-BOMD approach. First we present KS-DFT using a density-matrix formulation and the orbital-dependent DFT+U correction term. We then introduce extended Lagrangian Born-Oppenheimer molecular dynamics (XL-BOMD) based on an approximate shadow potential energy surface for the DFT+U corrected Kohn-Sham density-matrix functional. The integration of the extended electronic equations of motion is discussed in terms of a Krylov subspace approximation [49; 69]. We then demonstrate QMD simulations using the combined XL-BOMD and DFT+U approach for a molecular system of liquid nitromethane and a solid of nuclear fuel, UO\({}_{2}\), using self-consistent-charge density functional based tight-binding theory (SCC-DFTB) [60; 61; 68], before we present a summary and a discussion at the end. ## II Kohn-Sham density-matrix functional theory Density functional theory is a cornerstone of electronic structure theory [1; 2; 3; 4; 5]. KS-DFT in combination with the local density or generalized gradient approximations of the exchange-correlation energy is a computationally efficient and widely used formulation of DFT. KS-DFT is normally formulated in terms of the electron density.
However, if we assume that all operators and potentials in KS-DFT are represented in some finite (atomic-orbital-like) basis set, \(\{\phi_{i}(\mathbf{r})\}_{i=1}^{N}\), then a formulation based on the effective single-particle density matrix and density-matrix energy functions is a more natural choice than one based on the electron density. This is in analogy to Hartree-Fock theory [70; 71] and is particularly useful when we introduce the orbital-dependent DFT+U energy correction. In a finite basis-set representation, with \(N\) basis functions, the ground state electronic structure in spin-independent KS-DFT can be described by the single-particle density matrix, \(\varrho_{0}\in\mathbb{R}^{N\times N}\), that is given from a constrained density-matrix minimization (\(\varrho\in\mathrm{C}\)) of a matrix function, \[\varrho_{0}=\arg\min_{\varrho\in\mathrm{C}}\left\{F_{\mathrm{KS}}[\varrho]+2\mathrm{tr}[v_{\mathrm{ext}}\varrho]\right\}. \tag{1}\] Here we assume that \(F_{\mathrm{KS}}[\varrho]\) is the matrix-function approximation of the Kohn-Sham ensemble representation of the universal functional [3; 72] in DFT at some chosen electronic temperature, \(T_{e}\geq 0\). The density matrix constraints, \(\varrho\in\mathrm{C}\), will be described below in Eq. (10). The matrix elements of the external potential, \(v_{\mathrm{ext}}\equiv v_{\mathrm{ext}}(\mathbf{R})\in\mathbb{R}^{N\times N}\), for the ions at positions \(\mathbf{R}=\{\mathbf{R}_{I}\}\), are given by \[\left\{v_{\mathrm{ext}}(\mathbf{R})\right\}_{ij}=\int\phi_{i}^{*}(\mathbf{r})v_{\mathrm{ext}}(\mathbf{R},\mathbf{r})\phi_{j}(\mathbf{r})d\mathbf{r}. \tag{2}\] The ensemble Kohn-Sham energy function, \(F_{\mathrm{KS}}[\varrho]\), which is given at some electronic temperature, \(T_{e}\geq 0\), can be written as \[F_{\mathrm{KS}}[\varrho]=2\mathrm{tr}[t_{\mathrm{s}}\varrho]+2\sum_{ij,kl}\varrho_{ij}\gamma_{ij,kl}\varrho_{kl}+E_{\mathrm{xc}}\left[\rho\right]+E_{\mathrm{ent}}(f), \tag{3}\] where \[\{t_{\mathrm{s}}\}_{ij}=\int\phi_{j}^{*}(\mathbf{r})\left(-\frac{1}{2}\nabla^{2}\right)\phi_{i}(\mathbf{r})d\mathbf{r}, \tag{4}\] are the single-particle kinetic energy matrix elements, and \[\gamma_{ij,kl}=\iint\frac{\phi_{i}^{*}(\mathbf{r})\phi_{j}(\mathbf{r})\phi_{k}^{*}(\mathbf{r}^{\prime})\phi_{l}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}d\mathbf{r}d\mathbf{r}^{\prime}, \tag{5}\] are the two-electron (Coulomb) integrals, and \[E_{\mathrm{ent}}(f)=2k_{B}T_{e}\sum_{i}\left(f_{i}\ln f_{i}+(1-f_{i})\ln(1-f_{i})\right)\!, \tag{6}\] is the single-particle entropy contribution to the free energy, where we assume a double fractional occupancy of all the states. Here the occupation numbers satisfy \(f_{i}\in[0,1]\), and \(E_{\mathrm{xc}}\left[\rho\right]\) is the exchange-correlation energy functional that is approximated using the local density or the generalized gradient approximation. \(E_{\mathrm{xc}}\left[\rho\right]\) depends on the density \(\rho(\mathbf{r})\), which here is determined by the density matrix and the basis set, i.e., \[\rho(\mathbf{r})=2\sum_{i,j}^{N}\varrho_{i,j}\phi_{i}^{*}(\mathbf{r})\phi_{j}(\mathbf{r}). \tag{7}\] Because of this direct dependency on the density matrix we can alternatively use the notation \(E_{\mathrm{xc}}[\varrho]\equiv E_{\mathrm{xc}}[\rho]\). The two-electron integrals \(\gamma_{ij,kl}\) are never calculated explicitly.
Instead we use a contraction corresponding to the Hartree potential, \(v_{\mathrm{H}}[\varrho]\), with matrix elements, \[\{v_{\mathrm{H}}[\varrho]\}_{ij}=\sum_{kl}\left(\gamma_{ij,kl}\varrho_{kl}+\varrho_{kl}\gamma_{kl,ij}\right)\!, \tag{8}\] which can be calculated, for example, with an Ewald summation for periodic boundary conditions. The Kohn-Sham free-energy matrix function can then be expressed as \[F_{\rm KS}[\varrho]=2\mathrm{tr}[t_{\mathrm{s}}\varrho]+\mathrm{tr}[\varrho v_{\mathrm{H}}[\varrho]]+E_{\rm xc}\left[\varrho\right]+E_{\rm ent}(f). \tag{9}\] The density-matrix minimization in Eq. (1) is performed under the constraints (\(\varrho\in\mathrm{C}\)), which require that \[\begin{split}&\varrho=\sum_{i}f_{i}c_{i}c_{i}^{\dagger},\ \ 2\sum_{i}f_{i}=N_{e},\ \ f_{i}\in[0,1],\\ &\sum_{i,j}c_{i}^{\dagger}s_{i,j}c_{j}=\delta_{i,j},\ \ s_{ij}=\int\phi_{i}^{*}(\mathbf{r})\phi_{j}(\mathbf{r})d\mathbf{r},\end{split} \tag{10}\] where \(\{c_{i}\}_{i=1}^{N}\) is some set of vectors with \(c_{i}\in\mathbb{C}^{N}\), \(N_{e}\) is the total number of electrons (two in each orbital), and \(s\in\mathbb{R}^{N\times N}\) is the overlap matrix. The optimized ground state density matrix, \(\varrho_{0}\) from Eq. (1), defines the interatomic potential energy surface, \(U_{\rm BO}(\mathbf{R})\), within the Born-Oppenheimer (BO) approximation, which is given by \[U_{\rm BO}(\mathbf{R})=F_{\rm KS}[\varrho_{0}]+2\mathrm{tr}[v_{\rm ext}\varrho_{0}]+v_{nn}(\mathbf{R}), \tag{11}\] where \(v_{nn}(\mathbf{R})\) is the ion-ion repulsion potential. From the Born-Oppenheimer potential energy surface we can calculate interatomic forces that can be used in a molecular dynamics simulation. Because we are using a finite-temperature ensemble with fractional occupation numbers, we are not, strictly speaking, on a Born-Oppenheimer potential energy surface. However, it is a straightforward ensemble generalization of the regular Born-Oppenheimer potential energy surface. We will therefore still refer to the free-energy surface in Eq. (11), which is determined by the fully relaxed (or the thermally equilibrated) electron density, as a Born-Oppenheimer potential. The constrained density-matrix minimization for \(\varrho_{0}\) in Eqs. (1) and (10) is given from the solution of the nonlinear Kohn-Sham eigenvalue equation, \[h[\varrho]c_{i}=\epsilon_{i}sc_{i}, \tag{12}\] with the fractional occupation numbers given by \[f_{i}=\left(e^{\beta(\epsilon_{i}-\mu)}+1\right)^{-1}. \tag{13}\] Here \(\epsilon_{i}\) and \(\mu\) are the molecular-orbital (MO) energies and the chemical potential, respectively. In the Kohn-Sham Hamiltonian, \[h[\varrho]=t_{\mathrm{s}}+v_{\mathrm{h}}[\varrho]+v_{\rm xc}[\varrho]+v_{\rm ext}, \tag{14}\] the Hartree potential matrix, \(v_{\mathrm{h}}[\varrho]\), is given by Eq. (8) and the exchange-correlation matrix, \(v_{\rm xc}[\varrho]\), has matrix elements, \[\{v_{\rm xc}[\varrho]\}_{ij}=\frac{1}{2}\frac{\partial E_{\rm xc}[\varrho]}{\partial\varrho_{ij}}. \tag{15}\] Because of the nonlinearity of the Kohn-Sham eigenvalue equation, where \(\varrho\) is given by the eigenvectors in Eq. (10), the optimized ground state solution, \(\varrho_{0}\), is found through an iterative solution of the Kohn-Sham eigenvalue equation. In this optimization procedure the Kohn-Sham Hamiltonian, \(h[\varrho]\), is constructed from a mixture of previous density matrices that are given from the eigenvectors of previous Kohn-Sham Hamiltonians, until a stationary, self-consistent-field (SCF) solution is reached.
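To make the structure of this procedure concrete, the following is a minimal NumPy sketch of such an SCF loop, Eqs. (12)-(14); the `build_hamiltonian` callback, the simple linear density-matrix mixing, and the bisection search for the chemical potential are illustrative assumptions, not the particular scheme used here.

```python
import numpy as np
from scipy.linalg import eigh

def fermi_occupations(eps, n_electrons, kT, tol=1e-12):
    """Fractional occupations f_i = 1/(exp((eps_i - mu)/kT) + 1), Eq. (13),
    with mu found by bisection so that 2 * sum_i f_i = N_e, cf. Eq. (10)."""
    lo, hi = eps.min() - 10.0, eps.max() + 10.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        f = 1.0 / (np.exp((eps - mu) / kT) + 1.0)
        if 2.0 * f.sum() > n_electrons:
            hi = mu  # mu too high -> too many electrons
        else:
            lo = mu
    return f

def scf_ground_state(build_hamiltonian, s, rho_init, n_electrons, kT,
                     mixing=0.3, max_iter=200, conv=1e-9):
    """Iterative SCF solution of the nonlinear KS eigenvalue problem.
    `build_hamiltonian(rho)` is assumed to assemble h[rho] of Eq. (14)."""
    rho = rho_init
    for _ in range(max_iter):
        h = build_hamiltonian(rho)
        eps, c = eigh(h, s)            # generalized problem h c = eps s c, Eq. (12)
        f = fermi_occupations(eps, n_electrons, kT)
        rho_new = (c * f) @ c.T        # rho = sum_i f_i c_i c_i^T
        if np.linalg.norm(rho_new - rho) < conv:
            return rho_new
        rho = (1.0 - mixing) * rho + mixing * rho_new  # linear mixing
    return rho  # in practice the optimization is never fully converged
```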
This iterative optimization is an expensive procedure that in practice is never complete. The solution, \(\varrho_{0}\), is therefore always approximate. ## III KS-DFT+U KS-DFT in combination with the local density or generalized gradient approximations for the exchange-correlation energy is an effective single-particle theory. The theory provides a computationally efficient method to calculate the physical properties of a broad range of materials with predictive accuracy. Nevertheless, it has some shortcomings. The main sources of error are the self-interaction errors and electron correlation effects for localized states [73; 34; 74]. These errors can be reduced by including orbital-dependent corrections to the Kohn-Sham matrix function where individual Kohn-Sham states are shifted in their energy levels. The orbital-dependent corrections can be derived either from Kohn-Sham DFT with self-interaction corrections or from many-particle model Hamiltonians. Here we choose to include the orbital-dependent corrections through the second approach, with a KS-DFT+U correction term based on the Hubbard model [30; 31; 32; 33; 34]. Our orbital-corrected Kohn-Sham+U matrix function is defined by \[F_{\rm KS+U}[\varrho]\equiv F_{\rm KS}[\varrho]+2\mathrm{tr}[u(\varrho s-\varrho s\varrho s)], \tag{16}\] where \(u\) is a diagonal matrix with matrix elements that can be tuned with respect to the different atomic-orbital projections of the molecular-orbital eigenstates. Our \(u\)-dependent term is directly based on one of the most commonly used DFT+U correction terms [33], which is translationally and rotationally invariant and well suited for molecular dynamics simulations. The \(u\)-dependent correction term typically also includes a spin-dependent term and has a factor of \(1/2\) in front, which here has been replaced by a factor of \(2\) for consistency with the other energy terms. We will only use the \(u\)-dependent term in Eq. (16) as a semi-empirical adjustment for materials with strong electron correlation, without any particular physical interpretation of the values of \(u\). By tuning the parameters in \(u\) we simply introduce orbital-dependent corrections that capture some of the effects of strong electron correlation that are beyond the reach of the local density or generalized gradient approximations in KS-DFT. As we will demonstrate in the simulations below, the main effect of the DFT+U correction is to adjust the electronic energy gap between the occupied and the unoccupied states. The electronic ground state solution for KS-DFT+U is found in the same way as before, using the density-matrix minimization in Eq. (1) with the same density matrix constraints, \(\varrho\in\mathrm{C}\) in Eq. (10), i.e. \[\varrho_{0}=\arg\min_{\varrho\in\mathrm{C}}\{F_{\mathrm{KS+U}}[\varrho]+2\mathrm{tr}[v_{\mathrm{ext}}\varrho]\}. \tag{17}\] The solution to the constrained minimization is given through the same nonlinear Kohn-Sham eigenvalue problem as before, Eq. (12), but with the modified \(u\)-dependent effective single-particle KS-DFT+U Hamiltonian, \[\begin{split}& h[\varrho]=t_{\mathrm{s}}+v_{\mathrm{h}}[\varrho]+v_{\mathrm{xc}}[\varrho]+v_{\mathrm{ext}}\\ &+\frac{1}{2}(us-s\varrho us-us\varrho s+h.c.).\end{split} \tag{18}\] The ground-state Born-Oppenheimer potential energy surface for the KS-DFT+U corrected Kohn-Sham matrix function is then given by \[U_{\mathrm{BO+U}}(\mathbf{R})=F_{\mathrm{KS+U}}[\varrho_{0}]+2\mathrm{tr}[v_{\mathrm{ext}}\varrho_{0}]+v_{nn}(\mathbf{R}). \tag{19}\]
This potential can then be used to calculate the interatomic forces and integrate the equations of motion, \[M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}U_{\mathrm{BO+U}}(\mathbf{R}) \tag{20}\] in a molecular dynamics simulation, where \(\{M_{I}\}\) are the atomic masses. ## IV XL-BOMD with DFT+U The main cost of a QMD simulation based on KS-DFT is the cost of finding the (thermally) relaxed self-consistent ground state, \(\varrho_{0}\), prior to the force evaluation in each time step. The iterative solution of the nonlinear eigenvalue problem, Eq. (12), is expensive, with a prefactor that scales linearly with the number of iterations required to find a sufficiently converged self-consistent ground-state solution. By using a good initial guess for the SCF optimization, which can be generated from an extrapolation of the ground state density matrix from previous time steps, it is possible to significantly reduce the computational overhead. However, because the iterative ground state optimization is approximate, the calculated forces are never exact and there is an inconsistency between the calculated forces and the exact Born-Oppenheimer ground state potential energy surface. The extrapolation in combination with an incomplete ground-state optimization leads to non-conservative forces and a systematic drift in the total energy, because of a broken time-reversal symmetry in the fictitious propagation of the underlying electronic degrees of freedom that is generated through the extrapolation [9; 75; 19]. Alternatively, we may restart the ground state optimization in each new time step from overlapping atomic densities, which preserves the time-reversal symmetry and avoids a systematic drift in the total energy, but the computational cost is significantly higher. XL-BOMD [36; 38; 39; 40; 41; 42; 43; 44; 45; 47; 22; 48] is a framework that has been developed to avoid these shortcomings. XL-BOMD is based on the concept of backward error analysis or a _shadow_ Hamiltonian approach [76; 77; 78; 79]. Instead of calculating approximate forces using an expensive iterative ground-state optimization procedure for an underlying "exact" Born-Oppenheimer potential energy surface, we can calculate exact forces in a fast and simple way, but for an underlying approximate _shadow_ potential energy surface that closely follows the "exact" regular Born-Oppenheimer potential. In this way we can reduce the computational cost and at the same time restore consistency between the calculated forces and the underlying shadow potential. With the consistent, conservative forces we can then generate stable molecular trajectories at only a fraction of the cost of regular direct Born-Oppenheimer molecular dynamics simulations. ### The Shadow Potential In XL-BOMD a shadow free-energy matrix function is constructed from a linearization of the KS-DFT+U energy function, Eq. (16), around some approximate solution, \(\nu\in\mathbb{R}^{N\times N}\), to the exact ground state density matrix, \(\varrho_{0}\) [43; 47; 69]. The constrained stationary minima of this shadow energy matrix functional then generate the shadow Born-Oppenheimer potential.
The shadow matrix functional for the orbital-corrected KS-DFT+U matrix function is given by \[\begin{split}\mathcal{F}_{\mathrm{KS+U}}[\varrho,\nu]&=2\mathrm{tr}[t_{\mathrm{s}}\varrho]+\mathrm{tr}[(2\varrho-\nu)v_{\mathrm{h}}[\nu]]+E_{\mathrm{xc}}[\nu]\\ &\quad+2\mathrm{tr}\left[(\varrho-\nu)v_{\mathrm{xc}}\left[\nu\right]\right]+E_{\mathrm{ent}}(f)\\ &\quad+2\mathrm{tr}\left[u(\varrho s-\nu s\varrho s-\varrho s\nu s+\nu s\nu s)\right].\end{split} \tag{21}\] The stationary ground-state solution, \(\varrho_{0}[\nu]\), of the linearized matrix function is \(\nu\)_-dependent_ and is found by a constrained density-matrix minimization with the same density matrix constraints, \(\varrho\in\mathrm{C}\), as before in Eq. (10), and \[\varrho_{0}[\nu]=\arg\min_{\varrho\in\mathrm{C}}\{\mathcal{F}_{\mathrm{KS+U}}[\varrho,\nu]+2\mathrm{tr}[v_{\mathrm{ext}}\varrho]\}. \tag{22}\] Because of the linearization, the minimization can be solved in a single step as a solution to a _linear_ Kohn-Sham eigenvalue problem \[h[\nu]c_{i}=\epsilon_{i}sc_{i}, \tag{23}\] where \[\varrho_{0}[\nu]=\sum_{i}f_{i}c_{i}c_{i}^{\dagger} \tag{24}\] and the fractional occupation numbers are given by the Fermi function in Eq. (13). The Kohn-Sham Hamiltonian of the linearized orbital-corrected KS-DFT+U matrix function is given by \[\begin{split}& h[\nu]=t_{\text{s}}+v_{\text{h}}[\nu]+v_{\text{xc}}[\nu]+v_{\text{ext}}\\ &+\tfrac{1}{2}(su-su\nu s-s\nu su+h.c.).\end{split} \tag{25}\] The \(\nu\)-dependent shadow Born-Oppenheimer potential energy surface, \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\), is then given in the same way as before, but using the linearized KS-DFT+U matrix function, \[\begin{split}&\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)=\mathcal{F}_{\text{KS+U}}[\varrho_{0}[\nu],\nu]\\ &+2\text{tr}\,[v_{\text{ext}}\varrho_{0}[\nu]]+v_{nn}(\mathbf{R}).\end{split} \tag{26}\] The difference between the shadow potential energy surface, \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\), and the "exact" fully converged Born-Oppenheimer potential energy surface, \(U_{\text{BO+U}}(\mathbf{R})\), is small if the residual matrix function, \(\varrho_{0}[\nu]-\nu\), is small. The difference scales as \(\mathcal{O}(\|\varrho_{0}[\nu]-\nu\|^{2})\). The \(\nu\)-dependent approximate ground state, \(\varrho_{0}[\nu]\), is different from the exact ground state density matrix, \(\varrho_{0}\), of the exact fully converged Born-Oppenheimer potential, but \(\varrho_{0}[\nu]\) is still the exact fully converged ground state solution of the shadow potential. The first-order variation of the shadow potential with respect to the density matrix around \(\varrho_{0}[\nu]\) therefore vanishes, i.e. \(\partial\mathcal{U}_{\text{BO+U}}/\partial\varrho|_{\varrho=\varrho_{0}[\nu]}=0\). This is important in the calculation of the interatomic forces, because it means that the partial force term \(\left(\partial\mathcal{U}_{\text{BO+U}}/\partial\varrho|_{\varrho=\varrho_{0}[\nu]}\right)\left(\partial\varrho/\partial\mathbf{R}_{I}\right)\) will vanish, which simplifies the calculation of the forces, without relying on the Hellmann-Feynman theorem or additional adjustment terms.
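The practical consequence is that, in contrast to the SCF loop sketched earlier, the shadow ground state requires only a single Hamiltonian construction and diagonalization. A minimal sketch under the same illustrative assumptions follows (`build_hamiltonian_nu` assembling \(h[\nu]\) of Eq. (25) is a placeholder, and the `fermi_occupations` helper from the SCF sketch above is reused):

```python
import numpy as np
from scipy.linalg import eigh

def shadow_ground_state(build_hamiltonian_nu, s, nu, n_electrons, kT):
    """One-shot solution of the *linear* eigenvalue problem, Eqs. (23)-(24):
    h[nu] c_i = eps_i s c_i and rho_0[nu] = sum_i f_i c_i c_i^T. No SCF loop."""
    h = build_hamiltonian_nu(nu)   # Eq. (25), evaluated at the fixed matrix nu
    eps, c = eigh(h, s)
    f = fermi_occupations(eps, n_electrons, kT)  # helper from the SCF sketch
    return (c * f) @ c.T

def dftu_shadow_term(u, rho, nu, s):
    """The linearized DFT+U contribution in Eq. (21):
    2 tr[u (rho s - nu s rho s - rho s nu s + nu s nu s)]."""
    rs, ns = rho @ s, nu @ s
    return 2.0 * np.trace(u @ (rs - ns @ rs - rs @ ns + ns @ ns))
```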
### Extended Lagrangian In a molecular dynamics simulation the atoms are moving, and at some point the approximate ground state density matrix, \(\nu\), around which we performed the linearization of the KS-DFT+U matrix energy function in Eq. (21), will no longer be close to the exact ground state \(\varrho_{0}\). We therefore need to update \(\nu\) along the molecular trajectory to keep it close to the unknown ground state \(\varrho_{0}\). Without an update the linearization of the KS-DFT+U matrix function, \(\mathcal{F}_{\text{KS+U}}[\varrho,\nu]\), will eventually deteriorate and the difference between the shadow potential and the fully converged "exact" Born-Oppenheimer potential energy surfaces may diverge. To simply update \(\nu\) with the atomic positions, \(\mathbf{R}\), would require the calculation of \(\partial\nu/\partial\mathbf{R}_{I}\) terms, and of their effect on the \(\nu\)-dependent potential energy surface. In general, this would be quite expensive. Instead, in XL-BOMD the approximate ground state density matrix, \(\nu\), is included as a dynamical tensor variable that evolves through a harmonic oscillator that is centered around the ground state, \(\varrho_{0}\), or the best available approximation, which in our case is \(\varrho_{0}[\nu]\). The dynamics is defined through the extended Lagrangian, \[\begin{split}&\mathcal{L}(\mathbf{R},\mathbf{\dot{R}},\nu,\dot{\nu})=\frac{1}{2}\sum_{I}M_{I}\dot{R}_{I}^{2}-\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\\ &+\frac{\mu}{2}\text{tr}[\dot{\nu}^{2}]-\frac{\mu\omega^{2}}{2}\text{tr}[(\varrho_{0}[\nu]-\nu)^{T}\mathcal{T}(\varrho_{0}[\nu]-\nu)].\end{split} \tag{27}\] Here \(\mathbf{R}\) and \(\mathbf{\dot{R}}\) are the atomic positions and their velocities; \(\nu\) and \(\dot{\nu}\) are the dynamical matrix variables that represent the extended electronic degrees of freedom; \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\) is the shadow potential for the electronic free energy based on the linearized KS-DFT+U matrix function at some electronic temperature, \(T_{e}\geq 0\), that approximates the corresponding exact Born-Oppenheimer potential energy surface; \(\mathcal{T}\equiv\mathcal{K}^{T}\mathcal{K}\) is a symmetric positive definite metric tensor of the harmonic well that makes \(\nu\) oscillate around an even closer approximation to the exact ground state than \(\varrho_{0}[\nu]\) and will be defined below; \(\mu\) is a fictitious electronic mass parameter; and \(\omega\) is the frequency of the harmonic oscillator extension that defines the time scale for the dynamics of the extended electronic degrees of freedom. We may use different representations of the extended electronic degrees of freedom, \(\nu\). Instead of the atomic-orbital matrix representation, \(\nu\), we can use an orthogonal representation, \(\nu^{\perp}=z^{-1}\nu z^{-T}\), where \(z\) is chosen such that \(\sum_{kl}z_{ki}s_{kl}z_{lj}=\delta_{ij}\), or we can choose a modified dynamical variable \(x=\nu s\). For simplicity, we will here express the dynamics in terms of the atomic-orbital representation, \(\nu\), but it is straightforward to use the other representations as well. The choice of dynamical variables, \(x\equiv\nu s\) and \(\dot{x}\), as in Ref. [23], seems to be slightly more efficient and is a more natural choice because of its consistent tensorial behavior under integration. This is also the version that we will use in the examples demonstrating XL-BOMD using a DFT+U functional in section V. The expression for the harmonic oscillator of the extended Lagrangian in Eq. (27) includes a metric tensor, \[\mathcal{T}\equiv\mathcal{K}^{T}\mathcal{K}, \tag{28}\] where \(\mathcal{K}\) is a kernel that acts as a fourth-order tensor, which performs mappings between matrices.
This kernel, \(\mathcal{K}\), is defined from the inverse of the Jacobian, \(\mathcal{J}\), of the residual matrix function, where \[\mathcal{J}_{ij,kl}=\frac{\partial(\{\varrho_{0}[\nu]\}_{ij}-\nu_{ij})}{\partial\nu_{kl}}, \tag{29}\] and \[\mathcal{K}=\mathcal{J}^{-1}. \tag{30}\] ### Equations of motion The atomic coordinates typically evolve on a slow time scale compared to the electronic motion. If initially the electrons are in the ground state, we may therefore assume they will evolve close to the electronic ground state as the atoms are moving. This adiabatic assumption is the reasoning behind the Born-Oppenheimer approximation in quantum-based molecular dynamics simulations [80; 81; 52]. In the derivation of the equations of motion of XL-BOMD from the Euler-Lagrange equations we can also apply an adiabatic approximation that separates the motion between the nuclear and the extended electronic degrees of freedom. Our derivation of the equations of motion of XL-BOMD is therefore performed in an adiabatic limit where \(\omega\rightarrow\infty\) and \(\mu\to 0\) such that \(\mu\omega=\text{constant}\). This is a classical analogue to the Born-Oppenheimer approximation, where the extended electronic degrees of freedom are assumed to evolve on a fast time scale compared to the motion of the atomic positions [43]. In this adiabatic limit we get the equations of motion \[M_{I}\ddot{\mathbf{R}}_{I}=-\left.\nabla_{I}\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\right|_{\nu}, \tag{31}\] for the nuclear degrees of freedom and \[\ddot{\nu}=-\omega^{2}\mathcal{K}\left(\varrho_{0}[\nu]-\nu\right), \tag{32}\] for the electronic degrees of freedom. The corresponding constant of motion is given by the total energy, \[E_{\text{BO+U}}^{\text{tot}}=\frac{1}{2}\sum_{I}M_{I}|\dot{\mathbf{R}}_{I}|^{2}+\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu). \tag{33}\] These are the central equations of XL-BOMD, which are exact in continuous time, and they can be used to generate the molecular trajectories in QMD simulations. In the adiabatic limit the residual function \(\left\|\varrho_{0}[\nu]-\nu\right\|\propto\omega^{-2}\), which simplifies the evaluation of the interatomic forces in the first equation, Eq. (31). We can express the equations of motion in Eq. (31) as \[\begin{split} M_{I}\ddot{\mathbf{R}}_{I}&=-\left.\frac{\partial\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)}{\partial\mathbf{R}_{I}}\right|_{\nu}\equiv-\,\mathcal{U}^{\prime}_{\text{BO+U}}(\mathbf{R},\nu),\\ \mathcal{U}^{\prime}_{\text{BO+U}}(\mathbf{R},\nu)&=2\text{tr}[t^{\prime}_{\mathrm{s}}\varrho_{0}[\nu]]+\text{tr}[(2\varrho_{0}[\nu]-\nu)v^{\prime}_{\text{h}}[\nu]]\\ &\quad+E^{\prime}_{\text{xc}}[\nu]+2\text{tr}[(\varrho_{0}[\nu]-\nu)v^{\prime}_{\text{xc}}[\nu]]\\ &\quad+2\text{tr}[u(\varrho_{0}[\nu]s^{\prime}-\nu s^{\prime}\varrho_{0}[\nu]s-\nu s\varrho_{0}[\nu]s^{\prime})]\\ &\quad+2\text{tr}[v^{\prime}_{\text{ext}}\varrho_{0}[\nu]]+v^{\prime}_{\text{nn}}(\mathbf{R}),\end{split} \tag{34}\] where we use the prime notation, \({}^{\prime}\), for the partial derivative with respect to the nuclear coordinates under constant \(\nu\), e.g. \(\mathcal{U}^{\prime}\equiv\partial\mathcal{U}/\partial\mathbf{R}_{I}|_{\nu}\). The forces above are the exact conservative forces for the shadow Born-Oppenheimer potential. Because \(\partial\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)/\partial\varrho=0\) at \(\varrho=\varrho_{0}[\nu]\), any force terms with \(\partial\varrho/\partial\mathbf{R}_{I}\) can be ignored. The force expression we use above therefore has the same simplicity as a Hellmann-Feynman force expression.
Here this is possible even if \(\varrho_{0}[\nu]\) is not the exact regular ground state. The shadow Born-Oppenheimer potential, \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\) in Eq. (26), can be seen as a generalized Harris-Foulkes functional [82; 83] for orbital-dependent Kohn-Sham corrections. However, because \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\) is given as a variationally optimized ground state of a shadow matrix energy function, and because \(\nu\) appears as a dynamical variable within the extended Lagrangian formulation, no partial derivatives, \(\partial\nu/\partial\mathbf{R}_{I}\), appear in the force expression. In contrast to a Harris-Foulkes expression, we can therefore calculate forces, and these forces are exact for \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\). The kernel \(\mathcal{K}\) in Eq. (32) is defined as the inverse Jacobian of the residual in Eqs. (30) and (29) and therefore acts as a Newton step in an iterative solution of a system of nonlinear equations, i.e. of the residual matrix-function equation \(\varrho_{0}[\nu]-\nu=0\). The dynamical matrix \(\nu\) therefore behaves as if it would oscillate around a much closer approximation to the exact ground state, \(\varrho_{0}\), than \(\varrho_{0}[\nu]\), because \[\mathcal{K}\left(\varrho_{0}[\nu]-\nu\right)=-(\varrho_{0}-\nu)+\mathcal{O}\left(\|\varrho_{0}[\nu]-\nu\|^{2}\right). \tag{35}\] Unfortunately, it is expensive to calculate the exact kernel, and instead we need to use some approximation in the integration of the electronic degrees of freedom. Either a scaled delta function, \(\mathcal{K}\approx-c\mathcal{I}\), with \(c\in[0,1]\), can be used, or a more accurate low-rank Krylov subspace approximation, which we will present below. ### Integrating the equations of motion To integrate the equations of motion, Eqs. (31) and (32), a modified leapfrog velocity Verlet scheme can be used [37; 38; 39], which includes an additional dissipative term in the integration of the extended electronic degrees of freedom. This additional term breaks the time-reversal symmetry to some chosen higher odd order in the integration time step, \(\delta t\), which dampens the accumulation of numerical noise that otherwise could cause instabilities in a perfectly reversible integration. In this way the evolution of the electronic degrees of freedom stays synchronized with the dynamics of the nuclear motion. The modified leapfrog velocity Verlet integration scheme for the integration of the nuclear and electronic degrees of freedom is given by \[\begin{split}&\dot{\mathbf{R}}(t+\tfrac{\delta t}{2})=\dot{\mathbf{R}}(t)+\frac{\delta t}{2}\ddot{\mathbf{R}}(t),\\ &\mathbf{R}(t+\delta t)=\mathbf{R}(t)+\delta t\dot{\mathbf{R}}(t+\tfrac{\delta t}{2}),\\ &\nu(t+\delta t)=2\nu(t)-\nu(t-\delta t)+\delta t^{2}\ddot{\nu}(t)\\ &\qquad\qquad\qquad+\alpha\sum_{k=0}^{k_{\text{max}}}c_{k}\nu(t-k\delta t),\\ &\dot{\mathbf{R}}(t+\delta t)=\dot{\mathbf{R}}(t+\tfrac{\delta t}{2})+\frac{\delta t}{2}\ddot{\mathbf{R}}(t+\delta t).\end{split} \tag{36}\] The term with \(\alpha\) in the integration of \(\nu(t)\) is the additional damping term, where the coefficients, \(\alpha\) and \(\{c_{k}\}_{k=0}^{k_{\text{max}}}\), as well as a dimensionless constant, \(\kappa=\delta t^{2}\omega^{2}\), have been optimized for various values of \(k_{\text{max}}\) and are given in Ref. [37]. In the initial time step, \(\nu(t_{0}-k\delta t)\) for \(k=0,1,\ldots,k_{\text{max}}\) are all set to the fully converged regular Born-Oppenheimer ground state density matrix, \(\varrho_{0}\); that is, at \(t_{0}\) we set \(\nu(t_{0}-k\delta t)=\varrho_{0}\) for \(k=0,1,\ldots,k_{\text{max}}\). A reasonably well-converged iterative self-consistent-field optimization is thus required, but only in the first time step.
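A schematic sketch of a single integration step of Eq. (36) is given below; the `force` and `nu_ddot` callbacks stand in for Eqs. (31) and (32), and the damping coefficients for \(k_{\text{max}}=5\) are the values tabulated in Ref. [37], quoted here for illustration and best verified against that reference.

```python
import numpy as np

# Damping coefficients for k_max = 5 as tabulated in Ref. [37]
# (quoted here for illustration); kappa = dt^2 * omega^2.
KAPPA, ALPHA = 1.82, 0.018
C = np.array([-6.0, 14.0, -8.0, -3.0, 4.0, -1.0])

def xlbomd_step(R, V, nu_hist, dt, masses, force, nu_ddot):
    """One modified leapfrog velocity Verlet step, Eq. (36).
    `nu_hist` holds [nu(t), nu(t - dt), ..., nu(t - k_max*dt)];
    `force(R, nu)` returns -grad_R U_BO+U(R, nu) at constant nu, Eq. (31),
    and `nu_ddot(R, nu)` evaluates Eq. (32)."""
    nu = nu_hist[0]
    A = force(R, nu) / masses[:, None]        # accelerations at time t
    V_half = V + 0.5 * dt * A                 # half-step velocity update
    R_new = R + dt * V_half                   # position update
    nu_new = (2.0 * nu_hist[0] - nu_hist[1]
              + dt**2 * nu_ddot(R, nu)
              + ALPHA * sum(c * n for c, n in zip(C, nu_hist)))  # damping
    A_new = force(R_new, nu_new) / masses[:, None]
    V_new = V_half + 0.5 * dt * A_new         # second half-step velocity update
    nu_hist = [nu_new] + nu_hist[:-1]         # shift the history window
    return R_new, V_new, nu_hist
```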
The modified Verlet integration scheme above works well, without any significant drift in the constant of motion on time scales relevant for quantum-based Born-Oppenheimer molecular dynamics. Several alternative integration schemes for XL-BOMD have also been proposed and analyzed [84; 85; 86; 87; 45; 88], but they will not be used in this article. ### Krylov subspace approximation of the kernel A key challenge in the integration of the electronic degrees of freedom, Eq. (36), is the calculation of \(\ddot{\nu}(t)\), which is given by Eq. (32). By using a low-rank Krylov-subspace approximation [49] of the kernel, \(\mathcal{K}\), adapted to the density-matrix formalism [69; 89], we can approximate \(\ddot{\nu}(t)\) as \[\begin{split}&\ddot{\nu}=-\omega^{2}\mathcal{K}\left(\varrho_{0}[\nu]-\nu\right)\\ &\approx-\omega^{2}\sum_{i,j=1}^{m}v_{i}g_{ij}\langle w_{j},(\varrho_{0}[\nu]-\nu)\rangle.\end{split} \tag{37}\] The matrices \(v_{i}\), \(w_{i}\in\mathbb{R}^{N\times N}\) and \(g\in\mathbb{R}^{m\times m}\) are based on a rank-\(m\) Krylov subspace approximation and are generated through Algorithm 1. We use the matrix inner product notation \(\langle v_{i},v_{j}\rangle=\text{tr}[v_{i}^{T}v_{j}]\). The algorithm requires the calculation of the perturbation in the density matrix, \(\partial\varrho_{0}[\nu+\lambda v_{m}]/\partial\lambda\) at \(\lambda=0\). This density matrix response can be calculated through the intermediate perturbation to first order in \(\lambda\) of the Kohn-Sham Hamiltonian, i.e. \(h_{0}+\lambda h_{1}\approx h[\nu+\lambda v_{m}]\), which can be performed with regular Rayleigh-Schrödinger perturbation theory when \(T_{e}=0\), or for fractional occupation numbers when \(T_{e}>0\) [49; 90].
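The following minimal sketch illustrates one plausible realization of the rank-\(m\) construction behind Eq. (37). The Gram-Schmidt orthogonalization of the Krylov directions and the least-squares choice \(g=O^{-1}\), with \(O_{ij}=\langle w_{i},w_{j}\rangle\), are assumptions patterned on Refs. [49; 69], and `density_response` is a placeholder for the perturbation calculation described above; the exact procedure is given by the paper's Algorithm 1.

```python
import numpy as np

def kernel_times_residual(nu, rho0_of_nu, density_response, m):
    """Rank-m Krylov approximation of K (rho_0[nu] - nu) in Eq. (37).
    The Jacobian action, Eq. (29), is probed as
    J v = density_response(nu, v) - v, where density_response returns
    d rho_0[nu + l*v] / dl at l = 0. Multiply the result by -omega^2
    to obtain the approximation of nu_ddot, Eq. (32)."""
    inner = lambda a, b: np.trace(a.T @ b)      # <a, b> = tr[a^T b]
    r = rho0_of_nu(nu) - nu                     # residual matrix function
    vs, ws = [], []
    v = r / np.sqrt(inner(r, r))                # first Krylov direction
    for _ in range(m):
        w = density_response(nu, v) - v         # w_i = J v_i
        vs.append(v); ws.append(w)
        v = w.copy()                            # candidate next direction
        for u in vs:                            # Gram-Schmidt vs. earlier v's
            v -= inner(u, v) * u
        nrm = np.sqrt(inner(v, v))
        if nrm < 1e-12:                         # subspace is exhausted
            break
        v /= nrm
    O = np.array([[inner(wi, wj) for wj in ws] for wi in ws])
    g = np.linalg.inv(O)                        # g = O^{-1} (least squares)
    coeffs = g @ np.array([inner(w, r) for w in ws])
    return sum(c * v for c, v in zip(coeffs, vs))
```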
## V Simulation examples The combined XL-BOMD and DFT+U approach was implemented using the SCC-DFTB LATTE software package, and the simulations follow the same XL-BOMD scheme as in previous work, including the propagation of the dynamical matrix variables. What differ in our simulation examples are the shadow energy functional, \(\mathcal{F}_{\text{KS+U}}[\varrho,\nu]\) in Eq. (21), the corresponding shadow potential, \(\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)\) in Eq. (26), and the force term in Eq. (34). ### Nitromethane Figure 1 shows a combined SCC-DFTB+U and XL-BOMD microcanonical (NVE) simulation of liquid nitromethane, (CH\({}_{3}\)NO\({}_{2}\))\({}_{7}\), where the Hubbard U parameter is set to 0 eV or 2 eV. The fluctuations in the total energy around their average value and the residue given by the Frobenius norm of the density matrix residual function, \(\|\varrho[\nu]s-\nu s\|_{\text{F}}=\|\varrho[\nu]s-x\|_{\text{F}}\), follow each other closely for the two cases, as is shown in the upper panel a) and the lower panel c). The main difference is the size of the electronic HOMO-LUMO energy gap shown in the mid panel b), which is shifted by about 2 eV. The total energy remains stable, with no visible drift. While the fluctuations in the total energy behave in the same way around their average values, the total energy is shifted. This is seen in Fig. 2, where an increased Hubbard U leads to a shift in the total energy. In this figure we also see how the amplitude of the total energy fluctuations for the Verlet integration scheme scales approximately as \(\delta t^{2}\), i.e. the amplitude is increased by a factor of 4 as we double the size of the integration time step from \(\delta t=0.25\) fs to \(\delta t=0.50\) fs. Also the size of the residual error, \(\|\varrho[\nu]s-\nu s\|_{\text{F}}\), which provides a measure of the difference from the exact regular ground state solution, scales quadratically with the integration time step (not shown). The error in the potential energy surface scales with the square of the residual, i.e., \(\|\mathcal{U}_{\text{BO+U}}(\mathbf{R},\nu)-U_{\text{BO+U}}(\mathbf{R})\|\propto\|\varrho[\nu]s-\nu s\|^{2}\), and the error in the sampling of the potential energy surface therefore scales as \(\delta t^{4}\) [53; 69]. This example demonstrates the ability of the combined KS-DFT+U and XL-BOMD simulation scheme to alter the size of the HOMO-LUMO gap, while providing stable molecular trajectories. The ability to tune the gap can be of significant importance if we need to modify the response properties of a material.
For approximate DFT methods like SCC-DFTB or semi-empirical quantum-chemistry methods [58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68], the molecular polarizability, which may affect the long-range Coulomb interactions between polarized molecules, could be tuned by modifying the Hubbard-U parameter using the DFT+U correction. ### UO\({}_{2}\) Figure 3 illustrates the outcomes of microcanonical (NVE) simulations of a 96-atom supercell (with periodic boundary conditions) of nuclear fuel, UO\({}_{2}\), employing the combined SCC-DFTB+U and XL-BOMD simulation approach with a Hubbard U = 2 eV. The simulations are performed for two different time steps, \(\delta t=0.25\) fs and \(\delta t=0.5\) fs. Without a Hubbard-U parameter, UO\({}_{2}\) is metallic, lacking an electronic energy gap, and does not match experimental observations, where a gap of about 2 eV is seen. The SCC-DFTB+U parameterization was fitted to first-principles KS-DFT calculations [111]. The top panel presents fluctuations in the total energy, shifted such that the initial total energy is set to 0. There is no visible systematic drift, and we observe an approximate \(\delta t^{2}\) scaling, i.e. the amplitude increases by a factor of 4 as the time step is doubled in size. The middle panel displays the size of the electronic energy gap, which oscillates near 2 eV, close to the chosen Hubbard-U value. The bottom panel depicts the Frobenius norm of the matrix residual function, which is on the order of \(10^{-5}\). This residue represents the difference from the exact ground state solution, equivalent to a self-consistency error in a regular Born-Oppenheimer simulation. As the integration time step is halved, the size of the residual is reduced by a factor of 4, demonstrating the approximate \(\delta t^{2}\) scaling of the residual error. As discussed above, this gives an error in the potential energy surface that scales as \(\delta t^{4}\) [47; 53]. Figure 1: Combined SCC-DFTB+U XL-BOMD NVE simulation (with periodic boundary conditions) of liquid nitromethane (CH\({}_{3}\)NO\({}_{2}\))\({}_{7}\) with a Hubbard U set to 0 eV or 2 eV. The upper panel a) shows the fluctuations in the total energy (potential + kinetic) around the average. The statistical temperature was around 200 K with an integration time step \(\delta t=0.25\) fs. The residue in the lower panel c) was given by the Frobenius norm of the density matrix residual function, \(\|\varrho[\nu]s-\nu s\|_{\text{F}}\). The mid panel b) shows the fluctuation of the electronic HOMO-LUMO energy gap. ## VI Summary and discussion We have presented a framework for QMD simulations that combines DFT+U and XL-BOMD. In this way we have been able to reduce the computational cost of QMD simulations also for systems with electron correlation effects beyond the reach of regular KS-DFT based on the local density or generalized gradient approximations. With the extended Lagrangian formulation this is achieved without requiring an iterative self-consistent-field optimization of the electronic ground state prior to the force evaluations, which is necessary in regular direct Born-Oppenheimer molecular dynamics simulations.
The method provides accurate and stable molecular trajectories, while the computational cost per time step is drastically reduced by avoiding the iterative SCF optimization that normally is required prior to each force evaluation in a regular Born-Oppenheimer simulation. The basic idea behind our approach can be traced back to backward error analysis, or a shadow Hamiltonian approach [76; 77; 78; 79; 112]. This is a conceptually simple but highly powerful idea. Instead of calculating _approximate_ solutions for an underlying _exact regular_ Born-Oppenheimer potential, we do the opposite: we calculate the _exact_ electron density, energies, and forces, but for an underlying _approximate shadow_ Born-Oppenheimer potential. In this way the calculated forces are conservative with respect to the approximate shadow potential and generate accurate molecular trajectories with long-term energy stability. Here we have shown how this concept can be extended beyond regular KS-DFT to include also orbital-dependent DFT+U corrections. Our combined DFT+U and XL-BOMD framework for shadow QMD simulations was demonstrated with an implementation using the SCC-DFTB LATTE software package for liquid nitromethane and solid nuclear fuel. The combined DFT+U and XL-BOMD approach should be applicable also to a broad range of other methods. The theory in this paper may also demonstrate how similar formulations can be made for other electronic structure methods going beyond regular KS-DFT. Of particular interest are self-interaction corrections [54; 55; 56; 57]. ## VII Acknowledgements This work is supported by the U.S. Department of Energy Office of Basic Energy Sciences (FWP LANLE8AN) and by the U.S. Department of Energy through the Los Alamos National Laboratory. This research was also supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy, Contract No. 892333218NCA000001. Discussions with Heather Kulik, Benjamin Hourahine, and Joshua Finkelstein are gratefully acknowledged. Figure 3: DFTB+U XL-BOMD based NVE simulation of a 96-atom UO\({}_{2}\) supercell using a Hubbard U = 2 eV. The upper panel a) shows the shifted fluctuations in the total energy and the mid panel b) the size of the electronic HOMO-LUMO gap. The residue in the lower panel c) was given by the Frobenius norm of the density matrix residual function, \(\|\varrho[\nu]s-\nu s\|_{\rm F}\). Two different integration time steps were used. Figure 2: Combined SCC-DFTB+U XL-BOMD NVE simulations of liquid nitromethane (CH\({}_{3}\)NO\({}_{2}\))\({}_{7}\) with a Hubbard U set to 0 eV or 2 eV for a time step of \(\delta t=0.25\) fs and \(\delta t=0.50\) fs. The total energy is shifted upwards for U = 2 eV and the amplitude of the fluctuations increases approximately by a factor of 4 as the size of the integration time step is increased by a factor of 2. The statistical temperature fluctuated around 200 K.
2307.00873
End-To-End Prediction of Knee Osteoarthritis Progression With Multi-Modal Transformers
Knee Osteoarthritis (KOA) is a highly prevalent chronic musculoskeletal condition with no currently available treatment. The manifestation of KOA is heterogeneous and prediction of its progression is challenging. Current literature suggests that the use of multi-modal data and advanced modeling methods, such as the ones based on Deep Learning, has promise in tackling this challenge. To date, however, the evidence on the efficacy of this approach is limited. In this study, we leveraged recent advances in Deep Learning and, using a Transformer approach, developed a unified framework for the multi-modal fusion of knee imaging data. Subsequently, we analyzed its performance across a range of scenarios by investigating multiple progression horizons -- from short-term to long-term. We report our findings using a large cohort (n=2421-3967) derived from the Osteoarthritis Initiative dataset. We show that structural knee MRI allows identifying radiographic KOA progressors on par with multi-modal fusion approaches, achieving an area under the ROC curve (ROC AUC) of 0.70-0.76 and Average Precision (AP) of 0.15-0.54 in 2-8 year horizons. Progression within 1 year was better predicted with a multi-modal method using X-ray, structural, and compositional MR images -- ROC AUC of 0.76(0.04), AP of 0.13(0.04) -- or via clinical data. Our follow-up analysis generally shows that prediction from the imaging data is more accurate for post-traumatic subjects, and we further investigate which subject subgroups may benefit the most. The present study provides novel insights into multi-modal imaging of KOA and brings a unified data-driven framework for studying its progression in an end-to-end manner, providing new tools for the design of more efficient clinical trials. The source code of our framework and the pre-trained models are made publicly available.
Egor Panfilov, Simo Saarakkala, Miika T. Nieminen, Aleksei Tiulpin
2023-07-03T09:10:57Z
http://arxiv.org/abs/2307.00873v1
# End-To-End Prediction of Knee Osteoarthritis Progression With Multi-Modal Transformers ###### Abstract Knee Osteoarthritis (KOA) is a highly prevalent chronic musculoskeletal condition with no currently available treatment. The manifestation of KOA is heterogeneous and prediction of its progression is challenging. Current literature suggests that the use of multi-modal data and advanced modeling methods, such as the ones based on Deep Learning, has promise in tackling this challenge. To date, however, the evidence on the efficacy of this approach is limited. In this study, we leveraged recent advances in Deep Learning and, using a Transformer approach, developed a unified framework for the multi-modal fusion of knee imaging data. Subsequently, we analyzed its performance across a range of scenarios by investigating multiple progression horizons - from short-term to long-term. We report our findings using a large cohort (n=2421-3967) derived from the Osteoarthritis Initiative dataset. We show that structural knee MRI allows identifying radiographic KOA progressors on par with multi-modal fusion approaches, achieving an area under the ROC curve (ROC AUC) of 0.70-0.76 and Average Precision (AP) of 0.15-0.54 in 2-8 year horizons. Progression within 1 year was better predicted with a multi-modal method using X-ray, structural, and compositional MR images - ROC AUC of 0.76(0.04), AP of 0.13(0.04) - or via clinical data. Our follow-up analysis generally shows that prediction from the imaging data is more accurate for post-traumatic subjects, and we further investigate which subject subgroups may benefit the most. The present study provides novel insights into multi-modal imaging of KOA and brings a unified data-driven framework for studying its progression in an end-to-end manner, providing new tools for the design of more efficient clinical trials. The source code of our framework and the pre-trained models are made publicly available. ## Introduction Knee osteoarthritis (KOA) is a chronic musculoskeletal disease affecting millions of people worldwide [1]. Progression of KOA results in degeneration of knee joint's bony and soft tissues, which is often accompanied by worsening in symptoms [2]. Personalized prediction of structural KOA trajectory is important for multiple reasons, including early interventions and development of disease-modifying drugs, however, it is challenging due to high disease heterogeneity and rather poor understanding of KOA phenotypes [3, 4, 5]. Conventionally, the status of the suspected knees is assessed clinically from radiographic images. Weight-bearing X-ray images visualize alterations in bones' shape (e.g. osteophytes) and texture (e.g. subchondral sclerosis) with high contrast, as well as provide indirect measurements of cartilage and menisci degeneration via apparent joint space [6]. These are the primary joint changes, and they are highly consistent across subjects with KOA. To date, the most established KOA severity scoring system - Kellgren-Lawrence grading (KLG) [7] - is based on radiographic imaging. Studies published during the past decade have shown that many soft tissue changes, e.g. in cartilage, menisci, ligaments, synovial and adipose tissues, are also associated with OA onset and progression [8, 9, 10, 11, 12]. They are not visible in radiographs but can be detected and tracked using Magnetic Resonance Imaging (MRI), which enables three-dimensional imaging of the joint. 
Knee MRI studies typically include several MR imaging protocols with complementary contrasts, and they target morphological factors in major joint tissues, such as the severity of osteophytes, cartilage thickness, and menisci and ligament tears. The MRI protocols can be divided into structural - targeting tissue morphology - and compositional MRI - reflecting microstructure and biochemical content. The most apparent morphological changes in soft tissues have been incorporated into advanced grading schemes, such as MOAKS [13]; however, utilization of such schemes for studying KOA progression remains limited [14, 15]. Quantitative MRI (qMRI) protocols, such as \(T_{2}\)-mapping, have been getting increased attention due to their sensitivity to compositional tissue changes (e.g. collagen anisotropy in cartilage and meniscus in early KOA [16, 17, 18, 19], fatty infiltration of muscles [20]) and their considerable technology readiness level [21, 22]. Overall, despite the rich information provided by multi-sequence MRI in addition to radiography, and its sensitivity to early tissue changes, the real prognostic utility of MRI and, specifically, qMRI in KOA remains understudied [23]. The vast majority of prior art on MRI in KOA progression prediction operated with limited sample sizes and highly interpretable and localized imaging biomarkers, which are typically extracted via image segmentation and basic radiomics [24, 25]. Such conventional biomarkers are designed using a "bottom-up" approach, primarily describing apparent changes that occur in major joint tissues, particularly, in cartilage. As a result, the role of less affected tissues remains unstudied, and it is gaining attention only recently [10, 11, 26]. Another limitation of many prior works is that they perform aggressive subject exclusion for the definition of groups, omitting the study participants with mixed and inconsistent findings. This process allows studying the sensitivity of developed biomarkers in the discrimination of small-scale groups, while severely compromising/underestimating their specificity (i.e. generalization) [27]. While this knowledge lays the foundation for clinical management of KOA subjects by fine-grained differentiation of disease progression, it does not necessarily answer the question of how the disease will progress in the future in a particular subject from a general population. Modern computational methods, such as the ones based on Deep Learning (DL), have made possible the analysis of large-scale imaging studies and the development of new personalized prediction models [28, 29]. With DL, the design of imaging biomarkers can be seen as a "top-down" process. Here, the informative features that are discriminative w.r.t. the defined target are first automatically derived in a data-driven manner [30]. Subsequently, the learned features and their interaction are analyzed from the model by factorization of model activations into interpretable concepts defined by a human expert. While interpretability of DL models remains challenging [31, 32], such methods make it possible to understand the peak performance of certain data in the considered task, long before clinically applicable biomarkers are designed [33]. In the KOA domain, Tiulpin et al [34] have previously shown superior performance of DL applied to raw radiographic images in comparison to demographic variables and gold-standard KLG in the task of radiographic progression prediction. Studies on MRI data analysis in this scope, however, are very sparse.
Wang et al [35] demonstrated high performance of DL with two MRI protocols in predicting whether the knee will undergo total knee replacement (TKR) within 9 years from the exam. In the same problem, but at a 5-year horizon, Tolpadi et al [36] contrasted radiographic and MR images, showing a slight advantage of the latter modality. While TKR is regulatory-approved as a KOA endpoint, it is not inherent to the disease, and we argue that it is a noisy progression surrogate. To this end, the recent work of Panfilov et al [37] compared X-ray images and structural MRI in the prediction of radiographic KOA progression (increase of KLG, as in the work of Tiulpin et al [34]) within 8 years. All in all, KOA forecasting over the short term, which is more valuable for clinical trials, has not been thoroughly addressed. On top of that, the complementary value of clinically accessible imaging modalities, especially compositional MRI, in the identification of progressors remains an open question. Figure 1: Schematic overview of the proposed framework. Structural and compositional features imaged by several presumably complementary modalities are fused in a deep learning model, based on a composition of Convolutional Neural Networks (CNNs) and Transformers (TRFs). The imaging biomarkers are optimised to discriminate the knees that will undergo radiographic osteoarthritis (OA) progression versus the ones that will not progress within a certain time interval. Multiple intervals are considered to clarify the value of imaging modality in prediction of both rapid and slow OA progression. The models are additionally developed with each of the present modalities independently. To date, the majority of DL-based multi-modal methods in medical image computing either perform aggressive data dimensionality reduction [38, 39] or multi-stage late fusion [40, 34], where the modalities are first processed separately and then combined in a second-level shallow model. Both considerations are applied due to the high memory demand in processing typically large medical images. Accordingly, both of the aforementioned techniques limit the model's capabilities to derive rich and interrelated features. Lately, thanks to advances in computational platforms and DL methods, unified attention-based methods, such as Transformers [41, 42], were developed. Transformers have opened a possibility for holistic modeling in diverse multi-modal scenarios, with little to no modification of the original data [43, 44]. In medical imaging, they were shown to often provide higher accuracy, particularly when used with pre-training or in a high-volume data setting [45, 46]. In this study, we introduce a multi-modal DL-based method for predicting radiographic KOA progression (hereinafter referred to as "KOA progression") and investigate the value of various modalities in this task. The contributions of our work are three-fold: * We propose a new end-to-end method to study KOA progression from multi-modal imaging data. We apply the method for prediction of rapid, middle-, and long-term radiographic progression, where we clarify the predictive value of imaging in the task and establish the new baseline models. * We comprehensively analyze the complementary value of common imaging modalities (X-ray, structural, and compositional MRI) with respect to the considered outcomes. Our study is the first to use the quantitative \(T_{2}\) maps of MRI in an end-to-end predictive model, and among the few to study compositional MRI in a large-scale setting.
* We analyze the efficacy of the best-performing models across different subject sub-groups and discuss the directions for further development of top-down methods for KOA progression prediction. ## Results ### Training and testing datasets Five observation intervals were considered (0-12/24/36/48/96 months) to derive 5 independent datasets from the Osteoarthritis Initiative (OAI) database. The complete sample selection procedure is presented in Figure S1. The most common reasons for exclusion were patient dropouts and missing clinical or imaging data. The progression target was defined based on the change in KLG within the considered interval. The knees with no recorded change in KLG were assigned to the "control" group and the ones with observed worsening of KLG - to the "progressor" group. Following the popular research practice, grades KLG0 and KLG1 were pooled together, as the corresponding change is often not considered clinically significant or reliable (KLG1 is defined as "doubtful OA") [47, 34, 48]. After grade pooling, a small number of subjects still showed an improvement in KLG, with or without accompanying worsening. To avoid ambiguity in the definition of disease progression, those subjects were excluded from the study. The final sample sizes were 3967, 3735, 3585, 3448, and 2421 for 12m, 24m, 36m, 48m, and 96m intervals, respectively. The ratio of progressors to the total number of subjects was notably higher with longer observation periods - 5.7, 8.4, 11.9, 14.5, and 27.7% for 12m, 24m, 36m, 48m, and 96m, respectively. The resulting datasets were split into training, validation, and testing subsets. In the OAI, the subjects were observed at multiple data acquisition sites. All the subjects from the site "D" were assigned to the test set. While the acquisition protocols in the OAI are supposed to be standardized between the sites, a small domain shift between the images from different sites is still present. This subject allocation scheme allowed us to additionally model the potential discrepancy between training-time and testing-time images and, thus, make the evaluation more objective. The testing subsets' sample sizes were 1016, 933, 896, 867, and 626 for 12m, 24m, 36m, 48m, and 96m targets, respectively, which is 25-26% of the total sample. The remaining samples were split following a 5-fold stratified cross-validation scheme (\(\approx\)80/20%) while balancing the ratio of controls and progressors in the training and the validation subsets for each split (with no subject overlap between the training and validation sets).
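As an illustration of this labeling and splitting logic, a schematic pandas/scikit-learn sketch follows; the column names (`klg_bl`, `klg_fu`, `site`) are hypothetical placeholders rather than actual OAI variable names, and one knee per subject is assumed, which makes knee-level stratified folds subject-disjoint.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

def build_progression_dataset(df):
    """Label knees as control (0) / progressor (1) from the change in KLG,
    pooling KLG0 with KLG1 and excluding knees that show improvement."""
    d = df.dropna(subset=["klg_bl", "klg_fu"]).copy()
    pool = lambda k: k.clip(lower=1)        # pool KLG0 and KLG1 together
    delta = pool(d["klg_fu"]) - pool(d["klg_bl"])
    d = d[delta >= 0]                       # drop ambiguous "improved" knees
    d["progressor"] = (delta > 0).astype(int)
    return d

def split_oai(d, n_folds=5, seed=0):
    """Hold out acquisition site D for testing; 5-fold stratified CV,
    balancing the control/progressor ratio, on the remaining sites."""
    test = d[d["site"] == "D"]
    trainval = d[d["site"] != "D"]
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    folds = list(skf.split(trainval, trainval["progressor"]))
    return trainval, test, folds
```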
For the 24m-48m horizons, similar findings were observed; however, the predictive power of knee history and WOMAC score decreased, and the additional value of KLG, given the other risk factors, was marginal. Interestingly, for the 96m horizon, the presence of knee alteration history and WOMAC (model _C3_), and also KLG (model _C4_), yielded a notable increase in ROC AUC (0.03 [_p=0.031_] and 0.05 [_p=0.008_], respectively) and AP (0.05 [_p=0.019_] and 0.05 [_p=0.023_], respectively). Towards longer horizons, the average performance of all models grew faster in AP than the prevalence rate, suggesting that the identification of long-term progressors is more feasible than that of rapid ones. Taking the observed performance benefits of KLG into account, the purely non-imaging model _C3_ was used as a baseline in the subsequent analysis.

#### Raw X-ray images

End-to-end models trained on raw radiographic images (XR) showed moderate performance at all horizons, as summarized in Table 3. Compared to the baseline, the models were inferior in both metrics at 12m and comparable at 24m. From 36m onwards, the models showed higher scores than the baseline, reaching statistically significant (_p<0.021_) improvements of 0.08 in AP for the 48-96m targets.

#### MRI data

The performance of MRI-based models varied depending on whether a structural (DESS/TSE) or compositional (\(T_{2}\)map) protocol was used (see Table 3). Structural modalities showed improved performance in ROC AUC - comparable to _C3_ and higher than \(X\) at 12m, and generally higher than both from 24m onward. The most notable increases in average AUC were observed for the 24m and 96m horizons. The \(T_{2}\)map-based model _M3_ showed similar ROC AUCs to the XR one. In terms of AP, all models were similar to \(X\), except for 48m (where the scores were marginally lower by 0.02-0.03) and 96m (where they improved the mean score by a notable 0.05-0.07). Of all the observed improvements, the significant ones were found mostly for the 96m prediction horizon.

\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline \multicolumn{1}{c|}{**Target \(>\)**} & **12m** & **24m** & **36m** & **48m** & **96m** \\ \hline \# subjects/knees, of which: & 3967 & 3735 & 3585 & 3448 & 2421 \\ _- controls_ & 3740 & 3420 & 3160 & 2947 & 1751 \\ _- progressors_ & 227 & 315 & 425 & 501 & 670 \\ \hline Age & 61.1 (9.2) & 61.1 (9.1) & 60.9 (9.1) & 61.0 (9.1) & 60.1 (8.8) \\ BMI & 28.5 (4.8) & 28.4 (4.7) & 28.4 (4.7) & 28.4 (4.7) & 28.1 (4.6) \\ Sex: _F, M_ & 2314, 1653 & 2176, 1559 & 2091, 1494 & 2007, 1441 & 1400, 1021 \\ \hline WOMAC: _[0-10], (10-100]_ & 2534, 1433 & 2401, 1334 & 2309, 1276 & 2237, 1211 & 1644, 777 \\ KLG: _0, 1, 2, 3_ & 1555, 750, & 1454, 711, & 1403, 683, & 1354, 672, & 1154, 576, \\ Prior injury: _no, yes_ & 2907, 1060 & 2722, 1013 & 2613, 972 & 2527, 921 & 1782, 639 \\ Prior surgery: _no, yes_ & 3543, 424 & 3322, 413 & 3209, 376 & 3087, 361 & 2189, 232 \\ \hline \hline \end{tabular}
\end{table} Table 1: Description of the created datasets. Only one knee per subject was selected. The values for Age and BMI represent the variable mean and standard deviation.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline & \multicolumn{1}{c|}{**Target \(>\)**} & **12m** & **24m** & **36m** & **48m** & **96m** \\ \hline **Model** & \multicolumn{1}{c|}{**Data**} & \multicolumn{5}{c}{**ROC AUC @ target**} \\ \hline C1 & age, sex, BMI & 0.64\({}_{\pm 0.04}\) & 0.62\({}_{\pm 0.04}\) & 0.63\({}_{\pm 0.03}\) & 0.65\({}_{\pm 0.03}\) & 0.67\({}_{\pm 0.02}\) \\ C2 & + KLG & 0.63\({}_{\pm 0.04}\) & 0.65\({}_{\pm 0.03}\) & 0.66\({}_{\pm 0.03}\) & 0.68\({}_{\pm 0.03}\) & 0.74\({}_{\pm 0.02}\) \\ C3 & + Surg, Inj, WOMAC & 0.71\({}_{\pm 0.04}\) & 0.69\({}_{\pm 0.04}\) & 0.66\({}_{\pm 0.03}\) & 0.68\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.02}\) \\ C4 & + Surg, Inj, WOMAC, KLG & 0.72\({}_{\pm 0.04}\) & 0.70\({}_{\pm 0.03}\) & 0.68\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.75\({}_{\pm 0.02}\) \\ \hline **Model** & \multicolumn{1}{c|}{**Data**} & \multicolumn{5}{c}{**AP @ target**} \\ \hline C1 & age, sex, BMI & 0.06\({}_{\pm 0.01}\) & 0.08\({}_{\pm 0.01}\) & 0.17\({}_{\pm 0.02}\) & 0.19\({}_{\pm 0.02}\) & 0.36\({}_{\pm 0.03}\) \\ C2 & + KLG & 0.06\({}_{\pm 0.01}\) & 0.09\({}_{\pm 0.01}\) & 0.17\({}_{\pm 0.02}\) & 0.20\({}_{\pm 0.02}\) & 0.42\({}_{\pm 0.03}\) \\ C3 & + Surg, Inj, WOMAC & 0.13\({}_{\pm 0.04}\) & 0.16\({}_{\pm 0.04}\) & 0.20\({}_{\pm 0.03}\) & 0.22\({}_{\pm 0.03}\) & 0.41\({}_{\pm 0.03}\) \\ C4 & + Surg, Inj, WOMAC, KLG & 0.16\({}_{\pm 0.05}\) & 0.17\({}_{\pm 0.04}\) & 0.20\({}_{\pm 0.03}\) & 0.23\({}_{\pm 0.03}\) & 0.46\({}_{\pm 0.03}\) \\ & _prevalence_ & _0.04_ & _0.06_ & _0.10_ & _0.13_ & _0.25_ \\ \hline \hline \end{tabular}
\end{table} Table 2: Performance of the models based on the widely accessible clinical data. The variables include body mass index (BMI), radiographic severity via Kellgren-Lawrence grade (KLG), history of past injuries (Inj) and surgeries (Surg), and symptomatic and knee function assessment via the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). Prevalence indicates the rate of progressed knees and, accordingly, the performance of a naive classifier. The values show average precision (AP) and area under the ROC curve (ROC AUC) along with the standard errors.

Here, all the MRI models were significantly better than the clinical baseline in both metrics (_p<0.023_). When compared against XR, the structural MRI protocols (_M1_ [DESS] and _M2_ [TSE]) also showed higher performance, both in ROC AUC (_p=0.020_ and _p=0.007_, respectively) and AP (_p=0.138_ and _p=0.017_, respectively). The model _M1_ was also significantly better than the clinical baseline in ROC AUC at 48m (_p=0.030_).

### Multi-modal fusion

To clarify the complementary value of the considered imaging modalities, we performed an exhaustive experimental investigation. Here, three sets of models were developed based on the individual modalities studied earlier: fusion of XR with a single MRI protocol (XR1MR1), two MRI protocols (MR2), and XR with two MRI protocols (XR1MR2). The best models selected within each setting are summarized in Table 4, and the complete results including all models can be found in Table S1.

#### Fusion of MRI sequences

A combination of two MRI modalities resulted in only marginal improvement over the individual structural MR sequences. Particularly, the fusion of DESS and TSE showed an increase in ROC AUC over the individual modalities by 0.03 (_p>0.221_), but only at the 12m horizon.
When either DESS or TSE was used in combination with the \(T_{2}\)map, no clear and consistent differences were observed compared to just the structural MR sequence. Against the individual \(T_{2}\)map modality, the models yielded an increase of 0.02-0.04 in ROC AUC, which was significant for model _F5_ at the 36m target (_p=0.010_) and insignificant elsewhere (_p>0.057_). The same models were able to marginally improve the AP scores at the 12m horizon by 0.02-0.03 (_p>0.375_) over the individual TSE and \(T_{2}\)map models, but not above the DESS model. Otherwise, no noticeable difference in AP was observed. Among the MR2 models, DESS with TSE was marginally better at the 12-24m horizons in ROC AUC, while DESS with \(T_{2}\)map was more dominant at 36-48m in both metrics.

#### Fusion of multiple imaging modalities

A combination of radiographic and single-protocol MRI images generally resulted in performance similar to the latter, yet a few notable improvements were observed in the ROC AUC space. Namely, the _F1_ model (XR, DESS) showed an increase of 0.11 (_p=0.039_) and 0.05 (_p=0.106_) in the score at the 12m horizon compared to the individual XR and MRI DESS modalities, respectively. With the model _F3_ (XR, \(T_{2}\)map), gains of 0.03 (_p=0.103_) and 0.02 (_p=0.177_) in ROC AUC were observed over _M3_ at the 48m and 96m horizons. Several performance drops were observed, for the model _F3_ at the 12m horizon (by 0.08) and for all the models _F1-F3_ at the 24m horizon (by 0.01-0.04). In terms of AP, the _F1_ model showed a marginal gain of 0.02 (_p>0.238_) for the 48m and 96m targets over the model _M1_. The models _F2_ (XR, TSE) and _F3_ yielded a rather consistent performance drop of 0.01-0.04 at all targets compared to the corresponding models _M2_ and _M3_. In the setting with 3 modalities (XR and two MR sequences), the scores were largely similar to the XR1MR1 models.
However, both ROC AUCs and APs recovered to the highest level across the included individual modalities at the 12m-36m horizons. Compared to the corresponding MR2 models, the metrics were also generally similar, with the exceptions being the 12m and 48-96m horizons. At the 12m target, the ROC AUCs further improved over MR2 by 0.01-0.04 (_p>0.090_), which resulted in the model _F7_ being significantly (_p=0.021_) better than the model \(X\), and the model _F8_ - over \(X\) (_p=0.005_) and _M1_ (_p=0.026_). At the 48m and 96m targets, a marginal consistent gain of 0.01 over MR2 was observed in all the XR1MR2 models in both metrics. Overall, the top-performing model was _F8_ (XR, DESS, \(T_{2}\)map), yielding the highest number of statistically significant improvements over the individual clinical and imaging modalities.

\begin{table}
\begin{tabular}{c|l|l|l|l|l|l|l} \hline \hline & & **Target \(>\)** & **12m** & **24m** & **36m** & **48m** & **96m** \\ \hline **Model** & **Data** & **Arch** & \multicolumn{5}{c}{**ROC AUC @ target**} \\ \hline C3 & age, sex, BMI, Surg, Inj, WOMAC & LR & 0.71\({}_{\pm 0.04}\) & 0.69\({}_{\pm 0.04}\) & 0.66\({}_{\pm 0.03}\) & 0.68\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.02}\) \\ X & XR & XR1 & 0.65\({}_{\pm 0.04}\) & 0.68\({}_{\pm 0.04}\) & 0.69\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.73\({}_{\pm 0.02}\) \\ M1 & DESS & MR1 & 0.71\({}_{\pm 0.04}\) & 0.74\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.73\({}_{\pm 0.03}^{c}\) & 0.76\({}_{\pm 0.02}^{cx}\) \\ M2 & TSE & MR1 & 0.71\({}_{\pm 0.04}\) & 0.75\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.78\({}_{\pm 0.02}^{cx}\) \\ M3 & T\({}_{2}\)map & MR1 & 0.69\({}_{\pm 0.04}\) & 0.69\({}_{\pm 0.04}\) & 0.68\({}_{\pm 0.03}\) & 0.68\({}_{\pm 0.03}\) & 0.74\({}_{\pm 0.02}^{c}\) \\ \hline **Model** & **Data** & **Arch** & \multicolumn{5}{c}{**AP @ target**} \\ \hline C3 & age, sex, BMI, Surg, Inj, WOMAC & LR & 0.13\({}_{\pm 0.04}\) & 0.16\({}_{\pm 0.04}\) & 0.20\({}_{\pm 0.03}\) & 0.22\({}_{\pm 0.03}\) & 0.41\({}_{\pm 0.03}\) \\ X & XR & XR1 & 0.10\({}_{\pm 0.03}\) & 0.15\({}_{\pm 0.04}\) & 0.23\({}_{\pm 0.03}\) & 0.30\({}_{\pm 0.04}^{c}\) & 0.49\({}_{\pm 0.03}^{c}\) \\ M1 & DESS & MR1 & 0.12\({}_{\pm 0.04}\) & 0.15\({}_{\pm 0.03}\) & 0.24\({}_{\pm 0.04}\) & 0.27\({}_{\pm 0.03}\) & 0.54\({}_{\pm 0.03}^{c}\) \\ M2 & TSE & MR1 & 0.09\({}_{\pm 0.02}\) & 0.16\({}_{\pm 0.04}\) & 0.21\({}_{\pm 0.03}\) & 0.27\({}_{\pm 0.03}\) & 0.56\({}_{\pm 0.03}^{cx}\) \\ M3 & T\({}_{2}\)map & MR1 & 0.11\({}_{\pm 0.04}\) & 0.16\({}_{\pm 0.04}\) & 0.24\({}_{\pm 0.04}\) & 0.28\({}_{\pm 0.04}\) & 0.54\({}_{\pm 0.03}^{c}\) \\ & _prevalence_ & & _0.04_ & _0.06_ & _0.10_ & _0.13_ & _0.25_ \\ \hline \hline \end{tabular}
\end{table} Table 3: Performance of the single modality models, based on X-ray (XR) or MRI (DESS, TSE, T\({}_{2}\)map). The best non-imaging clinical model is included for reference. The values show average precision (AP) and area under the ROC curve (ROC AUC) along with the standard errors. Statistically significant improvements (_p<0.050_) are marked with superscripts: c - vs. C3, x - vs. X.

#### Fusion of all imaging modalities and clinical data

Lastly, the modalities from the best-performing model _F8_ were combined with the clinical variables in a holistic fusion model \(U\).
Here, the XR1MR2 architecture was extended with an additional shallow fully connected branch to embed the clinical variables (see Figure S2d). The model demonstrated performance similar to or marginally lower than the one without clinical variables, namely, 0.70-0.76 in ROC AUC across the targets and 0.10 (0.02), 0.15 (0.03), 0.23 (0.03), 0.26 (0.03), and 0.55 (0.03) in AP for the 12m, 24m, 36m, 48m, and 96m horizons, respectively (standard errors in parentheses). Interestingly, the model \(U\) was not able to achieve the highest AP at the 12m target, which was previously demonstrated by the _C3_ model.

#### Performance with respect to patient sub-groups

Model performance measured over heterogeneous patient cohorts offers rather limited interpretability and, thus, few actionable insights. To explore which patients may benefit from using certain imaging modalities and predictive models, we analyzed the performance metrics sub-group-wise. Here, we selected only those subjects for whom the labels were available at all the horizons. Next, all the subjects were assigned to one of three groups - "no prior injury or surgery", "prior injury, but no surgery", or "prior surgery". The prevalence rates of progressors in the groups were 0.059, 0.106, and 0.067, respectively. Post-traumatic cases may show distinct imaging findings and are often considered separate phenotypes in the scientific literature [4, 50], hence the separation. Within each of these groups, the subjects were further divided into sub-groups based on the severity of radiographic KOA ("KLG 0-1", "KLG 2", "KLG 3") and the presence of symptoms ("WOMAC 0-10", "WOMAC 10-100"). Within each sub-group, we calculated the performance metrics by averaging them over all the horizons. For AP, to account for the different prevalences across the targets, the metric was calibrated to a fixed prevalence of 0.15 before averaging [51]. The models compared included the individual modalities - clinical, X-ray, and DESS MRI - as well as the top-ranked multi-modal fusion model. The latter was selected via a multi-objective ranking procedure over all horizons and both performance metrics (see the details in Methods).

We first considered the "no prior injury or surgery" group. Here, the overall ROC AUCs were moderate with all the models. The highest performance (AUC=0.65-0.80) was observed in the asymptomatic KLG0/1, as well as the symptomatic KLG2 and KLG3 sub-groups.
The X-ray model was more consistent across the sub-groups but was inferior to the other models for symptomatic KLG2 subjects. All models performed poorly on the asymptomatic KLG3 sub-group (AUC<0.50), which was also the smallest one. In terms of AP, the performance was generally low (AP=0.20-0.55), showing the challenging nature of the OA progression prediction problem. The MRI- (\(M1\)) and fusion-based (\(F8\)) models performed more strongly with asymptomatic KLG0/1, all KLG2, and symptomatic KLG3 subjects.

In the "prior injury, but no surgery" group, the overall performance in ROC AUC was high-to-very-high, with the \(M1\) and \(F8\) models showing an increase of up to 0.10 over the rest (Figure 2b). Here, the imaging models showed high AUC in all the sub-groups. The models using MRI were more accurate at KLG0-2, while the XR model was slightly more accurate at KLG3.

\begin{table}
\begin{tabular}{c|l|l|c|c|c|c|c} \hline & & **Target \(>\)** & **12m** & **24m** & **36m** & **48m** & **96m** \\ \hline **Model** & **Data** & **Arch** & \multicolumn{5}{c}{**ROC AUC @ target**} \\ \hline F1 & XR, DESS & XR1MR1 & 0.76\({}_{\pm 0.03}^{x}\) & 0.72\({}_{\pm 0.04}\) & 0.70\({}_{\pm 0.03}\) & 0.74\({}_{\pm 0.03}^{x}\) & 0.77\({}_{\pm 0.02}^{x}\) \\ F4 & DESS, TSE & MR2 & 0.74\({}_{\pm 0.03}\) & 0.74\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.71\({}_{\pm 0.03}\) & 0.76\({}_{\pm 0.02}^{x}\) \\ F5 & DESS, T\({}_{2}\)map & MR2 & 0.72\({}_{\pm 0.04}\) & 0.74\({}_{\pm 0.03}\) & 0.72\({}_{\pm 0.03}^{x}\) & 0.73\({}_{\pm 0.03}^{x}\) & 0.76\({}_{\pm 0.02}^{x}\) \\ F8 & XR, DESS, T\({}_{2}\)map & XR1MR2 & 0.76\({}_{\pm 0.04}^{xm}\) & 0.75\({}_{\pm 0.03}^{x}\) & 0.70\({}_{\pm 0.03}\) & 0.73\({}_{\pm 0.03}^{x}\) & 0.77\({}_{\pm 0.02}^{x}\) \\ U & _F8 \& C3 vars_ & XR1MR2C1 & 0.71\({}_{\pm 0.04}\) & 0.73\({}_{\pm 0.03}\) & 0.70\({}_{\pm 0.03}\) & 0.72\({}_{\pm 0.03}^{x}\) & 0.76\({}_{\pm 0.02}^{x}\) \\ \hline **Model** & **Data** & **Arch** & \multicolumn{5}{c}{**AP @ target**} \\ \hline F1 & XR, DESS & XR1MR1 & 0.11\({}_{\pm 0.03}\) & 0.15\({}_{\pm 0.03}\) & 0.23\({}_{\pm 0.03}\) & 0.29\({}_{\pm 0.04}^{c}\) & 0.56\({}_{\pm 0.03}^{x}\) \\ F4 & DESS, TSE & MR2 & 0.12\({}_{\pm 0.03}\) & 0.16\({}_{\pm 0.03}\) & 0.20\({}_{\pm 0.03}\) & 0.27\({}_{\pm 0.03}\) & 0.55\({}_{\pm 0.03}^{x}\) \\ F5 & DESS, T\({}_{2}\)map & MR2 & 0.12\({}_{\pm 0.04}\) & 0.16\({}_{\pm 0.03}\) & 0.24\({}_{\pm 0.03}\) & 0.28\({}_{\pm 0.03}\) & 0.54\({}_{\pm 0.03}^{x}\) \\ F8 & XR, DESS, T\({}_{2}\)map & XR1MR2 & 0.13\({}_{\pm 0.04}\) & 0.16\({}_{\pm 0.03}\) & 0.22\({}_{\pm 0.03}\) & 0.27\({}_{\pm 0.03}\) & 0.57\({}_{\pm 0.03}^{x}\) \\ U & _F8 \& C3 vars_ & XR1MR2C1 & 0.10\({}_{\pm 0.02}\) & 0.15\({}_{\pm 0.03}\) & 0.23\({}_{\pm 0.03}\) & 0.26\({}_{\pm 0.03}\) & 0.55\({}_{\pm 0.03}^{x}\) \\ & _prevalence_ & & _0.04_ & _0.06_ & _0.10_ & _0.13_ & _0.25_ \\ \hline \end{tabular}
\end{table} Table 4: Performance of the selected top performing fusion models. Model F1 combines X-ray with a single MRI sequence, F4-F5 - two MRI sequences, F8 - X-ray with two MRI sequences. The values show average precision (AP) and area under the ROC curve (ROC AUC) along with the standard errors. Statistically significant improvements (_p<0.050_) are marked with superscripts: c - vs. C3 (clinical), x - vs. X (XR), m - vs. M1 (DESS). The extended version of this table showing a full factorial analysis of modality fusion can be found in Supplemental Table S1. More details on the architecture of the fusion models are provided in Supplemental S2.
In AP, \(M1\) and \(F8\) were dominant in the same sub-groups as previously (Figure 2e). The model based on the clinical data showed the highest score in the symptomatic KLG0/1 sub-group and was comparable at KLG2, otherwise performing poorly. The X-ray-based model was more accurate towards severe OA stages, particularly at KLG3. Both metrics were notably higher than in the "no prior injury or surgery" subject group, suggesting a clear added value of imaging, particularly MRI, in post-traumatic subjects.

In the "prior surgery" group analysis, all the considered imaging models showed moderate-to-very-high ROC AUCs. Importantly, all the sub-groups here had very small sample sizes. The clinical model was notably inferior in performance, except for the small asymptomatic KLG2 sub-group. \(M1\) and \(F8\) showed performance similar to each other, with the former having a much higher AP for the asymptomatic KLG0/1 sub-group. The X-ray model was more accurate in both metrics for the symptomatic KLG2 sub-group.

To summarize the findings, the performance of all the models in predicting KOA progression was consistently higher in post-traumatic and post-intervention knees. In the same groups, the imaging models showed a more notable improvement over the clinical variable model, particularly in positive predictive value. In the "no prior injury or surgery" group, the APs were poor with all models. However, imaging with MRI provided additional value for normal and mild OA knees. Interestingly, the fusion model to a degree resembled the average performance of the XR- and DESS MRI-based models.

Figure 2: Performance of the selected models in subject subgroups averaged over all the horizons. The subjects are stratified by their trauma and intervention history - _"no prior injury or surgery"_ (panels a and d), _"prior injury, but no surgery"_ (panels b and e), _"prior surgery"_ (panels c and f). The plots show the mean area under the ROC curve (panels a-c) and average precision (panels d-f). The precision values were calibrated [51] to a prevalence of \(0.15\) before averaging. Each plot indicates the scores for the complete corresponding sample (_all_), as well as mutually exclusive sub-groups allocated w.r.t. the severity of radiographic OA (_KLG 0/1_ vs. _KLG 2_ vs. _KLG 3_) and symptoms (_WOMAC 0-10 [Sx-]_ vs. _WOMAC 10-100 [Sx+]_).

### Contribution of imaging modalities in multi-modal setting

To understand the relative contribution of imaging modalities to the final decision in the top-performing fusion models, a model interpretation technique called "feature ablation" was employed. Here, the entire inputs corresponding to the modalities were individually masked, and the drop in model performance was recorded. The decrements were inverted and normalized across the modalities to derive the Relative Utilization Rate (RUR). The RURs computed for the selected models are shown in Figure 3. In the case where radiographic and structural MRI (DESS) data were fused, the average contributions were 0.04-0.13 and 0.87-0.96, respectively, across the horizons (Figure 3a). This suggests that the anatomical information provided by the volumetric MRI scan is dominantly more informative in the scope of radiographic KOA progression prediction. When the structural (DESS) and compositional (\(T_{2}\)map) MRI protocols were considered together (Figure 3b), the average RURs were 0.72 and 0.28 at the 12m horizon, and they gradually changed to 0.81 and 0.19 at the 96m horizon, respectively.
The reduced RUR for DESS MRI may indicate the importance of the tissue compositional changes captured by the \(T_{2}\)map in the scope of KOA progression, but also that certain imaging biomarkers are more easily derived from high-contrast \(T_{2}\)maps. The observed trend from the 12m towards the 96m horizon may indicate a lower overall importance of the visualized tissue composition (particularly, cartilage) for long-term progression. The model fusing radiographic data with two MRI protocols (Figure 3c) also showed that volumetric structural data dominates the other imaging sources (0.85-0.92 [DESS] versus 0.08-0.14 [\(T_{2}\)map] and <0.02 [XR]). Interestingly, the model assigned a very low RUR to the XR modality. When the clinical data were additionally incorporated into the model (Figure 3d), they also barely showed any contribution at all the horizons (average RURs<0.01). Overall, these findings suggest that the MRI-based modalities are highly informative and capture symptomatic, post-surgical, and post-traumatic cues relevant to radiographic KOA progression at a level equal to or higher than the clinical variables and X-ray data.

## Discussion

In this study, we presented a multi-modal method for the prediction of radiographic KOA progression and applied it to perform an exhaustive study of commonly acquired modalities in this task. Our proposed approach enables leveraging unique large-scale longitudinal cohorts, such as the OAI, for studying the disease progression in broad populations.

The primary finding of our work is that the fusion of multiple widely acquired modalities, particularly imaging, does not seem to provide a significant improvement in the prediction of knee osteoarthritis progression, defined as radiographic worsening, over single modalities, at either short- or long-term horizons. It is important to note, however, that the overall best-ranked model in our experiments was based on XR, structural (DESS), and compositional (\(T_{2}\)map) MRI, suggesting that some of the subjects may still benefit from multi-modal examination.

We have shown that \(T_{2}\)maps seem to have marginal additional value at all prediction horizons. This may be partially explained by the potentially limited association between compositional tissue properties and radiographically defined KOA progression. Furthermore, unresolved methodological challenges, such as the considerable field orientation dependence of \(T_{2}\), might have also contributed to this finding [52, 53]. Importantly, we also acknowledge that the studied MRI protocols, despite providing excellent contrast for major tissues, such as cartilage, bone, menisci, fat, and associated lesions, may still provide incomplete details on the knee status. Emerging imaging methods, particularly Magnetic Resonance Fingerprinting [54], have the potential to perform holistic parametric tissue mapping and, thus, deliver a more objective view on the value of KOA MR imaging; however, they have yet to see wide adoption.

Figure 3: Relative utilization rate of individual modalities in the top performing fusion models. _Horizon_ represents the different intervals within which the progression is considered (corresponding to the prediction targets). Means (solid lines) and standard deviations (color bands) are computed over the test subset samples. The discrete horizons (12, 24, etc. months) are connected via linear interpolation for visualization purposes.
Generally, all the imaging models yielded larger gains on top of the clinical data models towards longer progression horizons. This finding suggests that the role of imaging biomarkers in shorter-term progression prediction is smaller, and that other factors, such as subject metabolic health, environmental factors, or physical activity, may be more informative than imaging.

From the practical utility perspective, using structural MRI sequences led to consistent, yet non-significant, improvements over the model trained on radiographic images. While MRI is currently a rather expensive imaging modality, recent developments in low-field MRI and fast multi-parametric techniques (e.g., the aforementioned MR Fingerprinting) hold great promise that MRI could eventually become an affordable tool for osteoarthritis screening. It is important to note that not all subjects may necessarily benefit from imaging. In our sub-group analysis, we observed that the performance of the predictive models was heterogeneous and, at the least, depended on whether the knee was subject to trauma, intervention, or neither. This finding also suggests that post-traumatic and post-surgical subjects should be considered independently in future large-scale imaging studies [50].

In our study, we defined OA progression as an increase in the KLG score. While KLG is the most established and widespread grading scheme for OA, it naturally lacks sensitivity to fine-grained joint changes that are not reflected directly or indirectly in radiographic images. Further works could explore more comprehensive grading schemes for the task, such as the MRI-based MOAKS [13]. However, this comes with a challenge - how to define a common progression trajectory from multivariate scoring data [55]. Here, existing considerations on OA phenotypes can be used [56, 4]; however, they still require thorough validation. Accordingly, the development of new OA surrogates in a data-driven manner could be an exciting area for future research.

In this work, we aimed to clarify the value of imaging modalities in the prediction of radiographic OA progression within multiple horizons. When targeted for downstream clinical use, DL could be used within other established domain-specific frameworks, such as time-to-event modeling [57] or disease trajectory forecasting [58, 59, 44]. Next, we used the data from a single observation point to produce the predictions. With high-dimensional imaging data, it may be beneficial for the predictive model to rely not only on the joint anatomy but also on the rate of change derived from several successive exams of an individual. While this approach has been proven feasible for individual tissues [60], processing multiple complete 3D knee scans is computationally expensive, and the development of new methods is still needed.

Overall, computational and data efficiency is an important issue in multi-modal data fusion. Having larger sample sizes would likely be beneficial both for improving the performance and the robustness of our models. Alternatively, modifications to the fusion model architecture can be made to reduce the number of parameters, e.g., via alternating or factorized attention in Transformers [61, 62]. Further works could also investigate the emerging foundation models for medical imaging [63, 64], which promise generic, medical-domain visual features and could thus notably reduce the data demand.
Finally, as previously discussed, other modalities/factors could be studied in this problem, particularly subject lifestyle, physical activity, and metabolic biomarkers.

We interpreted the relative contribution of imaging modalities within the fusion models and observed that the structural DESS MRI was dominant across all the horizons. Such a protocol certainly provides more information and a more comprehensive view of the knee joint status. A recent study [65] suggested that DL-based models are prone to greedy learning, at least in multi-view fusion scenarios, which practically leads to unequal optimization rates across the modality branches. While the effect of this phenomenon on the performance shown by the authors was rather small, its magnitude with diverse modalities of different shapes needs further investigation. Furthermore, the fusion of modalities may be orchestrated in a more clinically meaningful way, where using highly accessible data (e.g., clinical variables or XR images) is prioritized during the model training. Given the scope of this study, we intentionally focused on high-level model interpretability. We acknowledge that finer methods for feature attribution exist and have been applied in KOA studies [34, 36], yet their generalization and applicability to multi-modal imaging settings may not be straightforward [31, 66]. We hope that the findings from our study, along with the publicly released source code, will facilitate further advances in the data-driven development of knee OA progression surrogates, efficient OA progression prediction models, and clinical guidelines for OA screening.

## Methods

#### Sample selection

The data from The Osteoarthritis Initiative (OAI, [https://nda.nih.gov/oai/](https://nda.nih.gov/oai/)) - a multi-center longitudinal osteoarthritis study - was used in this work. We derived five datasets from the baseline visit of the OAI, one per studied progression horizon - 12, 24, 36, 48, and 96 months (see Table 1). All the selected subjects had demographic and clinical variables recorded, and their studied knees were imaged with posteroanterior bilateral X-ray and underwent a comprehensive MRI examination (3T Siemens MAGNETOM Trio, quadrature T/R knee coils). The X-ray images were acquired weight-bearing in fixed flexion using a SynaFlexer positioning frame (CCBR-SYNARC, San Francisco, CA). The MRI exam included, among others, 3 MRI sequences - sagittal 3D dual-echo steady state (DESS, voxel \(0.37\times 0.37\times 0.7mm\), matrix \(384\times 384\), 160 slices, FOV \(140mm\), TR \(16.3ms\), TE \(4.7ms\), flip angle \(25^{\circ}\)), coronal intermediate-weighted turbo spin-echo (TSE, voxel \(0.37\times 0.37\times 3.0mm\), matrix \(384\times 384\), 31 slices, FOV \(140mm\), TR \(3.0s\), TE \(29ms\), flip angle \(180^{\circ}\)), and sagittal multi-slice multi-echo \(T_{2}\) mapping (\(T_{2}\)map, voxel \(0.31\times 0.31\times 3.0mm\), matrix \(384\times 384\), 27 slices, FOV \(120mm\), TR \(2.7s\), TE \(10-70ms\)). Since the \(T_{2}\)maps were only acquired for right knees, only one knee per subject was included. The knees within each dataset were marked as "progressors" if an increase in KLG was recorded during the respective follow-up period, and as "non-progressors" if there was no change in KLG between the baseline and the end of the interval. A small number of knees that showed an improvement in KLG during the interval were excluded. The complete sample selection procedure is provided in detail in Figure S1.
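As an illustration, the labeling logic described above can be sketched as follows. The dataframe layout and the column names (`klg_baseline`, `klg_96m`) are hypothetical, and the released source code should be consulted for the exact implementation:

```python
import pandas as pd

def derive_progression_labels(df: pd.DataFrame, followup: str = "96m") -> pd.DataFrame:
    """Assign binary progressor/control labels from the change in KLG."""
    out = df.copy()
    # Pool grades KLG0 and KLG1, as a 0 -> 1 change is not considered reliable.
    base = out["klg_baseline"].clip(lower=1)
    follow = out[f"klg_{followup}"].clip(lower=1)
    delta = follow - base
    # Knees showing an improvement in (pooled) KLG are excluded to avoid
    # ambiguity in the definition of disease progression.
    out = out[delta >= 0].copy()
    # "Progressor" = any recorded worsening of pooled KLG within the interval.
    out["progressor"] = (delta[delta >= 0] > 0).astype(int)
    return out
```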
#### Clinical variables

Widely acquired demographic variables, the history of past injuries and past surgeries, a symptomatic and knee function score - the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) - and radiographic OA severity - the Kellgren-Lawrence grade (KLG) - were considered. The continuous variables - age, body mass index (BMI), and the WOMAC total score - were standardized to zero mean and unit variance. The categorical variables - sex, KLG, and the history of past injuries and past surgeries - were transformed using one-hot encoding.

#### X-ray images

The ROIs were extracted from the bilateral posteroanterior X-ray images. For that, the DL-based tool KNEEL [67] was used, which was previously developed and validated on the OAI data. The tool localized a set of bone surface landmarks in the femorotibial joint area. The landmarks were aggregated to derive the location of the knee joint center. ROIs of \(140\times 140\)\(mm^{2}\) were cropped around the knee centers. The obtained patches were resampled to an isotropic pixel spacing of \(0.195\times 0.195\)\(mm^{2}\). After extraction, the knee ROIs were further cropped to central patches of \(700\times 700\)\(pixels\). Before feeding the data into the model, the patches were first standardized in intensity to the \([0;1]\) range, underwent data augmentation (for the training samples only), and were finally standardized to zero mean and unit range. Data augmentation included cropping to a random \(700\times 700\)\(pixels\) patch instead of the central one, random rotation within the \([-15,15]\) degree range, and random gamma correction with \(\gamma\) from the range \([0.0;2.0]\). Lastly, the patches were downsampled using bilinear interpolation to \(350\times 350\)\(pixels\) (pixel spacing of \(0.390\times 0.390\)\(mm^{2}\)).

#### MR images

One of the aims of MR image preprocessing was to reduce the storage and memory demand while maintaining the ROI size and the visual quality of the samples. In the DESS and TSE sequence data, the 3 least significant bits were truncated, resulting in 8 significant bits for DESS and 9 bits for TSE. Subsequently, the images were clipped in intensity to the \([0.0;99.9]\) percentile range scan-wise. For all the sequences, to exclude image registration artifacts, we cropped 16 voxels from the slice edges.

\(T_{2}\)maps were derived from the multi-slice multi-echo images via exponential fitting. On average, the OAI \(T_{2}\) mapping acquisition protocol yielded 27 slices over 7 echo times. We used the monoexponential \(T_{2}\) relaxation model (Equation 1) and optimized both the \(I_{0}\) and \(T_{2}\) parameters voxel-wise using the available raw image intensities \(I_{TE_{i}}\) and the corresponding echo times \(TE_{i}\). All the available echoes were used for fitting. The obtained \(T_{2}\)maps were clipped in intensity to the \([0;100]\)\(ms\) range. Since the \(T_{2}\) mapping protocol in the OAI is optimized for cartilage tissues, this helped to ensure that unreliable \(T_{2}\) values, which corresponded mainly to bone and fat pads, were excluded [68]. An example of the resulting \(T_{2}\)map is shown in Figure 1.

\[I_{TE_{i}}=I_{0}\cdot\exp(-TE_{i}/T_{2}) \tag{1}\]
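To make the voxel-wise fit of Equation 1 concrete, a minimal sketch for a single slice is given below, assuming `scipy` least-squares fitting. The study's exact fitting, masking, and performance optimizations may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, i0, t2):
    # Equation 1: I(TE) = I0 * exp(-TE / T2)
    return i0 * np.exp(-te / t2)

def fit_t2_slice(echoes: np.ndarray, tes: np.ndarray) -> np.ndarray:
    """Voxel-wise monoexponential T2 fit for one slice.

    `echoes` has shape (n_echoes, rows, cols); `tes` holds the echo times
    in ms. The plain Python loop is kept for clarity; a practical
    implementation would mask the background and vectorize the fit.
    """
    _, rows, cols = echoes.shape
    t2_map = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            signal = echoes[:, r, c].astype(float)
            try:
                (i0, t2), _ = curve_fit(
                    mono_exp, tes, signal,
                    p0=(max(signal[0], 1.0), 40.0),  # rough initial guess
                )
                t2_map[r, c] = t2
            except RuntimeError:  # fit failed to converge
                t2_map[r, c] = 0.0
    # Values outside [0; 100] ms are unreliable (mainly bone and fat pads).
    return np.clip(t2_map, 0.0, 100.0)
```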
In the next step, the images were cropped to the central area of [320, 320, 128] voxels for DESS, [320, 320, 32] for TSE, and [320, 320, 25] for \(T_{2}\)maps, where the first two dimensions correspond to the number of voxel rows and columns in-slice, respectively, and the last dimension corresponds to the number of slices. Similarly to the radiographic data, the images were then transformed to the \([0;1]\) intensity range, augmented, and standardized to zero mean and unit range. Data augmentation comprised random cropping to the aforementioned dimensions, in-slice rotation (random angle from the \([-15,15]\) degree range), and gamma correction (random \(\gamma\) from the \([0.0;2.0]\) range). The gamma correction was not applied to the \(T_{2}\)maps. Finally, the images were downsampled using trilinear interpolation to [160, 160, 64] voxels for DESS, [160, 160, 32] for TSE, and [160, 160, 25] for \(T_{2}\)maps.

#### Clinical data baselines

An independent logistic regression model was constructed for each target and each considered set of clinical variables (scikit-learn, version \(0.24.2\) [69]). In every setting, 5-fold cross-validation was used on the development data subset to find the best hyper-parameter - whether to use balanced class-weighting. Subsequently, 5 models were optimized using average precision scoring on the training data and evaluated on the testing subset. The ensemble predictions were derived by averaging the softmax outputs across the folds.

#### Imaging model architectures

The imaging model architectures varied depending on the considered set of modalities while following the same design principles. A schematic description of the architectures is shown in Figure S2, with more details provided in Section S1 and the accompanying source code (PyTorch, version 1.8.2 [70]). For radiographic data processing, we reimplemented the previously validated model [34] based on a pre-trained ResNeXt-50_32x4d CNN (see Figure S2). For individual MRI sequences, the models comprised a shared pre-trained ResNet-50 CNN to extract slice-wise image descriptors, followed by a Transformer module to aggregate the representations across slices. Such a design was previously shown to achieve higher performance than purely CNN-based models [37], while also providing a pre-training capability that is challenging to obtain with pure Transformers at moderate sample sizes. For the fusion of two modalities - XR1MR1 and MR2 - an overall similar design was used. Here, two independent CNNs were used, one for each of the modalities, and their outputs were concatenated before the Transformer to allow for cross-modal fusion (see Figure S2). Lastly, in the fusion of three-to-four modalities, the MRI-related branches of the model had their own independent mid-level Transformers to embed the features into a common latent space before combining them with the other sources (Figure S2). The models with clinical data input had a shallow fully connected network to transform the variables before fusion. A Transformer module was used on top of the concatenated multi-modal embeddings, as previously.

All the described models were trained in 5-fold cross-validation, where the splits were made while maintaining a consistent distribution of target labels. The training was run until convergence with a computational budget of 60 epochs. The Adam optimizer [71] was used with a weight decay of 1e-4 and learning rate warmup (from 1e-5 to 1e-4) over the 5 initial training epochs. To address the effects of severe class imbalance, the Focal loss (\(\gamma=2.0\)) was used along with oversampling of the minority class. The best model within each fold was chosen based on the highest average precision score at validation. The batch size was 16 for the models with at least two MRI modalities, and 32 otherwise.
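To illustrate the imbalance handling described above, a minimal PyTorch sketch of the focal loss and the minority-class oversampling is given below. This is a simplified two-class version; the exact implementation is available in the accompanying source code:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """Focal loss, FL = -(1 - p_t)^gamma * log(p_t), for (N, 2) logits."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # Down-weight easy examples (pt close to 1), emphasize hard ones.
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

def minority_oversampler(labels: torch.Tensor) -> WeightedRandomSampler:
    """Sampler drawing progressors and controls with equal probability."""
    class_counts = torch.bincount(labels)
    weights = 1.0 / class_counts[labels].float()
    return WeightedRandomSampler(weights, num_samples=len(labels),
                                 replacement=True)
```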
Hardware-wise, a computational node with 4 NVIDIA A100 GPUs was used for model training, and a PC with 2 NVIDIA 2080 Ti GPUs was used for the evaluation and subsequent analysis. The single-model training time (i.e., one fold) for the highest-sample-size 0-12m target varied from 0.5 hours (XR) to 6.5 hours (fusion of 4 modalities).

#### Evaluation and model comparison

For each prediction target, the corresponding models were scored with ROC AUC and AP on the hold-out data. The mean and the standard error of each metric were estimated using bootstrapping (iter=1000) stratified by the target label. The statistical significance of improvements was assessed in two scenarios - (1) single-modality imaging models against the best clinical model, and (2) fusion models against the clinical, XR, or DESS MRI models. For this, one-sided paired permutation testing (iter=1000, SciPy, version 1.9.3 [72]) was used. For the subsequent analysis, the "best overall" multi-modal fusion setting \(s^{*}\) was selected using a multi-objective ranking procedure:

\[s^{*}=\underset{s\in S}{\text{argmin}}\Big{(}\sum_{f\in\{ROC\ AUC,AP\}}\sum_{t\in\{12,...,96\}}rank(\tilde{f}(s_{t}))\Big{)},\ \ S=\{F1,...,F9,U\} \tag{2}\]

Here, every fusion setting \(s\) was ranked from 1 to 10 (best to worst, respectively) for each target \(t\) and in each metric independently by the mean metric value \(\tilde{f}\). Then, the ranks were summed, and the setting with the lowest total rank (i.e., the best overall ranking, per Equation 2) was chosen.

In the subgroup analysis, the average model performance across different targets was derived. Since the prevalence of progressors differs across the targets, which prohibits direct averaging, instead of the standard AP we used its calibrated version [51]. Here, the scores within subgroups were calculated for a target prevalence of 0.15, and only then averaged. The ROC AUC scores were used unchanged. Symptomatic and non-symptomatic patient subgroups were defined based on the WOMAC total score. The clinical interpretation of the WOMAC score is still rather non-standardized [73, 74]. We used a threshold value of 10 on the 0-96 total score scale, which is an estimate of the minimal clinically important difference [74, 75].

The importance of individual modalities in the multi-modal fusion settings was estimated using the feature ablation method (Captum, version 0.5.0, Facebook Open Source [76]). Here, the unimodal inputs were replaced with their mean values one-by-one, and the degradation of the model performance was recorded for each sample. The values were normalized and averaged across the testing subset, which resulted in the Relative Utilization Rates.
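For reference, one common way to calibrate precision to a fixed target prevalence \(\pi_{0}\) (here 0.15), consistent with the calibration approach of [51], expresses it through the true and false positive rates at a given decision threshold:

\[\text{Prec}_{\pi_{0}}=\frac{\pi_{0}\cdot TPR}{\pi_{0}\cdot TPR+(1-\pi_{0})\cdot FPR}\]

The calibrated AP is then obtained by averaging \(\text{Prec}_{\pi_{0}}\) over the recall levels, exactly as in the standard AP. Similarly, the feature ablation procedure that yields the RURs can be sketched as follows; this is a minimal illustration rather than the exact released implementation, and the `model`, `inputs`, and `metric` interfaces are hypothetical:

```python
import numpy as np

def relative_utilization_rates(model, inputs: dict, metric, y_true) -> dict:
    """Mask each modality with its mean value, measure the performance drop,
    and normalize the (inverted) drops into Relative Utilization Rates."""
    base = metric(y_true, model(**inputs))
    drops = {}
    for name, x in inputs.items():
        ablated = dict(inputs)
        ablated[name] = np.full_like(x, x.mean())  # ablate one modality
        drops[name] = max(base - metric(y_true, model(**ablated)), 0.0)
    total = sum(drops.values()) or 1.0  # guard against division by zero
    return {name: drop / total for name, drop in drops.items()}
```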
2303.14329
Edge-Based Video Analytics: A Survey
Edge computing has been gaining momentum with ever-increasing data at the edge of the network. In particular, huge amounts of video data and their real-time processing requirements have been increasingly hindering the traditional cloud computing approach due to high bandwidth consumption and high latency. Edge computing in essence aims to overcome this hindrance by processing most video data making use of edge servers, such as small-scale on-premises server clusters, server-grade computing resources at mobile base stations and even mobile devices like smartphones and tablets; hence, the term edge-based video analytics. However, the actual realization of such analytics requires more than the simple, collective use of edge servers. In this paper, we survey state-of-the-art works on edge-based video analytics with respect to applications, architectures, techniques, resource management, security and privacy. We provide a comprehensive and detailed review on what works, what doesn't work and why. These findings give insights and suggestions for next generation edge-based video analytics. We also identify open issues and research directions.
Miao Hu, Zhenxiao Luo, Amirmohammad Pasdar, Young Choon Lee, Yipeng Zhou, Di Wu
2023-03-25T02:11:31Z
http://arxiv.org/abs/2303.14329v1
# Edge-Based Video Analytics: A Survey

###### Abstract

Edge computing has been gaining momentum with ever-increasing data at the edge of the network. In particular, huge amounts of video data and their real-time processing requirements have been increasingly hindering the traditional cloud computing approach due to high bandwidth consumption and high latency. Edge computing in essence aims to overcome this hindrance by processing most video data making use of edge servers, such as small-scale on-premises server clusters, server-grade computing resources at mobile base stations and even mobile devices like smartphones and tablets; hence, the term edge-based video analytics. However, the actual realization of such analytics requires more than the simple, collective use of edge servers. In this paper, we survey state-of-the-art works on edge-based video analytics with respect to applications, architectures, techniques, resource management, security and privacy. We provide a comprehensive and detailed review on what works, what doesn't work and why. These findings give insights and suggestions for next generation edge-based video analytics. We also identify open issues and research directions.

edge-based video analytics, architecture, technology, resource management, security and privacy

## I Introduction

In the past few decades, we have witnessed the explosive growth of data, particularly video data. Video cameras, such as closed-circuit television (CCTV) cameras, webcams and dashboard cameras, are everywhere. They are used for various purposes, such as surveillance, security and safety. They record "everyone" and "everything" all the time. A 2015 Information Handling Services report [1] estimated 245 million security cameras installed globally, i.e., roughly one camera for every 29 people. However, what matters is not simply the video data these cameras produce, but the insights drawn from such data. In other words, the timely processing and analysis of such data is of great practical importance. As reported by Fortune Business Insights, the global video analytics market size is projected to reach USD 12 billion by the end of 2026, exhibiting a compound annual growth rate of 22.67% [2].

Current video analytics applications are generally built on deep neural network models, which impose great computational pressure for both model training and inference. Up until recently, these applications have run mostly in clouds [3, 4]. However, the "distant" clouds are facing serious challenges in meeting real-time processing requirements due to network bottlenecks, i.e., high bandwidth consumption and high latency, as video data has to travel back and forth over the Internet. Recently, the edge computing paradigm has emerged as a solution to that. It is rather a complementary approach to cloud computing, making use of computing resources at the edge of the network, close to data sources. These resources are called edge servers. They include small-scale on-premises server clusters, server-grade computing resources at mobile base stations and even mobile devices like smartphones and tablets. Running video analytics applications at the edge with these computing resources, however, cannot be treated as a simple replica of the cloud case, because edge-based video analytics comes with unique challenges. In this paper, we first provide a survey of edge-based video analytics applications and use cases.
We then identify and discuss those challenges with a comprehensive survey of state-of-the-art works on edge-based video analytics. The following are the four categories of these challenges.

* _Architecture Design_. Different architectures for edge-based video analytics may emphasize different aspects and can be applied to different use cases and service scenarios.
* _Video processing and analysis techniques_. Real-time processing requirements are a key challenge with edge servers, which are often static with lower resource capacity compared to cloud servers. In-situ data analytics and collaborative processing are of particular interest for edge-based video analytics.
* _Resource management_. With constraints on quality of experience (QoE) metrics (e.g., accuracy, latency and energy consumption) and resources (hardware and software), it is of great importance to develop efficient resource management policies. It is largely unknown how effective traditional heuristic and optimization methods are for edge-based video analytics.
* _Security and privacy_. The use of highly heterogeneous and decentralized edge servers is subject to security vulnerabilities. Besides, the processing, storage and caching of video data on these servers have serious privacy implications.

While there have been several surveys on video analytics at the edge, they are not as comprehensive or in-depth as our survey in this paper. Shi _et al._ [5] surveyed the works related to edge computing and its use cases. Zhou _et al._ [6] summarized the research efforts on artificial intelligence (AI) from the perspective of edge computing and its corresponding intelligent applications. These works mainly discussed the technologies and applications of edge computing; however, their discussion of video analytics-related applications is limited. Vega _et al._ [7] reviewed state-of-the-art QoE management methods for video-related services based on machine learning. Their goal was to optimize the QoE from the perspective of clients (i.e., devices), while efficiently utilizing network resources from the perspective of providers. Barakabitze _et al._ [8] studied QoE management solutions for edge-based multimedia applications. They mainly discussed how to provide users with better video services; in contrast, we consider this aspect as well as efficient video processing and analytics. As the survey most similar to our work, Jedari _et al._ [9] studied the state-of-the-art research on edge video caching, edge computing, and communication. However, they mainly considered the combined use of resources at the edge to support several video-oriented applications, and only a small part of their survey discussed studies on video analytics.

This paper provides a comprehensive and detailed review on what works, what doesn't work and why. These findings give insights and suggestions for next generation edge-based video analytics. We also identify open issues and research directions. The structure of this paper is shown in Fig. 1. In particular, Section II presents the general use cases and service scenarios for edge-based video analytics. Section III summarizes the proposed architectures for edge-based video analytics. Section IV illustrates the technologies and methods for edge-based video analytics. Section V reveals the relation between resources and performance for edge-based video analytics, and discusses scheduling strategies in resource-limited video analytics scenarios.
Section VI discusses the security and privacy issues that arise in edge-based video analytics system design. Section VII summarizes the challenging issues and outlines future research directions.

Fig. 1: The structure of this article.

## II Use cases and service scenarios

This section introduces use cases and service scenarios built with edge-based video analytics. They are divided into three categories: _Smart Services_, _Safety and Security_, and _XR (including AR, VR, and MR)_.

### _Smart Services_

#### II-A1 Smart city

The "smart city" brings artificial intelligence into urban construction and operations for vehicle and human monitoring, city management, and the regulation of city flows and processes [10, 11]. Smart city applications include, but are not limited to, road traffic monitoring, road safety and security control, smart parking control, and stolen car search. Specifically, Zhang _et al._ [12] highlighted the resource-quality mapping correlation in smart city applications, including license plate readers, vehicle counters, crowd classifiers and object trackers. The core technology for most smart city applications is video analytics based on automated object detection [13]. However, the computational requirements of real-time object detection on live video streams and the communication requirements of uploading video from cameras to a remote cloud server are tremendously high. Fortunately, edge-based solutions have been shown to improve system efficiency and user experience in smart city services. For example, Grassi _et al._ [14] proposed an edge-based in-vehicle video analytics system for monitoring parking spaces. Xie _et al._ [15] proposed a video analytics-based intelligent indoor positioning system with the aid of edge computing. With deep neural networks (DNNs), Barthelemy _et al._ [16] designed an edge-based computer vision framework to monitor transportation while ensuring the privacy of citizens. These works offer a novel way to analyze videos at the edge, achieving bandwidth savings as well as data privacy.

#### II-A2 Smart farming

"Smart farming" utilizes artificial intelligence to improve the quantity and quality of crops. With intelligent and automated agricultural machinery, farmers can significantly increase the efficiency of agricultural cultivation. Video analytics techniques can offer farmers an effective way to monitor the status and requirements of their animals or crops and adjust farming methods correspondingly, thereby preventing animal and crop diseases and enhancing their health [17]. Agricultural automation also places high requirements on application latency, so edge-based farming-related video analytics applications are expected to attract substantial attention in the near future. Recently, Alharbi [18] proposed an integrated edge-cloud architectural paradigm to enhance the energy-efficiency of smart agriculture systems and diminish carbon emissions.

#### II-A3 Smart health (e-Health)

Public and personal health has always come first. In the year 2020, the World Health Organization declared the outbreak of COVID-19 a pandemic. To prevent the spread of the virus, real-time temperature scanning of people in public areas and workplaces became a critically important task. Recently, many countries have deployed AI thermal cameras for automated and contactless monitoring, especially for group temperature scanning.
This can help implement strong protective measures while keeping the economy going. However, there are still some challenges in conducting efficient thermal video analytics. First and most important, public health monitoring requires accurate video analytics for temperature scanning and people tracking. Second, the temperature scanning results should be returned in real time. Third, the cameras store not only temperature data but also personally identifiable thermal information, which might cause privacy issues. From the authors' perspective, the edge-based video analytics framework can address the above-mentioned issues better than either cloud-based or in-situ video analytics. Specifically, low latency can be achieved by offloading the computation-intensive tasks (e.g., temperature scanning and people tracking) to nearby edge servers. Meanwhile, storing data at the edge keeps information close to where it is generated, which is expected to improve privacy and security protection for users.

#### II-A4 Smart business

A key example of smart business is Amazon Go, a new kind of humanless store without manual checkout. Relying on computer vision techniques, the automated checkout system can map sales actions (e.g., picking products and checking out) to consumers, enabling the merchant to accurately charge customers for the products they pick, e.g., [19]. In smart business system design, video analytics technology has also been shown to be helpful in studies such as Cheng _et al._ [20] and Xu _et al._ [21], where the former harnessed constrained resources in the service industry via video analytics while the latter used autonomous cameras for object counting. Another popular business case is unmanned aerial vehicles (UAVs, also known as drones). Due to their mobility and low cost, UAVs can help in various business applications such as the express delivery industry [22, 23, 24]. Most drone-based delivery designs have focused on the path scheduling problem, aiming to minimize the total delivery time.

#### II-A5 Smart education

Smart education enables learners to learn by accessing digital resources through the Internet. Long _et al._ [25] presented a video analytics-based lecture framework that transforms literal teaching contents into visual formats, based on the lecture video system deployed at the University of Houston. Jang _et al._ [26] implemented a smart conference system with two prototype video analytics applications (monitoring and tracking). Tarasov _et al._ [27] addressed the emotion classification application by utilizing video analytics in education systems. Hu _et al._ [28] designed the first edge-based framework for real-time video analytics-assisted education. Edge-based video analytics-assisted smart education applications have also been verified in practical system designs, e.g., the high-performance computing (HPC) education platform [29, 30].

### _Safety and Security_

#### II-B1 Surveillance

In major public safety events (e.g., terrorist attacks in public transportation scenarios), law enforcement may want to track down the identified perpetrators [31]. Across modern cities, a large number of cameras are installed in public areas around us, including underground, ground, and air transportation networks.
Law enforcement and counter-terrorism departments urgently require real-time tracking of public threats [32]. By placing cameras in public places such as roads, public transport, retail stores, parks and libraries, relevant departments can help prevent, track and solve crimes via video analytics technologies [33]. Many works have focused on video analytics-based surveillance applications. For example, Yi _et al._ [3] highlighted the low-latency video analytics requirements of public safety applications, such as counter-terrorism. Jain _et al._ [34] focused on applications based on multiple cameras, like crowd control and spotlight search. To sum up, real-time operation is one of the most critical requirements in surveillance applications, and edge computing can help boost this performance.

#### II-B2 Rescue applications

Different from surveillance applications, rescue applications need to not only detect, but also identify and track the corresponding objects. In disaster response, the ability of a remotely controlled drone to search large areas quickly and efficiently with high-definition cameras makes rescue operations much more efficient and effective. Chowdhery _et al._ [35] proposed a novel approach for drone video analytics based on model predictive compression methods. Wang _et al._ [36] built an adaptive video analytics pipeline for search tasks in domains such as life search and rescue. George _et al._ [37] investigated the use of drones for live rescue, where the key technical challenge lies in ingesting live video streams, based on which the architectural plan can be updated in a timely manner.

#### II-B3 Road and traffic safety

Given cameras installed along highways and city streets, video analytics technologies can be used to re-identify and track a suspect's vehicle [38]. Qiu _et al._ [39] designed and implemented a video analytics system, named Kestrel, that tracks vehicle trajectories with the aid of a heterogeneous camera network. For the case of traffic monitoring, Chen _et al._ [40] proposed an edge-based video analytics system to obtain vehicle speed information in a timely manner and track speeding vehicles. For urban traffic surveillance, Chen _et al._ [41] proposed a dynamic video stream processing scheme with the ability of real-time video processing and decision making. For the purpose of monitoring mobility within a network, Barthelemy _et al._ [16] designed a sensor that can detect and track objects of interest in a real-time video feed with the aid of video analytics technologies.

### _XR (AR, VR, and MR)_

Extended reality (XR) technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) have emerged as methods to create simulated experiences that resemble the real world. In the XR processing pipeline, a crucial task is the detection and tracking of real-world object positions, which enables accurate overlaying of virtual annotations on top of them [42]. For example, Jain _et al._ [43, 44] made the first effort towards an end-to-end AR system implementation. While commercial mixed reality platforms are capable of detecting surfaces and certain objects (e.g., a particular person) with an understanding of the 3D geometry of the scene, they often lack the capability to track and detect intricate and non-stationary objects. In the augmented vehicular reality system proposed in [45], vehicles exchange raw dynamic 3D sensor outputs (also known as point clouds).
Liu _et al._[46] designed a high-accuracy object detection system for commodity AR/MR systems running at 60 fps. The system separates the rendering and offloading pipelines and employs a fast object tracking approach to ensure detection accuracy. Zhang _et al._[47] found that the latency lower bound to enable cloud-based mobile AR with acceptable QoE is around 250 ms, which means that there is room for further improvement. Ran _et al._[48] first provided a clear illustration of the information flow in multi-user augmented reality (AR). Specifically, they examined both Google ARCore and Apple ARKit and found that both employ either cloud-based or peer-to-peer architectures, while the edge-based architecture has not yet been taken into consideration. Battery capacity is the major constraint for executing XR applications. Apicharttrisorn _et al._[49] found that locally executing DNN-based object detection on mobile devices could significantly increase battery usage, which is a major concern for mobile users. They found that the screen, camera, and operating system already consume a considerable amount of power (3-4 W in their measurements), and DNN executions can drain a significant additional portion (1.7-3 W) of the battery.

Footnote 1: ARCore, a Google augmented reality SDK designed for Android.

To further compensate for the lack of bandwidth and computing capability, Qiao _et al._[50] proposed a web AR service-provisioning framework with edge servers. Moreover, collaboration among edge servers can enhance XR system performance. Zhang _et al._[51] enabled the coordination and collaboration of computation resources, e.g., sharing the results of computation-intensive AR tasks and annotating high-quality AR modifications by users.

## III Architecture

This section specifies and compares several types of existing edge-based video analytics system architectures. Different architectures for edge-based video analytics may prioritize different aspects and have distinct optimization approaches. In this regard, we describe the components and features of edge-based video analytics systems and provide examples of typical systems for each architecture type.

### _Edge/Fog-based Architecture_

Edge computing or fog computing, as an extension of cloud computing, allows computing tasks to be performed at the edge of the network with low latency and real-time computing capabilities. With the help of edge computing, video analytics tasks can be conducted on edge servers instead of being executed only on end devices with relatively limited computing resources. The edge/fog-based architecture generally consists of two tiers, the end device tier and the edge computing tier, in which the near-site edge tier is essential for achieving real-time video analytics. Generally, a wide range of devices can serve as edge nodes, e.g., smartphones, laptops, and drones. Edge computing platforms (e.g., ParaDrop [52]) provide application program interfaces (APIs) to manage edge nodes as well as to run edge services. Based on the operation mode, the edge-based architecture can be divided into two types, i.e., the dedicated edge-based architecture and the shared edge-based architecture. As shown in Fig. 2 (a), each camera is equipped with a dedicated server in the dedicated edge-based architecture, while in the shared edge-based architecture, cameras share the resources of an individual edge server, as illustrated in Fig. 2 (b).
#### III-A1 Dedicated Edge-based Architecture

The dedicated edge-based architecture is a simple and reliable way to build a video analytics system. One example of an edge-based system for real-time video surveillance is Vigil [31]. It utilizes edge computing to allow wireless video surveillance to scale to multiple cameras. The Vigil architecture enables basic vision analytics tasks to be performed at the edge nodes, which are connected to camera devices. To reduce transmission overhead, only the relevant portions of the video feed are uploaded to a controller. Grassi _et al._[14] presented the ParkMaster architecture for road sign detection and utilized smartphone cameras with camera calibration for video processing. King _et al._[53] designed the EdgeSum framework for video summarization and compression of dash cam (a.k.a. drive recorder) videos, using mobile devices as edge servers before uploading to the cloud. However, the number of edge servers grows with the number of deployed cameras, and installing that many edge servers imposes a rather heavy burden. In view of this challenge, the shared edge-based architecture is preferred in many studies.

#### III-A2 Shared Edge-based Architecture

Different from the dedicated architecture, the shared edge-based video analytics architecture is built on virtualization technology. For example, Jang _et al._[26] proposed an edge camera virtualization architecture that leverages an ontology-based application description model to virtualize the camera. They used container technology to decouple the physical camera and support multiple applications on board, thus improving resource utilization and flexibility in edge computing environments. Wang _et al._[54] proposed a smart surveillance system that leverages edge computing and application program interface technologies to enable flexible monitoring of security events in urban regions with dense camera networks, with low latency and minimal backbone bandwidth consumption. Similarly, a real-time video analytics system called EdgeEye [55] was proposed. EdgeEye offers a high-level abstraction of partial video analytics functions through DNNs and provides tools to deploy and execute DNN models on edge servers. Notably, most surveillance and rescue applications with cameras on drones are realized on shared edge servers. In [36], the edge server is connected directly to the LTE base station, and packets transmitted from the drones are directed to the edge server without traveling through the Internet backbone. George _et al._[37] proposed a system architecture that leverages edge computing resources for drone-sourced video analytics in live building inspection. The high computational demands are met by using substantial edge computing resources, while virtualization on an edge server allows the deployment of a virtual machine that contains the engineering and architectural drawings for the construction site.

### _P2P-based Architecture_

In order to improve analytics performance by utilizing cross-camera correlations, the analytics pipeline must be able to access inference results from related video streams and enable peer-triggered inference at runtime. This means that any relevant camera can assign an analytics task to process a video stream regardless of the time, which decouples the logical analytics pipeline from its execution.
To achieve this, the inference results must be shared between pipelines in real time. Although prior research has explored task offloading across cameras and between the edge and cloud, Jain _et al._[34] argued that the video streams of other related cameras should be considered in such dynamic triggering. At present, the execution of video analytics pipelines is typically predetermined in terms of resource allocation and video selection. However, to leverage cross-camera correlations, a pipeline should have knowledge of the inference results of other relevant video streams and support real-time triggering based on this information. This enables the compute resources of related cameras to handle analytics tasks dynamically according to the video streams. Stone _et al._[56] proposed the Tetris system, which focuses on scalable video analytics at the edge. The system identifies active regions across all video feeds and compresses them into a compressed volume, which is then passed through convolutional neural network (CNN) layers and carefully organized system pipelines to achieve high parallelism. Luo _et al._[57] proposed the EdgeBox solution, in which a group of cameras is managed by an edge device and deployed on the same local area network. This approach is suitable for covering relatively small areas. However, when edge nodes are connected and collaborate to perform complex activity detection utilizing deep learning and computer vision, they can cover larger areas such as a building or a factory.

### _Hierarchical Architecture_

It is a challenging task to coordinate highly heterogeneous computing nodes to work as homogeneous computing nodes, as shown in Fig. 3. The three-layer hierarchical video analytics architecture consists of an end device layer (also called the application layer or user layer), an edge layer, and a cloud layer.

Fig. 3: The three-layer hierarchical video analytics architecture consists of a camera layer (also called user layer), an edge layer, and a cloud layer.

Fig. 2: Two types of the edge/fog-based video analytics architecture.
| **Type** | **Literature** | **End Device Layer** | **Edge/Fog Layer** | **Controller** | **Cloud Layer** |
|---|---|---|---|---|---|
| Edge/Fog-based Architecture | Vigil [31] | video recording and offloading | simple vision analytics | Internet | × |
| Edge/Fog-based Architecture | Jang _et al._[26] | IoT camera with specific functionalities (e.g., video recording) | edge-based video analytics | accepts video requests from applications via the cloud (or the user) | × |
| Edge/Fog-based Architecture | Wang _et al._[54] | video recording and offloading | edge-based video analytics | × | × |
| Edge/Fog-based Architecture | EdgeEye [55] | video recording and offloading | an easy and efficient way to execute DNN models | ParaDrop | × |
| Edge/Fog-based Architecture | Wang [36] | drone-sourced cameras | edge servers connected to LTE base stations | × | without traversing the Internet backbone |
| Edge/Fog-based Architecture | George [37] | drone-sourced cameras | edge servers meet the high computation and virtualization demands | × | × |
| Edge/Fog-based Architecture | Dao _et al._[58] | extract and upload features when environmental changes are detected | installed in each camera | detection metadata | × |
| P2P-based Architecture | Jain _et al._[34] | video recording and offloading | a video analytics pipeline deciding which resources to use and which video to analyze | × | × |
| P2P-based Architecture | Tetris [56] | video feeds | a solution for large-scale video analytics at the edge | identifies active regions across all video feeds and compresses them into a compressed volume | ✓ |
| Hierarchical Architecture | LAVEA [3] | cameras | clients are one hop away from the edge server via wired or wireless links | × | runs heavy tasks on resource-rich cloud nodes to improve response time or energy cost |
| Hierarchical Architecture | Anveshak [33] | cameras | edge-based video analytics | automates application deployment and orchestration across edge and cloud | ✓ |
| Hierarchical Architecture | Ali _et al._[59] | cameras | deep learning at the edge | deep learning across resources | set to achieve improved performance for object inference |
| Hierarchical Architecture | Ananthanarayanan _et al._[60] | cameras | decode video, detect objects, and perform video analytics tasks | × | ✓ |
| Hierarchical Architecture | Chen _et al._[40] | surveillance application layer | process end users' data and return the results on time | × | ✓ |
| Hierarchical Architecture | Perala _et al._[61] | capture videos in different resolutions based on their configuration | computing devices directly connected to the cameras | the gateway to which the devices are connected | the cloud server to which the gateway is connected |
| Hierarchical Architecture | Drolia _et al._[62] | cameras | use an edge server as a cache with compute resources | × | over the Internet |
| Hierarchical Architecture | CloudSeg [63] | camera sensor | sends a downsampled high-resolution video adaptively over the Internet | × | processes the video with DNN inference and returns the inference results to the edge |

The camera, edge, and public cloud clusters differ in the available hardware types.
For instance, GPUs are commonly found in some clusters (including the cameras), while other types of hardware, such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), are typically found in public clouds [60]. FilterForward [64] is a platform that enables multi-tenant video filtering on edge nodes with limited bandwidth. While purely edge-based approaches are limited by static compute and storage resources, datacenter-only analytics require heavy video compression for transport. FilterForward addresses these challenges by allowing applications to split the work flexibly between edge and cloud. By leveraging the high-fidelity data available at the edge, this approach makes relevant video sequences available in the cloud. Similarly, Yi _et al._[3] utilized the cloud layer to execute computationally intensive tasks on powerful cloud nodes in order to reduce response time or improve energy efficiency. Khochare _et al._[33] built the Anveshak framework, which automates application deployment and orchestration across edge and cloud resources. Ali _et al._[59] proposed a deep learning pipeline that utilizes resources at the edge, in transit, and in the cloud to achieve low latency, reduced bandwidth costs, and improved performance of video analytics tasks. Ananthanarayanan _et al._[60] proposed a hierarchical geo-distributed infrastructure consisting of edge clusters and private clusters with heterogeneous hardware for video decoding, object detection, and other video analytics tasks. Ran _et al._[65] proposed a framework that integrates front-end devices with more powerful backend "helpers" (such as home servers) to enable local or remote execution of deep learning at the edge or in the cloud. In [40], the system comprises three different layers, where the edge layer is made up of different on-site smart devices that act as both data producers and computing nodes. Besides the above-mentioned works, most edge-based video analytics systems, e.g., [61, 62, 63, 66, 67, 68], consist of three parts: end devices, edge servers, and the cloud. To the authors' best knowledge, the hierarchical architecture has been verified as the most efficient and scalable one for edge-based video analytics.

## IV Techniques and Methods

In this section, we overview various technologies and methods for edge-based video analytics, describe their roles and functions, and list some typical examples of each type.

### _Video Preprocessing_

Generally, offloading all raw videos to the edge or cloud places a huge burden on the network and may lead to intolerable latency. Facing this challenge, a direct approach is to preprocess the raw videos before offloading them to the edge or cloud. Many studies have applied various techniques for video preprocessing, including _frame sampling_, _frame cropping_, _compression_, and _feature extraction_. We briefly introduce these techniques as follows.

#### IV-A1 Frame Sampling

Frame sampling skips frames that may be "useless" for the analytics tasks in video streams [74, 75]. For example, Xu _et al._[21] proposed a split-phase planning mechanism for making frame sampling decisions and resolving the tension between frame capturing and processing. The FFS-VA system proposed in [69] utilizes two prepositive stream-specialized filters and a tiny-YOLO model [76] to filter out irrelevant frames. Zhang _et al._[67] extracted valuable information by using OpenCV [77] on edge nodes instead of sending the raw videos to the cloud.
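To make the idea concrete, the following is a minimal sketch of a difference-based frame filter in the spirit of the samplers above; it is our illustration, not the filtering logic of FFS-VA [69] or any other surveyed system, and the threshold, thumbnail size, and input file name are arbitrary choices.

```python
import cv2
import numpy as np

def sample_frames(video_path: str, threshold: float = 8.0):
    """Yield (index, frame) pairs for frames that differ enough from the
    previously kept frame to be worth offloading to the edge or cloud."""
    cap = cv2.VideoCapture(video_path)
    last_kept = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Compare cheap grayscale thumbnails so the filter itself stays light.
        small = cv2.cvtColor(cv2.resize(frame, (64, 64)), cv2.COLOR_BGR2GRAY)
        small = small.astype(np.int16)
        if last_kept is None or np.abs(small - last_kept).mean() > threshold:
            last_kept = small
            yield index, frame  # candidate frame for offloading
        index += 1
    cap.release()

if __name__ == "__main__":
    kept = [i for i, _ in sample_frames("traffic.mp4")]  # hypothetical input
    print(f"kept {len(kept)} frames")
```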
Chowdhery _et al._[35] proposed a method that uses the predicted trajectory of a drone to choose and send the most relevant frames to a ground station, with the goal of maximizing application utility while minimizing bandwidth consumption. Wang _et al._[36] leveraged state-of-the-art deep neural networks (e.g., MobileNet [78]) with transfer learning to selectively transmit the interesting data from the video stream. These solutions significantly reduce the number of video frames sent to the cloud for video analytics tasks. Recently, some studies have used the correlations among cross-camera views to further improve the efficiency of frame sampling. For instance, Vigil [31] only uploads the most relevant frames to the cloud according to the user's query when different views of an object or person are captured by multiple cameras. Similarly, Dao _et al._[70] utilized the joint camera views of an object to determine the views that achieve the best accuracy with regard to the object of interest and improve the quality of detection. Collaboration between cameras reduces the amount of offloaded data, thereby reducing bandwidth consumption.

#### IV-A2 Frame Cropping

Different from frame sampling, frame cropping focuses on regions of interest (RoIs) in video frames, such as face regions for face detection tasks or vehicle regions for vehicle monitoring tasks. Frame cropping removes the unimportant regions of the raw images and thus reduces the data transferred over the network. Chen _et al._[40, 41] extracted the RoI containing the suspicious vehicle instead of the whole video frame to improve video analytics efficiency. Guo _et al._[66] captured image data on end devices in real time and compressed it with an RoI-based image compression algorithm. The compressed images were then transmitted to the cloud or the edge server, reducing bandwidth usage and improving transmission efficiency.
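As a rough illustration of RoI cropping in such traffic scenarios, one can use background subtraction to locate moving-object regions and offload only those crops together with their coordinates. This is our own sketch under that assumption, not the extraction algorithm of [40, 41] or the compression scheme of [66]; the `min_area` filter is an arbitrary noise guard.

```python
import cv2

# MOG2 keeps a per-pixel background model; foreground pixels mark candidate RoIs.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

def crop_rois(frame, min_area: int = 500):
    """Return a list of ((x, y, w, h), crop) pairs for moving-object RoIs."""
    mask = subtractor.apply(frame)
    # Drop shadow pixels (value 127 in MOG2 masks), keep confident foreground.
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:  # ignore tiny noise blobs
            rois.append(((x, y, w, h), frame[y:y + h, x:x + w]))
    return rois
```

Only the returned crops (plus their bounding boxes, so that results can be mapped back onto the full frame) would then be transmitted instead of the whole frame.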
#### IV-A3 Compression

Compression technology has been widely used in video analytics applications to ensure high processing efficiency. Wang _et al._[71] proposed an adaptive approach for compressing screen content videos based on their utility, which identifies low-utility content and processes it with Gaussian low-pass filtering. Rippel _et al._[79] developed an architecture for video compression that extends motion estimation to perform learned compensation beyond simple translations. Fouladi _et al._[80] presented a video encoder that achieves fine-grained parallelism while maintaining compression efficiency. Recently, super-resolution (SR) technology has been regarded as an essential solution for video compression and transmission. With SR techniques, cameras or edge servers can send the video stream in low resolution, while high-resolution frames are recovered from the low-resolution stream. SR technology can thus reduce transmission delay and relieve pressure on network bandwidth while meeting the high requirements of video quality and analytics accuracy [63, 72]. For instance, Wang _et al._[63] presented CloudSeg, which reduces the quality of the video during transmission to the cloud but then performs super-resolution on the cloud server to reconstruct high-quality videos before executing video analytics. Yang _et al._[81] proposed an SR method that minimizes SR errors by dividing the training samples into multiple clusters and learning dictionaries to achieve more faithful reconstructions in edge video analytics. Chen _et al._[72] presented the SR360 framework, in which a low-resolution video tile in 360-degree videos can be upsampled at the client side to a high-resolution tile with SR techniques. Guo _et al._[82] proposed a semantic-aware SR transmission system for wireless multimedia sensor networks. The system encodes video at different bit rates with semantic information on the multimedia sensor and uploads it to users; on the user side, the video quality is enhanced using SR techniques.

#### IV-A4 Feature Extraction

Feature extraction is a process that extracts image information and identifies whether each image point belongs to an image feature or not. Based on a preliminarily trained, lightweight neural network, the extracted feature map can be directly sent to the video analytics functions at the edge or cloud server. The extracted feature map can serve as input to an object classifier, or be stored and converted into useful information for other functions [83]. Canel _et al._[64] presented FilterForward, where feature maps are extracted from video frames on edge servers by a single reference DNN. Micro-classifiers are trained to take these feature maps as input and return the relevance of the frames to specific applications. Besides, FilterForward allows micro-classifiers to access the feature maps of any one layer of the model, making it versatile enough to support various tasks. George _et al._[37] proposed an edge-based prototype that employs computer vision algorithms for live building inspection with drone-sourced video. Their system used Scale Invariant Feature Transform (SIFT) feature matching [84] to identify the relevance between reference images and the live camera view. Grassi _et al._[14] presented ParkMaster, which captures parked vehicles with a mobile camera, extracts features, and then uploads the data to the cloud, where a clustering algorithm counts the number of parked cars on the road. Mainstream [73] is an edge system for video processing that employs transfer learning from a common base DNN model to train multiple applications. By sharing partial-DNN compute among these applications, it reduces per-frame aggregate computation time. Kang _et al._[85] performed model specialization by using the full neural network to generate labeled training data and subsequently training smaller neural networks tailored to a given video stream and to a smaller class of objects.
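The shared-base-plus-micro-classifier pattern described for FilterForward can be sketched as follows. This is an illustration inspired by that design, not FilterForward's implementation: the truncation point, the untrained micro-classifier head, and the 32-channel width (which follows from cutting MobileNetV2 at this particular block) are all our own assumptions.

```python
import torch
import torchvision

# One shared base network is run per frame; each application attaches a
# tiny "micro-classifier" to an intermediate feature map.
base = torchvision.models.mobilenet_v2(weights="DEFAULT").features[:7].eval()

micro_classifier = torch.nn.Sequential(   # hypothetical per-application head
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(32, 1),               # 32 channels after features[:7]
    torch.nn.Sigmoid(),
)

@torch.no_grad()
def frame_relevance(frame_tensor: torch.Tensor) -> float:
    """frame_tensor: (1, 3, H, W), normalized like ImageNet inputs."""
    feature_map = base(frame_tensor)      # computed once, shared by all heads
    return micro_classifier(feature_map).item()

print(frame_relevance(torch.randn(1, 3, 224, 224)))
```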
### _Video Analytics_

#### IV-B1 DNN-Based Processing

Undoubtedly, DNNs contribute significantly to video analytics, as they are believed to be the backbone of recognition and detection in images and videos. These DNNs can be part of a tool or library, or they can be embedded into hardware for on-device video analytics. Augur [86] is a tool providing insights into the efficiency of a CNN on a specific mobile platform. Augur profiles and models the resource requirements of CNNs, using a configuration of the CNN to predict the computational overhead and resource utilization of the model. The CNNs are selected from a wide range of architectures, such as ResNet [89], VGG [90], NASNet [91], AlexNet [92], and GoogleNet [93], on NVIDIA TK1 and TX1 hardware [94]. Augur evaluates each model on CPU and GPU, considering the memory that holds the parameters of the CNN, stores intermediate data, and serves as the workspace for computation. Unlike workstations, on mobile platforms the GPU shares the system memory with the CPU, which is not addressed by Caffe and hence generates redundant memory copies. Augur finds that the matrix multiplications of CNN computation are at the core of performance evaluation. They are measured by the BLAS and cuBLAS libraries for matrix multiplications on CPUs and GPUs, respectively. The matrix size of a CONV or FC layer is related to the input dimension (e.g., image size) and the network configuration (e.g., kernel size). Besides, time measurements have shown that _matmul_ contributes more than 60% of the computation time of a CNN on mobile platforms. To model the time for prediction purposes, several matrix sizes are benchmarked with respect to the number of kernels, the size of a kernel, and the spatial size of the output feature maps. Therefore, Augur first parses the CNN descriptor and determines the minimal memory needed based on the type and setting of each layer. Then, Augur extracts matrix multiplications (matmuls) from the computation of the CNN and calculates the compute time of individual matmuls. Finally, Augur sums up the compute time of all matmuls to provide an estimate of the total computation time of the model on the mobile platform.
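This accounting can be approximated in a few lines of code: treat each convolutional layer as one large matrix multiplication (the im2col view) and benchmark matmuls of the corresponding sizes with the platform's BLAS. The sketch below is our own simplification of that idea, not Augur's code, and the layer shapes are illustrative.

```python
import time
import numpy as np

def matmul_time(m: int, k: int, n: int, repeats: int = 5) -> float:
    """Average wall-clock time of an (m x k) @ (k x n) multiply via BLAS."""
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    return (time.perf_counter() - start) / repeats

def conv_layer_time(c_in: int, c_out: int, k: int, h: int, w: int) -> float:
    # A conv layer with C_out kernels of size KxK over C_in channels and an
    # HxW output map multiplies (C_out, C_in*K*K) by (C_in*K*K, H*W).
    return matmul_time(c_out, c_in * k * k, h * w)

layers = [(3, 64, 3, 224, 224), (64, 128, 3, 112, 112)]  # made-up shapes
print(f"estimated forward time: {sum(conv_layer_time(*l) for l in layers):.4f} s")
```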
Deep learning-based video analytics systems involve several hyper-parameters, including the _learning rate_, _activation function_, and _weight parameter initialization_. Yaseen and colleagues [95] addressed the challenge of optimizing hyper-parameters in deep learning-based video analytics systems. They proposed a mathematical model to evaluate the impact of different hyper-parameter values on system performance. Their work also included an automatic object classification pipeline for efficient large-scale object classification in video data. Liu _et al._[55] used the Nvidia Deep Learning GPU Training System (DIGITS), a general DNN architecture, to design their DetectNet, and proposed a framework called EdgeEye for real-time video analytics at the edge. The EdgeEye server allows applications to offload live video analytics tasks through its API, eliminating the need for deep learning framework-specific APIs. Other uses of DNNs are found in [87, 88, 96] for reducing response time and improving transmission rates. Lu _et al._[87] presented NetVision, a system for on-demand video processing that uses deep learning to minimize query response time across a network of mobile and edge devices. The transmission rate is also improved in [88], which relies on a deep reinforcement learning algorithm targeting a higher perceptual quality rate at a lower bitrate. The algorithm works with a trained neural network that predicts future bitrates based on observations of the network status. The proposed model relies on two separate neural networks: (1) one precisely predicts future video quality based on previous video frames, and (2) a reinforcement learning algorithm determines the proper bitrates with respect to the output of the first network. The first network combines CNNs to extract image features with a recurrent neural network that captures temporal features for providing better video quality.

Han _et al._[96] studied the use of DNNs to execute multiple applications on cloud-connected mobiles processing a stream of data. The work relies on a trade-off between resource usage and accuracy, coping with workloads by considering less accurate variants of optimized models. Hence, an adaptive framework was presented to select model variants at different accuracy levels while staying within the requested resource and energy constraints. The framework keeps track of accuracy, energy usage, and resource usage to form a catalogue based on a series of model optimization techniques. It uses different settings and heuristically allocates resources in proportion to the frequency of use, selecting the most accurate corresponding model variant. This selection is interpreted as a model execution either on the device or in the cloud, deciding which application model should be chosen at a specific time step and which models should be evicted from the mobile cache.

#### IV-B2 Profiling

Profiling reduces the computational overhead of a system by obtaining a handful of configurations for video analytics. VideoStorm [12] and VideoEdge [4] are systems that take advantage of profiling to improve performance. In Zhang _et al._[12], the system processes many video analytics queries on live video streams over large clusters. The system leverages an offline profiler that generates a profile of query resources and uses an online scheduler for resource allocation to maximize performance in terms of both quality and response time. Quality and lag are encoded as utility functions, which are penalized for violations. The profiler uses greedy search and domain-specific sampling to obtain a handful of configurations on the Pareto boundary of the profile to be considered by the scheduler. Hung _et al._[4] showed that VideoEdge is capable of achieving a trade-off between resource consumption and model accuracy by narrowing down the configuration space in the hierarchical structure (i.e., cameras, private clusters, and public clouds) for live video analytics. In VideoEdge, the configuration space is downsized by computing the maximum demand-to-capacity ratio for each configuration to find the dominant demand. This enables the system to compare configurations across demand and accuracy. The system also leverages a profiler to efficiently merge components (e.g., decoders), improving accuracy and reducing the computation workload.
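The profiling step shared by VideoStorm and VideoEdge reduces, at its core, to keeping only the Pareto-optimal knob configurations. The sketch below shows that reduction with made-up configurations and numbers; the real systems obtain such profiles with greedy search rather than the exhaustive dominance check used here.

```python
def pareto_profile(configs):
    """configs: list of (name, resource_cost, accuracy). Keep only configs
    for which no other config is cheaper while being at least as accurate."""
    frontier = []
    for name, cost, acc in configs:
        dominated = any(
            c2 <= cost and a2 >= acc and (c2, a2) != (cost, acc)
            for _, c2, a2 in configs
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda t: t[1])  # cheapest first

configs = [("1080p/30fps", 8.0, 0.95), ("720p/30fps", 4.0, 0.92),
           ("720p/10fps", 1.5, 0.90), ("480p/10fps", 0.8, 0.78),
           ("480p/30fps", 2.5, 0.77)]  # last one is dominated by 720p/10fps
print(pareto_profile(configs))
```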
### _Computation offloading_

Deep learning algorithms are often computationally intensive, while front-end equipment usually lacks the computing power to execute large-scale deep learning tasks. Transferring the data to a powerful cloud and executing deep learning algorithms there may cause unacceptable latency for users. Cloud-based solutions for deep learning also depend on reliable network access, so the challenge becomes how to offload full or partial compute tasks to proximate edge servers.

#### IV-C1 Full Offloading

Some studies have focused on the offloading strategy that decides whether to perform lightweight analytics at the edge or send the videos or images to the cloud for more computationally intensive analytics. By considering current network conditions and application requirements as trade-offs, Ran _et al._[65] presented DeepDecision to determine an optimal offloading strategy in real-time AR applications, deciding whether to analyze the input video locally with small CNNs or send it to the server to be analyzed with a big CNN. Felemban _et al._[97] proposed PicSys, an intelligent system that decides whether to process images locally with an accelerated CNN or offload them to the cloud for processing with a more complex and accurate CNN. This decision is made based on several factors, such as network conditions, the energy state of the mobile device, cloud backlog, and the hit-rate estimate. Ananthanarayanan _et al._[13] optimized their work based on the previous work [60]. They execute a cheap CNN at the edge and a heavy CNN in the cloud; only if the lightweight CNN model does not have sufficient confidence do they invoke the heavy CNN model. Yi _et al._[3] presented LAVEA, an edge computing platform that utilizes a serverless architecture to enable computation offloading between clients and edge nodes. They formulated offloading task selection as an optimization problem to prioritize offloading requests in order to minimize response time.

#### IV-C2 Partial Offloading

Some studies partition the entire analytics task across computing nodes; that is, the edge node shares part of the computing load with the cloud node to reduce end-to-end latency and resource consumption. Kang _et al._[98] proposed a system called Neurosurgeon to optimize the partitioning of deep neural network (DNN) computation between mobile and cloud. The system employs a series of models to predict the response time and power consumption of the model according to its configuration and type. This allows Neurosurgeon to lower system latency, reduce mobile energy consumption, and enhance datacenter throughput. Emmons _et al._[99] proposed a concept of "split-brain" inference for video analytics. The approach processes the video partially on the camera, with the computation limited to a certain extent; constrained by network capacity, the intermediate results are then transmitted to a cloud datacenter for further DNN inference.
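A layer-wise split of this kind can be sketched in a few lines: run the first blocks of a network on the device, serialize the intermediate activation, and let a server finish the forward pass. This is an illustration in the spirit of Neurosurgeon [98] and "split-brain" inference [99], not their implementations; in particular, the fixed split index stands in for the latency and energy predictors that would normally choose it.

```python
import io
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
SPLIT = 7                                 # arbitrary illustrative split point
device_part = model.features[:SPLIT]
server_part = torch.nn.Sequential(model.features[SPLIT:],
                                  torch.nn.AdaptiveAvgPool2d(1),
                                  torch.nn.Flatten(),
                                  model.classifier)

@torch.no_grad()
def run_on_device(x: torch.Tensor) -> bytes:
    buf = io.BytesIO()
    torch.save(device_part(x), buf)       # these bytes would go over the network
    return buf.getvalue()

@torch.no_grad()
def run_on_server(payload: bytes) -> torch.Tensor:
    activation = torch.load(io.BytesIO(payload))
    return server_part(activation)

logits = run_on_server(run_on_device(torch.randn(1, 3, 224, 224)))
print(logits.shape)  # torch.Size([1, 1000])
```

The size of the serialized activation is exactly the quantity a Neurosurgeon-style planner would weigh against the remaining server-side compute when choosing the split point.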
### _Collaborative Intelligence_

The computation offloading introduced above can be viewed as collaboration between nodes at different levels. We now introduce collaborative intelligence, which refers to collaboration between nodes at the same level.

#### IV-D1 Mobile Collaboration

Many districts are deploying cameras on a large scale, and in this scenario we have to face the problem of massively increased deployment cost and effort. For example, when the overlapping angles of multiple deployed cameras are large, or the same video analytics tasks are performed, a large amount of redundant data is generated and unnecessary computing resources are consumed. In general, it is assumed that the computing power of camera nodes is very limited: they can only collect video streams, perform certain preprocessing, and then send the data to the edge or cloud. We briefly introduced cross-camera collaboration in Sec. IV-A1 for optimizing frame sampling and suppressing redundancy [31]. Next, we introduce other research on mobile collaboration. Khochare _et al._[33] developed Anveshak, a framework for creating video analytics applications that track objects across a camera network. In case an object is not detected by one camera, the system expands its search to include more cameras. O'Gorman _et al._[101] quantified the temporal and spatial characteristics of video data and demonstrated how the sparse nature of real-world signals from public cameras can significantly reduce the computational load. This understanding can aid decisions on placing tasks at the edge or in the cloud. Dao _et al._[58] proposed a framework that allows camera coordination for high accuracy and reduced energy consumption. Through coordination among cameras, each camera can use a non-optimal algorithm for the detection task and avoid unnecessary energy consumption.

Video analytics pipelines involve several adjustable parameters or "knobs", such as frame resolution, sampling rate, and detector models. The selection of these configurations affects both the accuracy and the resource consumption of the video analytics, and the number of potential configurations grows exponentially with the number of knobs and their respective values. The Microsoft Research team has focused on using cross-camera correlation to optimize the search space. Jiang _et al._[100] presented Chameleon, a controller that dynamically selects the optimal configuration for neural network-based video analytics pipelines. The insight behind Chameleon is that the underlying characteristics that determine the best configuration have enough spatio-temporal correlation to amortize the search cost over time and across several video feeds. Jain _et al._[34] proposed ReXCam, which leverages spatio-temporal correlations in video feeds from wide-area camera deployments to reduce the inference search space. By exploiting these correlations, ReXCam reduces both the workload and the false positive rates in multi-camera video analytics. Specifically, ReXCam guides its search for a query identity by dynamically taking advantage of the spatial and temporal locality of the camera network.

#### IV-D2 Edge Collaboration

Compared with camera nodes, edge nodes have stronger computing power and can complete part of or even entire video analytics tasks. It is therefore pressing to design task scheduling strategies and to share computation results among multiple edge nodes. Assigning a task to only one edge node may leave the redundant computational resources of the other edge nodes underutilized. Hence, there is a need to explore optimal ways to group edge nodes and enable collaborative processing of video analytics tasks. Long _et al._[68] presented a framework for edge computing that enables cooperative processing of latency-sensitive multimedia tasks on resource-rich mobile devices. The framework relies on optimally dividing the mobile devices into different groups and assigning video chunks to the proper group to enable cooperative processing of tasks. Wang _et al._[54] designed a three-tier architecture for real-time surveillance applications in edge computing systems. The architecture elastically adjusts computing capacity and dynamically routes data to the most appropriate edge server. Surveillance tasks are executed by a group of virtual machines (VMs) or virtualized network functions (VNFs) that work together on specific tasks. The system is effectively configured, monitored, and managed by the SDN controller.

## V Resource management

Determination and detection of temporal and spatial events can be done automatically by video content analytics.
This capability is supported by resources allocated to algorithms on cameras or devices to compute and output the objects of interest [74, 75, 102]. Accordingly, three main dimensions are identified to characterize resource management in the context of video analytics, each consisting of various sub-dimensions. Hence, in a general overview, resource management is illustrated based on the following classification, and a summary is given in Table V.

* _Quality of Experience (QoE) metrics_, a holistic concept measuring the degree of user satisfaction with a service or an application. This satisfaction falls into three major groups: accuracy, latency, and performance (e.g., energy), taking into account edge or cloud resources.
* _Resources_ to be provisioned for providing guaranteed performance to applications, through the selection, deployment, and runtime management of the software and hardware resources contributing to video analytics.
* _Resource provisioning methods_ that deal with providing the guaranteed performance for applications. This can be done by a heuristic-based resource provisioning scheme enforced by deep learning strategies, or it may consider content-aware approaches for prioritizing resource allocation.

TABLE V: Summary of the literature for resource management

| **Category** | **Literature** | **Method** | **End Device Layer** | **Edge/Fog Layer** | **Cloud Layer** | **Performance** |
|---|---|---|---|---|---|---|
| QoE Metrics | Hung _et al._[4] | Narrowing down the configuration space | Choosing from the configuration space for live video feed analytics | Determination of maximum demand to capacity ratio for configurations | ✓ | Accuracy and response time |
| QoE Metrics | Zhang _et al._[12] | Combination of offline profiler and online scheduler for resource-quality profiling and allocation | Resource-quality profile of queries | × | ✓ | Performance and lag |
| QoE Metrics | Lu _et al._[74, 75] | Batch processing and filtering out videos | Optimal video offloading and transmission sequence to minimize query response time | Deep learning techniques for object detection | × | Distributed processing, energy efficiency, and performance improvement |
| QoE Metrics | Lu _et al._[86] | Profiling and modeling resource requirements of CNNs on the mobile platform | Estimation of resource usage and compute time for CNN configurations | × | × | Compute time |
| QoE Metrics | Lu _et al._[87] | Usage of greedy and adaptive algorithms | Control of transmission rate | Optimal video offloading and processing | × | Response time and transmission rate |
| QoE Metrics | Han _et al._[96] | Heuristic scheduling algorithm | Frequency of use and choice of the accurate model | × | DNN | Accuracy |
| QoE Metrics | Wang _et al._[102] | Light-weight edge cloud platform | × | Network switch and intelligent edge servers | Usage of containers and SDN | Response time and QoE |
| QoE Metrics | Xu _et al._[103] | Tuning canary inputs and usage of prediction | Configuration selection considering video memories | × | × | Accuracy, low-latency and low-energy processing |
| QoE Metrics | Fu _et al._[104] | Balancing operation queues by thread workers | Optimizing data flow by a congestion-aware scheduler | × | × | Latency |
| Resource Provisioning Method | [3, 14, 105] | Heuristic-based | Localization algorithm or choosing among CNN models | Usage of nearby devices to speed up processing | × | Performance |
| Resource Provisioning Method | [62, 68, 101, 106] | Optimization-based | Load balancing | Algorithm | ✓ | Latency and performance |
| Resource Provisioning Method | [88, 107, 108] | DRL-based | Combination of deep reinforcement learning and ANN | × | × | Bitrate and network usage |
| Resource Provisioning Method | [109, 110, 111] | Incentive-aware | Game theory strategies | × | × | Latency and cost |
| Resource Provisioning Method | [55, 56, 112] | CPU/GPU optimization | Optimization techniques and libraries | × | × | Performance and cost |
| Resource Provisioning Method | [113, 114] | Data/streaming storage | Local storage | Maintaining cache servers | ✓ | Latency and reducing network load |

### _QoE Metrics_

#### V-A1 Accuracy

Canary inputs (i.e., small sampled inputs, e.g., subsampled videos) have contributed to video accuracy optimization. Xu _et al._[103] proposed VideoChef, which uses canary inputs to tune the computation accuracy and transfers the approximate configurations to the full inputs; it also uses a prediction model that considers user constraints (e.g., Peak Signal-to-Noise Ratio) with respect to the canary input's error. The canary inputs are chosen based on the dissimilarity ratio between the sample and the full-size video to determine how representative a canary input is. This is rooted in the fact that most stages in the video pipeline are approximate, leading to low-latency and low-energy video processing in specific or generic domains. These approximations may depend on both the algorithms and the video content, which necessitates exploring a large number of configurations to find the optimal one. This selection takes place offline, through an efficient search strategy that considers video encoding parameters.
Since searching for the best approximation level is _computation-intensive_, VideoChef adopts an encoder-based approach that emphasizes significant video changes, using pixel- and histogram-based scene detection. Hung _et al._[4] presented VideoEdge, a system that achieves a trade-off between resources and accuracy by narrowing down the configuration space in a hierarchical structure (i.e., cameras, private clusters, and public clouds) for live video feed analytics. The configuration space is downsized by computing the maximum demand-to-capacity ratio for each configuration to find the dominant demand, allowing the system to compare configurations across demand and accuracy. In addition, streaming and multi-programming can also improve accuracy: Han _et al._[96] proposed a heuristic scheduling algorithm that allocates resources proportionally based on the frequency of use and selects the most accurate model variant. Felemban _et al._[97] proposed PicSys, a system that optimizes the deployment of a CNN pipeline by assigning each computation stage to the resources that maximize utilization. PicSys splits computation into several filtering stages and assigns each stage to a resource using a heuristic algorithm that solves an optimization problem. To balance the trade-off between accuracy and speed, PicSys uses a lighter version of the CNN that requires less computation.

#### V-A2 Latency

Wang _et al._[102] proposed a lightweight edge computing platform powered by small, cost-efficient, intelligent edge servers that integrate computation and network capabilities to constitute a large-scale edge cloud. The framework targets reducing response time while improving the quality of experience. These goals are achieved by integrating software-defined networking and container features; the latter provides network switching and eases service integration within containers for resource integration and management control. The stream processing engine presented in [104] improves latency by focusing on balancing the operation queues across thread workers. The engine, called EdgeWise, is powered by a congestion-aware scheduler that monitors the queues and selects the highest-priority operation to optimize the data flow. This optimization is mainly achieved by using fixed worker pools that decouple workers from operations (avoiding the thread contention that stems from keeping one ready thread per operation), with workers put in charge of the operations holding the most pending data. Moreover, EdgeWise applies a data consumption policy to the queues to improve overall scheduling, allocating heavily loaded operations to workers more frequently. The NetVision system [87] used a greedy algorithm along with an adaptive algorithm to manage variable transmission rates and improve latency. It optimizes query response time in on-demand video processing scenarios by formulating the processing scheduling problem.
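The queue-balancing idea behind EdgeWise can be reduced to a toy scheduler in which a fixed pool of workers repeatedly picks the operation with the most pending data, instead of binding one thread to each operation. The sketch below is our own reduction of that idea, not the EdgeWise engine; it ignores the back-pressure and locking subtleties a real engine must handle.

```python
import queue
import threading

class CongestionAwareScheduler:
    def __init__(self, operations):
        self.operations = operations                       # name -> callable
        self.queues = {name: queue.Queue() for name in operations}

    def submit(self, op_name, item):
        self.queues[op_name].put(item)

    def worker(self):
        while True:
            # Pick the operation with the most pending data.
            name = max(self.queues, key=lambda n: self.queues[n].qsize())
            if self.queues[name].qsize() == 0:
                return                                     # drained (toy behavior)
            try:
                item = self.queues[name].get_nowait()
            except queue.Empty:
                continue                                   # raced with another worker
            self.operations[name](item)

    def run(self, n_workers: int = 4):
        workers = [threading.Thread(target=self.worker) for _ in range(n_workers)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

sched = CongestionAwareScheduler({"decode": print, "detect": print})
for i in range(3):
    sched.submit("decode", f"frame-{i}")
sched.run(n_workers=2)
```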
#### V-A3 Performance

Cloud resources may also contribute to performance improvement by pushing video processing to the cloud (i.e., offloading), as can be seen in [74, 75]. In these works, a distributed and energy-efficient deep learning-based CrowdVision platform is proposed that exploits batch processing, a characteristic of CNNs. These features contribute to distributed processing, balancing the frame-rate waiting time against the processing time of each batch for performance improvement, while considering network conditions for offloading benefits. To further improve video processing, CrowdVision filters out videos by location and timestamp and applies deep learning techniques to recognize the objects of interest in a batch-like manner. Zhang _et al._[12] proposed a video analytics system, VideoStorm, which leverages an offline profiler to generate query resource-quality profiles and employs an online scheduler to allocate resources for maximizing performance on quality and lag, rather than relying on fair sharing in clusters. The profiler uses greedy search and domain-specific sampling to obtain a handful of configurations on the Pareto boundary of the profile, which the scheduler takes into account. Quality and lag are encoded as utility functions, which are penalized for violations and used for prioritization. Artificial neural networks can also assist performance improvement. Lu _et al._[86] presented Augur, a CNN performance analyzer that determines the efficiency of a CNN on a given mobile platform. This tool profiles and models the resource requirements of CNNs (i.e., the forward pass), taking the configuration of the CNN to estimate the computational overhead of the model.

### _Resources_

#### V-B1 Hardware

Power-efficient AI computing devices (i.e., AI kits) have enabled video analytics at the edge, providing users with applications such as image classification, object detection, segmentation, and speech processing running in parallel. These devices are also suitable for environments with intermittent connectivity, such as remote locations. The NVIDIA Jetson TX1 and TX2 [115] and Jetson Nano [116] are power-efficient embedded AI computing devices that bring trustworthy AI computing to the edge. They are built around NVIDIA Pascal-family GPUs and can be integrated into products thanks to their standard hardware features. Amazon Snowball [117] is a device optimized for edge computing that provides virtual CPUs, block and object storage, and even an optional GPU. These devices can be clustered together in a rack-mounted form to create larger temporary installations. Snowball has been designed to support advanced machine learning and video analytics applications, especially in disconnected environments such as manufacturing, industrial, and transportation settings, or in highly remote locations, including military or maritime operations. The Azure IoT Starter Kit [118] is a vision AI developer kit for running AI models at the intelligent edge. The kit runs models built by Microsoft Azure Machine Learning (AML) and other Azure services for edge analytics and AI processing. The Intel NUC mini PC [119] is an AI development kit equipped with a powerful Intel Core processor, integrated graphics, and the advanced Intel Movidius Myriad X Vision Processing Unit (VPU). This combination allows seamless execution of a wide range of diverse AI workloads with high performance and low power consumption. The Raspberry Pi [120] is known as a series of small single-board computers that provide computing in a low-cost, credit-card-sized form factor.
There are also many alternatives to this device that fall into the category of single-board computers (SBCs) with powerful systems-on-chip (SoCs), such as Onion Omega2+ [121], Orange Pi [122], Banana Pi [123], Rock64 [124], Arduino [125], Asus Tinker Board [126], Odroid [127], Pin64 [128], Cubieboard [129], BeagleBoard [130], LattePanda Alpha [131], UDOO BOLT [132], Libre Computer Le Potato [133], and NanoPi [134].

#### V-B2 Software

From a software perspective, there are libraries for processing, such as OpenCV [135], a cross-platform library that contains various image processing and computer vision algorithms. This library also supports deep learning models (e.g., TensorFlow) and over 2500 optimized classic or state-of-the-art computer vision algorithms, using MMX and SSE instructions when available. There are other frameworks similar to OpenCV, including SimpleCV [136], a Python-based open-source framework recommended for prototyping, and Scikit-image [137], another Python-based library that acts as a toolbox for SciPy and provides various algorithms for image processing. The Accord.NET framework [138] is a C#-based framework providing machine learning, computer vision, and image processing methods. BoofCV [139] is an open-source Java library for real-time computer vision and robotics applications, consisting of application-based packages for image processing, standard functions, and feature extraction algorithms. FastCV [140] is a computer vision library implemented for the ARM architecture and optimized for Qualcomm's Snapdragon processors, providing the most frequently used vision processing functions. MATLAB [141] is a paid programming platform that comes with a computer vision processing toolbox. Deepface [142] is a lightweight Python-based face recognition and facial attribute analytics framework (e.g., age, gender, emotion, and race) that employs state-of-the-art models such as Google FaceNet [143] and VGG-Face [144]. The Point Cloud Library (PCL) [145] is a C++-based library for three-dimensional image processing. In addition, NVIDIA CUDA-X [146] provides libraries, tools, and technologies for delivering high-performance artificial intelligence application domains, and the NVIDIA Performance Primitives (NPP) library [147] facilitates GPU-accelerated vision processing. Detectron2 [148] is Facebook's artificial intelligence library providing the latest developments in detection and segmentation algorithms; it supports computer vision research projects and production applications at Facebook. Moreover, neural network frameworks (and related tools) have been widely used for image processing, such as YOLOv3 [149], MobileNet [78], ResNet [89], VGG [90], NASNet [91], PNASNet [150], Keras [151], Caffe [152], PyTorch [153], Albumentations [154], OpenVINO [155], and TensorFlow [156]. These are based on convolutional deep neural networks for the purpose of detecting objects of interest.

### _Resource Provisioning Methods_

#### V-C1 Heuristic-based

The studies mentioned in Section V-A utilize heuristic methods to improve QoE, e.g., [4, 12, 87, 96, 97]. There are also other research studies whose aims lead to better resource allocation. Grassi _et al._[14] proposed a method to estimate the location of parked cars in a single frame using a localization algorithm. The approach involves camera calibration against well-known objects in the surrounding environment.
In contrast, Yi _et al._[3] proposed a lightweight virtualization on top of the operating system. It uses a two-phase optimization process: in the first phase, bandwidth is allocated, and the second phase leverages nearby edge resources to expedite task completion. Shen _et al._[105] leveraged short-term class skew (i.e., the objects of interest over some period of time) to accelerate video classification using CNNs. The research proposes a heuristic algorithm based on exploration and exploitation strategies to address the sequential model selection problem, which is formulated as the Oracle Bandit Problem (OBP). The proposed approach estimates the skew at test time, producing a specialized model if possible, using that model as long as the skew lasts, and then reverting to one of the classifier models referred to as the oracle (e.g., GoogleNet). To create a specialized model, the study selects a specified number of dominant classes and a randomly selected subset from the other classes, with labels differing from the original data, to create new training data sets. The dominant classes make up a percentage of the new dataset used to train the compact model. The oracle model is swapped with a less expensive but compact model that exploits the skew to return early with the classification result if inputs belong to the frequent classes in the incoming distribution; otherwise, the oracle model is used.

#### V-C2 Optimization theory-based

Similar to [74, 75, 104, 105], Drolia _et al._[62] applied a tunable cache size as the knob to minimize latency. The authors presented Cachier, a system that optimizes the cache size to minimize latency in computation-intensive recognition applications. The authors model edge servers as caches and use novel optimizations to adaptively balance the load between the edge and the cloud. Long _et al._[68] proposed a solution for the group formation problem by transforming it into a winner determination problem. This new formulation can be solved using a 2-approximation algorithm that significantly reduces the complexity of the problem. O'Gorman _et al._[101] studied the upper-bound requirements for processing and bandwidth of a video analytics application, analytically and experimentally on real videos, to provide guidance for algorithm placement between the edge and the cloud. Valls _et al._[106], in turn, modeled resource allocation as a network processing model that targets the order of processing, presenting two resource allocation algorithms to cope with network dynamics: a backpressure-based algorithm and a backpressure plus interior-point method algorithm. The former allocates data analytics resources based solely on the congestion level in the system, while the latter accounts for the fact that some resources have a constant demand.
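The edge-cache idea behind Cachier can be illustrated with a small LRU cache keyed by, for example, a perceptual hash of the input frame; a miss falls through to the slower cloud path. This is our own toy, not Cachier's system; in particular, `capacity` plays the role of the tunable knob that Cachier adapts to minimize expected latency.

```python
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int, cloud_lookup):
        self.capacity = capacity
        self.cloud_lookup = cloud_lookup       # slow path (cloud recognition)
        self.entries = OrderedDict()           # maintained in LRU order

    def recognize(self, frame_hash: str):
        if frame_hash in self.entries:
            self.entries.move_to_end(frame_hash)   # fast edge hit
            return self.entries[frame_hash]
        result = self.cloud_lookup(frame_hash)     # slow cloud miss
        self.entries[frame_hash] = result
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # evict least recently used
        return result

cache = EdgeCache(capacity=2, cloud_lookup=lambda h: f"label-for-{h}")
for h in ["a", "b", "a", "c", "a"]:
    print(cache.recognize(h))
```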
#### V-C3 DRL-based

Huang _et al._[88] proposed a deep reinforcement learning algorithm for rate adaptation, targeting a higher perceptual quality rate at a lower bitrate. Based on a trained neural network, the algorithm predicts future bitrates from observations of the current network status. The model is divided into two separate neural networks: an ANN that predicts future video quality based on the previous video frames, and a reinforcement learning network that determines the bitrate using the first model's result. The first model combines CNNs to extract image features with a recurrent neural network that captures temporal features. Rao _et al._[107] introduced a deep reinforcement learning method for video face recognition that leverages attention mechanisms. They approached the problem of identifying relevant regions of a video as a Markov decision process and trained an attention model through a DRL framework, which does not require additional labeling. Supancic _et al._[108] modeled online video object tracking as a partially observable decision-making process and used DRL to learn the best decision-making strategy.

#### V-C4 Incentive-aware

Zhan _et al._[109] presented a scheme for optimizing the interaction between video providers and mobile users by formulating it as a two-person cooperative game. The proposed scheme uses Nash bargaining game theory to obtain the optimal cooperation decision, resulting in a high data delivery ratio, low communication and computation overhead, and excellent economic properties. Wu _et al._[110] proposed a pricing mechanism based on Stackelberg game theory to incentivize device-to-device video distribution from core users. This approach can reduce the load on base stations and improve the effectiveness and reliability of video transmission. Hu _et al._[111] presented a game-theoretic solution to the challenge of offloading heterogeneous video analytics tasks. They cast the problem as a minority game, in which each participant must make decisions independently in each round, and the players who make the minority choice win. The game is played by multiple players who lack complete information, such as the number of video analytics tasks or resources, which creates incentives for players to cooperate with each other.

### _Other Optimizations_

In the following, we introduce other optimization studies in video analytics not mentioned above, including GPU/CPU acceleration and data/streaming storage.

#### V-D1 GPU/CPU acceleration

To reduce the costs of supporting a large-scale deployment with hundreds of video cameras, Stone _et al._[56] introduced Tetris, a system that synergistically incorporates various optimization techniques from the computer vision and deep learning fields. Liu _et al._[55] simplified memory transfer between the CPU and GPU with the Nvidia CUDA mapped memory feature. Their DetectNet component leverages TensorRT to manage the GPU inference engine, including tasks such as initializing the inference engine, creating the inference runtime, loading the serialized model, creating the inference execution context, and executing the inference engine. Integral histograms are widely used for extracting multi-scale histogram-based regional descriptors, which are essential components of many video content analytics frameworks. Poostchi _et al._[112] evaluated different approaches for mapping the computation of integral histograms onto GPUs, using various kernel optimization strategies.

#### V-D2 Data/Streaming Storage

Maintaining a cache server at the edge node can directly provide video service-related content to users and, at the same time, reduce the number of requests forwarded to back-end servers.
It can minimize the expected latency for users and reduce both the network load and the back-end load. The storage system can store the various data ingested and generated during the video encoding process, since this data can play a role in subsequent analytics and lets the video service be pushed to users faster. Huang _et al._[114] presented a streaming video engine that uses a preprocessor to store originally uploaded videos in parallel onto disk for fault tolerance. To process the video data, the engine employs a scheduler that assigns tasks to workers; these workers pull data from the preprocessor and store the processed data onto disk. The term "zero-streaming" camera refers to cameras that capture videos to local storage without streaming anything. They are reactive and highly efficient, consuming network and cloud resources only to analyze queried video footage. Xu _et al._[113] exploited the zero-streaming paradigm and minimized the ingestion cost, shifting as much work as possible to query execution. Zero-streaming cameras capture videos to local flash storage (cheap and large) without uploading anything; only in response to user queries do they communicate and cooperate with the cloud to analyze the stored videos. ## VI Security and Privacy People are extremely concerned about privacy leakage, especially for video applications rich in personal information. Privacy protection enables individuals to retain a degree of control over their sensitive data, preventing it from being abused by third parties. In cloud-based video analytics, privacy-preserving operations need to be executed locally, which demands a high level of computation resources and low-complexity encryption algorithms. With the help of edge servers, privacy-preserving operations can be executed at the edge, avoiding direct exposure of private information to the cloud. For a video analytics-based application, massively captured video data commonly goes through three phases: _Data Collection_, _Data Analytics_ and _Data Storage_. Different phases suffer from different privacy risks, and corresponding privacy-preserving mechanisms are required for protection. In this section, we summarize the related works in these three categories. ### _Privacy-preserving Video Collection_ In the data collection stage, privacy concerns can come from unreliable application owners as well as unreliable network conditions. For video analytics applications, some sensitive information (e.g., facial information and ID numbers) should not be collected by an untrusted party, to avoid illegal use. Besides, over-the-air transmission of raw video content carries a large risk of information leakage to eavesdroppers. Considering these issues, it is essential to design privacy-preserving video collection solutions. As an efficient and proven solution, it is the features, not the raw content, that should be transmitted to the edge or cloud servers. For example, it is sufficient to transmit the outline of a pedestrian, without any facial information, in a pedestrian counting application. The facial information, as the private source, is thus never exposed to the central cloud server, and privacy is preserved. Based on the integrity of the collected video contents, we summarize recent works as _complete collection_ and _partial collection_ as follows. #### VI-A1 Complete Collection Encryption methods are widely used for the full collection of video contents; a minimal sketch of authenticated per-frame encryption is given below.
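For concreteness, edge-side encryption of individual frames, in the spirit of the complete-collection schemes surveyed next, could look like the following sketch. It assumes the third-party Python `cryptography` package; the key sharing with the trusted decrypting party and the function names are illustrative, not any surveyed system's actual implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared out-of-band with the trusted decryptor
aead = AESGCM(key)

def encrypt_frame(frame_bytes: bytes, frame_index: int):
    """Encrypt one video frame; AES-GCM also authenticates the ciphertext."""
    nonce = os.urandom(12)                # must be unique per frame
    aad = frame_index.to_bytes(8, "big")  # binds the frame to its stream position
    return nonce, aead.encrypt(nonce, frame_bytes, aad)

def decrypt_frame(nonce: bytes, ciphertext: bytes, frame_index: int) -> bytes:
    return aead.decrypt(nonce, ciphertext, frame_index.to_bytes(8, "big"))

nonce, ct = encrypt_frame(b"\x00" * 1024, frame_index=0)  # a dummy 1 KB frame
assert decrypt_frame(nonce, ct, frame_index=0) == b"\x00" * 1024
```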
Kim _et al._[157] encrypted video frames on edge servers in the surveillance station and then delivered the encrypted frames to a trusted third-party cloud server. Specific regions of a video frame are first filtered out according to a preset privacy map and then decrypted into the original video data with a shared key at the cloud server. However, a privacy map cannot provide fine-grained and automated privacy protection. Besides, a trusted third-party server is not always available to mediate between video data and video subscribers. Li _et al._[161] proposed a privacy-preserving data aggregation scheme, where the edge server aggregates the encrypted data from terminal devices and sends it to the cloud server, which can then decrypt the aggregated data with its private key. In this kind of privacy-preserving system, edge servers serve as an intermediate layer that provides privacy protection for the entire system. Traditional encryption methods are generally computation-intensive and might not work well on resource-limited edge servers. To address this deficiency, Wang _et al._[163] proposed the VideoDP platform, which provides a novel differential privacy (DP) function. In VideoDP, adding or removing any sensitive visual element into/from the input video does not significantly affect the analytical result. Xu _et al._[164] proposed a local DP obfuscation framework for data analytics, where data is distilled on edge servers with limited ability to make inferences about users' sensitive data. Mao _et al._[165] proposed to partition a DNN model after the first convolutional layer between the end device side and the edge server side. DP is then applied to protect the convolutional layers and guarantee the privacy of users' sensitive data. #### VI-A2 Partial Collection Recall the frame cropping technology introduced in Sec. IV-A2: many video analytics applications focus on RoIs in each video frame, such as face regions in face detection and vehicle regions in vehicle monitoring. Thus, end devices (e.g., cameras) should also enhance privacy preservation in partial video collection. Neural networks can offer more granular privacy protection measures. OpenFace [158, 159] implemented a privacy-preserving data collection mechanism by denaturing video data on the edge server instead of directly sending the raw video to the cloud. This technique selectively blurs faces that appear in video frames to alleviate privacy concerns related to face data (a minimal sketch of such denaturing appears below). Similarly, Wang _et al._[160] proposed a privacy-preserving face verification system, which applies the edge server to extract face features with CNNs and to encrypt the feature data with the Advanced Encryption Standard (AES) before sending it to the cloud server. Zarepour _et al._[162] introduced a novel context-aware privacy-preserving framework, which uses contextual information to estimate the set of potentially sensitive subjects in each image. In this framework, user activity extraction and sensitive information filtering are completed locally on the end device before the raw image is published. ### _Privacy-preserving Video Analytics_ During video analytics, edge servers with limited computing power may need to process sensitive information on untrusted platforms. Therefore, before the original data is submitted to an untrusted platform for further analysis, corresponding preprocessing should be performed at the edge, such as encryption or abstract knowledge extraction.
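As one concrete instance of such edge-side preprocessing, a face-denaturing pass in the spirit of the OpenFace approach above might look like the following sketch. It assumes OpenCV with its bundled Haar cascade; the function name and blur parameters are illustrative, not any surveyed system's actual implementation.

```python
import cv2

# Haar-cascade face detector shipped with opencv-python
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def denature_frame(frame):
    """Blur detected face regions so only non-identifying pixels leave the edge node."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```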
A crucial question is how to exploit the computing power of the cloud server without exposing the privacy of the raw video. Three technologies are possible solutions: _encryption-based technology_, _obfuscation-based technology_ and _privacy-preserving machine learning_. #### VI-B1 Privacy-preserving Training In an edge-based video analytics system, data can easily be collected by terminal devices. However, plain-text data cannot be sent directly to the cloud server when users are concerned about privacy. Instead, an edge server can use its own data to conduct model training locally without sharing data with the cloud server, or use a neural network to perform preprocessing. Edge computing has enabled video analytics applications to benefit from low-latency and distributed data processing services by leveraging the storage and computing capabilities of nearby end devices. However, in a distributed environment, the training process remains challenging due to the fragmented knowledge base across edge and cloud servers. As a promising solution, _federated learning_ enables a distributed server to train its model locally without sharing local data with others. Federated learning thus greatly alleviates the risk of privacy leakage caused by data sharing. Sada _et al._[170] proposed an edge-based video analytics architecture that uses federated learning to update object detection models and avoid sending local data to the cloud. The federated learning layer is deployed on an edge server situated between the end devices and the cloud server. Similarly, Chen _et al._[171] proposed a distributed learning framework that can be trained at each base station and cooperatively builds a learning model that predicts the mobility and orientations of users. Liu _et al._[172] developed a platform called FedVision, which allows for the development of federated learning-powered computer vision applications. It aims to develop effective visual object detection models by utilizing image data owned by multiple organizations through federated learning. FedVision is the first industrial application of FL to computer vision-based tasks, and it has the potential to help organizations comply with stricter data privacy protection laws, such as the GDPR. Although federated learning does not require the transmission of raw data during training, an attacker may still obtain user privacy from exposed gradients. For example, [179] shows that an attacker can infer whether a participant's data has been included in the dataset by collecting and analyzing shared models, with an accuracy of 90%. In the federated learning paradigm, several methods can be used to improve the privacy-preserving level of video analytics. The main methods are the _homomorphic encryption_ method and the _differential privacy_ method. _(i) Homomorphic encryption-empowered solution._ Homomorphic encryption [180] enables data to be analyzed and manipulated while still encrypted, without the need for decryption; a minimal sketch of additively homomorphic aggregation is given below. Jiang _et al._[166] explored the use of homomorphic encryption to perform the scale-invariant feature transform on encrypted images, enabling analytics to be performed directly on encrypted data. To enable homomorphic encryption operations on resource-constrained edge servers, TargetFinder [167] applied optimization techniques to reduce the computational overhead of the cryptographic primitives.
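To make the additive property concrete, the following sketch shows encrypted aggregation at an untrusted edge server, conceptually similar to (but not a reproduction of) the aggregation scheme of Li _et al._[161]. It assumes the third-party `phe` (python-paillier) package; the per-device statistics are illustrative.

```python
from phe import paillier  # python-paillier: additively homomorphic encryption

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Terminal devices encrypt their local statistics (e.g., per-class detection counts)
device_counts = [12, 7, 23]
ciphertexts = [public_key.encrypt(c) for c in device_counts]

# The untrusted edge server sums the ciphertexts without ever decrypting them
encrypted_sum = ciphertexts[0]
for ct in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + ct

# Only the key-holding cloud recovers the aggregate, never the individual values
assert private_key.decrypt(encrypted_sum) == sum(device_counts)
```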
These techniques enable secure and privacy-preserving image processing and analysis on edge devices with limited resources. Li _et al._[168] proposed a novel framework for privacy-preserving computing that utilizes lightweight permutation-substitution encryption and homomorphic encryption on end devices. This edge-assisted framework offloads the computation, communication and storage burden while ensuring data security. Akkaya _et al._[169] proposed performing background subtraction to obtain the foreground to transmit, which reduces both the size of the transmitted data and the computational cost of applying homomorphic encryption. In this way, the receiver can aggregate the background and foreground and perform further analytics based solely on the encrypted data. Ma _et al._[181] presented a privacy-preserving motion detection algorithm for HEVC-compressed videos that operates in the compressed domain. It can detect the coarse-grained shapes of moving objects and estimate their motion trajectories without decoding the video. By searching in the compressed domain, the algorithm preserves the compression efficiency of the video codec without incurring extra transmission bandwidth or storage overhead. _(ii) Differential privacy-empowered solution._ Without the high computational burden of homomorphic operations, differential privacy provides a lightweight solution for the federated learning paradigm. A differential privacy-empowered solution adds zero-mean "noise" to the trained parameters using a randomized, differentially private mechanism such as the Gaussian mechanism. Hu _et al._[175] proposed _FedEVA_, a distributed training framework for edge video analytics that protects user privacy with fast convergence rates. The framework applies local differential privacy (LDP) to user updates before the gradients are sent to the parameter server, which then updates the neural network model based on the perturbed gradients. Experimental results show that the proposed framework ensures privacy preservation while maintaining the same convergence rate. #### VI-B2 Privacy-preserving inference We have seen that model partition technologies can extract abstract features of raw video data at the edge layer, while further analytics tasks are completed in the cloud. However, input recovery attacks can occur during inference, aiming to recover raw image data from image features. _Privacy-preserving inference_ focuses on resisting input recovery attacks during the model inference phase. Chi _et al._[182] proposed a framework that ensures user data privacy during the model inference stage, when users utilize their data to obtain classification results. This framework addresses the potential for unauthorized access to privacy-sensitive information, such as encoded information about previous layers, which may occur due to the presence of multiple intermediate layers in the output of a deep neural network. Osia _et al._[183] introduced a method to manipulate the extracted features by altering the training phase when applying the Siamese network [184], together with a noise-addition mechanism for improving privacy protection. Moreover, they applied transfer learning and deep visualization techniques to quantify the privacy guarantees of their approach. Similarly, Osia _et al._[174] proposed a novel approach that leverages an edge device to run the initial layers of a neural network to protect user privacy.
The output is then sent to the cloud, which processes the remaining layers and produces the final results. To further improve privacy protection, they employed Siamese [184] fine-tuning to ensure that only the information necessary for the main task is contained on the user's device, thus preventing any secondary inference on the data. Xu _et al._[173] presented a lightweight and unobtrusive approach to obfuscating the inference data at user devices. The edge servers only need to execute a lightweight neural network to obfuscate the inference data, implying that the network can be easily deployed on a resource-constrained edge server or device with little compute overhead. ### _Privacy-preserving Video Storage_ Reliable data storage is important for video analytics tasks, especially for video retrieval applications. Privacy concerns arise from sending sensitive data directly to the cloud with a lack of user control. More and more end devices can collect high-definition video content with large data sizes, which inevitably causes a storage problem. Unlike mobile devices, edge and cloud servers have more powerful storage and computing capabilities. However, pure video storage on remote servers risks leaking data privacy. Davies _et al._[176] proposed a software solution called Privacy Mediator, which sits in the same administrative domain as the end devices and can therefore provide a reliable data storage service. OpenFace [158] ensures stronger privacy protection in a similar way by applying a trusted edge server to perform video denaturing and provide edge-based data storage, which keeps private data away from unreliable network transmission. To protect privacy, Neff _et al._[185] proposed REVAMP2T, which does not store or transfer any image data across the network. The edge server destroys each image as soon as it is processed. Instead, the system works on an encoded feature representation of an individual, which has no meaning outside of the REVAMP2T system and cannot be interpreted by humans. However, it is important to utilize the computing power of the cloud when a task has high computational complexity. Wang _et al._[177] proposed a three-layer storage architecture in which edge servers offer a computing and storage service while the rest of the data is transmitted to the cloud. With the proposed storage architecture, private data cannot be retrieved even when a cloud storage service is used. Similarly, Xiao _et al._[178] fully utilize the storage space of edge and cloud servers by proposing a hierarchical edge computing architecture in which video frames are divided into three parts. Specifically, the most significant bits of key frames are stored locally under full control, while the least significant bits are encrypted before being sent to the edge servers. Finally, non-key frames are compressed and encrypted before they are transmitted to the cloud. This architecture makes full use of the different levels of storage space and also provides a more fine-grained level of privacy protection. ## VII Issues and Future Research Directions This section presents essential open issues and future research directions in the area of edge-based video analytics. ### _Efficient Video Compression Methods_ In the design of video transmission, it has been shown that efficient video compression can tremendously reduce bandwidth consumption.
However, the relationship between video transmission efficiency and video analytics accuracy has not been effectively characterized. For example, when a video is coded via super-resolution technology so that it can be transmitted without consuming much bandwidth, how can we quantitatively predict the video analytics performance, e.g., accuracy? ### _5G/6G Empowered Video Analytics_ In video analytics applications, the focus is usually on the moving objects in the captured frames. However, use cases in which the cameras themselves are in motion have not received enough attention. Although some preliminary works have focused on scenarios where cameras are installed on drones, challenges remain, especially for emerging 6G technologies that support satellite services. In future 6G service scenarios, video analytics applications will process not only the video streams captured by cameras deployed at stores, crossroads, and places all around cities, but also those from cameras installed on satellites. Denby _et al._[186] proposed an orbital edge computing system that enables on-board edge computing at each camera-equipped nano-satellite, allowing for local processing of sensed data when downlinking is not feasible. A coupled consideration of all orbit parameters, physical models, and ground station positions is urgently needed to trigger data collection, predict energy availability, conduct task offloading, and execute video analytics tasks. Current works generally focus on video analytics use cases and service scenarios for civil use. Most researchers develop systems by balancing the trade-off between accuracy and latency. However, some dedicated industries and applications place stringent requirements on both accuracy and latency, e.g., the video analytics of track safety monitoring for high-speed railways. Thus, a dedicated edge-based video analytics architecture is urgently required for these kinds of applications. ### _Interactive Video Analytics System_ Augmented reality (AR) technology allows interactive elements to be added over real-world views for specific purposes. However, in current AR, digital content is overlaid onto real-world views without being directly anchored to real-world elements, which means it cannot interact with them. To overcome this limitation, AR researchers must develop techniques that allow digital content to interpret and respond to users' head movements and body gestures dynamically, enabling a more interactive and immersive AR experience. ## VIII Conclusion Edge-based video analytics has emerged as one of the most widely adopted implementations for various smart services, entertainment, safety, and security. This survey has presented an overarching taxonomy that covers key aspects of edge-based video analytics with respect to use cases, architectures, techniques, resource management, and security and privacy. Some recommendations on future research issues and directions were also provided. In particular, we have identified that edge-based solutions have the following main advantages. _Firstly_, a low response time is urgently required for most video analytics applications. Local execution (on end devices) is harshly restricted by limited computation resources, whereas tasks executed in the cloud are rarely limited by computation resources.
Nevertheless, video content transmission requires high bandwidth, which is hard to achieve with cloud computing. Edge computing brings computation and communication closer to the task source and improves response times. _Secondly_, located between end devices and the remote cloud, edge servers provide a new dimension of dynamic resource allocation that can promote system optimization. The resource provisioning scalability of edge servers provides greater potential to achieve better service performance. _Thirdly_, edge servers can provide better security and privacy guarantees. Due to storage limitations, video content cannot all be stored locally, and edge servers are a good candidate for providing privacy-preserving storage solutions. Compared to video collection, analytics, and storage in the cloud, edge-based solutions can provide better privacy preservation due to their distributed nature. As a result, edge-based video analytics solutions can be leveraged as a general framework to support a variety of privacy-preserving video analytics applications.
2301.12121
Self-driven Hybrid Atomic Spin Oscillator
A self-driven hybrid atomic spin oscillator is demonstrated in theory and experiment with a vapor Rb-Xe dual-spin system. The raw signal of Rb spin oscillation is amplified, phase-shifted and sent back to drive the Xe spins coherently. By fine tuning the driving field strength and phase, a self-sustaining spin oscillation signal with zero frequency shift is obtained. The effective coherence time is infinitely prolonged beyond the intrinsic coherence time of Xe spins, forming a hybrid atomic spin oscillator. Spectral analysis indicates that a frequency resolution of 13.1 nHz is achieved, enhancing the detection sensitivity for magnetic field. Allan deviation analysis shows that the spin oscillator can operate in continuous wave mode like a spin maser. The prototype spin oscillator can be easily implanted into other hybrid spin systems and enhance the detection sensitivity of alkali metal-noble gas comagnetometers.
Erwei Li, Qianjin Ma, Guobin Liu, Peter Yun, Shougang Zhang
2023-01-28T08:17:36Z
http://arxiv.org/abs/2301.12121v2
# Self-driven Hybrid Atomic Spin Oscillator ###### Abstract A self-driven hybrid atomic spin oscillator is demonstrated in theory and experiment with a vapor Rb-Xe dual-spin system. The raw signal of Rb spin oscillation is amplified, phase-shifted and sent back to drive the Xe spins coherently. By fine tuning the driving field strength and phase, a self-sustaining spin oscillation signal with zero frequency shift is obtained. The effective coherence time is infinitely prolonged beyond the intrinsic coherence time of Xe spins, forming a hybrid atomic spin oscillator. Spectral analysis indicates that a frequency resolution of 13.1 nHz is achieved, enhancing the detection sensitivity for magnetic field. Allan deviation analysis shows that the spin oscillator can operate in continuous wave mode like a spin maser. The prototype spin oscillator can be easily implanted into other hybrid spin systems and enhance the detection sensitivity of alkali metal-noble gas magnetometers. + Footnote †: These authors contributed equally to this work and should be regarded as co-first authors.
The raw signal of Rb spin oscillation is amplified, phase shifted and sent back to drive the Xe spins as an ac magnetic field. The driving field strength is determined by the gain factor \(G\) and the driving field is in-phase or out of phase depending on the phase shift \(\theta\). The comagnetometer works in two modes: open-loop mode (\(G\)=0) and close-loop mode (\(G\neq\)0). In close-loop mode, the self-driving comagnetometer can work in two different states depending on the magnitude of \(G\). In classical oscillator electronics, a close-loop system becomes self-oscillating when the product of the gain and the feedback approaches one. In simulation, we found a similar critical condition for the close-loop comagnetometer to become self-oscillating, which in this case is described as \[10GM_{0}^{\mathrm{Rb}}=M_{0}^{\mathrm{Xe}}, \tag{2}\] meaning the self-oscillation can be triggered when the driving field strength is about one tenth of the initial Xe spin magnetization. This is easier to realize than in the classical self-oscillating oscillator, which can be attributed to the long intrinsic coherence time of the Xe spins. Generally, the self-driving field strength can be divided into two regions: 1) When \(10GM_{0}^{\mathrm{Rb}}\leq M_{0}^{\mathrm{Xe}}\), the self-driving field is said to be weak since the Xe spin oscillation decays exponentially as usual; 2) When \(10GM_{0}^{\mathrm{Rb}}\geq M_{0}^{\mathrm{Xe}}\), the self-driving effect becomes strong and a self-sustaining spin oscillation emerges, with an effective coherence time far longer than the intrinsic spin relaxation time of \({}^{129}\)Xe spins under typical temperature and buffer gas conditions, as shown by the two insets in Fig.1. As the alkali metal atomic spin magnetization is \(\sim 10^{3}\) times weaker than the noble gas atomic spin magnetization under typical experimental conditions [22], the critical \(G\) value is around a few hundred. The phase is also critically important in realizing the self-sustaining spin oscillation.
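As a toy illustration of the weak and strong driving regimes discussed above, the following sketch integrates simplified single-species Bloch equations with a feedback field parameterized by a gain \(g\) and phase \(\theta\). It is not the full coupled Rb-Xe model of Eq.1: the relaxation times, the gain value, and the realization of the phase shift as a quadrature combination of \(M_x\) and \(M_y\) are illustrative assumptions, and the exact self-sustaining phase window depends on sign conventions.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 2 * np.pi * 1178.0   # 129Xe gyromagnetic ratio, rad/(s*gauss) (11.78 MHz/T)
B0 = 0.030                   # static field along z, gauss (~30 mG -> nu0 ~ 35 Hz)
T1, T2 = 30.0, 20.0          # assumed longitudinal/transverse relaxation times, s
M0 = 1.0                     # equilibrium magnetization (normalized)

def bloch(t, M, g, theta):
    Mx, My, Mz = M
    # Feedback field along y: a phase-shifted copy of the transverse signal,
    # realized here as a linear combination of the two quadratures Mx and My.
    b_fb = g * (Mx * np.cos(theta) + My * np.sin(theta))
    dMx = gamma * (My * B0 - Mz * b_fb) - Mx / T2
    dMy = gamma * (-Mx * B0) - My / T2
    dMz = gamma * (Mx * b_fb) - (Mz - M0) / T1
    return [dMx, dMy, dMz]

M_init = [0.5, 0.0, np.sqrt(0.75)]   # spins tipped by 30 degrees
for g, label in [(0.0, "open loop "), (5e-5, "close loop")]:
    sol = solve_ivp(bloch, (0.0, 60.0), M_init,
                    args=(g, np.deg2rad(135.0)), rtol=1e-8, atol=1e-10)
    amp = np.hypot(sol.y[0, -1], sol.y[1, -1])
    # The open loop decays as exp(-t/T2); with enough gain and a phase inside the
    # self-sustaining window the oscillation settles onto a maser-like limit cycle.
    print(f"{label}: transverse amplitude after 60 s = {amp:.3f}")
```

Scanning \(\theta\) over a grid in this toy model yields a finite self-sustaining window analogous to Eq.3, which is what the simulations described next map out for the full model.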
Simulation results show that the self-sustaining oscillation can persist over a phase range determined by \[\theta_{0}<\theta<180^{\circ}-\theta_{0}, \tag{3}\] as indicated by the two sharp structures in the frequency-phase (dispersions) and amplitude-phase (steps) diagrams in Fig.1. The greater the \(G\), the smaller the \(\theta_{0}\), meaning a wider phase range for self-sustaining oscillation exists given a stronger self-driving field. As shown in Fig.1(b), the frequency shifts almost linearly with \(\theta\) over most of the self-sustaining oscillation phase range given by Eq.3, except around the critical phase points \(\theta_{0}\) and \(180^{\circ}-\theta_{0}\). The shift can be explained as follows: due to the well-known Bloch-Siegert shift effect in NMR spectra [25], the mismatch of the initial phase values between the driving field signal and the open-loop signal can lead to an accumulation effect, which may gradually change the close-loop frequency. One may therefore consider the self-driving comagnetometer unsuitable for precision measurement purposes despite the potential gain in signal amplification and coherence time. However, by numerically solving Eq.1, we find a phase point where the spin oscillation frequency shift vanishes. As shown in Fig.1(a), the frequency of the close loop (blue circles) crosses that of the open loop (red line), implying that the frequency shift vanishes at certain phase values. Due to the Bloch-Siegert shift effect, this zero frequency shift (ZFS) phase \(\theta_{\mathrm{ZFS}}\) is several degrees below \(90^{\circ}\), which is the theoretically in-phase driving value considering the \(y\)-axis excitation and \(x\)-axis detection configuration. The position of \(\theta_{\mathrm{ZFS}}\) also depends on \(G\): the larger the \(G\), the further \(\theta_{\mathrm{ZFS}}\) lies from \(90^{\circ}\). The existence of the ZFS phase is important as we have to rule out any possible non-systematic frequency shift sources [8] to find the fundamentally unknown spin-dependent interactions. To test the above simulation results, we constructed a Rb-Xe comagnetometer setup and specially designed the driving electronics with tunable gain and phase parameters, as depicted in Fig.2. The main part is a typical Rb-Xe comagnetometer, with a vacuum atomic vapor cell containing a Rb-Xe gas mixture at high temperature (\(\sim\)120 \({}^{\circ}\)C) as the atomic spin medium. A circularly polarized 795 nm laser along the \(z\) direction shines into the cell to align the Rb atom spins. Figure 1: (color online) Simulated self-driving spin oscillation signal in close-loop mode (a) and its frequency response as a function of the self-driving phase shift \(\theta\) at a fixed strong gain \(G\)=1000 (b). The cross points between the open-loop curve (red line) and the close-loop curve (blue circles) indicate that there are several phase points where zero frequency shift (ZFS) occurs despite the presence of the Bloch-Siegert shift effect under a strong off-resonance self-driving field. The Rb spin polarization is then transferred to the Xe atom spins via rapid spin-exchange collisions [26]. The polarized Xe spins drive the Rb spins in a classical way [24], and finally a linearly polarized 780 nm laser along the \(x\) direction reads out the Rb spin dynamics over time as the comagnetometer's original output signal. The driving electronics is basically a combination of a band-pass amplifier, a phase shifter and a current driver.
Two factors affect the design of the driving electronics. First, the Rb spins respond more strongly to the Xe spins at lower frequencies. Second, for typical applications including magnetic field and rotation rate measurements, the Rb-Xe comagnetometer usually works in the ultra-low frequency range. Both factors constrain the target spin oscillation to the range from several hertz to a few tens of hertz. Unfortunately, the \(1/f\) law of noise spectra indicates that the spin oscillation signal may be easily disturbed by strong amplitude and phase noise in this frequency range. While the gain \(G\) has a high tuning resolution, the phase resolution \(\Delta\theta\) is limited to a few degrees, although the driving electronics is custom made with special attention to noise reduction. In one typical experimental cycle, we first break the link between the Rb-Xe comagnetometer and the driving electronics and record an open-loop spin oscillation signal, as shown in Fig.3(a). Then we restore the link, set the driving output at a fixed gain, change the phase point by point with an accuracy of a few degrees, and record the spin oscillation signals accordingly. Finally, we fix the phase to the ZFS point, where the close-loop spin oscillation frequency coincides with the open-loop one, and record a long-time spin oscillation signal, as shown in Fig.3(b). With standard Fourier analysis and a data fitting process, we extract the spin oscillation frequency versus phase and find that it agrees with the simulation results in Fig.1(b). As shown in Fig.3(c), a spectral linewidth of 0.04 Hz with SNR\(\sim\)1200 and a spectral linewidth of 0.53 mHz with SNR\(\sim\)40600 are obtained for the open-loop and close-loop signals, respectively. Compared to open-loop operation, the frequency resolution of the \({}^{129}\)Xe spin resonance was improved by a factor of \(\sim\)2540, from 33.3 \(\mu\)Hz down to 13.1 nHz, approaching the state-of-the-art accuracy [3]. For \({}^{129}\)Xe with gyromagnetic ratio 11.78 MHz/T, this level of frequency resolution leads to a magnetic field resolution of \(\approx\)1.11 fT, sufficient for applications such as the detection of the human brain's magnetic field [27]. It shall be noted that the center frequencies of the close-loop and open-loop signals do not coincide exactly with each other due to the limited tuning resolution of the rheostat-based phase control in the present experiment. This can easily be improved with finer phase tuning techniques, such as direct digital synthesis (DDS). We have observed the spin oscillation signal with a real-time oscilloscope for hours and found no sign of decay at all, indicating that the effective coherence time of the spin oscillation is probably infinite. Once the loop is opened, the spin oscillation starts to decay exponentially again within the intrinsic coherence time of the noble gas spins. In this sense, the self-driving comagnetometer can be taken as a hybrid atomic spin oscillator, preferably working in continuous wave mode like a conventional laser or maser. To test the performance of the hybrid spin oscillator in long-term operation, we recorded the spin oscillation continuously for 10000 seconds in close-loop operation and executed a standard Allan deviation analysis for the last 9000 seconds, as shown in Fig.4. Figure 2: Experimental schematic of the self-driving Rb-Xe spin oscillator. It consists of a typical pump-probe Rb-Xe comagnetometer (left) and driving electronics with tunable gain and phase parameters (right). The pump and probe laser powers are 54 mW and 3 mW, respectively. The static field \(B_{z}\) is \(\sim\)30 mG, corresponding to a \({}^{129}\)Xe spin oscillation frequency \(\nu_{0}\)\(\sim\)35 Hz. Figure 3: (color online) Spin oscillation signals of the self-driving Rb-Xe spin oscillator in open-loop (a) and close-loop (b) modes, and the corresponding Fourier spectra comparison (c). A linewidth narrowing by a factor of 75 and an SNR enhancement by a factor of 34 are achieved when switching from open-loop to close-loop mode. The open-loop Fourier spectrum in (c) was amplified ten times to increase visibility. The frequency instability of the spin oscillation reaches 2.99 \(\mu\)Hz at 2048 seconds averaging time, equivalent to a bias instability of \(\sim\)3.87 \({}^{\circ}\)/h for gyroscopic measurement. With respect to the 13.1 nHz frequency resolution, the 2.99 \(\mu\)Hz frequency instability is relatively high. We attribute this deterioration to various frequency drift sources. For example, over two days we observed a correlation between the drift of the spin oscillation frequency (at the 10\({}^{-4}\) level) and the drift of the heater power for the vapor cell, which was a result of the slowly varying residual magnetic field produced by the leakage current of the heater wires. Besides, the pump laser power is in free-running mode, whose fluctuation can cause a significant fluctuation (at the 1% level) of the optical pumping rate and thus of the alkali atom spin magnetization, which finally causes a frequency shift of the noble gas spin oscillation. There are other factors affecting the medium- to long-term frequency instability, such as the drift of the current feeding the bias field coils. As indicated by Fig.1(a), the fluctuation of the unlocked phase can also lead to a frequency shift with a slope of \(\delta\nu/\delta\theta\)\(\sim\)10\({}^{-4}\) Hz/\({}^{\circ}\). A possible improvement is to use a phase-lock method [28] to lock the driving phase at the ZFS point. Considering the infinite effective coherence time, the self-driving spin oscillator has the potential to reach a frequency instability at the nHz level in long-term running. As shown by Fig.4, \(\sigma_{\tau}\) continues to go down at 2048 seconds, indicating that a better frequency instability can be reached given a longer running time. While preparing the manuscript, we noticed the work by Jiang et al.[29], which presents similar phenomena. We emphasize the following important differences between their work and ours. First, we built the spin oscillator with a simpler prototype setup; neither an external driving field nor parametric modulation (together with lock-in detection) is used in the experiment, and no significant SNR loss is observed. Second, we derived the self-oscillating conditions for the Rb-Xe comagnetometer by developing a different theoretical framework, in particular the introduction of the \(G\) and \(\theta\) parameters, which are conceptually convenient for understanding and easier to use for guiding experiments. Finally, we found the existence of the zero frequency shift phase in theory and experiment, which is important for various applications in precision measurement physics.
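For reference, the overlapping Allan deviation used in such an analysis takes only a few lines of numpy. The sketch below is generic: the synthetic white-frequency-noise series stands in for the measured frequency record and is purely illustrative.

```python
import numpy as np

def overlapping_allan_deviation(y, tau0, m_list):
    """Overlapping Allan deviation of fractional-frequency data y sampled every tau0 s."""
    y = np.asarray(y, dtype=float)
    taus, adevs = [], []
    for m in m_list:
        if 2 * m >= len(y):
            break
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # all overlapping m-averages
        diffs = ybar[m:] - ybar[:-m]                         # adjacent-average differences
        taus.append(m * tau0)
        adevs.append(np.sqrt(0.5 * np.mean(diffs ** 2)))
    return np.array(taus), np.array(adevs)

rng = np.random.default_rng(0)
y = 1e-6 * rng.standard_normal(9000)   # 9000 s of 1 s samples, white frequency noise
taus, adevs = overlapping_allan_deviation(y, tau0=1.0, m_list=[2 ** k for k in range(12)])
# For white frequency noise the deviation falls as tau**(-1/2) on a log-log plot.
```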
In principle, the demonstrated self-driving spin oscillator scheme can also be applied to other dual-spin systems, such as the K-\({}^{3}\)He comagnetometer, and to tri-spin magnetometers, such as the Rb-\({}^{3}\)He-\({}^{129}\)Xe or Rb-\({}^{129}\)Xe-\({}^{131}\)Xe configuration [3; 13], with careful design of dual-channel driving electronics for the two noble gas spin oscillation frequencies. In conclusion, we have demonstrated theoretically and experimentally a self-driving spin oscillator based on the Rb-Xe comagnetometer. A self-sustaining oscillation with prolonged coherence time can be realized under strong self-driving conditions. The spin oscillation frequency shift can vanish at certain phase points despite the Bloch-Siegert shift effect. The frequency resolution of the hybrid spin oscillator reaches the level of several nHz, potentially enhancing the detection sensitivity of magnetic or gyroscopic measurements with a simple apparatus. With further improvement of the frequency instability, the hybrid atomic spin oscillator can work like a laser or maser in long-term operation, promising for various scientific and practical applications, such as searching for new spin-dependent interactions and monitoring Earth's rotation. The author Guobin Liu would like to thank Dong Sheng and Min Jiang from the University of Science and Technology of China for helpful discussions. The authors also appreciate the financial support of the Chinese Academy of Sciences under grant no. E209YC1101 and of the National Time Service Center under grant no. E024DK1S01.
2308.05156
Discovery of a Split Stellar Stream In the Periphery of the Small Magellanic Cloud
I report the discovery of a stellar stream (Sutlej) using Gaia DR3 proper motions and XP metallicities located ~15 degrees north of the Small Magellanic Cloud (SMC). The stream is composed of two parallel linear components ("branches") approximately ~8 x 0.6 degrees in size and separated by 2.5 degrees. The stars have a mean proper motion of (pmra,pmdec)=(+0.08 mas/yr,-1.41 mas/yr) which is quite similar to the proper motion of stars on the western side of the SMC. The color magnitude diagram of the stream stars has a clear red giant branch, horizontal branch, and main sequence turnoff that is well-matched by a PARSEC isochrone of 10 Gyr, [Fe/H]=-1.8 at 32 kpc and a total stellar mass of ~33,000 Msun. The stream is spread out over an area of 9.6 square degrees and has a surface brightness of 32.5 mag/arcsec^2. The metallicity of the stream stars from Gaia XP spectra extend over -2.5 < [M/H] < -1.0 with a median of [M/H]=-1.8. The tangential velocity of the stream stars is 214 km/s compared to the values of 448 km/s for the Large Magellanic Cloud and 428 km/s for the SMC. While the radial velocity of the stream is not yet known, a comparison of the space velocities using a range of assumed radial velocities, shows that the stream is unlikely to be associated with the Magellanic Clouds. The tangential velocity vector is misaligned with the stream by ~25 degrees which might indicate an important gravitational influence from the nearby Magellanic Clouds.
David L. Nidever
2023-08-09T18:00:04Z
http://arxiv.org/abs/2308.05156v1
# Discovery of a Split Stellar Stream In the Periphery of the Small Magellanic Cloud ###### Abstract I report the discovery of a stellar stream (Sutlej) using _Gaia_ DR3 proper motions and XP metallicities located \(\sim\)15\({}^{\circ}\) north of the Small Magellanic Cloud (SMC). The stream is composed of two parallel linear components ("branches") approximately \(\sim\)8\({}^{\circ}\times 0.6^{\circ}\) in size and separated by 2.5\({}^{\circ}\). The stars have a mean proper motion of (\(\mu_{\rm RA}\),\(\mu_{\rm DEC}\))=(+0.08 mas yr\({}^{-1}\),\(-\)1.41 mas yr\({}^{-1}\)), which is quite similar to the proper motion of stars on the western side of the SMC. The color magnitude diagram of the stream stars has a clear red giant branch, horizontal branch, and main sequence turnoff that is well-matched by a PARSEC isochrone of 10 Gyr, [Fe/H]=\(-\)1.8 at 32 kpc and a total stellar mass of \(\sim\)33,000 M\({}_{\odot}\). The stream is spread out over an area of 9.6 square degrees and has a surface brightness of 32.5 mag arcsec\({}^{-2}\). The metallicity of the stream stars from _Gaia_ XP spectra extends over \(-\)2.5 \(\leq\) [M/H] \(\leq-\)1.0 with a median of [M/H]\(=-\)1.8. The tangential velocity of the stream stars is 214 km s\({}^{-1}\), compared to the values of 448 km s\({}^{-1}\) for the Large Magellanic Cloud and 428 km s\({}^{-1}\) for the SMC. While the radial velocity of the stream is not yet known, a comparison of the space velocities using a range of assumed radial velocities shows that the stream is unlikely to be associated with the Magellanic Clouds. The tangential velocity vector is misaligned with the stream by \(\sim\)25\({}^{\circ}\), which might indicate an important gravitational influence from the nearby Magellanic Clouds. keywords: Galaxy: structure - Galaxy: halo - Local Group ## 1 Introduction According to the currently-favored hierarchical galaxy formation paradigm (e.g., Peebles, 1965; Press and Schechter, 1974), galaxies started small and grew through merger events and the accretion of smaller systems, most of which were tidally stripped apart. Starting over three decades ago, mounting evidence of stellar streams in the Milky Way's (MW) halo was discovered, with the Sagittarius stream (e.g., Ibata et al., 2001; Newberg et al., 2002) being the most prominent, with its two tidal tails wrapping around the MW. With the advent of deep, wide-field, multi-band photometric surveys, the number of discovered stellar streams rose quickly, with the Sloan Digital Sky Survey (SDSS; York, 2000) leading the way with the "Field of Streams" that included the Orphan stream, Anticenter stream, and others (Belokurov et al., 2006). Some of the most impressive streams are those produced by disrupted globular clusters, which are extremely thin but can stretch over many tens of degrees (e.g., Grillmair and Dionatos, 2006a,b). See Newberg and Carlin (2016) for a more detailed review of stellar streams. Not only are observed stellar streams a striking confirmation of the violent origin of galaxies through mergers and accretion events, but they can also be used as tracers to probe the Galaxy's mass and constrain the 3-D structure of the gravitational potential (e.g., Johnston et al., 2005; Koposov et al., 2010). One of the most effective search algorithms is the "matched-filter" method, which selects all stars lying close to an old isochrone in the color-magnitude diagram (CMD) at a certain distance.
A range of distances is searched and the resulting on-sky stellar density maps are inspected for linear features. Often the filters are heavily weighted towards the blue, main-sequence turnoff portion of the isochrone, which has a large number of stars compared to the MW foreground. Until recently, most stellar stream work was confined to the northern hemisphere due to the predominance of large surveys like SDSS, PS1 (Pan-STARRS 1, Chambers et al., 2016) and ATLAS (Tonry et al., 2018) that covered that region of the sky. However, with the advent of the Dark Energy Camera (DECam; Flaugher et al., 2015), the situation changed dramatically. Using the deep, multi-band DES photometric catalog (Dark Energy Survey Collaboration et al., 2016), Shipp et al. (2018) discovered 11 new stellar streams in a southern "tour-de-force" much like the northern SDSS "Field of Streams". The DECam Local Volume Exploration Survey (DELVE; Drlica-Wagner et al., 2021) is systematically covering the entire southern sky with DECam to search for dwarf galaxies and stellar streams, and was recently used to discover the Jet stream (Ferguson et al., 2022). Even though deep, multi-band photometry has been the mainstay of stellar stream searches for decades, other techniques can also be extremely effective. The second data release of _Gaia_ (DR2; Gaia Collaboration et al., 2018) produced precise proper motions for over a billion stars. This allowed for a kinematic selection of stellar streams. Ibata et al. (2019) used a new systematic search method (STREAMFINDER; Malhan and Ibata, 2018) that takes advantage of the kinematics to discover eight new stellar streams throughout the MW, including in the MW mid-plane, which has historically been avoided by stream searches due to the high number of MW disk stars that can confuse search algorithms and generate many false positives. In the third data release of _Gaia_ (DR3; Gaia Collaboration et al., 2022), the low-resolution \(BP\)/\(RP\) (XP) spectra of 220 million stars were released. While the released stellar parameters (Andrae et al., 2022) were not as reliable as originally anticipated, Andrae et al. (2023) determined precise metallicities as well as effective temperatures and surface gravities for 175 million stars using XGBoost trained on APOGEE (Majewski et al., 2017) spectra and AllWISE photometry (Cutri et al., 2021). In this paper, I report on the discovery of a new stellar stream near the Small Magellanic Cloud using the _Gaia_ DR3 proper motions and XP metallicities. This paper is structured as follows. Section 2 discusses the data and catalogs, while Section 3 outlines the discovery and characterizes the main stream properties. The main results are presented in Section 4 and the implications of the results are discussed in Section 5. Finally, the main conclusions are summarized in Section 6. ## 2 Data For this project, I used solely the _Gaia_ DR3 dataset. DR3 contains astrometric information, including proper motions, for 1.46 billion sources and three-band photometry (\(G\), \(BP\), and \(RP\)) for 1.54 billion sources. In addition, it contains object classifications from the \(BP\)/\(RP\) (XP) spectra for 470 million sources, although the stellar parameters (T\({}_{\rm eff}\), \(\log g\), and [M/H]) in the official data release have some systematics that make the values unsuitable for most scientific analyses. Instead, I use the stellar parameters from the Andrae et al.
(2023) catalog, which used a machine-learning model (XGBoost; Chen & Guestrin, 2016) trained on APOGEE (Majewski et al., 2017; Abdurro'uf et al., 2022) and other data to derive primarily [M/H], but also T\({}_{\rm eff}\) and \(\log g\) along the way, for 175 million stars from the averaged XP spectra released in _Gaia_ DR3. Note that Zhang et al. (2023) released a similar catalog but used a generative model to derive T\({}_{\rm eff}\), \(\log g\), [M/H], and extinction for 220 million stars from the XP spectra. ## 3 Discovery and Characterization While investigating the _Gaia_ DR3 data for substructure in the periphery of the Magellanic Clouds (MCs) and looking through a variety of longitude versus proper motion figures in narrow ranges of metallicity, I serendipitously discovered an overdensity of stars. Figure 1 shows an example of the type of figure that I was inspecting. It shows the distribution of stars in the Andrae et al. catalog with [M/H]\(<-1.7\) in the MC region in the \(\mu_{\rm B,MS}\) vs. \(L_{\rm MS}\) plane1, color-coded by metallicity. While the figure is quite "messy", a linear feature is visible in the bottom left-hand corner highlighted by the green ellipse (\(-40^{\circ}\leq L_{\rm MS}\leq-20^{\circ}\), \(\mu_{\rm B,MS}\approx-0.3\) mas yr\({}^{-1}\)). Further inspection showed that this feature is not spurious, but rather a real structure elongated spatially not far from the Small Magellanic Cloud (SMC). Footnote 1: Where \(L_{\rm MS}\)/\(B_{\rm MS}\) refer to the Magellanic Stream coordinate system defined in Nidever et al. (2008), and \(\mu_{\rm L,MS}\)/\(\mu_{\rm B,MS}\) are the corresponding proper motions. Figure 2 shows the proper motion distribution of the stars in that spatial region (\(-40^{\circ}<L_{\rm MS}<-23^{\circ},-18^{\circ}<B_{\rm MS}<-5^{\circ}\)), exhibiting an even stronger overdensity at (\(\mu_{\rm L,MS}\), \(\mu_{\rm B,MS}\)) = (+1.4 mas yr\({}^{-1}\), \(-1.3\) mas yr\({}^{-1}\)). The stars selected by the red ellipse are plotted on the sky in Figure 3 as red and blue filled circles. The background image shows the density of MC stars selected via proper motion, XP T\({}_{\rm eff}\)/\(\log g\), and a red giant branch (RGB) box in the color magnitude diagram (CMD). The new stellar structure is composed of two nearly-parallel stellar streams ("branches") elongated on the sky by roughly \(\sim\)8\({}^{\circ}\) \(\times\) 0.5-1\({}^{\circ}\) and separated from each other by \(\sim\)2.5\({}^{\circ}\). At its closest point, the stream is only \(\sim\)1.5\({}^{\circ}\) from the SMC Northern Overdensity (SMCNOD; Pieres et al., 2017) and \(\sim\)11\({}^{\circ}\) from the center of the SMC. This obviously begs the question of whether the new stream is associated with the SMC, which I explore in depth in Section 5. I will follow the naming convention of Shipp et al. (2018), who named their stellar streams in this region of the sky after rivers in Pakistan and India, in particular after the Indus river and its tributaries Jhelum, Chenab, and Ravi (in geographical order from northwest to southeast). I shall name the new stream "Sutlej", after the remaining main tributary of the Indus river, which continues the geographical progression of the tributaries in northern India to the southeast and is nicely mirrored on the sky, as the new stream is to the east of Ravi and Chenab (see Fig. 10). Figure 1: Density of Magellanic _Gaia_ DR3 XP stars with [M/H]\(\leq\)\(-1.7\) in the \(\mu_{\rm B,MS}\) versus \(L_{\rm MS}\) plane. The overdensity of stars with \(-35^{\circ}\leq L_{\rm MS}\leq-20^{\circ}\) and \(\mu_{\rm B,MS}\approx-0.3\) mas yr\({}^{-1}\) is highlighted with the green ellipse.
The overdensity of stars with \(-35^{\circ}\leq L_{\rm MS}\leq-20^{\circ}\) and \(\mu_{\rm B,MS}\approx-0.3\) mas yr\({}^{-1}\) is highlighted with the green ellipse.

Figure 2: Density of Magellanic giant stars in \(\mu_{\rm L,MS}\) versus \(\mu_{\rm B,MS}\). The stream stars are indicated by the red ellipse.

southeast, and is nicely mirrored on the sky as the new stream is to the east of Ravi and Chenab (see Fig. 10).

## 4 Results

The mean proper motion of the stream stars is (\(\mu_{\rm L,MS},\mu_{\rm B,MS}\)) = (+1.4 mas yr\({}^{-1}\), \(-\)0.322 mas yr\({}^{-1}\)). As can be seen in Figure 7, the tangential velocity vector is _not_ aligned with the stream. In fact, it is \(\sim\)25\({}^{\circ}\) misaligned. This might be due to the influence of the nearby LMC and SMC, which have been shown to produce such a misalignment in the Orphan stream (Erkal et al., 2019). Figure 4 shows the color-magnitude diagram (CMD) of all _Gaia_ DR3 stars in the vicinity of the two stream branches. A well-defined red giant branch, blue horizontal branch (BHB), and main-sequence turnoff are apparent. Both stellar branches are well-represented in these features with no large difference visible between them (except for the BHB; see below). The black line shows a [Fe/H]\(=-\)1.8, 10 Gyr, 32 kpc PARSEC isochrone fit by eye to the data. While the CMDs of the two branches look nearly identical, Branch 1 (red) has a horizontal branch that extends 0.25 mag bluer than Branch 2 (blue) does. The distances of the BHB stars can be directly estimated and contrasted with the isochrone distance, and the distances of the two branches can be compared as well. I converted the _Gaia_ photometry to the SDSS system using the photometric transformation equations in the _Gaia_ release documentation.2 The Barbosa et al. (2022) relation was used to compute the BHB absolute magnitude \(M_{\rm g}\) as a function of \(g-r\) color, and the distances were calculated by comparison with the apparent magnitudes. Figure 5 shows the BHB distances of the stream stars, with a mean distance of 32.4 kpc for all stars and 33.0/31.8 kpc for Branch 1/2. This is in excellent agreement with the isochrone-fitting distance of 32 kpc. While Branch 1 shows almost no variation in distance with \(L_{\rm MS}\), the Branch 2 distances increase the farther away they are from the SMC, at a rate of \(\sim\)0.9 kpc per degree.

Footnote 2: [https://gea.esac.esa.int/archive/documentation/GDR2/Data_processing/chap_cu5pho/sec_cu5pho_calibr/ssec_cu5pho_PhotTransf.html](https://gea.esac.esa.int/archive/documentation/GDR2/Data_processing/chap_cu5pho/sec_cu5pho_calibr/ssec_cu5pho_PhotTransf.html)

We can also investigate the metallicities of the brighter stream stars (\(G<17.65\)) using the _Gaia_ DR3 XP metallicities provided by Andrae et al. (2023). Figure 6 shows the metallicity distribution function (MDF) of RGB stars from the stream (blue and red) as well as the SMC (purple) and SMCNOD (green). To provide a smoother representation of the data (similar to a kernel density estimate), each star is represented by a Gaussian with unity amplitude and a FWHM of 0.2 dex. The curve from each group is then divided by the number of stars to produce a density curve, which makes the comparison between the stellar populations easier.

Figure 3: Density of Magellanic giant stars as selected from _Gaia_ DR3 XP. The positions of the two stellar stream branches are indicated by the red (Branch 1) and blue (Branch 2) filled circles.

While the MDFs of the two
stream branches look similar to each other (given the small-number statistics), they are significantly more metal-poor than the majority of the SMC and SMCNOD distributions. This alone is a strong indication that the stream did not originate from the SMC (but see § 5 for more on the origin). In addition, the broad stream MDF, with a FWHM width of \(\sim\)1 dex, suggests that the progenitor was a dwarf galaxy rather than a globular cluster. I estimated the Great Circle Pole for each branch by searching a grid of pole coordinate values \(0^{\circ}\leq\alpha\leq 360^{\circ}\) and \(0\leq\delta\leq+90^{\circ}\) in steps of 1\({}^{\circ}\) and calculating the _rms_ (root-mean-square) of the transformed latitude for each pole. I found the pole coordinates with the lowest _rms_ values and refined the search for higher precision. The pole for Branch 1 is (\(\alpha\),\(\delta\))=(68.7\({}^{\circ}\),16.3\({}^{\circ}\)) and for Branch 2 is (\(\alpha\),\(\delta\))=(73.5\({}^{\circ}\),10.7\({}^{\circ}\)). The FWHM widths in latitude are 0.56\({}^{\circ}\)/0.68\({}^{\circ}\) for Branch 1/2, which at a distance of 32 kpc correspond to 0.31 kpc/0.38 kpc. A summary of the stream properties is given in Table 1. The total number of member stars in _Gaia_ is low in the Sutlej stream. There are only 34 stream stars in the _Gaia_ XP sample, and 80 in the full _Gaia_ DR3 sample down to \(G\)=20.0 (RGB and BHB stars). It is, therefore, worth estimating the stellar mass in the stream. I estimated the total stellar population mass by creating synthetic photometry from a 10 Gyr, [Fe/H]\(=-\)1.8 PARSEC isochrone at 32 kpc with a total stellar mass of 10\({}^{6}\) M\({}_{\odot}\). All RGB and BHB synthetic stars down to a magnitude of \(G\)=20.0 were selected and the same operation performed on the data. This resulted in 80 _Gaia_ stars and 6056 synthetic stars. Scaling the input isochrone mass of 10\({}^{6}\) M\({}_{\odot}\) by 80/6056 gives 13,210 M\({}_{\odot}\). However, by inspecting the two luminosity functions it became clear that they do not match well at the faint end; the observed number of stars does not increase as quickly as expected from the isochrone, likely due to incompleteness.

Figure 4: The color-magnitude diagram of the region around the new stellar streams. A clear red giant branch, horizontal branch, and main-sequence turnoff are visible. A PARSEC isochrone with [Fe/H]\(=-\)1.8, 10 Gyr and a distance of 32 kpc is shown in black.

Figure 5: The distance (kpc) of the BHB stars versus \(L_{\rm MS}\) for the two stream branches. The BHB _Gaia_ \(G\) magnitude is shown on the right-hand side. The average BHB distance of Branch 1 is 33.0 kpc and for Branch 2 is 31.8 kpc. Branch 1 is \(\sim\)1.0 kpc farther away than Branch 2. While Branch 1 does not show much of a distance gradient, the distance of the Branch 2 BHB stars grows larger with increasing angular distance from the SMC, at a rate of \(\sim\)0.9 kpc per degree.

Figure 6: The metallicity distribution function (MDF) of the SMC (purple), the SMCNOD (green) and the stream stars (red and blue) using Andrae et al. (2023) _Gaia_ DR3 XP metallicities. The majority of the stream stars are more metal-poor than both the SMC (purple) and SMCNOD (green) stars, with an average of [M/H]\(=-\)1.65 and a peak around [M/H]\(=-\)1.9.

A more
Finally, I calculate the surface brightness by summing up the flux from all synthetic stars and scaling to the total mass of 33,333 M\({}_{\odot}\). The area of each stream branch is 0.6\({}^{\circ}\times 8^{\circ}\) or 9.6 square degrees combined. The total flux is divided by the area in arcsec squared and converted back to magnitude to obtain 32.3 mag arcsec\({}^{2}\). Another way to calculate the surface brightness is to sum up the flux of the observed stars and then correct for incompleteness. Using the same procedure with the 80 RGB and BHB stars down to \(G\)=20.0 gives a surface brightness of 33.1 mag arcsec\({}^{2}\). However, this is incomplete because of the stars that we are not seeing. We can use the theoretical isochrone to calculate a good estimate for this completeness by calculating the cumulative fraction of total flux as a function of \(G\). At 19 \(\leq G\leq\) 20 mag this is fairly constant with a value of \(\sim\)55%. Applying this correction to the total observed flux gives a surface brightness of 32.5 mag arcsec\({}^{2}\) which is quite close to the 32.3 mag arcsec\({}^{2}\) calculated with the ischrone method above. ## 5 Discussion As previously mentioned, an obvious question, due to its proximity, is whether Suttlej is related to the Magellanic Clouds. The edge of the stream is only a couple of degrees away from the SMCNOD and the two stream branches are elongated almost parallel to the LMC's and SMC's tangential velocity vectors (see Figure 7). In addition, the stream distance of 32 kpc is not too dissimilar to the Magellanic Clouds' distance of 50 kpc (LMC) and 60 (SMC). This is, however, where the similarities end. As the MDF shows (Fig. 6), the stream stars are much more metal-poor than the MCs metallicity distributions (LMC is \(\sim\)0.4 dex more metal-rich than the SMC). The tangential velocity vector is also misaligned with the LMC/SMC's by \(\sim\)25\({}^{\circ}\) (Fig. 7) and its total tangential velocity of 214 km s\({}^{-1}\) is almost a factor of two smaller than the MCs (\(\sim\)430 km s\({}^{-1}\)). And although we do not know Suttlej's radial velocity (RV) yet, we can compare space velocities with the MCs. Figure 8 shows the 3-D space velocities of the stream stars compared to the APOGEE MC stars assuming a wide range of stream RVs. No matter the stream RV, the space velocities are always offset from the MC space velocities by more than \(\gtrsim\)200 km s\({}^{-1}\). Therefore, a Magellanic origin seems unlikely. However, I must point out one interesting spatial pattern in the two Suttlej branches that reminds me of features in the two filaments of the Magellanic Stream (MS) that are quite nearby in the sky. Figure 9 shows the GASS (McClure-Griffiths et al., 2009) HI column density map of the Magellanic System including the Stream and the Suttlej stars (red filled circles). The two MS filaments show structures Figure 7: The tangential motion of the LMC, SMC and the stream stars. The length of the red arrows indicates the magnitude of the velocity. While Suttlej’s tangential velocity vector points roughly in the direction of the MCs, they are different by \(\sim\)25\({}^{\circ}\). In addition, Suttlej’s tangential velocity vector is misaligned with its stream track by \(-\)25\({}^{\circ}\) indicating that it might have been gravitationally perturbed by the MCs. that are roughly \(2^{\circ}\times 10^{\circ}\) in size and offset from each other by a few degrees. This pattern looks very similar to the Sutlej branches. 
In fact, fitting lines to the Sutlej branches and offsetting them both by \(+6^{\circ}\) in \(L_{\rm MS}\) and \(+6^{\circ}\) in \(B_{\rm MS}\) (thin orange lines) shows that they match the length and the orientation of the MS features quite well, although one of them is offset by roughly a degree. Even if Sutlej and these MS features were related, it does not seem realistic that the gas is _leading_ the stars in their orbit. The Price-Whelan 1 star cluster (Price-Whelan et al., 2019; Nidever et al., 2019) was born in the Magellanic Stream Leading Arm 117 Myr ago and has since separated from, and is leading, the gas by \(\sim\)10\({}^{\circ}\). This is understandable and expected due to the ram pressure effects of the MW's hot halo gas on the Leading Arm gas. In fact, this can be used to constrain the density of the hot halo (see § 4.5 of Nidever et al., 2019). However, if the Sutlej branches and the MS linear features are somehow causally related, then it is unlikely that the stars would be trailing the gas. In addition, factoring in all of the discrepancies with an MC origin of Sutlej mentioned above, this linear feature in Sutlej and the MS is likely a curious coincidence. Could the Sutlej stream be related to any of the streams or dwarf galaxies discovered in the MC region by DES, DELVE (Drlica-Wagner et al., 2021) and others? Figure 10 shows the streams (as tabulated by Mateu, 2023) and dwarf galaxies near the MCs with distances of 10 to 65 kpc, color-coded by distance. None of the nearby stellar streams is aligned with Sutlej. Moreover, although there are dwarf galaxies nearby, they are either at larger distances (Phoenix II at 84 kpc, Bechtol et al., 2015; Tucana IV at 47 kpc and Tucana V at 55 kpc, Drlica-Wagner et al., 2015) or are moving in a different direction (Tucana III; Drlica-Wagner et al., 2015). The only exception is Hydrus 1 (Koposov et al., 2018), which is at a distance of 28 kpc and on the opposite side of the SMC from Sutlej, but close to an extrapolation of the Sutlej branch tracks. Selecting _Gaia_ DR3 stars near Hydrus 1, I was able to identify a significant overdensity of 17 stars in proper motion space at (\(\mu_{\rm L,MS}\),\(\mu_{\rm B,MS}\))=(+3.787 mas yr\({}^{-1}\), +1.68 mas yr\({}^{-1}\)). The CMD of these stars indicates that these are the Hydrus 1 RGB and BHB stars. Using the \(V_{\rm helio}\) = +80.4 km s\({}^{-1}\) and 28 kpc distance from Koposov et al. (2018), we can calculate the galactocentric space velocity of Hydrus 1 to be (\(V_{x}\),\(V_{y}\),\(V_{z}\)) = (\(-435.0\) km s\({}^{-1}\), \(-82.8\) km s\({}^{-1}\), \(-11.1\) km s\({}^{-1}\)). Comparing these values to Figure 8, it is clear that Hydrus 1 has a space velocity significantly different from those of the LMC, SMC and Sutlej, by \(\gtrsim\)200 km s\({}^{-1}\). In addition, while they are both metal-poor, Hydrus 1 is more metal-poor ([Fe/H]\(=-2.5\)) than Sutlej ([Fe/H]\(=-1.9\)), and comparing the Sutlej MDF in Figure 6 with the Hydrus 1 MDF in Figure 19 of Koposov et al. (2018) indicates that the metallicity distributions are quite different. Another curious feature of Sutlej is its two parallel split branches, which are quite uncommon in stellar streams. Some other examples are the Sagittarius stream (e.g., Majewski et al., 2003; Koposov et al., 2012), which has bifurcated branches in both the leading and trailing arms, and the Anticenter/Monoceros stream (e.g., Grillmair, 2006).
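As an aside on the space-velocity comparison above, the conversion from heliocentric observables to galactocentric (\(V_{x}\),\(V_{y}\),\(V_{z}\)) is a standard transformation; a minimal sketch with astropy, where the sky position and proper motion are placeholder values (only the \(V_{\rm helio}\) = +80.4 km s\({}^{-1}\) and 28 kpc distance are quoted in the text):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# Placeholder sky position and proper motion for Hydrus 1; only the radial
# velocity and distance below are quoted in the text.
hyi1 = SkyCoord(ra=37.4 * u.deg, dec=-79.3 * u.deg, distance=28 * u.kpc,
                pm_ra_cosdec=3.8 * u.mas / u.yr, pm_dec=-1.6 * u.mas / u.yr,
                radial_velocity=80.4 * u.km / u.s)
gc = hyi1.transform_to(Galactocentric())
print(gc.velocity.d_x, gc.velocity.d_y, gc.velocity.d_z)  # (V_x, V_y, V_z)
```

Note that the exact numbers depend on the adopted solar position and motion (the defaults of astropy's `Galactocentric` frame here), which should match whatever convention was used for the LMC/SMC values in Figure 8.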
Regarding the split-branch morphology, the Sagittarius stream wraps around the MW due to the multiple passages its host galaxy made, and it is quite likely that the multiple pericentric passages created the bifurcation of the two tidal arms. After much debate over the origin of Monoceros (Martin et al., 2006; Conn et al., 2008), it is now thought that this broad feature was produced by a perturbation of the outer MW disk by a large satellite like the Sagittarius dwarf spheroidal galaxy or the LMC (Slater et al., 2014; Morganson et al., 2016; Hayes et al., 2018). Since the two Sutlej branches have nearly identical metallicity, distance, age, and space velocity, it seems quite likely that they have the same progenitor galaxy. How exactly a small dwarf spheroidal galaxy could produce two parallel stellar streams offset by \(\sim\)2.5\({}^{\circ}\), or 1.4 kpc (at 32 kpc), perpendicular to their orbit remains unclear. However, this feature of Sutlej should put tight constraints on simulations trying to reproduce it. Erkal et al. (2019) used the Orphan stream to constrain the LMC's total mass to \(1.38\times 10^{11}\) M\({}_{\odot}\) by using the perturbations that the LMC imprinted on the Orphan stream's proper motions, which are misaligned with the stream's track. This is the most accurate mass estimate of the LMC to date. The SMC's mass, on the other hand, is not well constrained, and it is not unusual to scale the LMC's total mass by the ratio of the SMC's and LMC's stellar masses (\(\sim\)1/9.6), as done by Besla et al. (2012). Having a more accurate SMC total mass would help improve simulations of the Magellanic system. The Sutlej stream provides a tantalizing possibility of constraining the SMC's total mass, since the Sutlej proper motion vector is misaligned with its stream track. Orbit modeling of the _Gaia_ kinematics and follow-up spectroscopic radial velocities should be able to determine whether this feat is possible.

Figure 8: The galactocentric space velocities of the LMC (gray), SMC (pink) and the Sutlej stream stars (colored points). The colored points show the space velocities of the stream stars assuming a range of radial velocities: \(-200\) km s\({}^{-1}\) (purple), \(-100\) km s\({}^{-1}\) (red), \(0\) km s\({}^{-1}\) (green), \(+100\) km s\({}^{-1}\) (orange), and \(+200\) km s\({}^{-1}\) (red). For all radial velocities, the space velocities of the stream stars are significantly offset from the LMC and SMC, making a Magellanic origin unlikely.

## 6 Summary

I report the discovery of a split stellar stream, Sutlej, near the Small Magellanic Cloud using _Gaia_ DR3 proper motions and metallicities from the low-resolution XP spectra. The main conclusions are:

* Sutlej has two nearly-parallel branches that are roughly \(\sim\)8\({}^{\circ}\times\) 0.6\({}^{\circ}\) in shape and separated by \(\sim\)2.5\({}^{\circ}\). They are situated \(\sim\)15\({}^{\circ}\) north of the SMC.
* The _Gaia_ CMD shows a clear signature of a simple stellar population (with RGB, BHB and main-sequence turnoff) that is well-fit by an isochrone with an age of 10 Gyr, [Fe/H]\(=-\)1.8 and a distance of 32 kpc.
* Sutlej has a prominent blue horizontal branch. Measured distances of these standard candles give a mean distance of 32.4 kpc for all Sutlej stars and 33.0/31.8 kpc for Branch 1/2. While Branch 1 shows little distance variation, Branch 2 has a distance gradient of \(\sim\)0.9 kpc deg\({}^{-1}\), where the distance increases as the angular distance from the SMC grows larger.
* The _Gaia_ XP metallicities show that Sutlej has a broad MDF stretching from [Fe/H]\(=-\)2.5 to \(-\)1.0 with a median of [Fe/H]\(=-\)1.9. The broad MDF strongly suggests that the progenitor was a dwarf galaxy rather than a globular cluster.
* The total stellar mass of Sutlej is 33,333 M\({}_{\odot}\) and its surface brightness is 32.5 mag arcsec\({}^{-2}\).
* Sutlej's tangential velocity vector is misaligned with its stream track by \(\sim\)25\({}^{\circ}\), providing evidence that it has likely been gravitationally perturbed by the nearby MCs.
* Sutlej is likely not associated with the SMC because, no matter what the radial velocity is, the 3-D space velocity of Sutlej is significantly offset from the SMC's by at least \(\sim\)200 km s\({}^{-1}\). In addition, Sutlej's MDF is much more metal-poor than the MCs' and the tangential velocity vectors are misaligned by \(\sim\)25\({}^{\circ}\).

Follow-up spectroscopic observations to measure the radial velocity and chemical abundances should help resolve the origin of Sutlej. Orbital modeling of Sutlej has the potential to constrain the mass of the SMC.

## Acknowledgements

I want to thank Andrew Pace for sharing with me his table of Milky Way dwarf galaxy properties. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.

_Software:_ Astropy (Astropy Collaboration et al., 2013, 2018), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020)

## Data Availability

All _Gaia_ DR3 data are available from the _Gaia_ Archive [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/). The Andrae et al. (2023) catalog of _Gaia_ DR3 XP stellar parameters and metallicities is available from [https://zenodo.org/record/7945154](https://zenodo.org/record/7945154).

Figure 9: Column density map of HI from the GASS survey (McClure-Griffiths et al., 2009). The new stellar stream is shown in filled red circles. Orange lines indicate the new stellar stream offset by +6\({}^{\circ}\) in \(L_{\rm MS}\) and +6\({}^{\circ}\) in \(B_{\rm MS}\).

Figure 10: The density of Magellanic giant stars from _Gaia_ DR3 with the known stellar streams (Mateu, 2023) and dwarf spheroidal galaxies between 10 to 65 kpc away, color-coded by their distance. The new stellar stream is indicated by purple filled circles.
2304.12927
The Li + CaF $\to$ Ca + LiF chemical reaction under cold conditions
The calcium monofluoride (CaF) molecule has emerged as a promising candidate for precision measurements, quantum simulation, and ultracold chemistry experiments. Inelastic and reactive collisions of laser cooled CaF molecules in optical tweezers have recently been reported and collisions of cold Li atoms with CaF are of current experimental interest. In this paper, we report ab initio electronic structure and full-dimensional quantum dynamical calculations of the Li + CaF $\to$ LiF + Ca chemical reaction. The electronic structure calculations are performed using the internally contracted multi-reference configuration-interaction method with Davidson correction (MRCI+Q). An analytic fit of the interaction energies is obtained using a many-body expansion method. A coupled-channel quantum reactive scattering approach implemented in hyperspherical coordinates is adopted for the scattering calculations under cold conditions. Results show that the Li + CaF reaction populates several low-lying vibrational levels and many rotational levels of the product LiF molecule and that the reaction is inefficient in the 1-100 mK regime allowing sympathetic cooling of CaF by collisions with cold Li atoms.
Humberto da Silva Jr., Qian Yao, Masato Morita, Brian K. Kendrick, Hua Guo, Naduvalath Balakrishnan
2023-04-25T15:40:20Z
http://arxiv.org/abs/2304.12927v1
# The Li + CaF \(\longrightarrow\) Ca + LiF chemical reaction under cold conditions

###### Abstract

The calcium monofluoride (CaF) molecule has emerged as a promising candidate for precision measurements, quantum simulation, and ultracold chemistry experiments. Inelastic and reactive collisions of laser cooled CaF molecules in optical tweezers have recently been reported and collisions of cold Li atoms with CaF are of current experimental interest. In this paper, we report ab initio electronic structure and full-dimensional quantum dynamical calculations of the Li + CaF \(\rightarrow\) LiF + Ca chemical reaction. The electronic structure calculations are performed using the internally contracted multi-reference configuration-interaction method with Davidson correction (MRCI+Q). An analytic fit of the interaction energies is obtained using a many-body expansion method. A coupled-channel quantum reactive scattering approach implemented in hyperspherical coordinates is adopted for the scattering calculations under cold conditions. Results show that the Li + CaF reaction populates several low-lying vibrational levels and many rotational levels of the product LiF molecule and that the reaction is inefficient in the 1-100 mK regime allowing sympathetic cooling of CaF by collisions with cold Li atoms.

## 1 Introduction

The rich internal structure of ultracold molecules compared to ultracold atoms lends itself to many applications in emerging areas of quantum science. Ultracold paramagnetic molecules such as calcium monofluoride, CaF, whose electronic ground state is characterized by a \({}^{2}\Sigma^{+}\) term, have long been considered promising candidates for a number of applications, in particular quantum simulation [1, 2, 3, 4], quantum information [5, 6, 7, 8, 9], and precision spectroscopy [10]. This is mostly due to the presence of an unpaired electron, as the resulting non-zero electric and magnetic moments serve as a convenient experimental handle for extra control [11, 12, 13] by means of external fields (_e.g._ Stark and Zeeman effects). Additionally, these systems also provide a unique opportunity to improve upon the fundamental understanding of atom-molecule [14] and molecule-molecule interactions [15], dipolar interactions [16, 17] and collision-induced chemistry at the ultra-low range of kinetic energies [18, 19, 20, 21, 22]. In particular, experimental explorations of collision-induced trap-loss rates of molecules with singlet and triplet spin multiplicities, in ultracold conditions, have been available for a while (for systems such as Rb\({}_{2}\), NaRb, KRb, CsRb, NaK, LiNa). However, such studies are less prevalent for doublet molecules [23]. Slowing the translational motion of CaF molecules down to the capture velocity of an 800 mK deep magneto-optical trap (MOT) has been recently achieved by Doyle and co-workers [24]. This follows similar success with SrF, to our knowledge the first such molecule to be trapped in a MOT [25, 26, 27, 28].
The original work of Lu _et al._, since then improved to sub-Doppler temperatures [29, 30, 31, 32, 33], represents an important milestone after the seminal work of Di Rosa [11], the first to observe that molecules such as CaF, CaH, CaOH, SrF, SrOH, and YbF may possess a rovibrational internal structure with large one-photon oscillator strengths and highly diagonal Franck-Condon factors. This, in turn, unlocks the possibility of light-assisted closed cycling transitions, similar to the laser cooling techniques applied to atoms and atomic ions [34]. Once CaF molecules in the electronic ground state are properly trapped, as demonstrated by Lu _et al._, a natural next step is the design and implementation of cold collisions between CaF molecules and, say, co-trapped laser-cooled atoms or another CaF molecule. The latter case has recently been realized in a pioneering experiment, in which CaF molecules are loaded from a MOT into optical tweezers and, by varying the relative position of two tweezers, CaF + CaF ultracold collisions have been observed to produce two-body loss, most likely due to yet undetermined chemical reactions, with a magnitude comparable to the theoretical universal loss rate [23, 35]. The former case of CaF collisions with laser-cooled atoms is still an open prospect and, as we shall see below, one of the underlying motivations of the present work. Among the alkali-metal candidates, whose laser cooling and trapping techniques are nowadays routine procedures, only Li(\({}^{2}S\)) combined with the electronic ground state of CaF provides an exothermicity, of about \(-4440\) cm\({}^{-1}\) [36]. Other atomic species such as Na, K, Rb and Cs would require a few thousand wavenumbers of collision-induced excitation in order to trigger chemical events [36]. However, due to the low (\(<1\) K) kinetic energies involved in such experiments, these collision-induced excitations are all but forbidden energetically. Thus, the prospect of Li(\({}^{2}S\)) + CaF(\({}^{2}\Sigma^{+}\)) ultracold reactive collisions is highly regarded as an opportunity to study cold chemistry as well as collision-induced trap losses due to chemical events. To understand and establish the limits for sympathetic cooling of CaF(\({}^{2}\Sigma^{+}\)) toward even lower temperatures by means of soft collisions with a Li(\({}^{2}S\)) coolant buffer [37], as well as collisional shielding [38, 39], a detailed investigation of Li + CaF collisions is needed. Prior studies of Li(\({}^{2}S\)) + CaF(\({}^{2}\Sigma^{+}\)) collisions explored only elastic and (non-reactive) inelastic collisions using model potentials or interaction potentials that do not describe the reactive regions. Foreseeing an upcoming demand for more theoretical support in this regard, in this work we tackle the challenging task of constructing a new LiCaF global potential energy surface (PES) and performing the first description of Li(\({}^{2}S\)) + CaF(\({}^{2}\Sigma^{+}\)) \(\longrightarrow\) Ca(\({}^{1}S\)) + LiF(\({}^{1}\Sigma^{+}\)) collisions resorting to state-of-the-art quantum reactive scattering, _i.e._ a coupled-channel (CC) method. It is worthwhile to note that a novel full six-dimensional PES intended to describe the even more complicated CaF + CaF \(\longrightarrow\) CaF\({}_{2}\) + Ca chemical reaction has been constructed by Sardar and co-workers [40]. Until very recently, a proper quantum description of the title reaction was not feasible.
Today, even employing unprecedented computational resources, it remains a very hard numerical task for several reasons, namely: (i) the system lacks symmetries that could otherwise be used to ease parts of the computational load; (ii) it is a somewhat heavy system with small diatomic rotational constants (_e.g._ the CaF constant is about 177 times smaller than that of H\({}_{2}\)) and, as we shall see below, possesses a relatively deep potential well at short range, all of which translates into a large number of spatially delocalized internal states required to properly describe the collision; (iii) it is known to be a very anisotropic system, characterized by strong couplings between collisional channels that would otherwise be negligible; and (iv), typical of atom-molecule collisions in the cold domain of kinetic energies, the radial solution of the Schrodinger equation must be propagated to unusually large atom-molecule separations, due in part to the extremely long de Broglie wavelength of the colliding partners. Therefore, within the limitations imposed by these aspects, we provide below a first investigation of the optimal parameters required to extract accurate scattering characteristics for these collisions, in a time-independent quantum reactive scattering formalism, and discuss the predicted features of the collisional cross sections as functions of the incident energy. To this end, the adiabatically adjusting principal axis hyperspherical (APH) quantum reactive scattering suite of programs (hereafter referred to as APH3D), which has been used to describe a diverse array of reactive collisional problems in our group [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51], is utilized below. While formalisms based on the solution of the time-dependent Schrodinger equation are computationally more efficient, they are slow to converge at low collision energies [52, 53]. Methods based upon statistical quantum approaches [54, 55, 56] have also been applied to complex-forming ultracold chemical reactions, but their accuracy for state-to-state transitions is not fully established. The paper is organized as follows: Section 2 provides a brief description of the theoretical approach, with details of the electronic structure calculations presented in subsection 2.1. A brief outline of the quantum scattering formalism using the APH3D code is presented in subsection 2.2. Section 3 presents the results and Section 4 provides a summary of our findings.

## 2 Theoretical Approach

### Potential energy surface

The Li(\({}^{2}S\)) + CaF(\({}^{2}\Sigma^{+}\)) reactants asymptotically correlate with the triatomic electronic states \({}^{1}A^{\prime}\) and \({}^{3}A^{\prime}\). In what follows, we describe the computation of the ground electronic state, \(X^{1}A^{\prime}\), of the LiCaF complex using the internally contracted multi-reference configuration-interaction method with the Davidson correction (MRCI+Q) [57, 58, 59], as implemented in the MOLPRO package [60]. The augmented correlation-consistent polarized valence quadruple-zeta basis set (aug-cc-pVQZ) of Dunning [61, 62, 63] was used for the Li and F atoms, whereas the cc-pwCVQZ-PP basis, in which the core electrons are described with a pseudopotential, was used for the Ca atom [64].
Calculations with a full valence active space utilizing a state-averaged (1\({}^{1}\)A\({}^{\prime}\), 1\({}^{3}\)A\({}^{\prime}\) and 1\({}^{1}\)A\({}^{\prime\prime}\)) complete active space (10 active electrons in 9 active orbitals) self-consistent field wavefunction (SA-CASSCF) [65, 66] were performed. The active space included the \(2s\), \(2s2p\), and \(4s4p\) orbitals of the Li, F, and Ca atoms, respectively, whereas the \(1s\) orbitals of Li and F, along with the \(3s3p\) orbitals of Ca, were closed in the CASSCF calculations and further treated as core in the MRCI calculations. A total of about 11000 geometries below 4.5 eV relative to the global minimum were selected and fitted using a many-body expansion method [67]

\[V_{abc}\left(r_{ab},r_{ac},r_{bc}\right)=\sum_{a}V_{a}^{(1)}+\sum_{ab}V_{ab}^{(2)}\left(r_{ab}\right)+V_{abc}^{(3)}\left(r_{ab},r_{ac},r_{bc}\right), \tag{1}\]

in which \(r_{xy}\) is the internuclear distance between atoms \(x\) and \(y\) (\(=a\), \(b\), or \(c\)); \(V_{a}^{(1)}\), \(V_{ab}^{(2)}\) and \(V_{abc}^{(3)}\) are the one-, two-, and three-body terms, respectively. The one-body terms in Eq. (1) are set to zero. The two-body terms correspond to the diatomic potential energy curves (PECs). The three-body energy becomes zero at all the dissociation limits. The two-body terms, \(V_{\rm CaF}^{(2)}\) and \(V_{\rm LiF}^{(2)}\), are spline-interpolated in the ranges of 3.2 \(a.u.\leq r_{\rm CaF}\leq 7\) \(a.u.\) and 2.4 \(a.u.\leq r_{\rm LiF}\leq 5.6\) _a.u._, respectively. Outside the interpolated regions, the PECs are approximated by the Morse form,

\[V_{\rm morse}^{(2)}\left(r_{xy}\right)=D_{e}\left[\left(1-e^{-\alpha\left(r_{xy}-r_{e}\right)}\right)^{2}-1\right], \tag{2}\]

where \(D_{e}\) is the dissociation energy, \(r_{e}\) is the equilibrium distance of the diatom, and \(\alpha\) is a range parameter. In the CaF case, \(\left(D_{e}=5.45\,\mathrm{eV},\alpha=0.51\,a.u.^{-1},r_{e}=3.92\,a.u.\right)\) are used for \(r<3.2\) _a.u._, whereas \(\left(D_{e}=5.45\,\mathrm{eV},\alpha=0.44\,a.u.^{-1},r_{e}=3.4\,a.u.\right)\) are used for \(r>7\) _a.u._ Similarly, in the LiF case, \(\left(D_{e}=5.95\,\mathrm{eV},\alpha=0.47\,a.u.^{-1},r_{e}=3.23\,a.u.\right)\) for \(r<2.4\) _a.u._ and \(\left(D_{e}=5.95\,\mathrm{eV},\alpha=0.38\,a.u.^{-1},r_{e}=2.6\,a.u.\right)\) for \(r>5.6\) _a.u._ The three-body term is expressed as a polynomial of order \(M\),

\[V_{abc}^{(3)}\left(r_{ab},r_{ac},r_{bc}\right)=\sum_{jkl}^{M}d_{jkl}\,\rho_{ab}^{j}\rho_{ac}^{k}\rho_{bc}^{l}, \tag{3}\]

where \(\rho_{xy}=r_{xy}e^{-\beta_{xy}r_{xy}}\). The linear parameters, \(d_{jkl}\), are obtained by the linear least-squares method and the nonlinear parameters, \(\beta_{xy}\), are set to 0.5 _a.u._\({}^{-1}\). Moreover, the constraints \(j+k+l\neq j\neq k\neq l\) and \(j+k+l\leq M\) are employed to ensure that the three-body term \(V_{abc}^{(3)}\) goes to zero at all dissociation limits. In this work, the value of \(M=8\) is used, which leads to a total of 140 \(d_{jkl}\) linear coefficients. The root mean squared error (RMSE) of the three-body short-range fit is 22.7 meV. The _ab initio_ calculation yielded an exothermicity of \(-0.37\) eV (\(-2984.2\) cm\({}^{-1}\)) for the Li\(\left({}^{2}S\right)\) + CaF\(\left({}^{2}\Sigma^{+}\right)\) reaction, which is 0.13 eV (1048.5 cm\({}^{-1}\)) higher than the experimental value of \(-0.5\) eV (\(-4033\) cm\({}^{-1}\)). This error is corrected in the two-body terms, which are adjusted to reproduce the experimental exothermicity.
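A compact sketch of how Eqs. (2)-(3) are evaluated may be helpful; this is illustrative only, and the `d_jkl` coefficients below are made up (the actual 140 fitted coefficients are not reproduced here):

```python
import numpy as np

def v2_morse(r, De, alpha, re):
    """Two-body Morse term of Eq. (2): De * [(1 - exp(-alpha*(r - re)))**2 - 1]."""
    return De * ((1.0 - np.exp(-alpha * (r - re)))**2 - 1.0)

def v3_three_body(r_ab, r_ac, r_bc, d_jkl, beta=0.5):
    """Three-body polynomial of Eq. (3), with rho_xy = r_xy * exp(-beta*r_xy).

    `d_jkl` maps (j, k, l) -> coefficient; the fit keeps only terms with
    j + k + l <= M and j + k + l != j != k != l, which guarantees that the
    three-body term vanishes at all dissociation limits."""
    rho = lambda r: r * np.exp(-beta * r)
    p_ab, p_ac, p_bc = rho(r_ab), rho(r_ac), rho(r_bc)
    return sum(c * p_ab**j * p_ac**k * p_bc**l for (j, k, l), c in d_jkl.items())

# Example with made-up coefficients (the real fit has 140 of them for M = 8):
print(v2_morse(4.0, De=5.45, alpha=0.51, re=3.92))          # CaF short-range piece
print(v3_three_body(3.2, 4.1, 6.5, {(1, 1, 1): 0.01, (2, 1, 1): -0.002}))
```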
The long-range interaction potential, \(V_{\rm LR}\), in each arrangement is fitted with the following expression:

\[V_{\rm LR}=\sum_{n,m}C_{n}^{m}\,\frac{B_{n}^{m}\left(\theta\right)}{R^{n}}, \tag{4}\]

where \(V_{\rm LR}=V_{abc}-V_{a}^{(1)}-V_{bc}^{(2)}\) and \(R\) is the distance between the Li (Ca) atom and the center of mass of the CaF (LiF) molecule. The indices \(m\) and \(n\) range from 1 to 4 and from 4 to 7, respectively. For \(n=4\) and \(m=1\), \(B_{4}^{1}\left(\theta\right)=\cos\theta\); for \(n=5\) and \(m=1\), \(B_{5}^{1}\left(\theta\right)=3\cos^{2}\theta-1\); for \(n=6\) and \(m=1,\ldots,4\), \(B_{6}^{1}\left(\theta\right)=1\), \(B_{6}^{2}\left(\theta\right)=3\cos^{2}\theta-1\), \(B_{6}^{3}\left(\theta\right)=3\cos^{2}\theta+1\) and \(B_{6}^{4}\left(\theta\right)=9\cos^{2}\theta-1\); and, for \(n=7\) and \(m=1,\ldots,4\), \(B_{7}^{1}\left(\theta\right)=\cos^{2}\theta\), \(B_{7}^{2}\left(\theta\right)=\cos^{2}\theta-1\), \(B_{7}^{3}\left(\theta\right)=\cos^{3}\theta\) and \(B_{7}^{4}\left(\theta\right)=3\cos\theta-2\cos^{3}\theta\) [68]. The errors in the long-range potential fits for the Li + CaF and Ca + LiF arrangements are 2.55 and 1.86 cm\({}^{-1}\), respectively. In addition, the long-range and short-range potentials are connected smoothly with a switching function. Specifically, we have

\[V_{\rm pes}=s_{abc}V_{abc}+\left(1-s_{abc}\right)\left(V_{a}^{(1)}+V_{bc}^{(2)}+V_{\rm LR}\right), \tag{5}\]

where the arrangement-dependent switching function, \(s_{abc}\), is defined as

\[s_{abc}\left(r_{ac}\right)=\frac{1-\tanh\left[\gamma\left(r_{ac}-r_{s}\right)\right]}{2}. \tag{6}\]

When \(bc=\) CaF, \(\gamma=1\) _a.u._\({}^{-1}\) and \(r_{s}=18\) _a.u._ are used in the interval \(0^{\circ}\leq\theta\leq 45^{\circ}\); \(\gamma=1\) _a.u._\({}^{-1}\) and \(r_{s}=13\) _a.u._ within \(45^{\circ}<\theta\leq 75^{\circ}\); and \(\gamma=2\) _a.u._\({}^{-1}\) and \(r_{s}=11\) _a.u._ in \(75^{\circ}<\theta\leq 180^{\circ}\). Likewise, for \(bc=\) LiF, \(\gamma=0.8\) _a.u._\({}^{-1}\) and \(r_{s}=14\) _a.u._ are used within \(0^{\circ}\leq\theta\leq 180^{\circ}\).

Fig. 1: Global PES, in cm\({}^{-1}\), as a function of the internuclear distances of LiF and CaF, in _a.u._, at the fixed angle \(\theta=180^{\circ}\) centered at the F atom. Isolines vary every 2000 cm\({}^{-1}\) from \(-9000\) cm\({}^{-1}\) (innermost) to 11000 cm\({}^{-1}\) (outermost).

Figure (1) depicts the contour plot of the PES, produced by the fitting procedure described above, as a function of the respective internuclear distances of LiF and CaF at a fixed bond-bond angle of \(\theta=180^{\circ}\) centered at the F atom. The red-shaded regions correspond to the lowest (attractive) values of the potential, whereas the blue areas correspond to higher energies. For the purpose of the scattering calculations presented below, the zero of energy of the PES is shifted to correspond to the energy of LiF at the equilibrium position \(r_{\rm LiF}=3.0\) \(a.u.\), the yellow area in Fig. (1). The light-green region corresponds to the CaF potential well with equilibrium position \(r_{\rm CaF}=3.8\) \(a.u.\), about 4040 cm\({}^{-1}\) above zero, excluding zero-point energy (ZPE). The LiCaF potential well, where the three atoms are in close proximity, reaches a minimum value (about \(-10931.7\) cm\({}^{-1}\)) at slightly displaced diatomic distances, namely \(r_{\rm LiF}=3.18\) and \(r_{\rm CaF}=4.06\) \(a.u.\), in a near T-shape geometry at \(\theta=104.5^{\circ}\) - not shown, but very similar to the contours of Fig. (1).
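For reference, the smooth short-/long-range blending of Eqs. (5)-(6) can be sketched as follows, a minimal illustration using the CaF-arrangement parameters quoted above (the "short" and "long" pieces here are placeholders for the fitted \(V_{abc}\) and \(V_{a}^{(1)}+V_{bc}^{(2)}+V_{\rm LR}\)):

```python
import numpy as np

def switch(r, gamma, r_s):
    """Switching function of Eq. (6): ~1 at short range, ~0 at long range."""
    return 0.5 * (1.0 - np.tanh(gamma * (r - r_s)))

def blend(v_short, v_long, r, gamma, r_s):
    """Eq. (5): V_pes = s*V_short + (1 - s)*V_long."""
    s = switch(r, gamma, r_s)
    return s * v_short + (1.0 - s) * v_long

# CaF arrangement, 0 <= theta <= 45 deg: gamma = 1 a.u.^-1, r_s = 18 a.u.
r = np.linspace(10.0, 26.0, 5)
print(switch(r, gamma=1.0, r_s=18.0))   # goes from ~1 to ~0 across r_s
```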
Figure (2) further illustrates the strong anisotropic character of the system: the PES is mostly repulsive for a collinear approach of the Li atom towards Ca (\(\theta=0^{\circ}\)), in contrast with the attractive character of an approach on the F side, as shown in Fig. (1).

### Adiabatically adjusting principal axis hyperspherical (APH) method

As mentioned above, the APH3D code is utilized to model the title reaction using the PES described in the previous section. A somewhat detailed description of the numerical aspects and convergence criteria is provided in the next section, whereas, for the sake of completeness, a brief overview of the implementation of APH3D is given here. The description provided below is along the lines of that given in our recent work on the H + D\({}_{2}\) chemical reaction [69]; however, an in-depth discussion of the hyperspherical coupled-channel equations, as implemented in APH3D, has been given by Kendrick and co-workers on many occasions [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. The LiCaF Hamiltonian is written in the APH coordinates of Pack and Parker [41]. The hyperradius, \(\rho\), describing the radial atom-diatom relative motion, is partitioned into an inner region, using Smith-Johnson hyperspherical coordinates [42, 70], where collision-induced re-arrangement is more likely to occur. In the outer region, where the different atom-diatom arrangement channels are largely decoupled, Delves hyperspherical coordinates [71, 72, 73] are employed. The six-dimensional three-body problem is reduced to a set of coupled equations along the scattering coordinate, \(\rho\), with \(\rho\) discretized in a grid of \(N\) sectors. The eigenvalues associated with the remaining five internal degrees of freedom are used as the effective set of coupled potentials driving the relative motion along \(\rho\). The 5D eigenvalue problem is solved in the APH region by means of an implicitly restarted Lanczos method [74, 75], whereas the corresponding eigenvalues within the Delves region are evaluated using a 1D Numerov propagator [76]. Once a sufficiently large set of coupled potentials is evaluated in both regions for all sectors, as well as all sector-to-sector overlap matrices, the resulting set of radial coupled equations is solved using Johnson's log-derivative method [77], first from \(\rho_{\rm min}\) to \(\rho_{\rm match}\). At \(\rho_{\rm match}\) the numerical solutions from the outermost sector of the APH region are projected onto solutions at the innermost sector of the Delves region. The propagation is continued from \(\rho_{\rm match}\) to \(\rho_{\rm max}\), a sufficiently large value of \(\rho\) where the interaction potential is negligible. At \(\rho_{\rm max}\) all channels (from all arrangements) are numerically decoupled, scattering boundary conditions are applied, and the log-derivative solutions are projected onto solutions associated with each asymptotic diatomic state, written in ordinary Jacobi coordinates, yielding the scattering matrix [41]. The procedure is repeated independently for each value of the total angular momentum quantum number \(J\) and its parities, which are good quantum numbers in the absence of external fields. However, as explained below, for the present work only the (even) \(J=0\) case is addressed. Moreover, the basis sets for both the APH and Delves regions are independent of the collision energy and are, therefore, evaluated only once.
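To give a flavor of the log-derivative propagation described above, a toy single-channel version integrates the Riccati equation \(Y^{\prime}=W(\rho)-Y^{2}\) for \(Y=\psi^{\prime}/\psi\), with \(W=2\mu(V-E)\) in atomic units. This is only an illustration under a closed-channel (classically forbidden) assumption; APH3D propagates the full coupled-channel matrix analogue with Johnson's algorithm:

```python
import numpy as np

def propagate_logderiv(Y0, rho_grid, W):
    """Toy single-channel log-derivative propagation: RK4 integration of the
    Riccati equation Y' = W(rho) - Y**2 for Y = psi'/psi.  (Illustrative only;
    the production code uses Johnson's matrix log-derivative algorithm.)"""
    Y = Y0
    for a, b in zip(rho_grid[:-1], rho_grid[1:]):
        h = b - a
        k1 = W(a) - Y**2
        k2 = W(a + h / 2) - (Y + h / 2 * k1)**2
        k3 = W(a + h / 2) - (Y + h / 2 * k2)**2
        k4 = W(b) - (Y + h * k3)**2
        Y += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

# Closed-channel sanity check: for constant W = kappa**2 > 0 the log-derivative
# relaxes to kappa (psi ~ exp(kappa*rho) asymptotically).
kappa = 0.4
grid = np.linspace(5.0, 40.0, 5000)
print(propagate_logderiv(5.0, grid, lambda rho: kappa**2))   # -> ~0.4
```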
Because the low-lying collisional channels, with relatively higher kinetic energies, are associated with highly oscillatory components of the scattering wavefunction, particular attention is given below to the number of sectors, \(N\), and the grid step sizes used, \(\Delta\rho_{\rm aph}\) and \(\Delta\rho_{\rm delves}\). In addition, combined with the usual outward sector-to-sector integration of the Schrodinger equation, an intra-sector subdivision of the grid is employed in both the APH and Delves regions. Thus, for the \(n^{\rm th}\) sector, of length \(\Delta\rho\), defined within the \(\rho^{n}_{\rm left}\) and \(\rho^{n}_{\rm right}\) boundaries, with \(\rho^{0}_{\rm left}=\rho_{\rm min}\), \(\rho^{n}_{\rm left}=\rho^{n-1}_{\rm right}\), \(\rho^{n}_{\rm right}=\rho^{n+1}_{\rm left}\), \(\rho^{N-1}_{\rm right}=\rho_{\rm max}\) and \(n=0,1,2,\ldots,N-1\), the grid is further subdivided into \(N_{\rm steps}\) points per wavelength, \(\lambda_{\rm max}=2\pi/k_{\rm max}\), where \(k_{\rm max}\) is the maximum value of the wave vector considered, _i.e._

\[\frac{\hbar^{2}k_{\rm max}^{2}}{2\mu}=E_{\rm max}. \tag{7}\]

Fig. 2: The same as in Fig. (1) but for \(\theta=0^{\circ}\).

In Eq. (7), \(\mu\) is the atom-diatom reduced mass and \(E_{\rm max}\) is a fixed parameter whose value is set as high as the asymptotic energy of the highest closed channel included in the set of coupled equations, such that all channels are well described. Therefore, in what follows, we shall also determine the optimal value of \(N_{\rm steps}\) such that the grid-dependent description of the problem remains unaltered, _i.e._ such that the smallest periods of oscillation of the wavefunction are properly described.

### Numerical considerations

Although a proper time-independent quantum formalism that takes into account doublet (or higher) spin multiplicities is available in the domain of inelastic collisions [78; 79; 80; 81; 82; 83; 84], an implementation of the reactive counterpart of the problem is not. Therefore, we shall use the formalism for collisions between a \({}^{1}\Sigma^{+}\) molecule and a structureless atom. Such assumptions have been proven valid in certain contexts for inelastic collisions [83]. In what follows we make a few considerations for the case study at hand. Due to the null projection of the electronic orbital angular momentum of CaF on its internuclear axis, \(\Lambda(\Sigma^{+})=0\), and the absence of a nearby \({}^{2}\Pi\) electronic state, only electrostatic interactions are expected to play a significant role in the internal structure of the molecule. Thus, the CaF effective (angular) Hamiltonian (neglecting vibrational and Stark terms) can be approximated as [85; 86]

\[H_{\rm CaF}\approx B_{e}N^{2}+\gamma({\bf S}\cdot{\bf N})+b({\bf S}\cdot{\bf I})+c\left(S_{z}I_{z}\right)+f\left({\bf I}\cdot{\bf N}\right), \tag{8}\]

where \({\bf N}\) is the diatomic rotational angular momentum, \({\bf S}\) is the electronic spin angular momentum with \(S_{z}=\nicefrac{{1}}{{2}}\) being the spin component along a \(z\)-axis parallel to the internuclear axis, \({\bf I}\) is the nuclear spin with an \(I_{z}=\nicefrac{{1}}{{2}}\) component (due to the \({}^{19}\)F isotope), and \(B_{e}\) is the diatomic rotational constant. The parameters \(\gamma\), \(b\), \(c\) and \(f\) are the strength coefficients of the electronic-spin-rotation, isotropic and anisotropic electronic-spin-nuclear-spin, and nuclear-spin-rotation couplings, respectively.
For convenience, the strength coefficients for the \((\upsilon=0,N=0)\) manifold, as measured by Childs and co-workers [87], are given in Table (1). As expected, the diatomic rotational constant is much larger than the remaining coupling parameters (about 257 times larger than \(\gamma\) and \(c\), and 94 times larger than \(b\)) and is, therefore, the dominant term. As a consequence, the \(N=1\) rotational structure, within the \(\upsilon=0\) manifold, is predicted to lie about \(2B_{e}\approx 0.69\) cm\({}^{-1}\) (or about 990 mK, neglecting higher-order centrifugal distortion contributions) above the \(N=0\) structure. Using either Hund's case (a) [78] or (b) [80] notation, both \({\bf N}\) and \({\bf S}\) are generally accepted to be weakly coupled. As a consequence, collision-induced changes in either the magnitude or the direction of the electronic spin, \({\bf S}\), are unlikely to happen. However, a collision may induce sudden changes in \({\bf N}\) and, due to the subsequent recoupling between \({\bf N}\) and \({\bf S}\), changes between the resultant parallel (\(e\) parity) and anti-parallel (\(f\) parity) coupling schemes may occur [78]. Since we are not properly describing the diatomic rotational structure, we shall address collision energies well below this \(\approx\)990 mK threshold, such that the \(N=1\) rotational state will remain a closed channel. In regard to the \(N=0\) fine/hyperfine structure, for \(\Lambda=0,I_{z}=\nicefrac{{1}}{{2}}\), the predicted sublevels of \(H_{\rm CaF}\) are associated with the quantum numbers \(j=\nicefrac{{1}}{{2}}\) and \(F=0\) or \(1\), where \({\bf j}={\bf N}+{\bf S}\) (fine structure) and \({\bf F}={\bf j}+{\bf I}\) (hyperfine structure) [87]. However, as the collisions treated below are explored in the absence of external fields and, given the equally small electronic-spin-nuclear-spin interactions, \(b\) and \(c\), alongside the negligible nuclear-spin-rotation interaction, \(f\), the multiplet structure of the entrance channel is not considered henceforth. Even if external fields were taken into account, collision-induced Zeeman relaxation of the \(N=0\) rotational structure of CaF is expected to vanish at first order, being mostly a second-order process [88], and it therefore appears reasonable to neglect it. However, it is worthwhile to note that, as neither the doublet spin multiplicity of CaF nor that of Li is included, the resulting magnetic dipole-dipole interaction is also disregarded. As such interactions are more prevalent in the ultracold regime of kinetic energies, our assumption of a pseudo \({}^{1}S+\,^{1}\Sigma\) colliding system implies a lower limit on the range of collision energies that can be studied here without compromises. As we shall see below, 1 mK is the minimal energy treated in this work. Thus, from now on, we drop the typical Hund's case (b) labeling of \(N_{j}\) quantum numbers for the diatomic rotational structure in favor of the rotational quantum number \(j\) (an integer, \(j=0,1,\ldots\)), as used in the literature of singlet molecules. Another aspect that we shall not describe in this work is higher values of the total angular momentum, _i.e._ \(J>0\). The amount of computational resources required to perform a \(J=0\) calculation is already substantial, and the inclusion of higher \(J\) values would increase it drastically, as we would then be required to handle both the even and odd parities of each non-zero \(J\) case.
Despite the relatively low collision energies intended, yet well above the \(s\)-wave regime, it is likely that a few \(J\) values are still required to secure convergence. As a consequence, the \(J=0\) calculation presented below may not be suited for a direct quantitative comparison with experimental results. However, it is worthwhile to stress that \(J=0\) calculations have been proven to provide an insightful and accurate qualitative description of collisional problems in the past, besides also providing the foundation for the optimization of certain key numerical parameters.

\begin{table} \begin{tabular}{c c} \hline \hline & \(X^{2}\Sigma^{+},\upsilon=0,N=0\) \\ \cline{2-3} \(B_{e}\) & 0.343704 \\ \(\gamma\) & 1.323\(\times 10^{-3}\) \\ \(b\) & 3.642\(\times 10^{-3}\) \\ \(c\) & 1.338\(\times 10^{-3}\) \\ \(f\) & 9.593\(\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental values, in cm\({}^{-1}\), of the molecular parameters of CaF for the \(\upsilon=0,N=0\) rovibrational manifold [87].

Thus, in the context of a pseudo-\({}^{1}\Sigma^{+}\) CaF molecule colliding with a structureless Li atom for \(J=0\), we proceed to determine the optimal parameters needed to describe the scattering wavefunction in both the APH and Delves regions. Figure (3) shows the final set of 2500 coupled potentials used in the APH region, evaluated between \(\rho_{\rm min}=4.5\,a.u.\) and \(\rho_{\rm match}\approx 19.9\,a.u.\) with 76 sectors varying logarithmically with a step size of \(\Delta\rho_{\rm aph}=0.02\) atomic units. For easy visualization, every 100th level is shown in blue. The bold red line represents the \((\upsilon=0,\ j=0)\) entrance channel of CaF. Although many high-lying blue curves are asymptotically correlated with internal states a few thousand wavenumbers above the entrance channel, they are strongly interacting at short range, \(\rho\approx 6\)-7 \(a.u.\), and their inclusion is needed to achieve converged results. However, the inclusion of more channels increases the time complexity of solving the Schrodinger equation, which scales as \(O\left(N_{\rm channels}^{3}\right)\), where \(N_{\rm channels}\) is the number of channels included in the basis set of the scattering wavefunction. Thus, considering the number of calculations required to probe other convergence aspects (as shown below) as well as the energy-dependent calculations, increasing the number of channels would quickly become impractical. If collision energies higher than those addressed here are of interest, it is desirable to include more channels, mainly in the range between \(\rho_{\rm min}\) and \(\rho\approx 13.5\,a.u.\), then project the solutions onto a smaller basis set, like the one used here, and resume the propagation toward large distances. The choice of \(\Delta\rho_{\rm aph}=0.02\,a.u.\) remains fixed in the remainder of this work. This is mostly due to the inherently higher computational overhead of optimizing it, as the evaluation of different sets of coupled potentials would be required. However, it is worth noting that similar values have been used for converged calculations of systems as heavy as the one treated here and for somewhat similar grid parameters, _e.g._ \(\Delta\rho_{\rm aph}=0.01\,a.u.\) used for Rb + K\({}_{2}\) by Croft _et al._ [89] and \(\Delta\rho_{\rm aph}=0.012\,a.u.\) used for Li + LiNa by Kendrick _et al._ [90] (logarithmic scales used in all cases).
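The logarithmic sector grid just described can be reproduced schematically: with 76 sectors between \(\rho_{\rm min}=4.5\) \(a.u.\) and \(\rho_{\rm match}\approx 19.9\) \(a.u.\), geometric spacing gives a logarithmic step close to the quoted \(\Delta\rho_{\rm aph}=0.02\) (a sketch, not the APH3D grid routine itself):

```python
import numpy as np

rho_min, rho_match, n_sectors = 4.5, 19.9, 76
edges = np.geomspace(rho_min, rho_match, n_sectors + 1)  # sector boundaries
dlog = np.diff(np.log(edges))                            # constant log step
print(dlog[0])          # ~0.0196, i.e. the Delta_rho_aph ~ 0.02 quoted above
print(np.diff(edges))   # physical sector widths grow with rho
```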
Beyond \(\rho_{\rm match}\), now within the Delves region, the additional concern of how the scattering characteristics may vary with \(\rho_{\rm max}\), due to the lower collision energies, should be addressed. Fortunately, inspecting Fig. (3) again, the set of coupled potential curves is seen to present a somewhat parallel behavior with respect to one another at \(\rho_{\rm match}\), mostly due to the smaller couplings as \(\rho\rightarrow\rho_{\rm max}\). This aspect in particular suggests that a much smaller number of channels may be included in the basis set of the Delves region. In the present work, 600 channels are utilized to solve the Schrodinger equation in the Delves region, of which 396 are closed channels. Possibly, the number of basis functions used in the Delves region exceeds what is required to obtain well-converged numerical results at \(J=0\), and can also be utilized to describe collisions at higher energies. Asymptotically, at \(\rho_{\rm max}\), the diatomic eigenstates used as the basis set comprise up to \(\upsilon_{\rm max}=7\) and \(j_{\rm max}=68\) \((\upsilon=0)\) for LiF, of which \((\upsilon=0,j=55),\ (\upsilon=3,j=33)\) and \((\upsilon=4,j=21)\) are the highest open rovibrational manifolds. For CaF, \(\upsilon_{\rm max}=3\) and \(j_{\rm max}=77\) \((\upsilon=0)\) are utilized, with all but the entrance channel energetically closed. This smaller basis set, compared with the one used in the APH region, allows us to explore the convergence criteria with respect to \(N_{\rm steps}\) (APH and Delves regions) and \(\rho_{\rm max}\) (Delves region only) in great detail, as described below.

Fig. 3: Set of the lowest 2500 APH coupled potential curves (\(\rm cm^{-1}\)), in which every 100th curve is highlighted in blue, as functions of the hyperradius, \(\rho\) (\(a.u.\)). The bold red line tags the CaF\((\upsilon=0,j=0)\) diatomic rovibrational level as a pseudo-\({}^{1}\Sigma^{+}\) molecule.
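The role of \(N_{\rm steps}\) in the convergence tests below can be made explicit: Eq. (7) fixes the shortest local de Broglie wavelength, and each sector is subdivided into \(N_{\rm steps}\) points per such wavelength. A hypothetical helper (not APH3D code), with the numbers in the example being illustrative only:

```python
import numpy as np

def intra_sector_points(E_max, mu, d_rho, n_steps):
    """Number of integration points in a sector of width d_rho, using
    n_steps points per wavelength lambda = 2*pi/k_max, with
    k_max = sqrt(2*mu*E_max) from Eq. (7) (atomic units, hbar = 1)."""
    k_max = np.sqrt(2.0 * mu * E_max)
    lam = 2.0 * np.pi / k_max
    return max(2, int(np.ceil(n_steps * d_rho / lam)))

# Illustrative values only: mu and E_max must match the channel basis used.
print(intra_sector_points(E_max=0.05, mu=1.1e4, d_rho=0.2, n_steps=600))
```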
For the sake of simplicity, the convergence behavior of cross sections for other choices of \(v^{\prime}\) and \(j^{\prime}\) of LiF are not shown but they possess virtually identical patterns as those observed in the lower panel of Fig. (4). Instead, in Fig. (5), the reactive cross section for all open \(v^{\prime}\) (panels) and \(j^{\prime}\) (abscissa) exit channels at the fixed values of \(\rho_{\rm max}=145,305,405\)_a.u._ and \(N_{\rm steps}=\) 600 is presented. A qualitative description of Fig. (5) is given in the next section whereas, for now, suffice to observe that each independent calculation (blue, red and brown) captures virtually identical branching ratios over both \(v^{\prime}\) and \(j^{\prime}\) implying that the calculations are numerically stable in order to infer the actual physical aspects of these collisions. ## 3 Results and Discussion First, we address how well the PES can reproduce the asymptotic PECs for the CaF\((X^{2}\Sigma^{+})\) and LiF\((X^{1}\Sigma^{+})\) subsystems at large atom-diatom separations. The bottom of each PEC is presented in Fig. (6), in which case the global dissociation limit corresponding to Li\((^{2}S)+\) Ca\((^{1}S)+\) F\((^{2}P)\), \(\approx\) 47986 cm\({}^{-1}\), is not shown. Due to our earlier choice of using the LiF energy at the equilibrium position as the zero-energy in the scattering calculations, the 47986 cm\({}^{-1}\) limit also corresponds to the relative dissociation energy of LiF, \(D_{e}\). For comparison purposes a list of a few selected values of equilibrium positions and dissociation energies, for both LiF and CaF electronic ground states, are collected in Table (2). In the particular case of CaF, electronic structure data is somewhat scarce and/or dated. Yet, by inspection of Table (2), we Fig. 4: Upper panel: Elastic component of the cross section (_a.u._) for the Li + CaF collisions at 1 mK as a function of \(\rho_{\rm max}\) (_a.u._) for \(N_{\rm steps}\) = 10 (black curve), 50 (red curve), 100 (green curve), 200 (blue curve), 400 (orange curve), 600 (magenta curve) and 800 (brown curve). Lower panel: Reactive component of the cross section (_a.u._) for the Li + CaF\((v=0,j=0)\longrightarrow\) Ca + LiF\((v^{\prime}=0,j^{\prime}=0)\) chemical reaction at 1 mK as a function of \(\rho_{\rm max}\) (_a.u._) using the same color code as in the upper panel. Circles are raw CC calculations, curves are Akima splines to enhance visualization. Fig. 5: Reactive cross sections for the Li + CaF\((v=0,j=0)\longrightarrow\) Ca + LiF\((v^{\prime},j^{\prime})\) collision at 1 mK (\(N_{\rm steps}=600\)) as functions of \(j^{\prime}\). Brown bars are used for \(\rho_{\rm max}\approx\) 145 _a.u._, blue bars for \(\rho_{\rm max}\approx\) 305 _a.u._ and red bars for \(\rho_{\rm max}\approx\) 405 _a.u._ with panels (a)-(e) corresponding to \(v^{\prime}=4\), 3, 2, 1 and 0, respectively. do observe a reasonably good agreement between our calculations and those from literature, in particular, the recent results of Sardar and co-workers [40]. As expected, the dissociation energies appear to vary more broadly, within \(\approx 2000\) cm\({}^{-1}\) among the various studies, with our result well within that range. Despite that the error appears to be relatively small if the actual total depth of the potential is taken into consideration (\(\approx 47986\) cm\({}^{-1}\) for LiF), the data collected in Table (2) seem to suggest that it is relatively harder to properly reproduce the LiF well depth than that of CaF. 
As investigated in great detail by Varandas [91], the LiF electronic ground state is not trivial, manifesting a predominantly ionic character at the equilibrium position and an avoided crossing with the \(2^{1}\Sigma^{+}\) excited state (whose nature is essentially covalent) at \(r_{\rm LiF}\approx 14\) atomic units. Moreover, it is asymptotically correlated with an additional \({}^{1}\Pi\) state. Using the PECs shown in Fig. (6), an energy splitting of about 0.65 cm\({}^{-1}\) between the first two rotational levels of CaF is predicted in the \(\upsilon=0\) vibrational manifold, which suggests an effective diatomic rotational constant of \(B_{e}=0.65/2=0.325\) cm\({}^{-1}\) and is thus within 0.02 cm\({}^{-1}\) of the value measured by Childs _et al._ - see Table (1). Similarly, an effective diatomic rotational constant of \(B_{e}=1.29\) cm\({}^{-1}\) is predicted for LiF, which agrees reasonably well with the measured value of 1.3452576 cm\({}^{-1}\) [92]. Overall, this evidence suggests that the shape of the PECs (and their bound states) is equally satisfactory. The energy levels for the vibrational states utilized in the scattering calculations are tagged with horizontal lines in Fig. (6) and, for the sake of clarity, only \(j=0\) cases are displayed, except for \(j=37\) (\(\upsilon=1\)) and \(j=38\) (\(\upsilon=5\), the highest basis function taken into account). By including the respective ZPEs of each molecule, a total exothermicity of about 3880.3 cm\({}^{-1}\) is expected, which is therefore a few hundred wavenumbers below the earlier prediction of 4440 cm\({}^{-1}\) by Kosicki and co-workers [36]. In the remainder of the paper we describe the Li + CaF \(\longrightarrow\) Ca + LiF chemical reaction with the parameters described in the previous section. We will perform a scan in collision energy from 1 mK to 200 mK for the \((\upsilon=0,j=0)\) entrance channel of CaF, on a grid of 128 linearly spaced points, using \(N_{\rm steps}=600\) and \(\rho_{\rm max}=305\) \(a.u.\); the result is presented in Fig. (7). The choice of the ground rovibrational state of CaF as the entrance channel for these collisions, at sufficiently small collision energies, rules out the occurrence of inelastic processes, such that the only non-elastic pathway is the chemical reaction. In addition, as the PES described above does not take into account the nearby \({}^{3}A^{\prime}\) electronic state (degenerate asymptotically), the influence of singlet-triplet nonadiabatic transitions and/or spin-exchange effects on the reaction presented below, if any, is disregarded. As no actual comparison with a measurement and/or other calculations is possible for now, the results presented below are not scaled by the typical \({}^{1}\!/\!4\) statistical weight factor of singlet entrance channels with respect to their triplet counterparts. That implies a hypothetical scenario in which 100% of the colliding partners are prepared in the electronic ground state of the complex. In an actual experimental scenario, with no control of the initial spin, it is expected that up to 75% of the collisions would undergo elastic and inelastic processes along the triplet PES, whereas 25% would undergo the reactive process described here.
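For completeness, the rotational constants quoted above follow from the standard rigid-rotor relation between levels within a vibrational manifold,
\[E_{\upsilon,j}\approx E_{\upsilon}+B_{e}\,j(j+1)\qquad\Longrightarrow\qquad E_{\upsilon,j=1}-E_{\upsilon,j=0}=2B_{e}\,,\]
so the 0.65 cm\({}^{-1}\) splitting predicted for CaF translates into \(B_{e}=0.65/2=0.325\) cm\({}^{-1}\), and the LiF value of \(B_{e}=1.29\) cm\({}^{-1}\) corresponds to a \(j=1\leftarrow 0\) splitting of 2.58 cm\({}^{-1}\).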
Figure 6: Diatomic potential energy curves for CaF\((X^{2}\Sigma^{+})\) (black dashed curve) and LiF\((X^{1}\Sigma^{+})\) (black solid curve), both in cm\({}^{-1}\), as functions of \(r\), with the respective third atoms (Li and Ca) at a distance of 100 a.u. (\(\theta=0\)). Horizontal bold (solid and dashed) lines tag the diatomic vibrational levels (\(j=0\)) for the respective quantum number displayed. Horizontal bold dotted lines tag the vibrational levels corresponding to rotational states \(j=37\) (\(\upsilon=1\)) and \(j=38\) (\(\upsilon=5\)).

\begin{table}
\begin{tabular}{l l c c c c}
\hline\hline
 & & \multicolumn{2}{c}{LiF} & \multicolumn{2}{c}{CaF} \\
\cline{3-4}\cline{5-6}
Ref. & Method & \(r_{\rm LiF}\) & \(D_{e}\) & \(r_{\rm CaF}\) & \(D_{e}\) \\
\hline
This work & MRCI+Q/CASSCF & 3.0 & 47985.4 & 3.8 & 43944.6 \\
40 & MRCI+Q/CASSCF & & & 3.7 & 43672.0 \\
93 & Semi-empirical & & & 3.69 & 44203.5 \\
40 & Empirical & & & 3.71 & \\
94 & Empirical & & & & 44111.3 \\
95 & MRCI, CBS & & & 3.78 (2.0005) & \\
96 & B3LYP/BS3, HP & & & 3.68 (1.9485) & 45752.6 (5.6726) \\
97 & HF/STO & 2.929 (1.5500) & 49200.0 (6.1000) & 3.74 (1.9800) & 43957.0 (5.4500) \\
91 & CAS-A7/XZ, CBS & 2.983 (1.5788) & 42940.1 (5.3239) & & \\
91 & MRCI-C3\(_{2}\)/XZ, CBS & 2.985 (1.5795) & 47823.8 (5.9294) & & \\
91 & MRCI-C3\(_{0}\)/cXZ, CBS & 2.933 (1.5524) & 49003.8 (6.0757) & & \\
91 & MRCI-C0/cXZ, CBS & 2.952 (1.5622) & 48953.8 (6.0695) & & \\
98 & PMP4/6-311+G(\(2df\)) & 3.014 (1.5950) & 47496.7 (5.8888)\(^{a}\) & & \\
91 & Empirical & 2.955 (1.5638) & 48393.3 (6.0000)\(^{b}\) & & \\
91 & Empirical & & 48554.6 (6.0200) & & \\
\hline
\end{tabular}
\end{table}
Table 2: Equilibrium positions (\(r_{\rm LiF}\) and \(r_{\rm CaF}\), in \(a.u.\)) and dissociation limits (\(D_{e}\), in cm\({}^{-1}\)), along with the method utilized (see the respective references for details). The original values, in Å and eV, are given between parentheses, with the factors 1 \(a.u.\) = 1 a\({}_{0}\) = 0.52917721092 Å and 1 \(a.u.\) = 27.211385 eV = 219474.63137054 cm\({}^{-1}\) applied. \({}^{a}\)The original \(D_{e}\) value is converted from kcal/mol (135.8) and the geometry optimization is made at the MP2(full)/6-311+G* level. \({}^{b}\)Within \(\pm\) 0.3 eV (\(\pm\) 2419.7 cm\({}^{-1}\)).

In the discussion presented below, the absolute value of the cross section is less relevant, and we shall address aspects of relative quantities such as (branching) ratios. In the upper panel of Fig. (7), the energy dependence of the cross sections, summed over \(j^{\prime}\), is presented for each open manifold associated with \(\upsilon^{\prime}=0\)-4 of the LiF product (cyan, red, green, blue and orange curves), whereas the total, summed over \(j^{\prime}\) and \(\upsilon^{\prime}\), is denoted by the solid black curve. For comparison purposes, the elastic component of the cross section is also shown as the black dashed curve. As seen in Fig. (7), the reactive cross sections present a somewhat flat behavior, whereas the elastic component is suppressed in the vicinity of 100 mK to 200 mK, mostly due to the presence of a resonant feature. However, it may be premature to consider this resonant structure as an actual observable feature, due in part to the fact that our calculation only represents the \(J=0\) case. The incoherent summation of contributions associated with higher \(J\) values may (and is likely to) wash out the features observed in Fig. (7). Thus, the question of whether the resonant structure predicted here survives the addition of higher \(J\) values remains open for further theoretical exploration. Likewise, the characterization of the resonance, in terms of angular momentum partial waves, width and lifetime, is outside the scope of this work.
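As an aside, the unit conversions applied in Table (2) are easy to verify. The snippet below uses only the conversion factors quoted in the table caption; the spot-checked entries are those of the HF/STO row (ref. 97).

```python
# Conversion factors quoted in the caption of Table (2).
BOHR_TO_ANGSTROM = 0.52917721092
HARTREE_TO_EV = 27.211385
HARTREE_TO_CM1 = 219474.63137054
EV_TO_CM1 = HARTREE_TO_CM1 / HARTREE_TO_EV   # ~8065.54 cm^-1 per eV

def angstrom_to_bohr(r):
    return r / BOHR_TO_ANGSTROM

def ev_to_cm1(e):
    return e * EV_TO_CM1

# Spot check against the HF/STO entry of ref. 97 (original values in parentheses):
print(f"r_CaF = {angstrom_to_bohr(1.9800):.2f} a.u.")   # tabulated: 3.74
print(f"D_e(CaF) = {ev_to_cm1(5.4500):.1f} cm^-1")      # tabulated: 43957.0
```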
Note, however, that since the entrance channel has \(j=0\) and \(J=0\), only the incoming \(\ell=0\) partial wave contributes \((J=j+\ell)\), so there is no centrifugal term in the entrance channel potential curve, which behaves as an ordinary attractive potential. As a consequence, it is possible that the resonance-like feature shown in Fig. (7) is associated with a triatomic bound state belonging to another channel, _i.e._ a Feshbach resonance. This hypothesis is reinforced by the somewhat high density of states that may exist in the vicinity of the entrance channel - see the red bold line in Fig. (3). The hierarchy of the \(j^{\prime}\)-summed cross sections for a given \(\upsilon^{\prime}\) level may be understood from the following considerations. The \(\upsilon^{\prime}=4\) manifold of LiF possesses only 22 rotational levels that are open with respect to the \((\upsilon=0,j=0)\) entrance channel of CaF. This fact is illustrated, at 1 mK, in panel (a) of Fig. (5). As a result, the summation over \(j^{\prime}\) yields the smallest cross sections overall - see the magnitude of the orange curve in Fig. (7). Similarly, the \(\upsilon^{\prime}=3\) and 2 cases possess the second and third smallest numbers of open rotational states (34 and 42), and thus provide the second and third smallest total cross sections, _i.e._ the blue and green curves of Fig. (7). Overall, the rotational levels of LiF belonging to the \(\upsilon^{\prime}=4\)-2 cases are predicted to be poorly populated by the collision in the energy range described here. In contrast, the production of LiF in the \(\upsilon^{\prime}=0\) and 1 manifolds, the largest in terms of open rotational states, corresponds to the chemical events most likely to occur, mostly populating \(j^{\prime}=0\)-20, with a smaller but substantial probability of also populating highly rotationally excited states. Another noteworthy aspect, as the collision energy increases, is the somewhat strong suppression of the elastic cross section, which reaches a minimum value in the resonant region, at about 182 mK, as shown in the upper panel of Fig. (7). As a consequence, in the range of energies studied here, the Li + CaF collision may become predominantly reactive at collision energies in the vicinity of 200 mK. This fact is illustrated by the elastic-to-reactive ratio of the cross section presented in the lower panel of Fig. (7): there are up to 1700 elastic collisions for every chemical event at about 1 mK, a ratio that remains somewhat constant up to about 100 mK and quickly drops to a minimum of 1:1 (or smaller) in the vicinity of 182 mK. For the sake of comparison, the elastic-to-inelastic ratio for collision-induced Zeeman relaxation of CaF by collisions with He atoms, at much higher temperatures (2 K), has been measured by Maussang and co-workers to be about \(10^{4}\) [13]. Likewise, in the cases of spin-polarized Li + CaH and Mg + CaH inelastic collisions, investigated by Tscherbul _et al._ [99], for which the chemical reaction is energetically forbidden, the elastic-to-inelastic ratio is predicted to be about \(10^{5}\) at 1 mK, _i.e._ nearly 60 times larger than in the case considered here. In addition, CaH is known to be more amenable to magnetic trapping, mostly due to its higher rotational constant compared to CaF [24].
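The channel-counting argument above can be made concrete with a back-of-the-envelope estimate. The sketch below counts energetically open LiF\((\upsilon^{\prime},j^{\prime})\) channels in a rigid-rotor/harmonic approximation; the rotational constant is the value quoted in the text, while the vibrational spacing is an assumed effective value chosen for illustration, not the LiF eigenvalues actually used in the coupled-channel calculations.

```python
# Rigid-rotor/harmonic count of open LiF(v', j') channels (illustrative only).
B_E_LIF = 1.29      # cm^-1, effective rotational constant quoted in the text
OMEGA_EFF = 810.0   # cm^-1, assumed effective vibrational spacing
EXO = 3880.3        # cm^-1, total exothermicity quoted in the text

def open_rotational_levels(v):
    """Count j' with v*omega + B_e j'(j'+1) <= EXO (collision energy ~ 0)."""
    e_vib = v * OMEGA_EFF
    j = 0
    while e_vib + B_E_LIF * j * (j + 1) <= EXO:
        j += 1
    return j  # number of open levels, j' = 0 ... j-1

for v in range(5):
    print(f"v' = {v}: {open_rotational_levels(v)} open rotational levels")
# Yields 55, 49, 42, 34 and 22 levels for v' = 0...4, matching the 22/34/42
# counts quoted above; the v' = 0 edge (j' = 55) is sensitive to centrifugal
# distortion and anharmonicity, which this crude model ignores.
```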
The modest elastic-to-reactive ratio found here raises concerns about the prospects of sympathetic cooling of CaF by means of cold collisions with Li atoms above 100 mK, due to potential trap losses induced by the formation of LiF. From an experimental point of view, our choice of using CaF in its lowest internal state as the entrance channel could be realized by producing the molecule with either a MOT or a Stark decelerator, or a combination of these with a microwave trap. A modern MOT implementation is capable of producing molecules for collisions at energies as low as the Doppler limit, whereas a Stark deceleration method is likely to produce molecules with a temperature of a few tens of mK, and is therefore more problematic for the case at hand. In either case, however, typical procedures, such as compressing the molecular cloud in order to improve its overlap with the buffer-gas coolant, may eventually raise the temperature by a few extra tens of mK and, therefore, also trigger losses due to LiF formation. Those experimental implementations already reaching sub-Doppler temperatures should not be affected by losses due to the chemical reaction, but sub-Doppler heating effects, such as those demonstrated by Devlin and Tarbutt [100], may occur for certain kinds of MOTs. A detailed simulation of the thermalization of CaF in the presence of Li and Rb cold buffer gases, based on a single-arrangement Lennard-Jones potential (thus disregarding both reactivity and inelasticity), has been carried out by Lim _et al._ [37]. Their numerical experiment assumed a practical experimental scenario similar to that given above and found a somewhat strong dependence of the cooling rate on the \(s\)-wave scattering amplitude in the scenarios with ultracold Li atoms, when compared to Rb, mostly due to the relatively small reduced mass of the Li + CaF combination. In addition, they also predicted a slowdown of the collision process, and thus of the cooling rate, for the Li + CaF case, due to a minimum in the cross sections in the range of 1-10 mK, similar to that found in the present work at higher energies, about 100-200 mK. In contrast, a similar minimum was predicted by Lim _et al._ within the \(\mu\)K range of collision energies when ultracold Rb atoms were used. As a consequence, the cooling rate when using Li was found to be an order of magnitude slower than that for the Rb case [37]. Alternatively, as also pointed out by Lim _et al._ [37], the use of a light colliding partner such as Li for sympathetic cooling when CaF is produced in an excited rotational state may be favorable, due to potentially higher centrifugal barriers that could, as a consequence, suppress losses due to collision-induced inelastic processes. This scenario is yet to be investigated, but it is now possible using the PES we have presented here.
Figure 7: Upper panel: Reactive cross section, in \(\AA^{2}\), for the Li + CaF\((\upsilon=0,j=0)\longrightarrow\) Ca + LiF\((\upsilon^{\prime})\) chemical reaction, summed over \(j^{\prime}\), as a function of the collision energy, in mK; here, \(J=0\), \(N_{\rm steps}=600\), \(\rho_{\rm max}=305\) a.u., \(\upsilon^{\prime}=0\) (cyan curve), \(\upsilon^{\prime}=1\) (red curve), \(\upsilon^{\prime}=2\) (green curve), \(\upsilon^{\prime}=3\) (blue curve), \(\upsilon^{\prime}=4\) (orange curve), the total summed over \(\upsilon^{\prime}\) (black curve) and the elastic component (dashed black curve). Lower panel: The elastic-to-total reactive cross section ratio as a function of the collision energy.
Moreover, it may be worthwhile to explore collisions driven by the \({}^{3}A^{\prime}\) electronic state of the LiCaF complex and the possibility of controlling the reaction by means of external magnetic fields, as in the case of the Li (Mg) + CaH systems [20; 99; 101]. Chemical reactions are likely to be suppressed in spin-polarized collisions of \({}^{2}S+^{2}\Sigma\) systems on the \({}^{3}A^{\prime}\) PES, due to the less attractive character of the \({}^{3}A^{\prime}\) electronic state and the overall endothermicity. Moreover, there is evidence suggesting that the spin-orbit-induced triplet-to-singlet transition, which could trigger the formation of LiF on the \({}^{1}A^{\prime}\) PES, as shown here, may be either small or negligible [101]. An overview of the \({}^{3}A^{\prime}\) electronic state of the LiCaF complex has been given by Frye and co-workers [102]. Overall, our results show that cold collisions of Li and CaF favor elastic scattering in the 1-100 mK regime, but a sharp decrease in the elastic cross section in the vicinity of 200 mK, possibly due to a Feshbach resonance, makes the elastic/reactive cross section ratio \(<1\), limiting the efficacy of sympathetic cooling of CaF by collisions with cold Li atoms. Despite the high density of asymptotic diatomic states and bound triatomic states that are involved in the collisions - see Fig. (3) -, our calculations predict a somewhat low density of resonances. This is probably due to the downhill nature of the reaction and, presumably, the short lifetimes of the LiCaF complexes formed. This aspect, the effect of rotational and vibrational excitation of the CaF molecule, and a proper characterization of the resonance will be addressed in future work. Indeed, a recent quantum close-coupling study of the Ca + BaCl\({}^{+}\) system has shown strong vibrational quenching rates for BaCl\({}^{+}\) that exceed the rotational quenching rates for low-lying rotational levels [103].

## 4 Conclusions

In this work we have applied state-of-the-art quantum chemistry and quantum reactive scattering methods to study both the interaction and the dynamics of Li(\({}^{2}S\)) + CaF(\(X^{2}\Sigma^{+}\)), in the context of cold collisions. To this end we have produced a global potential energy surface for the ground electronic state of the LiCaF system, \(X^{1}A^{\prime}\), capable of describing both atom-diatom arrangements, Li + CaF and Ca + LiF. The electronic structure calculations were carried out using an internally contracted multi-reference configuration-interaction method with a state-averaged (1\({}^{1}\)A\({}^{\prime}\), 1\({}^{3}\)A\({}^{\prime}\) and 1\({}^{1}\)A\({}^{\prime\prime}\)) complete active space (10 active electrons in 9 active orbitals) self-consistent field electronic wavefunction. A total of about 11000 geometries were evaluated and used to produce the final potential energy surface fit, with a many-body expansion method augmented with _ab initio_ parameterized long-range potentials. Scattering calculations for the Li + CaF\((v=0,j=0)\) entrance channel were performed between 1 and 200 mK of collision energy. At 1 mK the collision-induced formation of rovibrationally excited LiF\((v^{\prime}=0\)-1, \(j^{\prime}\approx 0\)-20) molecules is predicted to be the most likely collisional outcome, with a total energy release of up to about 3880 cm\({}^{-1}\). In the vicinity of 100-200 mK a quantum resonance, likely to be a Feshbach resonance, appears to strongly suppress the elastic component.
The reactive cross sections, however, remain largely unaffected in this regime, presumably due to their small magnitude compared to the elastic counterpart. The overall effect is that the elastic-to-reactive cross section ratio falls well below the lower limit of one hundred for collision energies above 100 mK, suggesting a somewhat poor cooling rate for sympathetic cooling of CaF by Li and a strong trap loss due to the formation of LiF. At the resonance energy of 182 mK nearly every collision is predicted to be reactive (1:1 ratio or smaller). It is worthwhile to emphasize, however, that the calculations presented here are not yet accurate enough for direct comparisons with future experimental observations, as the use of a single PES and a single partial wave (\(J=0\)) in the scattering calculations is most likely insufficient. We believe, however, that this work will serve as a benchmark for further theoretical studies, as we have provided a detailed description of the potential energy surface and of the numerical aspects required to obtain reasonably well converged scattering characteristics - a substantial improvement upon previous studies that were limited to model potentials and elastic/inelastic collisions, oftentimes considering only the equilibrium geometry of the triatomic complex, and equally limited dynamical models.

## Author Contributions

Electronic structure calculations were primarily carried out by Q.Y. and H.G. Scattering calculations were carried out by H.S. with assistance from M.M., N.B. and B.K.K. All authors contributed to manuscript preparation and editing.

## Conflicts of interest

There are no conflicts to declare.

## Acknowledgements

This work is supported in part by NSF grant No. PHY-2110227 (N.B.) and by a MURI grant from the Army Office of Research (Grant No. W911NF-19-1-0283 to H.G. and N.B.). The computation was performed in part at the Center for Advanced Research Computing (CARC) at UNM, and used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation (Grant No. ACI-1548562). Specifically, it used the Bridges-2 system, which is supported by the NSF (Award No. PHY-200034) (N.B.) at the Pittsburgh Supercomputing Center (PSC). B.K.K. acknowledges that part of this work was performed under the auspices of the US Department of Energy under Project No. 20170221ER of the Laboratory Directed Research and Development Program at Los Alamos National Laboratory. This work used resources provided by the Los Alamos National Laboratory Institutional Computing Program. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001).
2301.06154
The Nambu-Goto string in QCD: Dipole interactions, scattering and entanglement
We revisit some aspects of the stringy approach to dipole-dipole interactions, scattering and entanglement in QCD, using the Nambu-Goto (NG) string, without recourse to holography. We first show that the potential between two static dipoles exchanging closed NG strings is attractive at all separations. Underlining the exchange there is an emergent entropy, that is dominated by the tachyon at large separation, and vanishes at short separation, a measure of the confinement-deconfinement transition. The same tachyon is dominant in the scattering amplitude, as a correlator of two Wilson loops for two fixed dipole-like hadrons separated at large rapidity gap, where the contribution of the worldsheet fermions is included. While the tachyon causes the mean string bit density to grow exponentially with the rapidity, the total scattering cross section still satisfies the Froissart bound by quantum shadowing. The stringy scattering exchange also carries an entanglement entropy, that saturates when the bound is reached. For hadrons with varying dipole sizes, the tachyon exchange takes place in hyperbolic space in the conformal limit. The result for the full S-matrix is reminiscent of the one from Mueller$^\prime$s evolved dipole wavefunction, for the total dipole-dipole cross section in perturbative QCD.
Yizhuang Liu, Maciej A. Nowak, Ismail Zahed
2023-01-15T18:33:35Z
http://arxiv.org/abs/2301.06154v2
# The Nambu-Goto string in QCD: Dipole interactions, scattering and entanglement

###### Abstract

We revisit some aspects of the stringy approach to dipole-dipole interactions, scattering and entanglement in QCD, using the Nambu-Goto (NG) string, without recourse to holography. We first show that the potential between two static dipoles exchanging closed NG strings is attractive at all separations. Underlying the exchange there is an emergent entropy, which is dominated by the tachyon at large separation and vanishes at short separation, a measure of the confinement-deconfinement transition. The same tachyon is dominant in the scattering amplitude, as a correlator of two Wilson loops for two fixed dipole-like hadrons separated by a large rapidity gap, where the contribution of the worldsheet fermions is included. While the tachyon causes the mean string bit density to grow exponentially with the rapidity, the total scattering cross section still satisfies the Froissart bound by quantum shadowing. The stringy scattering exchange also carries an entanglement entropy, which saturates when the bound is reached. For hadrons with varying dipole sizes, the tachyon exchange takes place in hyperbolic space in the conformal limit. The result for the full S-matrix is reminiscent of the one from Mueller's evolved dipole wavefunction for the total dipole-dipole cross section in perturbative QCD.

## I Introduction

String theory has been hailed as a possible consistent theory of quantum gravity. By pertinent compactifications, it has the potential to lead to various extensions of the standard model at higher energies. But perhaps the most compelling application of string theory may still be in strong interactions, where it originated. It is plausible that QCD in the limit of a large number of colors \(N_{c}\) may be dual to an effective string theory with a weak string coupling \(g_{s}\sim\frac{1}{N_{c}}\), as suggested by holography in higher dimensions [1] (and references therein). So far, the compelling arguments for a QCD string stem from lattice QCD simulations [2] (and references therein). Indeed, detailed QCD lattice simulations in 1+2 dimensions have shown that closed flux tubes in gauge theories with various \(N_{c}\) are well described by a Nambu-Goto (NG) string in flat space [3]. Detailed studies of the dependence of the string mass on its length, for various \(N_{c}\), yield results that are in good agreement with the NG string even for short lengths, well beyond the contribution of the Lüscher term. Although fuzzy, the string is well approximated by a fundamental NG string. The extension of the lattice analysis to 1+3 dimensions, for \(N_{c}=3,5\) and the closed string spectrum, has also shown convincing agreement with the NG string [4]. The description of the heavy quark-antiquark potential for fixed separation \(R\), using a fully quantized NG string in 2+\(D_{\perp}\) dimensions, was carried out by Arvis, with the result [5] \[V(R)=\left(\sigma_{T}^{2}R^{2}-\frac{D_{\perp}}{24\alpha^{\prime}}\right)^{\frac{1}{2}}\,,\] with the string tension \(\sigma_{T}=1/2\pi\alpha^{\prime}\) and \(\alpha^{\prime}=l_{s}^{2}\). The large-distance expansion of the NG potential yields \[V(R)\approx\sigma_{T}R-\frac{\pi D_{\perp}}{24R}-\frac{\pi^{2}}{2\sigma_{T}R^{3}}\bigg{(}\frac{D_{\perp}}{24}\bigg{)}^{2}\;, \tag{1}\] the linear confining potential, plus the universal Lüscher correction [6] and the Lüscher-Weisz correction [7].
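For completeness, (1) follows from a straightforward large-\(R\) expansion of the Arvis potential. Writing \(D_{\perp}/24\alpha^{\prime}=\pi\sigma_{T}D_{\perp}/12\) (using \(\sigma_{T}=1/2\pi\alpha^{\prime}\)) and expanding with \(\sqrt{1-x}\approx 1-\frac{x}{2}-\frac{x^{2}}{8}\),
\[V(R)=\sigma_{T}R\left(1-\frac{\pi D_{\perp}}{12\,\sigma_{T}R^{2}}\right)^{\frac{1}{2}}\approx\sigma_{T}R-\frac{\pi D_{\perp}}{24R}-\frac{\pi^{2}D_{\perp}^{2}}{1152\,\sigma_{T}R^{3}}\,,\]
with the last term equal to \(\frac{\pi^{2}}{2\sigma_{T}R^{3}}\big{(}\frac{D_{\perp}}{24}\big{)}^{2}\), as in (1).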
These three contributions are well reproduced by the high-precision QCD lattice analysis of the inter-quark potential in [8] (see Fig. 3 therein). Yet, the full NG potential becomes imaginary below a critical distance \[R_{c}=\pi\sqrt{\frac{D_{\perp}\alpha^{\prime}}{6}}=\frac{1}{2}\frac{|M_{0}|}{\sigma_{T}}\rightarrow\frac{1}{3}\,\mbox{fm}\,.\] The rightmost numerical estimate follows from \(D_{\perp}=2\), with \(\alpha^{\prime}=1/2m_{\rho}^{2}\) the rho meson Regge slope. This behavior is tied to the tachyon with negative squared mass \(M_{0}^{2}<0\) in the dual spectrum, and doomed the NG string as a fundamental string in 4 dimensions. Yet the tachyon mass is also at the origin of the measured large-distance \(1/R\) and \(1/R^{3}\) corrections exhibited by the QCD string, which is fuzzy and not fundamental. The inter-quark potential below the critical \(R_{c}\) in QCD is non-confining. Hence, the use of the NG string to model QCD interactions should prove useful away from criticality. In what follows, we will make use of these observations. More specifically, we will use the NG string to analyse the interaction potential and the scattering amplitude in the Regge limit (\(s\gg-t\sim\Lambda_{\rm QCD}^{2}\)) between two QCD dipoles, using the exchange of a NG string in flat 4-dimensional space, without recourse to holography. When formulated in Euclidean signature, the two calculations parallel each other, with the scattering following from the potential by a rotation through a Euclidean angle, followed by a pertinent analytical continuation to the hyperbolic angle or rapidity. This construction was initially suggested for quark-quark scattering in [9]. For completeness, we recall that this stringy approach to scattering in QCD was initially proposed in higher dimensions using the gauge-gravity duality [10; 11; 12], and has since been discussed by many, e.g. [13; 14; 15; 16] (and references therein). The present work relies on the preceding constructions [17; 18] to derive a number of new results: 1/ The static dipole-dipole potential is dominated by the exchange of NG closed strings, with the tachyon dominant at large distances; 2/ The NG exchanges are entangled, with a quantum entropy that undergoes a phase-like transition with varying separation; 3/ The scattering of two dipoles at large rapidity is dominated by the two-particle irreducible (2PI) NG surface of genus 2, with a total cross section that is in good agreement with the recently reported data at the LHC; 4/ The NG estimate of the rapidity and parton-\(x\) at saturation; 5/ The contribution of the NG worldsheet fermions to both the Pomeron intercept and quantum entanglement at low parton-\(x\); 6/ A new entanglement entropy for multiple NG exchanges in the process of shadowing, which asymptotes to a single qubit at the Froissart bound; 7/ The generalization of the NG tachyon diffusion from flat to curved hyperbolic and confining space, to include evolution in the size of the probing dipoles.
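As a quick numerical aside, the critical distance \(R_{c}\) quoted at the beginning of this section is a one-liner to check. The minimal sketch below works in natural units; the \(\rho\)-meson mass value is an assumed input that merely fixes \(\alpha^{\prime}=1/2m_{\rho}^{2}\).

```python
import math

HBARC_MEV_FM = 197.3269804   # hbar*c in MeV fm
M_RHO_MEV = 775.0            # assumed rho meson mass
D_PERP = 2

# alpha' = 1/(2 m_rho^2) in natural units, converted to fm^2 via (hbar c / m)^2.
alpha_prime_fm2 = (HBARC_MEV_FM / M_RHO_MEV) ** 2 / 2.0
r_c_fm = math.pi * math.sqrt(D_PERP * alpha_prime_fm2 / 6.0)
print(f"R_c ~ {r_c_fm:.2f} fm")   # ~ 0.33 fm, i.e. the quoted 1/3 fm
```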
The organization of the paper is as follows: In section II we detail the construction of the potential between two static dipoles of fixed and equal size, via the exchange of closed NG strings. The potential is found to be attractive at all separations, with the NG tachyon dominant at large separations. In section III we extend the potential analysis to the scattering amplitude, through a simple rotation in Euclidean signature, followed by an analytical continuation to rapidity in Minkowski signature. The NG tachyon is shown to dominate the scattering amplitude at large rapidities. Both the potential and the scattering amplitude exponentiate through 2PI "webs" that are identified with the exchange of a NG string, to leading order in \(1/N_{c}\). This resummation in the scattering channel leads to saturation of the total cross section by quantum shadowing, even though the string-bit density keeps increasing exponentially at large rapidities. In section IV we review and extend the string results for quantum entanglement in hadron-hadron scattering in the Regge limit, and in DIS scattering in the low-x regime. We suggest that the quantum entanglement entropy saturates at the Froissart bound. In section V we show how to extend the NG tachyon diffusion to curved transverse space, by including the size of the probe dipoles and enforcing conformal symmetry. The result is reminiscent of the BFKL evolution result following from Mueller's wavefunction evolution for the total cross section in perturbative QCD. Our conclusions are in section VI. A short parallel between the NG tachyon results and pQCD in the Regge limit is discussed in the Appendix.

## II Stringy Dipole-dipole Interaction

There are many indications from lattice simulations that flux tubes in quenched QCD can be described by an effective theory of strings, of which the Nambu-Goto (NG) action is the leading contribution [2]. Remarkably, the NG string appears to describe the fuzzy QCD string well, even at relatively short distances. We now use this lattice observation to analyze the potential between a pair of static dipoles in quenched QCD. This potential is amenable to a measurement on the lattice. This construction closely parallels the analysis of the scattering of a pair of light-like dipoles, as we will detail below. Consider the static correlators of two identical Wilson loops \[{\bf W}{\bf W}=\frac{\langle{\bf W}(a,{\bf b}_{\perp}){\bf W}(a,{\bf 0}_{\perp})\rangle}{\langle{\bf W}\rangle\langle{\bf W}\rangle}\, \tag{2}\] with \[{\bf W}(a,{\bf x}_{\perp})=\frac{1}{N_{c}}Tr\bigg{(}{\bf P}{\rm exp}\bigg{(}ig_{Y}\int_{{\cal C}_{a}}d\tau{\bf A}.{\bf v}\bigg{)}\bigg{)}. \tag{3}\] The contour \({\cal C}_{a}\) runs along the rectangular loop of side \(a\), located at \({\bf x}_{\perp}\), with infinite extent along the temporal direction in the v-direction. As usual, the Wilson-loop correlator exponentiates through 2PI "webs" [19; 20], which we identify to leading order as a closed NG string exchange of genus 2. More specifically, \[\ln{\bf W}{\bf W}\equiv{\bf W}{\bf W}_{\rm 2PI}=g_{s}^{2}\int\frac{dT}{2T}\,{\bf K}(T)\, \tag{4}\] with \(g_{s}\sim\frac{1}{N_{c}}\) the string coupling, and \[{\bf K}(T)=\int_{T}D[x]\,e^{-S[x]+{\rm ghost}}\, \tag{5}\] the NG string partition function with cylindrical topology and modulus \(T\). The NG action in conformal gauge is [21] \[S[x]=\frac{\sigma_{T}}{2}\int_{0}^{T}d\tau\int_{0}^{1}d\sigma\ (\dot{x}^{\mu}\dot{x}_{\mu}+{x^{\prime}}^{\mu}{x^{\prime}}_{\mu}). \tag{6}\] Here \(\dot{x}=\partial_{\tau}x\) and \(x^{\prime}=\partial_{\sigma}x\). The string tension is \(\sigma_{T}=1/(2\pi\alpha^{\prime})\). The evaluation of (4) in string theory is in general difficult, due to the finite dipole sizes, and we need to make reasonable approximations. For small-size dipoles, we will assume that the cylindrical boundaries are highly pinched, and approximate them by straight lines.
The exchanged closed string forms a funnel with linear end-points, much like the exchange between static D0 branes, as detailed for the twisted dipoles in [17]. However, funnels with higher windings or N-ality, which are not suppressed between D0 branes, are suppressed between dipoles of finite transverse size. A physical interpretation of the final result will allow for a simple extraction of the dipole-dipole potential from this approximation below.

### Static dipole-dipole potential

With this in mind, we now decompose the string embedding coordinates using the worldsheet normal modes in 2+\(D_{\perp}\) flat space, with linear and periodic boundary conditions in the affine time with period \(T\) [22]
\[x^{0}(\tau,\sigma)=X+\frac{cW}{\sigma_{T}}\tau+\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}x^{0}_{m,n}\exp\left(i2\pi m\frac{\tau}{T}\right)\cos(\pi n\sigma)\,\]
\[x^{1}(\tau,\sigma)=\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}x^{1}_{m,n}\exp\left(i2\pi m\frac{\tau}{T}\right)\sin(\pi n\sigma)\,\]
\[x_{\perp}(\tau,\sigma)=\bigg{(}\sigma-\frac{1}{2}\bigg{)}b_{\perp}+\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}x^{\perp}_{m,n}\exp\left(i2\pi m\frac{\tau}{T}\right)\sin(\pi n\sigma)\,, \tag{7}\]
with \(c=1/l_{s}\). Since \[x^{0}(\tau+T,\sigma)-x^{0}(\tau,\sigma)=W\bigg{(}\frac{T}{\sigma_{T}l_{s}}\bigg{)}\, \tag{8}\] we interpret \(W=0,\pm 1,\pm 2,...\) as a winding number, with \(2\pi l_{s}T\) the circumference of the cylindrical funnel. Below, we will show that this is at the origin of an effective temperature for the exchanged closed strings in the dipole-dipole potential. \(X\) and \(W\) play the role of collective coordinates. In terms of (7), all integrals in (5) are Gaussian, with the result \[\int dX\sum_{W}{\bf K}(T,W)=\frac{a^{2}X}{l_{s}^{3}}\sum_{W}\exp\left(-\frac{T}{2}\bigg{[}\sigma_{T}b^{2}+\frac{c^{2}W^{2}}{\sigma_{T}T^{2}}\bigg{]}\right)\left[\prod_{n=1}^{\infty}2\sinh\left(\frac{n\pi T}{2}\right)\right]^{-D_{\perp}}. \tag{9}\] The diverging products can be regularized by standard zeta-function regularization, using the representation \[\sinh(\pi x)=\pi x\prod_{m=1}^{\infty}\left(1+\frac{x^{2}}{m^{2}}\right). \tag{10}\] With this in mind, and trading the re-summation over the windings using the Poisson summation formula, we obtain \[\int dX\sum_{W}{\bf K}(T,W)=\sqrt{\frac{1}{c^{2}\alpha^{\prime}T}}\frac{a^{2}X}{l_{s}^{3}}\sum_{k}\exp\left(-\frac{T}{2}\bigg{[}\sigma_{T}b^{2}+\frac{2\pi k^{2}}{c^{2}\alpha^{\prime}T^{2}}\bigg{]}\right)\eta^{-D_{\perp}}\left(i\frac{T}{2}\right)\, \tag{11}\] where \(\eta(x)\) is the Dedekind eta function \[\eta(\tau)=q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n})\qquad q=e^{2i\pi\tau} \tag{12}\] Note that (12) satisfies \(\eta(ix)=\eta(i/x)/\sqrt{x}\), and relates to the string density of modes [23] \[\eta^{-D_{\perp}}\big{(}ix\big{)}=x^{\frac{D_{\perp}}{2}}e^{\frac{\pi D_{\perp}}{12x}}\sum_{n=0}^{\infty}d(n)e^{-n\frac{2\pi}{x}}\, \tag{13}\] with \(d(n)\) the string density of states, normalized to \(d(0)=1\) and asymptotically \[d(n)\approx C\,n^{-\frac{D_{\perp}}{4}}\,e^{2\pi\sqrt{D_{\perp}n/6}}. \tag{14}\] The Poisson re-summation trades the sum over the windings \(W\) of the closed string exchanges for the dual sum over the N-alities or \(k\)-fluxes [17]. For source dipoles as Wilson loops in the fundamental representation, \(k=1,..,[\frac{N_{c}}{2}]\), which runs to infinity in the large \(N_{c}\) limit. For QCD with \([\frac{3}{2}]=1\), only the \(k=1\) N-ality is to be retained.
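A minimal worked step behind the appearance of the eta function in (11): regularizing the divergent product in (9) with \(\zeta(-1)=-\frac{1}{12}\) gives
\[\prod_{n=1}^{\infty}2\sinh\Big{(}\frac{n\pi T}{2}\Big{)}=\prod_{n=1}^{\infty}e^{\frac{n\pi T}{2}}\big{(}1-q^{n}\big{)}\longrightarrow e^{-\frac{\pi T}{24}}\prod_{n=1}^{\infty}(1-q^{n})=q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n})=\eta\Big{(}\frac{iT}{2}\Big{)}\,,\qquad q=e^{-\pi T}\,,\]
since \(\sum_{n\geq 1}n\to\zeta(-1)\) turns the prefactor \(e^{\frac{\pi T}{2}\sum_{n}n}\) into \(e^{-\frac{\pi T}{24}}\), which is precisely the \(q^{\frac{1}{24}}\) in (12) with \(\tau=iT/2\).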
The leading contribution to the static potential for two parallel dipoles of size \(a\) is given by the 2-particle irreducible (2PI) string exchange \[V_{DD}(b)=-\frac{\ln\mathbf{W}\mathbf{W}}{X}=-\frac{g_{s}^{2}a^{2}\sqrt{\pi\sigma_{T}}}{\alpha^{\prime}l_{s}c}\sum_{n=0}^{\infty}d(n)\left(\frac{\pi\alpha^{\prime}m_{n}}{b}\right)^{\frac{D_{\perp}-1}{2}}\mathbf{K}_{\frac{D_{\perp}-1}{2}}\left(m_{n}b\right). \tag{15}\] Here \(\mathbf{K}_{\alpha}(x)\) is the modified Bessel function, \(d(n)\) is the canonical string density of states with \(d(0)=1\), and \(g_{s}\) is the string coupling. (15) amounts to a tower of closed string exchanges or glueballs, with radial masses \[m_{n}=\frac{1}{c\alpha^{\prime}}\ \left(1-\frac{D_{\perp}c^{2}}{12\pi\sigma_{T}}+\frac{2n\pi c^{2}}{\pi^{2}\sigma_{T}}\right)^{\frac{1}{2}}=\sigma_{T}\beta\left(1-\frac{\beta_{H}^{2}}{\beta^{2}}+\frac{8\pi n}{\sigma_{T}\beta^{2}}\right)^{\frac{1}{2}}\, \tag{16}\] and with the inverse Hagedorn temperature \(\beta_{H}=\sqrt{\pi D_{\perp}/3\sigma_{T}}\). \(\beta=2\pi/c=2\pi l_{s}\) is the circumference of the exchanged cylindrical worldsheet. For large \(b\), the exchange in (15) is dominated by the tachyon mode with \(n=0\), \[m_{0}=\sigma_{T}\beta\left(1-\frac{\beta_{H}^{2}}{\beta^{2}}\right)^{\frac{1}{2}}=\frac{1}{l_{s}}\left(1-\frac{D_{\perp}}{6}\right)^{\frac{1}{2}}\, \tag{17}\] which is still real and positive for \(D_{\perp}<6\). For \(D_{\perp}=2\), and using half the rho meson Regge slope, \(\alpha^{\prime}=1/4m_{\rho}^{2}\), for a closed string, we have \(m_{0}\approx 1257\) MeV, which is close to the \(m_{0^{++}}=1475\) MeV glueball reported on the lattice [24]. The attractive and static dipole-dipole potential follows as \[V_{DD}(b)\approx-\frac{g_{s}^{2}a^{2}\sqrt{\pi\sigma_{T}}}{\alpha^{\prime}}\left(\frac{\pi\alpha^{\prime}m_{0}}{b}\right)^{\frac{D_{\perp}-1}{2}}\mathbf{K}_{\frac{D_{\perp}-1}{2}}\left(m_{0}b\right). \tag{18}\] For short separations, the exchange is also attractive, \[V_{DD}(b)\approx-\bigg{(}\frac{g_{s}^{2}a^{2}\sqrt{\pi\sigma_{T}}}{2\alpha^{\prime}}\sum_{n}d(n)\bigg{)}\Gamma\bigg{(}\frac{D_{\perp}-1}{2}\bigg{)}\left(\frac{1}{\sigma_{T}b^{2}}\right)^{\frac{D_{\perp}-1}{2}}\,, \tag{19}\] and Coulombic for \(D_{\perp}=2\), i.e. \(V_{DD}(b)\sim-g_{s}^{2}/b\). However, this contribution signals the onset of a critical NG string, with a diverging mode sum in the overall coefficient. At short separations, the exchange is not confining; it is dominated by 2-gluon Coulomb exchange in the \(0^{++}\) channel. The Casimir-Polder contribution characterizes the fully non-confining potential at large separations [25] (and references therein). Recall that the string coupling is \(g_{s}=f(\lambda)/N_{c}\), with \(f(\lambda)\) a non-universal function of the large 't Hooft coupling \(\lambda=g_{Y}^{2}N_{c}\). For instance, in holographic models, \(f(\lambda)=\lambda/4\pi\) (\(\mathcal{N}\)=4 SUSY) and \(f(\lambda)=(\lambda/3)^{\frac{3}{2}}/\pi\) (Witten model). This observation shows that the 2PI contribution (15) is dominant at large \(N_{c}\), as the higher \(\#\)-PI contributions are suppressed by \((1/N_{c}^{2})^{1+\#}\).
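As a sanity check on the numbers quoted above, note that for \(c=1/l_{s}\) and \(\sigma_{T}=1/2\pi\alpha^{\prime}\) with \(\alpha^{\prime}=l_{s}^{2}\), the mass formula (16) reduces to \(m_{n}=\sqrt{1-D_{\perp}/6+4n}\,/l_{s}\). The short sketch below evaluates the first few masses; the \(\rho\) mass value is an assumption that fixes \(l_{s}=1/2m_{\rho}\).

```python
import math

M_RHO_MEV = 770.0   # assumed rho meson mass; fixes 1/l_s = 2 m_rho
D_PERP = 2

def m_n(n):
    """Glueball tower of Eq. (16), reduced to (1/l_s) sqrt(1 - D_perp/6 + 4n)."""
    return 2.0 * M_RHO_MEV * math.sqrt(1.0 - D_PERP / 6.0 + 4.0 * n)

for n in range(3):
    print(f"m_{n} = {m_n(n):.0f} MeV")
# m_0 ~ 1257 MeV, to be compared with the lattice 0++ glueball at 1475 MeV.
```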
### Entanglement in dipole-dipole interaction

The circumference \(\beta=2\pi l_{s}\) in (15) plays the role of an inverse effective temperature, associated with the spatial exchange of closed strings or glueballs in the transverse b-direction. Remarkably, \(T_{R}=1/\beta=1/2\pi l_{s}\) is identical to the Rindler temperature of matter falling onto the stretched horizon, a membrane a string length away from the event horizon of a stationary black hole. With this in mind, we may interpret the potential in (15) as a _free energy_. As a result, the stringy exchange carries a thermal entropy, which is likely to be quantum [26; 27]. This entropy is solely due to the exchanged NG tachyon at large separations, \[S_{EV}(b)=\beta^{2}\bigg{(}\frac{\partial V_{DD}(b)}{\partial\beta}\bigg{)}\approx\frac{2\pi}{1-\frac{\beta_{H}^{2}}{\beta^{2}}}\bigg{(}\frac{\partial V_{DD}(b)}{\partial m_{0}}\bigg{)}. \tag{20}\] The denominator in (20) reflects the Hagedorn behavior, with a diverging entropy for \(\beta\to\beta_{H}\), when a thermal string reaches the Hagedorn temperature [28]. This is not the case here. We note that for shorter separations the potential is given by (19), and is independent of \(\beta\). The entanglement entropy then vanishes, as the dipole-dipole potential switches from string-like at large separations to Coulomb-like at short separations. For the dipole-dipole potential, the entanglement entropy thus undergoes a phase transition with varying separation \(b\). This illustrates how entanglement can be used as a probe of confinement for two QCD dipoles, as suggested in holography [29]. In retrospect this is not surprising, given the geometrical phase transition observed for the correlators of circular Wilson loops in the holographic dual construction [30].

## III Stringy dipole-dipole scattering

At large center-of-mass energy with a large rapidity gap, the hadron-hadron scattering amplitude is universal. The amplitude is that of two fixed-size dipoles scattering elastically. For small-angle scattering, the amplitude is dominated by gluon exchanges with vacuum quantum numbers. In perturbative QCD, the exchange is captured by the BFKL resummation of rapidity-ordered gluons, the so-called hard Pomeron. In non-perturbative QCD, the resummation is captured by Reggeized gluons. In the planar approximation, the exchange is string-like, with the topology of a cylinder: the so-called soft Pomeron. The existence of a hard and a soft Pomeron at large rapidity was initially pointed out in [31]. It finds a natural description in the gravity dual approach to QCD [13; 14], using a critical string in 10 dimensions in the conformal limit (hard Pomeron), followed by conformal symmetry breaking (soft Pomeron). The purpose of this section is to show that the NG string, which is non-critical, allows for a description of the elastic dipole-dipole scattering amplitude using \(1/N_{c}\) counting rules, already in 4 dimensions. The result is a soft Pomeron with parameters that are distinct from the holographic results. The scattering construction parallels that of the dipole-dipole interaction, showing the interconnectedness of the potential and scattering problems. The hard Pomeron can also be retrieved, by allowing the tachyon mode in the NG string to diffuse both in transverse and longitudinal size, a point inspired by holography [18]. Since the NG string provides the closest description of the QCD string potential, it should prove relevant for the QCD scattering amplitude in the eikonal limit. In particular, a more transparent approach to the unitarization of the cross section, as well as to the partonic string-bit content and entanglement, is seen to emerge.
### Scattering amplitude

In Euclidean signature, the scattering between two dipoles follows the same analysis as that for the potential between two static dipoles presented above, with one difference: the dipoles are not parallel but slanted at an angle \(\theta\). This angle maps, by analytical continuation, onto the rapidity or boost angle \(\chi\), thanks to Minkowski's historical observation. With this in mind, a rerun of the preceding arguments gives for the twisted worldsheet propagator [17] \[\mathbf{K}(T,\theta)=\frac{a^{2}}{\alpha^{\prime}}\frac{e^{-\frac{\sigma_{T}}{2}Tb^{2}}}{2\sinh\left(\frac{\theta T}{2}\right)}\prod_{n=1}^{\infty}\prod_{s=\pm}\frac{\sinh\left(\frac{n\pi T}{2}\right)}{\sinh\left[\frac{T(n\pi+s\theta)}{2}\right]}\left[\prod_{n=1}^{\infty}2\sinh\left(\frac{n\pi T}{2}\right)\right]^{-D_{\perp}}. \tag{21}\] The details regarding the twisted string worldsheet mode decomposition analogous to (7), followed by the detailed mode integration leading to (21), are given in [17]. The double analytical continuation \(T\to iT\) and \(\theta\rightarrow-i\chi\) maps this twisted dipole-dipole correlator onto the scattering amplitude of two light-like Wilson loops. With this in mind, inserting (21) into the corresponding \(\mathbf{WW}\) 2PI correlator and using (12, 13) yields [17] \[\mathbf{WW}_{\rm 2PI}(\chi,a,b)=\frac{g_{s}^{2}a^{2}}{4\alpha^{\prime}}\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k}e^{-k\frac{\pi\sigma_{T}\mathbf{b}^{2}}{\chi}}\eta^{-D_{\perp}}\bigg{(}\frac{ik\pi}{\chi}\bigg{)}=\frac{g_{s}^{2}a^{2}}{4\alpha^{\prime}}\sum_{k=1}^{\infty}\sum_{n=0}^{\infty}\,d(n)\,\frac{(-1)^{k}}{k}\bigg{(}\frac{k\pi}{\chi}\bigg{)}^{\frac{D_{\perp}}{2}}\exp\bigg{(}-\frac{2\chi}{k}\bigg{[}n+\frac{\mathbf{b}^{2}}{\alpha^{\prime}(2\chi/k)^{2}}-\frac{D_{\perp}}{24}\bigg{]}\bigg{)}\, \tag{22}\] after analytical continuation, with \(\chi\approx\ln(\alpha^{\prime}s)\) identified as the rapidity for large invariant mass \(\sqrt{s}\). The scattering amplitude in momentum space is \[\frac{1}{-2is}\mathcal{T}_{DD}(\chi,q)\approx\int d^{2}\mathbf{b}\ e^{i\mathbf{q}_{\perp}\cdot\mathbf{b}}\,\mathbf{WW}_{\rm 2PI}(\chi,a,b)\approx\frac{\pi^{2}g_{s}^{2}a^{2}}{2}\sum_{n=0}^{\infty}\sum_{k=1}^{\infty}d(n)\frac{(-1)^{k}}{k}\left(\frac{k\pi}{\chi}\right)^{\frac{D_{\perp}-2}{2}}\exp\left(-\frac{2\chi}{k}\left[n+\frac{\alpha^{\prime}}{4}\mathbf{q}_{\perp}^{2}-\frac{D_{\perp}}{24}\right]\right). \tag{23}\] Again, \(k\) sums over the N-ality with \(k=1,..,[\frac{N_{c}}{2}]\), all the way to infinity at large \(N_{c}\). In our case, only the \(k=1\) term contributes to the scattering of two dipoles as twisted Wilson loops in the fundamental representation of SU(\(3_{c}\)). With this in mind, and in the large rapidity limit, (23) simplifies to \[\mathcal{T}_{DD}(\chi,q)\approx is\,(\pi g_{s}a)^{2}\,\left(\frac{\pi}{\chi}\right)^{\frac{D_{\perp}}{2}-1}\exp\left(-\chi\left[\frac{\alpha^{\prime}}{2}\bigg{(}\mathbf{q}_{\perp}^{2}+M_{0}^{2}\bigg{)}\right]\right)\, \tag{24}\] with the tachyon squared mass \(M_{0}^{2}=-\frac{D_{\perp}}{6\alpha^{\prime}}\). The closed string exchange amounts to a Pomeron exchange, with the Regge trajectory \[\alpha_{\mathbb{P}}(t)=\frac{D_{\perp}}{12}+\frac{\alpha^{\prime}}{2}t\, \tag{25}\] hence a dipole-dipole (hadron-hadron) scattering amplitude that rises as \(\mathcal{T}_{DD}(s,t)/s\sim s^{\alpha_{\mathbb{P}}(t)}\).
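As a one-line check of the intercept, inserting \(M_{0}^{2}=-D_{\perp}/6\alpha^{\prime}\) into (24) and using \(\chi\approx\ln(\alpha^{\prime}s)\),
\[\mathcal{T}_{DD}(\chi,q)\sim is\,e^{\chi\left(\frac{D_{\perp}}{12}-\frac{\alpha^{\prime}}{2}\mathbf{q}_{\perp}^{2}\right)}\sim is^{1+\alpha_{\mathbb{P}}(t)}\,,\qquad t=-\mathbf{q}_{\perp}^{2}\,,\]
which is the Regge behavior quoted in (26) below.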
In the Regge limit with \(-t\ll s\), this amplitude is dominated by a single NG string exchange, given by \[\mathcal{A}(s,t)\sim-2is\int d^{2}be^{iq\cdot b}\,\mathbf{WW}_{\rm 2PI}(s,a,b)\sim is^{1+\alpha_{\mathbb{P}}(t)}\, \tag{26}\] with \(t=-q^{2}\).

### Cross section and Froissart bound

By the optical theorem, the elastic scattering amplitude (26) yields the total cross section \[\sigma(s)=\frac{1}{s}{\rm Im}\,{\cal A}(s,0)\sim-2\int d^{2}b\,{\bf W}{\bf W}_{2{\rm PI}}(s,a,b)\sim s^{\alpha_{\mathbb{P}}(0)}\,. \tag{27}\] (27) increases with the squared invariant mass \(s\), in violation of unitarity. This shortcoming can be addressed by noting that \(\langle{\bf W}{\bf W}\rangle\), as a correlator of two Wilson loops, requires the exponentiation of all the 2PI contributions to leading order in \(\frac{1}{N_{c}}\), much like in the potential between the two static dipoles discussed earlier, \[{\bf W}{\bf W}(\chi,a,b)=\frac{\langle{\bf W}_{\frac{\chi}{2}}(a,{\bf b}_{\perp}){\bf W}_{-\frac{\chi}{2}}(a,{\bf 0}_{\perp})\rangle}{\langle{\bf W}\rangle\langle{\bf W}\rangle}=\exp\left[{\bf W}{\bf W}_{2{\rm PI}}(\chi,a,b)\right]\,. \tag{28}\] Here \({\bf W}{\bf W}_{2{\rm PI}}\) is the 2PI "web" contribution [19; 20], which is dominated by a string exchange of genus 2 as detailed above, with higher-genus contributions suppressed by powers of \(g_{s}^{2}\sim 1/N_{c}^{2}\). Since \({\bf W}{\bf W}\equiv{\cal S}\) identifies with the full \(S\)-matrix, and using \({\cal S}=1+i{\cal T}\) as detailed in Appendix A, we obtain \[\sigma(s)=2\int d^{2}b\,\,{\rm Re}\big{(}1-{\cal S}(\chi,a,b)\big{)}=2\int d^{2}b\left(1-e^{{\bf W}{\bf W}_{2{\rm PI}}(\chi,a,b)}\right)\,, \tag{29}\] since \({\bf W}{\bf W}_{2{\rm PI}}(\chi,a,b)\) is real. To proceed, it is useful to recast the tachyon contribution in (22) in impact parameter space, as follows [17]: \[{\bf W}{\bf W}_{2{\rm PI}}(\chi,a,b)\approx-\frac{g_{s}^{2}a^{2}}{4\alpha^{\prime}}\bigg{(}\frac{\pi}{\chi}\bigg{)}^{\frac{D_{\perp}}{2}}\,e^{-S_{\rm cl}-S_{\rm 1loop}}\,. \tag{30}\] The first contribution in the exponent of (30), \[S_{\rm cl}=\sigma_{T}\int_{0}^{T_{P}}\cos^{2}(\chi\tau)\,bd\tau\int_{0}^{1}bd\sigma=\frac{1}{2}\sigma_{T}\beta b=\frac{b^{2}}{2\alpha^{\prime}\chi}\, \tag{31}\] is identified with a semi-classical worldsheet instanton, with a tunneling time \(T_{P}=\beta/b=2\pi/\chi\). The second contribution in the exponent of (30), \[S_{\rm 1loop}=\frac{D_{\perp}}{2}{\rm lndet}(-\partial_{\perp}^{2})=-\frac{\pi D_{\perp}}{6}\frac{b}{\beta}=-\frac{D_{\perp}}{12}\chi\, \tag{32}\] is the 1-loop zeta-regulated corrective action around the worldsheet instanton. Using (30)-(32), we now note that the integrand in (29) is controlled by the exponent in (22), which is a tradeoff between the minimal worldsheet instanton action \(S_{\rm cl}\) and its 1-loop quantum correction \(S_{\rm 1loop}\). The black-disc radius is reached when \[S_{\rm cl}=|S_{\rm 1loop}|\to b_{\rm max}^{2}=\frac{D_{\perp}\alpha^{\prime}}{6}\chi^{2}\, \tag{33}\] so that \(e^{\mathbf{W}\mathbf{W}_{2\text{PI}}}\to\theta(b-b_{\text{max}})\). As a result, (29) yields the total cross section \[\sigma(s)\sim 2\int d^{2}b\,\theta(b_{\text{max}}-b)\sim 2\pi\,b_{\text{max}}^{2}=2\pi\alpha^{\prime}\frac{D_{\perp}\,\chi^{2}}{6}. \tag{34}\] At large rapidity, the 2PI NG contribution saturates the Froissart bound, with a scale fixed by the string tension \(\sigma_{T}=1/2\pi\alpha^{\prime}\), and not the pion mass as suggested in [17; 34; 35].
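The saturation mechanism in (29) is easy to verify numerically. The sketch below is a minimal illustration in string units (\(\alpha^{\prime}=1\), \(b\) in units of \(l_{s}\)), using only the tachyon-dominated 2PI exchange (30)-(32) with \(a=l_{s}\) and \(g_{s}=1\); the comparison is against the black-disc estimate (34).

```python
import numpy as np
from scipy.integrate import quad

# String units: alpha' = 1, impact parameter b in units of l_s; a = l_s, g_s = 1.
ALPHA_P, D_PERP, G_S, A = 1.0, 2, 1.0, 1.0

def ww_2pi(chi, b):
    """Tachyon-dominated 2PI exchange, Eqs. (30)-(32)."""
    pref = -(G_S**2 * A**2 / (4.0 * ALPHA_P)) * (np.pi / chi) ** (D_PERP / 2.0)
    return pref * np.exp(-b**2 / (2.0 * ALPHA_P * chi) + D_PERP * chi / 12.0)

def sigma_tot(chi):
    """Eq. (29): sigma = 2 * int d^2b (1 - exp[WW_2PI])."""
    integrand = lambda b: 2.0 * np.pi * b * (1.0 - np.exp(ww_2pi(chi, b)))
    val, _ = quad(integrand, 0.0, 50.0 * np.sqrt(chi))
    return 2.0 * val

for chi in (5.0, 10.0, 20.0, 30.0):
    black_disc = 2.0 * np.pi * ALPHA_P * D_PERP * chi**2 / 6.0  # Eq. (34)
    print(f"chi = {chi:4.1f}: sigma = {sigma_tot(chi):8.1f}, "
          f"black-disc estimate = {black_disc:8.1f} (string units)")
```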
More specifically, (29) evaluates exactly to \[\sigma(s)=2\pi\alpha^{\prime}\bigg{(}\frac{D_{\perp}\chi^{2}}{6}-\chi\ln(D_{\perp}\chi)+\bigg{(}\frac{a^{2}g_{s}^{2}\pi}{4\alpha^{\prime}}\chi+\gamma_{E}\bigg{)}+\mathcal{O}\bigg{(}e^{-\frac{a^{2}g_{s}^{2}\pi}{4\alpha^{\prime}\chi}}e^{\frac{D_{\perp}\chi}{6}}\bigg{)}\bigg{)}\, \tag{35}\] with \(\chi=\ln(\alpha^{\prime}s)\), in agreement with the estimate (34). The new result (35), stemming from the NG exchange, is to be compared with the empirical parametrization of the \(pp\) data by the COMPETE collaboration [32], \[\sigma^{pp}(s)\sim\bigg{(}35.5+0.307\ln^{2}\bigg{(}\frac{s}{29.1\,\text{GeV}^{2}}\bigg{)}\bigg{)}\,\text{mb}\, \tag{36}\] after dropping the Reggeon contributions at large \(\sqrt{s}\). In Fig. 1 we show the NG result for the total cross section (35) for \(a=l_{s}\) and \(g_{s}=1\) with \(\mathcal{O}=30\): the green solid upper curve for \(\alpha^{\prime}=l_{s}^{2}=1/2m_{\rho}^{2}\) (the rho meson trajectory slope), and the red solid curve for \(\alpha^{\prime}=l_{s}^{2}=1/4m_{\rho}^{2}\) (half the rho meson trajectory slope). The empirical parametrization (36), blue solid lower curve, was used by the COMPETE collaboration to reproduce the compiled \(pp\) and \(p\bar{p}\) data two decades ago. It is in good agreement with the recently reported TOTEM measurements for \(pp\) at the highest \(\sqrt{s}=13\) TeV at the LHC [33]. The NG result is mostly sensitive to the string length, and is indistinguishable from the COMPETE parametrization for \(\alpha^{\prime}=l_{s}^{2}=1/4m_{\rho}^{2}\).
Figure 1: Total cross section in mb versus \(\sqrt{s}\) in GeV: solid-red-lower curve is the empirical \(pp\) cross section as parametrized by the COMPETE collaboration in [32] and quoted in (36). It is in agreement with the cross sections currently measured by the LHC [33]. The solid-blue-lower curve is the NG result (35) with \(a=l_{s}\), \(g_{s}=1\) and \(\mathcal{O}=30\), with \(\alpha^{\prime}=1/4m_{\rho}^{2}\), while the solid-green-upper curve is for \(\alpha^{\prime}=1/2m_{\rho}^{2}\).

### Shadowing of wee string bits

We now note that the total cross section (29) amounts to \[\sigma(s)\sim 2\int d^{2}b\,N(s,a,b)\, \tag{37}\] with \[\frac{1}{2}\frac{d\sigma}{d^{2}b}=N(s,a,b)=1-S(s,a,b)=1-e^{\mathbf{WW}_{2\rm PI}(s,a,b)}\,, \tag{38}\] where \(N(s,a,b)\) is the _effective_ number of wee string bits flowing through the cylindrical annulus \(2\pi b\,db\). The effective number \(N\) obeys a non-linear diffusion-like equation \[\left(\partial_{\xi}+M_{0}^{2}-\nabla_{\perp}^{2}\right)\ln(1-N)=0\, \tag{39}\] with \(\xi=\frac{1}{2}\alpha^{\prime}\chi\) playing the role of an effective (Gribov) time, or \[\partial_{\xi}N-M_{0}^{2}(1-N)\ln(1-N)-\nabla_{\perp}^{2}N-\frac{(\nabla_{\perp}N)^{2}}{1-N}=0. \tag{40}\] In the small-number limit, \(\ln(1-N)\sim-N\), one recovers the linear diffusion equation, with non-linear corrections for larger \(N\) that cause saturation asymptotically. For instance, in the quadratic approximation, the non-linear evolution is given by \[\left(\partial_{\xi}+M_{0}^{2}-\nabla_{\perp}^{2}\right)\!N-\frac{M_{0}^{2}}{2}N^{2}-(\nabla_{\perp}N)^{2}+{\cal O}(N^{3})=0\,. \tag{41}\]
Figure 2: Differential cross section (37) for fixed impact parameter \(b\) between two dipoles of fixed size \(a=1\) and \(g_{s}=1\), as it unitarizes by shadowing, at large rapidity \(\chi\). The green solid upper curve, red solid middle curve and blue solid lower curve are for impact parameters \(b=2,4,6\), in units of the string length.
The dashed line follows from the saturation condition (46).
Eq. (41) is reminiscent of the non-linear Gribov-Levin-Ryskin equation for the unintegrated gluon distribution [36]. In Fig. 2 we show the behavior of the integrand in the total cross section (37) versus the rapidity, for three different values of the impact parameter \(b\). The green solid upper curve, red solid middle curve and blue solid lower curve are for impact parameters \(b=2,4,6\), in units of the string length. We have set the string coupling \(g_{s}=1\), and the static dipole sizes \(a=1\), in units of the string length. The dependence on the impact parameter is mild.

### DIS view of wee string bits

In the stringy approach to the Pomeron and unitarization, the picture of a hadron at large rapidities or small-x is different from that following from pQCD, where a hadron at large rapidity \(\chi\) preserves its transverse size and shrinks its longitudinal size by the gamma factor \(\gamma=e^{\frac{1}{2}\chi}\). In contrast, when a string is exchanged, the hadron transverse size grows logarithmically, as \(|\Delta x_{\perp}|\sim\sqrt{\chi\alpha^{\prime}}\), while its light-front longitudinal size grows parametrically as \(|\Delta x^{-}|\sim\chi^{0}\alpha^{\prime}/0^{+}\), with \(0^{+}\) the time resolution in the light-front coordinate \(x^{+}\) [37] (note that \(\Delta x^{-}\Delta x^{+}\sim\alpha^{\prime}\) by the uncertainty principle). Partons as wee string bits do not behave as normal matter under Lorentz boosts. The number of wee string bits grows exponentially, with \(\mathbf{W}\mathbf{W}_{\rm 2PI}\sim e^{\alpha_{\mathbb{P}}\chi}\). This growth is similar to the growth of the longitudinal light-front momentum \(P^{+}\sim\gamma\sim e^{\frac{1}{2}\chi}\) of the boosted hadron. The string growth persists, even though the total cross section saturates by quantum shadowing. With \(\rho(\chi)\) the number of string bits per light-front volume \(|\Delta x_{\perp}||\Delta x^{-}|\),
\[\mathbf{diffusive\ regime}:\quad b_{\perp}\sim\sqrt{\alpha^{\prime}\chi}\,\qquad\rho(\chi)\sim\frac{e^{\alpha_{\mathbb{P}}\chi}}{\chi\alpha^{\prime 2}/0^{+}}\,\]
\[\mathbf{ballistic\ regime}:\quad b_{\perp}\sim\sqrt{\alpha^{\prime}}\,\chi\,\qquad\rho(\chi)\sim\frac{1}{\chi^{2}\alpha^{\prime 2}/0^{+}}. \tag{42}\]
An illustration of this spatial growth under boosting of the nucleon is shown in Fig. 3. The ballistic regime dominates the total cross section. These features are accessible to DIS scattering at large \(Q^{2}/m_{H}^{2}\gg 1\) and small parton fraction \(x\ll 1\), where the virtual photon can be viewed as a small projectile dipole of size \(a_{P}\sim 1/\sqrt{Q^{2}}\), scattering off a target hadron, also viewed as a dipole of larger size \(a_{T}\), as illustrated in Fig. 4. We recall the DIS kinematics \[s-m_{H}^{2}=Q^{2}\left(\frac{1}{x}-1\right)\] with the identification \(\chi\to\ln\frac{1}{x}\). For a fixed target size, (30) translates to \[\mathbf{W}\mathbf{W}_{\rm 2PI}(Q^{2},x,b)\sim g_{s}^{2}\frac{1}{\sqrt{\alpha^{\prime}Q^{2}}}\frac{1}{x^{\alpha_{\mathbb{P}}(0)}}e^{-\frac{\mathbf{b}_{\perp}^{2}}{2\alpha^{\prime}\ln\frac{1}{x}}}\,\sim\left(\frac{Q^{2}(x)}{Q^{2}}\right)^{\frac{1}{2}}e^{-\frac{\mathbf{b}_{\perp}^{2}}{2\alpha^{\prime}\ln\frac{1}{x}}}\, \tag{43}\] after re-insertion of the pre-exponent, with \[Q(x)=\frac{g_{s}^{2}}{\sqrt{\alpha^{\prime}}}\frac{1}{x^{\alpha_{\mathbb{P}}(0)}}\,. \tag{44}\]
The arguments presented in Appendix A show that the \(F_{2}\) structure function is \[F_{2}(x,Q^{2})\sim xG_{\mathbb{P}}(x,Q^{2})\sim\left(\frac{Q^{2}(x)}{Q^{2}}\right)^{\frac{1}{2}}\, \tag{45}\] at low-x, where \(Q(x)\) may be regarded as the stringy analogue of the so-called saturation momentum, with differences from the original proposal by Golec-Biernat-Wüsthoff (GBW) [38]. The standard condition for saturation is set by the requirement that \(S(s,a,b)|_{S}=e^{-\frac{1}{2}}\) in (38) (a drop by one standard deviation in the GBW Gaussian proposal for \(S\)), or equivalently \[\frac{d\sigma}{d^{2}b}\bigg{|}_{S}=2(1-e^{-\frac{1}{2}})=0.79\qquad\longrightarrow\qquad 14<\chi_{S}=\ln\frac{1}{x_{S}}<20. \tag{46}\] The rightmost result follows numerically from the black dashed curve in Fig. 2, for impact parameters \(b\) in the range \(2<b/l_{s}<6\). This stringy estimate puts a lower bound on the parton-x at saturation, \(x_{S}>10^{-6}\), which falls outside the reach of current colliders, including the future EIC. Finally, (43) may be viewed as the number of wee string bits with parton-x, at a distance \(b_{\perp}=\sqrt{{\bf b}_{\perp}^{2}}\) in the transverse plane, surrounding a fast-moving hadron sourced by a fixed-size dipole. The number is small for \(Q^{2}\gg Q^{2}(x)\), whatever \(b_{\perp}\). It is large for \(Q^{2}\ll Q^{2}(x)\), but only in the disc \(b_{\perp}\sim\sqrt{\alpha^{\prime}{\rm ln}\frac{1}{x}}\), which is seen to grow diffusively in the immediate surrounding of the target dipole. It drops substantially in the much wider corona \(b_{\perp}\sim\sqrt{\alpha^{\prime}}{\rm ln}\frac{1}{x}\), where the growth is ballistic.
Figure 3: Boosted nucleon with large longitudinal momentum \(P^{+}\sim e^{\frac{1}{2}\chi}\). The confined quark-diquark pair is highly contracted, with a vanishingly small longitudinal size \(1/P^{+}\), and fixed transverse size \(\chi^{0}\). It is surrounded by a halo of partons as string bits, which extends transversely as \(\sqrt{\chi}\) (diffusive regime) and up to \(\chi\) (ballistic regime). The halo remains parametrically large longitudinally, as \(\chi^{0}/0^{+}\), with \(0^{+}\) the time resolution along \(x^{+}\) [37]. Throughout, it is described by a continuous NG string with longitudinal momentum \(P^{+}\). All dimensions are in string units.
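The range quoted in (46) can be reproduced from the same tachyon-dominated exchange. The sketch below, again in string units with \(g_{s}=a=1\), solves \(d\sigma/d^{2}b=0.79\) for the saturation rapidity at the impact parameters of Fig. 2; the bracketing interval for the root finder is an assumption that comfortably contains the solution.

```python
import numpy as np
from scipy.optimize import brentq

# String units: alpha' = 1, b in units of l_s; parameters as in Fig. 2.
ALPHA_P, D_PERP, G_S, A = 1.0, 2, 1.0, 1.0

def ww_2pi(chi, b):
    """Tachyon-dominated 2PI exchange, Eqs. (30)-(32)."""
    pref = -(G_S**2 * A**2 / (4.0 * ALPHA_P)) * (np.pi / chi) ** (D_PERP / 2.0)
    return pref * np.exp(-b**2 / (2.0 * ALPHA_P * chi) + D_PERP * chi / 12.0)

def dsigma_d2b(chi, b):
    """Eq. (38): the differential cross section 2*(1 - S)."""
    return 2.0 * (1.0 - np.exp(ww_2pi(chi, b)))

# Saturation condition (46): d(sigma)/d^2b = 2*(1 - e^{-1/2}) ~ 0.79.
for b in (2.0, 4.0, 6.0):
    chi_s = brentq(lambda chi: dsigma_d2b(chi, b) - 0.79, 1.0, 60.0)
    print(f"b = {b:.0f} l_s: chi_S = {chi_s:.1f}, x_S = {np.exp(-chi_s):.1e}")
# Yields chi_S between ~14 (b = 2) and ~21 (b = 6), bracketing the quoted range.
```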
### Fermionic contribution to the Pomeron intercept and DIS
The fermionic correction to (48) follows immediately from this physical observation, as \(n_{f}\) massless worldsheet fermions also trapped in \(\beta\times b\) as illustrated in Fig. 5, with the result \[\beta F_{F}=n_{f}\int\frac{bdp}{2\pi}\,\ln\!\left(1+e^{-\beta|p|}\right)\,, \tag{49}\] in total analogy with (47).

Figure 4: DIS scattering as two Wilson loops \({\bf W}_{D}\) exchanging a closed NG string, in the Regge limit.

The corresponding quantum or entanglement fermionic entropy is then \[S_{EF}=\beta^{2}\frac{\partial F_{F}}{\partial\beta}=n_{f}\int\frac{bdp}{2\pi}\;\frac{2\beta|p|}{e^{\beta|p|}+1}=\frac{1}{2}\frac{n_{f}}{6}\chi\, \tag{50}\] with a net entanglement entropy \[S_{EE}=S_{EB}+S_{EF}=\bigg{(}1+\frac{1}{2}\frac{n_{f}}{D_{\perp}}\bigg{)}\frac{D_{\perp}}{6}\chi\,. \tag{51}\] The result (51) implies that the stringy Pomeron intercept is affected by the fermionic corrections on the worldsheet. More specifically, the Pomeron contribution in hadron-hadron scattering is modified, with a shifted intercept \[\alpha_{\mathbb{P}}(t)=\frac{D_{\perp}}{12}+\frac{\alpha^{\prime}}{2}t\to\tilde{\alpha}_{\mathbb{P}}(t)=\bigg{(}1+\frac{1}{2}\frac{n_{f}}{D_{\perp}}\bigg{)}\frac{D_{\perp}}{12}+\frac{\alpha^{\prime}}{2}t\, \tag{52}\] due to the fermionic contribution. Also in DIS, we can re-interpret the gluonic \(F_{2}\) structure function (45) at low-x as \[F_{2}(x,Q^{2})\sim xG_{\mathbb{P}}(x,Q^{2})\sim\frac{1}{x^{\tilde{\alpha}_{\mathbb{P}}(0)}}\, \tag{53}\] at the resolution fixed by the probing dipole size \(Q\sim 1/a\). Note that (51) is still solely given by the gluon density (53) at low-x \[S_{EE}(x,Q^{2})\sim\ln\left(xG_{\mathbb{P}}(x,Q^{2})\right)\, \tag{54}\] albeit with a fermion corrected gluonic intercept. At low-x, the partonic evolution does not follow from DGLAP, but rather BFKL (weak coupling) or surfaces (strong coupling). An alternative proposal to account for the fermionic contribution to the entanglement entropy at low-x was suggested in [39; 40; 41].

Figure 5: String worldsheet exchange \(\beta\times b\) in the Regge limit. The transverse fluctuations \(x_{\perp}^{i}\) with \(i=1,\ldots,D_{\perp}\), and the \(n_{f}\) massless fermions \(q\), are subject to periodic boundary conditions in \(\beta\).

### Entanglement and Froissart bound
Finally, we suggest that in Reggeized hadron-hadron scattering at the Froissart bound, the entanglement entropy saturates by quantum shadowing, even though the entanglement entropy as measured by (54) in DIS does not. For that, we interpret the 2PI stringy exchanges in the shadowing process in (29) as a net quantum free energy \[F_{\rm 2PI}=-\frac{1}{\beta}\ln\biggl{(}2\bigl{(}1-e^{\mathbf{W}\mathbf{W}_{\rm 2PI}}\bigr{)}\biggr{)}. \tag{55}\] Note that it reduces to the stringy free energy for small \(\mathbf{W}\mathbf{W}_{\rm 2PI}\) and large rapidity. Hence the 2PI quantum or entanglement entropy \[S_{\rm 2PI}=\beta^{2}\frac{\partial F_{\rm 2PI}}{\partial\beta}=\ln\bigl{(}1-e^{\mathbf{W}\mathbf{W}_{\rm 2PI}}\bigr{)}+\beta\frac{\partial_{\beta}\mathbf{W}\mathbf{W}_{\rm 2PI}}{1-e^{-\mathbf{W}\mathbf{W}_{\rm 2PI}}}+\ln 2\to\ln 2\, \tag{56}\] which is seen to asymptote to a constant for fixed \(b\) and large rapidity \(\chi\gg 1\). In the unitarity limit, the entanglement is that of a single qubit! Recall that in the black disc limit, the scattering choice appears to be binary, as the elastic and inelastic cross sections are equal to the classical cross section (Babinet theorem). In Fig. 
6 we show the entanglement entropy versus rapidity, using our proposal (55) for the 2PI contribution, for different values of the impact parameter \(b\). The green-solid lower curve, red-solid middle curve and blue-solid upper curve are for impact parameters \(b=2,4,6\) and fixed dipole size \(a=1\), all in units of the string length. We have fixed the string coupling \(g_{s}=1\). The rapid initial rise with rapidity is caused by the NG tachyon in the single string exchange (51). This rise overshoots the unitarity line, before it is overtaken by quantum shadowing in (56), to level off at the Froissart bound. This levelling off is generic of chaotic systems in their approach to equilibrium [42; 27], although here from above and not below.

Figure 6: The entanglement entropy between two light-light scattering dipoles in the Regge limit, versus the rapidity \(\chi\), following from the 2PI NG string contribution to the total cross section shown in Fig. 2. The green-solid lower curve, red-solid middle curve and blue-solid upper curve are for impact parameters \(b=2,4,6\), for a pair of dipoles of fixed size \(a=1\), in units of the string length. The rapid and linear rise in the entanglement entropy with rapidity is stopped and reversed by quantum shadowing. It levels off asymptotically when the Froissart bound is reached.

## V NG Tachyon diffusion in a confining warped space
The scattering amplitude of two fixed dipoles of size \(a\) in (24) is dominated by the exchange of the NG tachyon \[\exp\left(-\chi\left[\frac{\alpha^{\prime}}{2}\bigg{(}\mathbf{q}_{\perp}^{2}+M_{0}^{2}\bigg{)}\right]\right)\, \tag{57}\] in the transverse \(D_{\perp}\)-dimensions, where the rapidity \(\chi\) emerges as a _proper time_. This exchange is diffusive, and (57) can be recast using the contour integral \[\int_{-i\infty}^{+i\infty}\frac{dj}{2i\pi}\frac{e^{\chi j}}{j+\frac{\alpha^{\prime}}{2}(M_{0}^{2}+\mathbf{q}_{\perp}^{2})}=\frac{2}{\alpha^{\prime}}\int_{-i\infty}^{+i\infty}\frac{dj}{2i\pi}e^{\chi j}G(j,\mathbf{q}_{\perp}). \tag{58}\] The propagator in the complex j-plane satisfies \[(\mathbf{q}_{\perp}^{2}+m_{j}^{2})\,G(j,\mathbf{q}_{\perp})=1\, \tag{59}\] with the j-dependent mass \(m_{j}^{2}=\frac{2}{\alpha^{\prime}}(j-j_{0})\) and \(j_{0}=\frac{D_{\perp}}{12}\). This stringy result captures the exchange of an emergent spin-j in the Regge limit, for fixed momentum transfer \(\mathbf{q}_{\perp}\), sourced by a projectile and a target dipole of fixed sizes (both set to \(a\)).
### Warped diffusion
In QCD, the dipole sizes vary as well, with the combined change in \(a,b_{\perp}\) expected to be conformal in the UV and stringy in the IR. To realize this, we rewrite (59) in a general coordinate space \[-\frac{1}{\sqrt{|g|}}\partial_{\mu}(\sqrt{|g|}g^{\mu\nu}\partial_{\nu}G)+m_{j}^{2}G=\frac{1}{\sqrt{|g|}}\delta(z-z^{\prime})\delta^{D-1}(\vec{x}-\vec{x}^{\prime})\, \tag{60}\] by combining \((a,\mathbf{b}_{\perp})\rightarrow(z,\vec{x})=x^{\mu}\) in \(D=1+D_{\perp}\) space. But what is the metric \(g_{\mu\nu}\) when \(z\) is added as a coordinate? (60) can be viewed as an evolution equation in our QCD analysis of the scattering amplitude, with the evolution taking place in \(z\sim 1/\sqrt{Q^{2}}\) and rapidity \(\chi\). Hence, the metric should exhibit conformal symmetry for small \(z\). Inspired by holography, we fix \(g_{\mu\nu}\) through the line element \[ds^{2}=\frac{R^{2}}{z^{2}}e^{\mp\kappa^{2}z^{2}}(dz^{2}+d^{2}x_{\perp}). 
\tag{61}\] For small size dipoles \(z\to 0\), (61) reduces to that of AdS\({}_{3}\), which is conformal. For large size dipoles \(z\to 1/\kappa\), as expected from confinement, for both warping signs. Although (61) is reminiscent of the holographic analysis of the Pomeron in AdS\({}_{5}\times\)S\({}_{5}\) [13], we emphasize that the present construction is not holographic. The starting point is the NG string in flat space with a tachyon for \(D_{\perp}=2\), with no reference to type IIB string theory in 10 dimensions, which has no tachyon [1] (and references therein). As we noted earlier, the NG string in 4 dimensions is the only effective string model currently supported by QCD lattice simulations. We define \[\frac{R^{2}}{\alpha^{\prime}}\equiv\sqrt{\lambda}\, \tag{62}\] with \(\kappa,R\) to be related below. Since our approach is not holographic, the identification \(\lambda=g_{\rm YM}^{2}N_{c}\) does not follow. However, it is natural to expect that \(R/l_{s}\gg 1\), since \(R\) is the radius of the hyperbolic space, where the warped evolution of the NG tachyon is justified for large transverse separations. With this in mind, and using (61) in (60), we obtain \[-\partial_{z}^{2}G(z,z^{\prime},t)+(D-2)\bigg{(}\frac{1}{z}\pm\kappa^{2}z\bigg{)}\partial_{z}G(z,z^{\prime},t)+\bigg{(}t+\frac{S}{z^{2}}e^{\mp\kappa^{2}z^{2}}\bigg{)}G(z,z^{\prime},t)=\delta(z-z^{\prime})\, \tag{63}\] with \(\partial_{\perp}^{2}\to t\) and \[S\equiv S_{j}=2\sqrt{\lambda}(j-j_{0}). \tag{64}\] To remove the first order derivative, we redefine \[G(z,z^{\prime},t)\to z^{\frac{D-2}{2}}e^{\pm\frac{D-2}{4}\kappa^{2}z^{2}}z^{\ {}^{\prime}\frac{D-2}{2}}e^{\pm\frac{D-2}{4}\kappa^{2}z^{\prime 2}}G(z,z^{\prime},t)\, \tag{65}\] set \(u=\kappa z\), and expand \(e^{\mp u^{2}}=1\mp u^{2}+\mathcal{O}(u^{4})\) (moderately small size dipoles) to obtain \[-\frac{d^{2}}{du^{2}}G(u,u^{\prime},t)+\bigg{(}\frac{S_{j}+\frac{D(D-2)}{4}}{u^{2}}+\frac{u^{2}}{4}(D-2)^{2}+\frac{t}{\kappa^{2}}\mp S_{j}\pm\frac{1}{2}(D-2)(D-3)\bigg{)}\ G(u,u^{\prime},t)=\delta(u-u^{\prime}). \tag{66}\] For \(D=1+D_{\perp}=3\), we have explicitly \[-\frac{d^{2}}{du^{2}}G(u,u^{\prime},t)+\bigg{(}\frac{S_{j}+\frac{3}{4}}{u^{2}}+\frac{u^{2}}{4}+\frac{t}{\kappa^{2}}\mp S_{j}\bigg{)}G(u,u^{\prime},t)=\delta(u-u^{\prime}). \tag{67}\]
### Repulsive warping
The repulsive warping with \(e^{+\kappa^{2}z^{2}}\) acts as absolute confinement for the dipole sizes in hyperbolic space, characterizing the evolution. (In holography, it is a regulated hard wall in bulk AdS.) In this case the linear and homogeneous equation (66) becomes \[-\frac{d^{2}}{du^{2}}G(u)+\bigg{(}\frac{S+\frac{D(D-2)}{4}}{u^{2}}+\frac{u^{2}}{4}(D-2)^{2}+\frac{t}{\kappa^{2}}+S-\frac{1}{2}(D-2)(D-3)\bigg{)}G(u)=0. \tag{68}\] To simplify the equation, we consider \(u\to u^{\prime}=\sqrt{D-2}\,u=\sqrt{D-2}\,\kappa z\) (the prime is dropped below), for which (68) reads \[-\frac{d^{2}}{du^{2}}G(u)+\bigg{(}\frac{S+\frac{D(D-2)}{4}}{u^{2}}+\frac{u^{2}}{4}+\tilde{t}+\tilde{S}-\frac{1}{2}(D-3)\bigg{)}G(u)=0\, \tag{69}\] with \[D_{\perp}=D-1\,\ \tilde{t}=\frac{t}{(D-2)\kappa^{2}}\,\ \tilde{S}=\frac{S}{D-2}\, 
\tag{70}\] the general solutions are of the form \[G_{1}(u)=e^{-\frac{u^{2}}{4}}u^{1-\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}}}\,\mathbb{M}\left(\frac{\frac{D_{\perp}}{2}+\tilde{S}+\tilde{t}-\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}}}{2},1-\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}},\frac{u^{2}}{2}\right)\,\] \[G_{2}(u)=e^{-\frac{u^{2}}{4}}u^{1+\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}}}\,\mathbb{U}\left(\frac{\frac{D_{\perp}}{2}+\tilde{S}+\tilde{t}+\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}}}{2},1+\sqrt{\frac{D_{\perp}^{2}}{4}+(D_{\perp}-1)\tilde{S}},\frac{u^{2}}{2}\right)\, \tag{71}\] where \(\mathbb{M}(a,b,z)\) and \(\mathbb{U}(a,b,z)\) are the Kummer \(\mathbb{M}\) function and the Tricomi \(\mathbb{U}\) function, respectively. \(\mathbb{M}\) is regular at \(u=0\), while \(\mathbb{U}\) has a branch cut at \(u=0\). The solution to (67) reads \[G(u,u^{\prime})=\mathcal{A}\,G_{2}(u)G_{1}(u^{\prime})\qquad u>u^{\prime}\,\] \[G(u,u^{\prime})=\mathcal{A}\,G_{1}(u)G_{2}(u^{\prime})\qquad u<u^{\prime}\, \tag{72}\] with \(\mathcal{A}\) fixed by the Wronskian, \[\mathcal{A}^{-1}=\mathcal{W}(G_{1},G_{2})\propto 1\Big{/}\Gamma\left(\frac{\tilde{t}+\tilde{S}+\frac{D_{\perp}}{2}-\sqrt{(D_{\perp}-1)\tilde{S}+\frac{D_{\perp}^{2}}{4}}}{2}\right)\,. \tag{73}\]
### Reggeized trajectories
\(\mathcal{A}\) has poles when \[\frac{\tilde{t}+\tilde{S}+\frac{D_{\perp}}{2}-\sqrt{(D_{\perp}-1)\tilde{S}+\frac{D_{\perp}^{2}}{4}}}{2}=-n\, \tag{74}\] or squared masses \(|\tilde{t}(j,n)|\) given by \[|\tilde{t}(j,n)|=2n+\tilde{S}+\frac{D_{\perp}}{2}-\sqrt{(D_{\perp}-1)\tilde{S}+\frac{D_{\perp}^{2}}{4}}. \tag{75}\] By extending the tachyon diffusion to AdS\({}_{3}\) space, the original tachyon pole morphs into a multitude of Regge poles, \[\tilde{S}=\frac{2\sqrt{\lambda}(j-j_{0})}{D_{\perp}-1}=\frac{-1-4n-\frac{2t}{(D_{\perp}-1)\kappa^{2}}\pm\sqrt{1-8(D_{\perp}-1)n-\frac{4t}{\kappa^{2}}}}{2}. \tag{76}\] For \(n=0\) and small \(t\), the Regge trajectories for the \(\pm\) signs are \[j_{+}=j_{0}+\frac{\alpha^{\prime}}{2}|t|+\mathcal{O}(t^{2})\,\] \[j_{-}=j_{0}-\frac{D_{\perp}-1}{2\sqrt{\lambda}}+\frac{\alpha^{\prime}}{2}\frac{D_{\perp}-2}{D_{\perp}}|t|+\mathcal{O}(t^{2})\, \tag{77}\] provided that \(\kappa^{2}R^{2}=D_{\perp}\) with \(\sqrt{\lambda}=R^{2}/\alpha^{\prime}\). For \(D_{\perp}=2\), the tachyon trajectory \(j_{+}\) is diffusive, while the shifted trajectory \(j_{-}\) is super-diffusive, with an intercept below \(j_{0}\). The higher Regge trajectory \(j_{+}\) dominates at large rapidity.
### Confining regime
For large \(t\) and large \(n\) the poles in the j-plane become complex, with negative real parts. To proceed with the contour integration in (58) for the tachyon propagator, we select the branch cut of \(\sqrt{S+1}\) from \(-1-i\infty\) to \(-1\). For the Regge kinematics with small \(t\), the contribution from the first pole \(n=0\) with \[\tilde{S}=-\frac{tD_{\perp}}{(D_{\perp}-1)\kappa^{2}}\, \tag{78}\] dominates. 
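As a cross-check of (77) (our step, not spelled out in the text), setting \(n=0\) in (76) and expanding for small \(t\) gives, for the upper sign, \[\tilde{S}_{+}\approx\frac{1}{2}\left(-1-\frac{2t}{(D_{\perp}-1)\kappa^{2}}+1-\frac{2t}{\kappa^{2}}\right)=-\frac{tD_{\perp}}{(D_{\perp}-1)\kappa^{2}}\,\] so that, using \(\tilde{S}=2\sqrt{\lambda}(j-j_{0})/(D_{\perp}-1)\) together with \(\kappa^{2}R^{2}=D_{\perp}\) and \(\sqrt{\lambda}=R^{2}/\alpha^{\prime}\), \[j_{+}-j_{0}=\frac{(D_{\perp}-1)\tilde{S}_{+}}{2\sqrt{\lambda}}=-\frac{tD_{\perp}}{2\sqrt{\lambda}\kappa^{2}}=-\frac{\alpha^{\prime}t}{2}=\frac{\alpha^{\prime}}{2}|t|\] for \(t<0\); note that \(\tilde{S}_{+}\) coincides with (78), and the lower sign yields \(j_{-}\) in (77) by the same algebra.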
At the pole, the hypergeometric functions in (71) simplify \[\mathbb{M}\bigg{(}0,1-\frac{D_{\perp}}{2}+\frac{t}{\kappa^{2}},\frac{u^{2}}{2}\bigg{)}=\frac{1}{\Gamma\bigg{(}1-\frac{D_{\perp}}{2}+\frac{t}{\kappa^{2}}\bigg{)}}\,\] \[\mathbb{U}\bigg{(}\frac{D_{\perp}}{2}-\frac{t}{\kappa^{2}},1+\frac{D_{\perp}}{2}-\frac{t}{\kappa^{2}},\frac{u^{2}}{2}\bigg{)}=\bigg{(}\frac{2}{u^{2}}\bigg{)}^{\frac{D_{\perp}}{2}-\frac{t}{\kappa^{2}}}\,. \tag{79}\] Using (71,72,79) in (58) and carrying the j-integral gives the warped tachyon propagator \[G(z,z^{\prime},b_{\perp})\sim(zz^{\prime})^{1-\frac{D_{\perp}}{2}}\int\frac{d^{2}q}{(2\pi)^{2}}\frac{(zz^{\prime})^{1-\frac{D_{\perp}}{2}+\frac{q^{2}}{\kappa^{2}}}e^{\chi j_{0}-\chi\frac{q^{2}D_{\perp}}{2\kappa^{2}\sqrt{\lambda}}+i\vec{q}\cdot\vec{b}}}{\Gamma\bigg{(}1-\frac{D_{\perp}}{2}+\frac{q^{2}}{\kappa^{2}}\bigg{)}}\, \tag{80}\] after undoing the redefinition (65). For large \(\chi\), the q-integral is dominated by the small-\(q\) region, hence \[G(z,z^{\prime},b_{\perp})\sim\bigg{(}\frac{1}{\chi}\bigg{)}^{\frac{D_{\perp}}{2}}(zz^{\prime}\kappa^{2})^{2-D_{\perp}}e^{\chi j_{0}}\exp\bigg{(}-\frac{\kappa^{2}b^{2}}{\frac{2\chi D_{\perp}}{\sqrt{\lambda}}+4\ln(zz^{\prime}\kappa^{2})}\bigg{)}\, \tag{81}\] which is analogous to the tachyon diffusion in flat space. In particular, for large \(\chi\) the exponential reads \[-\frac{\kappa^{2}b^{2}}{\frac{2\chi D_{\perp}}{\sqrt{\lambda}}+4\ln(zz^{\prime}\kappa^{2})}\rightarrow-\frac{b^{2}}{2\chi\alpha^{\prime}}\, \tag{82}\] after using the identifications \(\kappa^{2}R^{2}=D_{\perp}\) and \(R^{2}/\alpha^{\prime}=\sqrt{\lambda}\). In the confining regime, (81,82) reduce to our unwarped result for the exchanged NG tachyon in flat dimensions and fixed dipole sizes (57), after Fourier transform.
### Conformal regime
The effect of the warping is mostly at play away from the confining regime. Indeed, in the conformal regime, the hypergeometric functions are limited to small \(u=\kappa z\) and large \(t\gg\kappa^{2}\), which we will refer to as the conformal limit. More specifically, this amounts to the limits \[\lim_{\kappa\to 0}G_{1}(u),\ G_{2}(u)\, \tag{83}\] which are not simply the \(u\to 0\) limits, because \(\frac{t}{\kappa^{2}}\) in the second argument has to go to infinity. To obtain these limits, the simplest way is to note that the differential equation (69) reduces to \[-\frac{d^{2}}{dz^{2}}G(z)+\bigg{(}\frac{S+\frac{D(D-2)}{4}}{z^{2}}+t\bigg{)}G(z)=0\, \tag{84}\] with two solutions \[\tilde{G}_{1}(z)=\sqrt{z}\,J_{-\sqrt{S+\frac{D_{\perp}^{2}}{4}}}(-i\sqrt{t}\,z)\, \tag{85}\] \[\tilde{G}_{2}(z)=\sqrt{z}\,Y_{-\sqrt{S+\frac{D_{\perp}^{2}}{4}}}(-i\sqrt{t}\,z). \tag{86}\] Here \(-\sqrt{S+\frac{D_{\perp}^{2}}{4}}\) is chosen to have a negative real part. With this in mind, the warped tachyon propagator in the conformal limit is of the form \[G(z,z^{\prime},b_{\perp})=\frac{\pi t}{2}\sqrt{zz^{\prime}}\,J_{-\sqrt{S+\frac{D_{\perp}^{2}}{4}}}\big{(}-i\sqrt{t}\,z_{<}\big{)}\,Y_{-\sqrt{S+\frac{D_{\perp}^{2}}{4}}}\big{(}-i\sqrt{t}\,z_{>}\big{)}\, \tag{87}\] with \(z_{<}=\min(z,z^{\prime})\) and \(z_{>}=\max(z,z^{\prime})\). 
For \(\tilde{j}_{0}<4\mathcal{D}\), the integral in (93) can be undone for large \(\chi\), with the result for the total cross section in the conformal regime \[\sigma(s)\to 2\pi zz^{\prime}\frac{g_{s}^{2}\sqrt{\pi}}{2\sqrt{2}}e^{\,\chi\left(j_{0}-\mathcal{D}\left(\frac{D_{\perp}^{2}}{4}-1\right)\right)}. \tag{94}\] It grows exponentially, i.e. \(\sigma(s)\sim zz^{\prime}s^{j_{0}}\) with \(j_{0}=\frac{D_{\perp}}{12}=\frac{1}{6}\) for \(D_{\perp}=2\), much like the BFKL result in pQCD, which is also conformal at weak coupling. Modulo the NG string assignments for \(j_{0}\) and \(\mathcal{D}\), the contribution (91) to the Reggeized scattering amplitude, and the total dipole-dipole cross section, is analogous to the result following from Mueller's dipole wavefunction evolution in pQCD [43]. More importantly, the results (81) (confining regime) and (91) (conformal regime) show that the general and warped NG result (71) interpolates continuously between these two regimes in Reggeized scattering. This point was originally made in the context of the gravity dual construction [13; 14], with no tachyon in bulk.
## VI Conclusions
One of the most striking features of the detailed lattice studies of the fuzzy QCD string is its description as a fundamental NG string for large and even relatively small lengths. This observation has been numerically checked in both 3 and 4 dimensions, and for different SU(N\({}_{c}\)) realizations. We have used this observation to analyze the stringy potential between a pair of static and scattering dipoles, as well as their quantum entanglement. The derivation of the static potential of two fixed size dipoles follows from the exchange of closed strings or glueballs in the quenched approximation (large \(N_{c}\) limit). Using the NG string, we have shown that this exchange is dominated by the tachyonic mode and attractive at large distances. The attraction persists at short distances, albeit in a power law form following from the resummation over the string states. This change in the static potential is captured by an entanglement entropy. The scattering amplitude for two fixed size dipoles can be obtained using similar arguments, by noting that it follows from a potential between two dipoles set at an angle \(\theta\), which is then analytically continued to the rapidity \(i\chi\). In the Regge limit, the scattering amplitude is totally fixed by the tachyonic mode of the NG string, as all the excited modes are suppressed at large rapidity. There is a total parallel between the potential channel and the scattering channel, where the 2PI contribution is retained in both, in leading order in \(1/N_{c}\). This contribution yields unitarization by saturation of the Froissart bound. We have extended some results regarding the quantum entanglement as captured by the NG tachyon in dipole-dipole scattering. 
The inclusion of the worldsheet fermions modifies the intercept of the stringy Pomeron, thereby changing the quantum entanglement as measured in DIS solely through a modification of the gluonic density. In the presence of shadowing, the quantum entanglement entropy is depleted, and saturates at the Froissart bound. The NG tachyon contribution to the Reggeized scattering amplitude of two dipoles captures two key aspects of the exchanged string: 1/ an exponential growth in the number of string bits at the origin of the growth of the total cross section at large rapidity prior to saturation; 2/ a diffusive spread of the string bits in the transverse plane. It is the balance between these two phenomena that yields saturation by quantum shadowing, as captured by the 2PI contribution. Perturbative QCD arguments using BFKL evolution of gluons as dipoles have shown that the gluon sizes evolve and that the evolution is conformal in the UV. This aspect of QCD in the Regge limit can be extended to the exchanged NG string, by considering the diffusive spreading of the string bits in the transverse plane together with the changes in the source and target dipole sizes. In other words, the NG tachyonic mode should diffuse in conformal 3-dimensional space (transverse space plus dipole size) as opposed to simply 2-dimensional space (transverse space). We have shown how to explicitly extend the diffusion of the NG tachyon mode in flat transverse space to curved AdS\({}_{3}\) plus a repulsive wall. We have emphasized that this approach is not holographic, since no string-gauge duality is used. In the conformal limit, the modified NG tachyon diffusion in proper time and curved space yields results for the scattering amplitude of two dipoles with evolving sizes, similar to those following from the BFKL evolution of Mueller's wavefunction in QCD. In the confining regime, the NG tachyon diffusion is unchanged from its standard form in flat dimensions.
**Acknowledgements** We thank Krzysztof Kutak for discussions. This work is supported by the Office of Science, U.S. Department of Energy under Contract No. DE-FG-88ER40388 and by the Priority Research Areas SciMat and DigiWorld under program Excellence Initiative - Research University at the Jagiellonian University in Krakow.
## Appendix A Relation to pQCD dipoles
There are a few parallels between the stringy results we have developed and those established using pQCD. In particular, the 2PI string amplitude (29) at fixed \(b\) can be recast in the form \[1-e^{\mathbf{W}\mathbf{W}(s,a,b)}=1-\exp\!\bigg{(}-\frac{g_{s}^{2}}{4}\frac{a^{2}}{\alpha^{\prime}}\,xG_{\mathbb{P}}(x,1/a^{2})S(b)\bigg{)}\, \tag{A1}\] with \(g_{s}=f(\lambda)/N_{c}\), the Gaussian profile \[S(b)=\left(\frac{\pi}{\chi}\right)^{\frac{D_{\perp}}{2}}e^{-S_{\rm min}}=\left(\frac{\pi}{\chi}\right)^{\frac{D_{\perp}}{2}}e^{-\frac{b^{2}}{2\chi\alpha^{\prime}}}\, \tag{A2}\] and \(\chi=\ln\frac{1}{x}\) in DIS. The diffusion of the string bits in the transverse plane is manifest in (A2). (A1) is very similar to the Glauber-Mueller formula for multiple dipole-target interactions [44] \[N_{\rm GM}(x_{01},x,b)=1-\exp\!\bigg{(}-\frac{\lambda}{N_{c}^{2}}\frac{x_{01}^{2}}{8R^{2}}\,xG_{\rm DGLAP}(x,4/x_{01}^{2})S(b)\bigg{)}\, \tag{A3}\] where the dipole size is \(x_{01}\) (\(a\) in our case), \(R\) is the radius of the target (\(\sim\sqrt{\alpha^{\prime}}\) in our case), and \(S(b)\) is identified with the dipole profile function inside the target ((A2) in our case). 
(A3) is an extension of the original Golec-Biernat-Wusthoff formula for the analysis of saturation in DIS at HERA [38], through the addition of the profile function. In general, the forward amplitude \(\mathcal{T}(b_{\perp},\chi)\) of dipole-nucleus scattering at impact parameter \(b_{\perp}\) and rapidity \(\chi\) relates to the full S-matrix \(\mathcal{S}(b_{\perp},\chi)\) as \[1-\mathcal{S}(b_{\perp},\chi)=-i\mathcal{T}(b_{\perp},\chi)\, \tag{A4}\] and the total cross section is given by \[\sigma(\chi)=2\int d^{2}b\,{\rm Re}\left(1-\mathcal{S}(b_{\perp},\chi)\right). \tag{A5}\] In our case or in the color glass condensate model (CGC), the full S-matrix is approximated by the Wilson-loop average \[\mathcal{S}=\frac{1}{N_{c}}\langle{\rm Tr}V_{\vec{x}_{0}}V_{\vec{x}_{1}}^{\dagger}\rangle_{\rm target}\, \tag{A6}\] with \(V_{\vec{x}}\) a Wilson line located at \(\vec{x}\). If the target is another Wilson-loop, it simply reduces to the \({\bf WW}\) correlator. In the CGC model, this Wilson-loop average exponentiates [45] (and references therein) \[\mathcal{S}=\exp\bigg{(}-\frac{1}{4}x_{\perp}^{2}Q_{s}^{2}(b_{\perp},Y)\ln\frac{1}{|x_{\perp}|\Lambda}\bigg{)}. \tag{A7}\] Our large \(b_{\perp}\) result cuts off the growth by a delicate balance between the factors \(e^{\frac{D_{\perp}\chi}{12}}\) (growth of the string bits) and \(e^{-\frac{b^{2}}{2\chi\alpha^{\prime}}}\) (diffusion penalty).
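To make that last balance quantitative (a short estimate of ours, not in the text): the two factors compensate at the impact parameter \[\frac{D_{\perp}}{12}\chi\simeq\frac{b_{\max}^{2}}{2\chi\alpha^{\prime}}\qquad\Longrightarrow\qquad b_{\max}\simeq\chi\sqrt{\frac{\alpha^{\prime}D_{\perp}}{6}}\,\] beyond which the exchange is exponentially suppressed, so the black-disc cross section grows as \[\sigma(\chi)\approx 2\pi b_{\max}^{2}\simeq\frac{\pi D_{\perp}\alpha^{\prime}}{3}\,\chi^{2}\propto\ln^{2}s\,\] i.e. the Froissart-like \(\ln^{2}s\) growth whose saturation by quantum shadowing is displayed in Figs. 2 and 6.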
2303.17620
Exploring Thousands of Nearby Hierarchical Systems with Gaia and Speckle Interferometry
There should be about 10,000 stellar hierarchical systems within 100 pc with primary stars more massive than 0.5 Msun, and a similar number of less massive hierarchies. A list of 8000 candidate multiples is derived from wide binaries found in the Gaia Catalog of Nearby Stars where one or both components have excessive astrometric noise or other indicators of inner subsystems. A subset of 1243 southern candidates was observed with high angular resolution at the 4.1 m telescope, and 503 new pairs with separations from 0.03" to 1" were resolved. These data allow estimation of the inner mass ratios and periods and help to quantify the ability of Gaia to detect close pairs. Another 621 hierarchies with known inner periods come from the Gaia catalog of astrometric and spectroscopic orbits. These two non-overlapping groups, combined with existing ground-based data, bring the total number of known nearby hierarchies to 2754, reaching a completeness of ~22% for stars above 0.5 Msun. Distributions of their periods and mass ratios are briefly discussed, and the prospects of further observations are outlined.
Andrei Tokovinin
2023-03-30T17:59:13Z
http://arxiv.org/abs/2303.17620v1
# Exploring Thousands of Nearby Hierarchical Systems with Gaia and Speckle Interferometry
###### Abstract
There should be about 10,000 stellar hierarchical systems within 100 pc with primary stars more massive than 0.5 \(M_{\odot}\), and a similar number of less massive hierarchies. A list of 8000 candidate multiples is derived from wide binaries found in the Gaia Catalog of Nearby Stars where one or both components have excessive astrometric noise or other indicators of inner subsystems. A subset of 1243 southern candidates was observed with high angular resolution at the 4.1 m telescope, and 503 new pairs with separations from 0\(\farcs\)03 to 1\(\arcsec\) were resolved. These data allow estimation of the inner mass ratios and periods and help to quantify the ability of Gaia to detect close pairs. Another 621 hierarchies with known inner periods come from the Gaia catalog of astrometric and spectroscopic orbits. These two non-overlapping groups, combined with existing ground-based data, bring the total number of known nearby hierarchies to 2754, reaching a completeness of \(\sim\)22% for stars above 0.5 \(M_{\odot}\). Distributions of their periods and mass ratios are briefly discussed, and the prospects of further observations are outlined.
binaries: visual
## 1 Introduction
Stars form in groups. Almost every star has been gravitationally bound to some other star or stars in their infancy (Lee et al., 2019), and a substantial fraction of these systems have survived, as evidenced by the multiplicity statistics of mature field populations (Moe and Di Stefano, 2017; Offner et al., 2022). Statistics of stellar systems helps us to understand their formation and early evolution. Hierarchical systems are particularly informative in this regard. However, owing to the vast range of parameters (separations, mass ratios), the complete view of even the nearest population of stellar hierarchies has been difficult to grasp. The relatively well-studied sample of solar-type stars within 25 pc contains only 56 hierarchical systems (Raghavan et al., 2010). The Gaia astrometric space mission (Gaia Collaboration et al., 2016, 2021) has dramatically changed the landscape of Galactic astronomy in many ways. The mission continues, and the use of its intermediate data for the study of stellar systems is a rapidly growing field (e.g. El-Badry et al., 2021; Tokovinin, 2022). The Gaia Catalog of Nearby Stars (GCNS) within 100 pc (Gaia Collaboration et al., 2021), based on the Gaia Early Data Release 3 (eDR3), gives a complete census of all stars down to the hydrogen burning limit (except for some binaries lacking parallaxes). Owing to its exquisite astrometric precision, Gaia can detect a substantial fraction of binary systems in the 100 pc volume. However, the periods and mass ratios of most candidate close binaries remain essentially unconstrained. The Non-Single Star (NSS) catalog (Gaia Collaboration et al., 2022; Pourbaix et al., 2022), part of the Gaia data release 3 (DR3), contains orbital elements only for a small fraction of astrometric and spectroscopic binaries detected by Gaia. In this work, I open the treasure trove of Gaia data to get a better view of nearby stellar hierarchies. A candidate list is created by isolating bound pairs of stars found in the GCNS and looking at those that contain signs of inner subsystems according to the Gaia binarity indicators. 
Naturally, some of these hierarchies are already known from prior work. A subset of the new candidates have been observed systematically by speckle interferometry at the 4.1 m Southern Astrophysical Research Telescope (SOAR) in 2021-2023, and these results are reported here. About half of the candidates were resolved, providing estimates of their mass ratios and likely periods. At the same time, these resolutions allow better understanding of the discovery potential of the Gaia binarity indicators. Complementing the known hierarchies by these new systems and by the systems with inner orbits determined by Gaia leads to a sample of 2758 main-sequence hierarchies within 100 pc with known or estimated inner and outer periods. Their primary stars are generally more massive than \(\sim\)0.7 \(M_{\odot}\). A glimpse of their statistics (still incomplete but much better than in the pre-Gaia era) is given, and directions of future research are outlined.
## 2 Gaia Hierarchies Within 100 pc
### Number of Hierarchies within 100 pc
The GCNS (Gaia Collaboration et al., 2021) is a rich source of hierarchical systems in a volume-limited sample, with the potential to make a major contribution to their statistics. It contains 331,312 entries. However, the GCNS misses close binaries with components of comparable brightness that do not have parallaxes in eDR3. The fraction of missing stars was estimated at 7.4% in the GCNS, based on its data for the 10 pc volume. The peak of the binary separation distribution at \(\sim\)50 au corresponds to an angular separation of 0\(\farcs\)5 at 100 pc, so the bias against binaries in the complete GCNS could be larger than in its 10 pc portion. Empirical characterization of the Gaia bias against binaries based on the new SOAR observations is provided below. The distribution of absolute magnitudes in the GCNS peaks at \(M_{G}=10.5\) mag, reaching a density of 0.01 star pc\({}^{-3}\) mag\({}^{-1}\), and drops smoothly on both sides of the maximum (see Figure 16 in GCNS). The corresponding median mass is 0.32 \(M_{\odot}\) according to the PARSEC isochrone for solar metallicity (Bressan et al., 2012). Let us estimate how many hierarchical (e.g. triple and higher-order) systems there are within 100 pc. It is well known that the fraction of hierarchies increases with mass, while the density of stars declines. I approximated the fraction of hierarchies vs. mass in Figure 1 of Offner et al. (2022) by a parabola \(f_{H}\approx 0.146+0.255x+0.414x^{2}\), where \(x=\log_{10}M/M_{\odot}\), and multiplied the star counts in the GCNS by this fraction (the masses are estimated from the absolute magnitudes \(M_{G}\)); a schematic version of this calculation is sketched below. The result in Figure 1 suggests a total number around 19,000 for masses above 0.2 \(M_{\odot}\) (10,400 above 0.5 \(M_{\odot}\) and 6,400 above 0.7 \(M_{\odot}\)). An additional 4,400 hierarchies are predicted in the first 0.1-0.2 \(M_{\odot}\) bin, although \(f_{H}\) for low-mass stars is poorly known. This model yields 4,062 hierarchies with masses from 0.8 to 1.25 \(M_{\odot}\) within 100 pc, roughly matching the 56 systems found in the 64\(\times\) smaller volume by Raghavan et al. (2010).
### Selection of Candidate Hierarchies
The GCNS provides a list of 19,176 pairs of stars, 16,556 of which are estimated to be bound. However, inner subsystems in triples bias Gaia measurements of parallaxes and proper motions (PMs), so many wide pairs with subsystems appear as unbound or even unrelated. 
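The sketch promised above is a minimal, hypothetical version of the count estimate of Section 2.1. The parabola is the one quoted in the text; the per-bin star counts are placeholders (the real calculation bins the actual GCNS magnitudes converted to masses), so the printed numbers are illustrative only.

```python
import numpy as np

def f_hier(mass_msun):
    """Hierarchy fraction vs. mass: parabolic fit to Figure 1 of Offner et al. (2022)."""
    x = np.log10(mass_msun)
    return 0.146 + 0.255 * x + 0.414 * x**2

bins = np.arange(0.2, 1.5, 0.1)            # 0.1 Msun mass bins (lower edges)
gcns_counts = np.ones_like(bins) * 1000    # placeholder: real GCNS counts go here

n_hier = gcns_counts * f_hier(bins + 0.05)  # hierarchies per bin (bin centers)
print(f"hierarchies above 0.5 Msun (toy counts): {n_hier[bins >= 0.5].sum():.0f}")
```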
To avoid potential bias against triples, I use the weaker criteria for selecting outer pairs, as outlined in Tokovinin (2022): * Parallaxes equal within 1 mas. * Projected separation \(s<20\) kau. * Relative projected speed (in km s\({}^{-1}\)) \(\Delta V<10(10^{3}/s)^{0.5}\), where \(s\) is expressed in au. This is a relaxed form of the boundness criterion which rejects optical companions but preserves the hierarchies. A similar approach was adopted by Hwang et al. (2020). A search over the GCNS with these criteria returns 24,604 systems, each containing from 2 to 5 stars (50,243 stars in total). Most systems are just wide binaries, except 944 triples, 42 quadruples, and one quintuple, \(\xi\) Sco. The relaxed criteria give a larger sample of wide systems compared to the list of binaries given in the GCNS. The median mass of stars in our wide pairs is 0.44 \(M_{\odot}\) (0.60 and 0.31 \(M_{\odot}\) for the primary and secondary components, respectively). This is larger than the median mass in the GCNS because binaries prefer stars more massive than average (in other words, the binary fraction increases with mass). Other catalogs of wide binaries based on Gaia have been published by Hartman & Lepine (2020); El-Badry et al. (2021); Zavada & Piska (2022), and others, using a variety of approaches and selection criteria.

Figure 1: Estimated number of hierarchical systems within 100 pc in 0.1 \(M_{\odot}\) mass bins.

Each Gaia (and GCNS) entry contains two powerful diagnostics of unresolved binaries, namely the renormalized unit weight error (RUWE) and the fraction of double transits, FDBL (IPDfmp in the Gaia terminology). I also explored another parameter, IPDgofha (an asymmetry parameter in the Gaia image analysis), but found it to be poorly correlated with RUWE and FDBL; probably for this reason, the GCNS does not contain this parameter. So, FDBL, RUWE, and the variability of radial velocity (RVERR) are the main indicators of close binaries. Photometric variability caused by eclipses is yet another indicator, used in some studies of hierarchies (e.g. Hwang et al., 2020; Fezenko et al., 2022) but not relevant for this work. It is generally assumed that RUWE\(>\)1.4 indicates significant deviations from a single-star astrometric solution, suggesting an unresolved binary (Belokurov et al., 2020; Penoyre et al., 2022). However, a large RUWE can be caused either by the genuine motion of the photocenter (i.e. an astrometric binary) or by the influence of a faint visual companion that spoils the Gaia astrometry by its presence; both situations occur and can be illustrated by concrete examples. The FDBL parameter, ranging from 0 to 100, is an even more powerful diagnostic of a close companion than RUWE; so far, it has received little attention in the literature. However, double transits do not always pinpoint inner subsystems. In a binary of \(\sim\)1\(\arcsec\) separation, normally resolved by Gaia as two sources, double transits occur when the Gaia scans are nearly parallel to the binary. A plot of FDBL vs. binary separation \(\rho\) in Figure 2 clearly shows an elevated FDBL for close pairs. An empirical condition FDBL\(>100(2.5-\rho)/1.5\) inferred from this plot separates binaries with genuine subsystems from simple binaries with \(\rho<2\farcs 5\). For the secondaries, a stricter criterion FDBL\(>100(3.5-\rho)/1.5\) is adopted, based on a similar plot; both conditions are condensed into the small sketch below. 
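A minimal encoding of the selection logic, under the stated thresholds. The function and argument names are ours (hypothetical, not GCNS column names), and the way the baseline FDBL\(>\)10 cut is combined with the separation-dependent threshold is one reading of the text:

```python
def is_bound_pair(dplx_mas, sep_au, dv_kms):
    """Relaxed boundness criteria for selecting wide outer pairs (Section 2.2)."""
    return (abs(dplx_mas) < 1.0                 # parallaxes equal within 1 mas
            and sep_au < 20_000                 # projected separation < 20 kau
            and dv_kms < 10.0 * (1e3 / sep_au) ** 0.5)  # relaxed velocity criterion

def has_inner_subsystem(rho_arcsec, fdbl, ruwe, rverr_kms, primary=True):
    """Gaia binarity indicators pointing to an unresolved inner subsystem."""
    # Close outer pairs need a higher FDBL threshold, since double transits
    # there can come from the wide pair itself rather than from a subsystem.
    edge = 2.5 if primary else 3.5
    fdbl_cut = max(10.0, 100.0 * (edge - rho_arcsec) / 1.5)
    return fdbl > fdbl_cut or ruwe > 2.0 or rverr_kms > 2.0
```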
However, as shown in the lower panel of Figure 2, many binaries closer than 2\(\farcs\)5 also have an elevated RUWE in their primary components, presumably caused by the disturbing influence of companions on the astrometric measurements. A plot of RUWE vs. the estimated speed of orbital motion, not shown here, reveals no correlation, so the non-linear orbital motion in these binaries is unlikely to be the cause of an elevated RUWE.

Figure 2: Top: fraction of double transits FDBL in the primary component of resolved Gaia binaries vs. their separation \(\rho\) in arcseconds. The line is FDBL=100*(2.5 - \(\rho\))/1.5. Bottom: RUWE of the primary component vs. \(\rho\).

Figure 3: Block diagram showing selection of candidates for speckle observations.

Figure 3 illustrates the process of selecting candidate hierarchies from the GCNS. About 1000 systems containing three (or more) related GCNS stars are obvious candidates; a subset of them has been studied in Tokovinin (2022). Additional candidates are selected among wide binaries where the presence of an inner subsystem is inferred from the Gaia binarity flags: FDBL\(>10\) (with a higher threshold for close outer pairs as noted above) or RUWE\(>\)2 or RVERR\(>\)2 km s\({}^{-1}\). Application of these subjective criteria, adopted to maximize the reality of subsystems, leads to a pool of 8032 candidate hierarchies. In most cases, the parameters of the inner subsystems (periods and mass ratios) are unknown, so this sample by itself is not very informative for the statistics. For this reason some candidates were observed at SOAR, as reported below. The list of 24,604 systems (including 8032 candidate hierarchies) is not provided here because, given the criteria, it can be derived from the original GCNS. The criteria for selecting binaries and subsystems were chosen subjectively, and modified criteria would result in a different list. The raw list has little value, as it serves only as a starting point for follow-up observations and for additional mining of the Gaia data. The Multiple Star Catalog (MSC; Tokovinin, 2018) holds a record of known hierarchies based on the literature. This is an eclectic data collection, heavily burdened by selection effects. On the other hand, the Washington Double Star Catalog (WDS; Mason et al., 2001) holds a similarly disparate collection of resolved (traditionally called "visual") pairs, some of which are mere chance projections (optical pairs). The Gaia candidate hierarchies were matched to the WDS, and cases where the inner pairs in the Gaia candidates were actually resolved according to the WDS were singled out. Most of those triples were already present in the MSC, and 150 new ones, where Gaia discovered distant tertiary components to previously known visual binaries, were added. With this increment, the MSC contained 1017 hierarchies within 100 pc. The work reported below has doubled this number. However, the completeness is still very poor in comparison with the expected number of hierarchies in Figure 1. About 370 hierarchical systems within 100 pc documented in the MSC are not present in our candidate list. The most frequent classes are tight hierarchies with one or zero associated Gaia sources, and hierarchies where one component is a visual binary without Gaia astrometry. In a small number of cases, Gaia parallaxes are available for both components but are strongly biased by the subsystems, so that the two GCNS stars appear unrelated. 
For example, the primary star in 01579\(-\)2851 has a parallax of 12.45 mas in DR3, and 11.24 mas after fitting its astrometric orbit in the NSS; the latter coincides with the parallax of the secondary component. Obviously, this pair is missing from the list of candidates, which imposes a maximum parallax difference of 1 mas. A few hierarchies in the MSC have outer separations exceeding the adopted limit of 20 kau.
### Data Organization
Information on binary and multiple stars in various databases is often affected by confusion. Such attributes as position, parallax, photometry, etc. may refer either to the blended light of several stars or to the individual stars. The term _component_ is used here to refer to the data on astrometry and photometry of components of multiple systems, admitting that each component may host several stars and that the term _resolved_ is fuzzy. The notion of component evolves with time as observing techniques improve. Gaia provides, for the first time, resolved photometry and astrometry of the individual components of many visual binaries wider than \(\sim\)1\(\arcsec\). At the same time, the 2MASS photometry and position may still refer to the binary as a whole (a blend) because of the lower 2MASS resolution. For example, HIP 12548 is a single source in Gaia, although it contains four stars (Tokovinin, 2022). In future Gaia data releases it may be split into two components separated by 0\(\farcs\)4, each component hosting a close pair. A consistent identification scheme is implemented in the MSC (Tokovinin, 2018). Each multiple system has a common 10-character MSC code based on the J2000 coordinates of its primary component. Components are designated by letters; their accurate coordinates for the J2000 epoch and other optional identifiers (e.g. in the HD or 2MASS catalogs) are provided. Subsystems are unions of components joined by a comma. In contrast, the WDS catalog of double stars (Mason et al., 2001) designates systems, rather than components. It uses the WDS codes (10-character strings based on the J2000 positions), but they may differ from the similar MSC codes either because a different star was taken as the primary or because the WDS codes are based on inaccurate positions (e.g. for many Luyten's wide pairs). In this work, the components of multiple systems that coincide with individual Gaia sources are designated by capital letters (with a few exceptions). Gaia's own identifiers are not used because they are not stable, changing between data releases. Instead, accurate positions (for J2000 or J2016 epochs) serve to match with Gaia and with other databases. If component B was resolved into a close pair, its members become Ba and Bb, while B refers to the blended Gaia source. Each system has a unique 10-character MSC code (a small illustration of this code construction is given below). This scheme minimizes confusion and provides a direct link to the MSC.
## 3 The SOAR Speckle Survey
### Instrument, Observations, and Data Reduction
The high-resolution camera, HRCam, is an optical speckle imager operating at SOAR since 2007 (Tokovinin & Cantarutti, 2008). Over time, the instrument received a better detector, the observing procedure has been optimized, and the pipeline for data processing and calibration has been developed and tuned, converting HRCam into a high-efficiency survey facility with a typical yield of 300 stars per night (Tokovinin, 2018). A survey of binary M-type dwarfs (Vrijmoet et al., 2022) and imaging of TESS exoplanet candidates (Ziegler et al., 2021) demonstrate the power of HRCam in this respect. 
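Referring back to the identification scheme of Section 2.3: the 10-character position codes (e.g. 01579\(-\)2851 encodes R.A. 01h57.9m and decl. \(-28^{\circ}51^{\prime}\)) can be generated mechanically from J2000 coordinates. A minimal sketch of one plausible implementation, ours rather than the MSC's actual code:

```python
def position_code(ra_deg, dec_deg):
    """Build a WDS/MSC-style 10-character code HHMMm+DDMM from J2000 coordinates."""
    ra_min = round(ra_deg * 4.0, 1)          # degrees -> minutes of time, 0.1m precision
    hh, mm = divmod(ra_min, 60.0)
    sign = '+' if dec_deg >= 0 else '-'
    dec = abs(dec_deg)
    dd, dm = int(dec), round((dec - int(dec)) * 60)
    # NB: no carry handling when mm or dm rounds up to 60 (sketch only).
    return f"{int(hh):02d}{mm * 10:03.0f}{sign}{dd:02d}{dm:02d}"

# Check against Table 1: (0.613899, +4.668529) deg -> 00025+0440
print(position_code(0.613899, 4.668529))
```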
The latest series of binary-star measurements and an overview of the ongoing HRCam observing programs are published in Tokovinin et al. (2022). Observations of candidate hierarchies from the GCNS have been conducted since 2021 October as a filler among other observing programs, using also some engineering time. As indicated in Figure 3, speckle targets were selected from the pool of 8032 candidates using additional criteria, namely \(G<12\) mag (fainter stars require excellent seeing conditions) and \(\rho>3^{\prime\prime}\) (to avoid potentially false candidates caused by wide companions). Only the RUWE and FDBL flags were considered, and only previously unobserved stars with a decl. south of \(+20^{\circ}\) were placed on the SOAR program, resulting in 950 targets. A complementary list of 524 northern targets was produced. No speckle instruments in the Northern Hemisphere matching HRCam in productivity are available to make the northern extension of this survey a practical undertaking: it would require \(\sim\)10 nights at a 4 m telescope. The main survey started in 2022 January. It was preceded by trial observations of candidate hierarchies selected by various criteria. In the last months of 2022, the observing program was extended by adding candidate pairs with separations from 1\({}^{\prime\prime}\) to 3\({}^{\prime\prime}\). All results are reported here jointly. The \(G<12\) mag limit corresponds to \(M>0.7M_{\odot}\) at 100 pc. The median mass of the observed stars is 0.82 \(M_{\odot}\), and 80% are comprised between 0.60 and 1.23 \(M_{\odot}\). The median mass of the resolved stars (estimated in the same manner from their combined absolute magnitude) is 0.80 \(M_{\odot}\), and 80% are between 0.59 and 1.13 \(M_{\odot}\). Thus, the resolved speckle targets are, on average, slightly fainter than the unresolved ones. The likely explanation of this difference is a better prior coverage of bright stars (previously resolved pairs were not placed on the program). In each HRCam observation, two image cubes of 200\(\times\)200 pixels and 400 frames are taken with an exposure time of 25 ms (8 s per cube) and a pixel scale of 15 mas. The observations were made in the \(I\) filter (824/170 nm) to maximize the flux from red stars and the detectability of faint red companions. The classical resolution limit set by diffraction is 40 mas, but closer pairs of near-equal stars were detected down to 30 mas separation from the asymmetry of the speckle power spectrum; measurements of their positions are inaccurate. Some close pairs were reobserved to confirm the detections and to follow their expected fast orbital motion. The approximate detection limits (resolution and maximum magnitude difference vs. separation) are determined by the speckle pipeline for each observation. They depend on the seeing conditions and on the target brightness. The contrast limit at 0\(\farcs\)15 separation, \(\Delta I_{0.15}\), is increased here by 0.5 mag with respect to the original conservative estimates delivered by the pipeline, to better reflect the parameters of the resolved pairs. The astrometric calibration is common to all HRCam programs.
### Results
Table 1, published fully electronically, presents the results of this survey. Its first columns contain the MSC code of the system (similar to the WDS code, but not always coincident), the component's identifier, and its accurate equatorial coordinates for the J2000 epoch from the GCNS. This information should uniquely identify each observed target. 
When the measurement involves two Gaia sources, the component identifier has two characters. The following columns contain the Julian year of the observation, position angle \(\theta\), separation \(\rho\), and magnitude difference \(\Delta I\); for unresolved targets all these numbers are zero. Note that 00092\(-\)0408 A has been resolved in 2021.75 at 0\(\farcs\)089, but unresolved on two occasions in 2022. An optional flag after \(\Delta I\) indicates cases where the magnitude difference is determined from the average image of a wide pair (*), when the quadrant is defined without 180\({}^{\circ}\) ambiguity (q), the data are noisy or below the diffraction limit (:), and a few observations of close pairs in the Stromgren \(y\) band (y). The three following columns contain the detection limits: the minimum separation \(\rho_{\rm min}\), and the maximum magnitude differences at 0\(\farcs\)15 and 1\(\arcsec\) separations, \(\Delta I_{0.15}\) and \(\Delta I_{1}\), respectively. The next column gives a code of the NSS solution, if present (see Section 4), or -- otherwise. The last column contains the WDS discoverer codes of the systems where appropriate (e.g. KPP2684Aa,Ab for the resolved primary star of the 3\(\farcs\)6 Gaia pair named KPP2684 in the WDS), otherwise designations like Ba,Bb for the newly resolved pairs, UR for unresolved sources, AB for Gaia pairs, or Aa,Ab and Aa,B for resolved triples.

Figure 4: New triple stars where the wide companions are present in Gaia DR3 and the inner subsystems are resolved by SOAR. The speckle ACFs are displayed with an arbitrary negative stretch to highlight the companions. Each panel has an MSC label. The peaks corresponding to the companions are marked, and the outer and inner separations in arcseconds are indicated.

Table 1 contains 1384 entries corresponding to 1243 unique targets (either single components or pairs); 1058 of them have single-letter component identifiers, 503 of which (48%) are resolved. There are 47 observed targets fainter than \(G=12\) mag, eight of those are fainter than \(G=14\) mag, and the faintest one has \(G=17.3\) mag. Faint targets with strong indications of subsystems were observed as a complement to the main survey, as were pairs closer than 3\(\arcsec\). The standard SOAR speckle pipeline (Tokovinin, 2018b) delivers speckle power spectra and image autocorrelation functions (ACFs). To illustrate the nature of these data and to highlight some discoveries, Figures 4 and 5 show the ACFs of targets with three stars in the HRCam field. The wide components in Figure 4 are actually found in Gaia DR3, but when they are themselves close pairs, Gaia does not have parallaxes, so these components are missed in the GCNS and, consequently, in our list of candidate hierarchies. Nevertheless, these stars have additional, wider companions in the GCNS, so these systems are at least quadruple. In 20068\(-\)6729, the outer component C is only at 4\(\farcs\)15 and 96\(\fdg\)4 from A, so the whole quadruple (including the newly discovered component Ab) has small ratios between separations and could be dynamically unstable. At a parallax of 11.5 mas, the estimated periods in this system range from 300 yr to 5 kyr. The elevated RUWE of stars A and B (5.6 and 2.4, respectively) is likely caused by light from the new star Ab. Most inner subsystems resolved by SOAR are binaries, but some, unexpectedly, contain three stars. 
Six ACFs of such triplets are shown in Figure 5; they have outer separations below 1\(\arcsec\) and inner separations near the resolution limit. These compact triplets have only one component in the GCNS, but there are other, more distant Gaia components, making these systems at least quadruple. 07258\(-\)2829 and 09556\(+\)0350 are actually quintuples because their distant components are also resolved at SOAR as close pairs. The inner pair Ba,Bb in 09556\(+\)0350 is barely seen because star Bb, 6.7 mag fainter than A, is below the formal contrast detection limit.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ MSC} & Comp. & R.A. & Dec. & Date & \(\theta\) & \(\rho\) & \(\Delta I\) & Flag & \(\rho_{\rm min}\) & \(\Delta I_{0.15}\) & \(\Delta I_{1}\) & NSS & System \\ \multicolumn{1}{c}{(J2000)} & & (deg) & (deg) & (JY-2000) & (deg) & (\({}^{\prime\prime}\)) & (mag) & & (\({}^{\prime\prime}\)) & (mag) & (mag) & & \\ \hline 00025\(+\)0440 & AC & 0.613899 & 4.668529 & 21.8909 & 196.7 & 1.0596 & 0.2 & * & 0.052 & 2.44 & 3.98 & — & AC \\ 00026\(-\)2814 & A & 0.661087 & \(-\)28.236648 & 22.4418 & 0.0 & 0.0000 & 0.0 &... & 0.057 & 2.24 & 3.37 & AORB & UR \\ 00042\(-\)1008 & A & 1.051757 & \(-\)10.141044 & 22.4419 & 0.0 & 0.0000 & 0.0 &... & 0.044 & 2.44 & 4.40 & — & UR \\ 00049\(-\)1811 & A & 1.237158 & \(-\)18.178732 & 22.4419 & 0.0 & 0.0000 & 0.0 &... & 0.043 & 2.32 & 4.46 & — & UR \\ 00092\(-\)0408 & A & 2.306404 & \(-\)4.133919 & 21.7542 & 351.5 & 0.0889 & 0.3 & q & 0.051 & 2.10 & 3.79 & — & KPP2684Aa,Ab \\ 00092\(-\)0408 & A & 2.306404 & \(-\)4.133919 & 22.4446 & 0.0 & 0.0000 & 0.0 &... & 0.050 & 2.70 & 3.71 & — & KPP2684Aa,Ab \\ 00092\(-\)0408 & A & 2.306404 & \(-\)4.133919 & 22.6823 & 0.0 & 0.0000 & 0.0 &... & 0.053 & 2.74 & 3.73 & — & KPP2684Aa,Ab \\ 00100\(-\)5358 & A & 2.491435 & \(-\)53.958769 & 22.4447 & 0.0 & 0.0000 & 0.0 &... & 0.046 & 2.94 & 4.35 & ASB1 & UR \\ 00111\(-\)0008 & B & 2.725629 & \(-\)0.111976 & 22.4447 & 305.8 & 0.3546 & 2.5 & q & 0.051 & 2.65 & 3.81 & — & Ba,Bb \\ 00119\(-\)3533 & A & 2.962362 & \(-\)35.546968 & 22.8452 & 125.4 & 0.0974 & 1.9 & q & 0.043 & 2.72 & 3.47 & — & Aa,Ab \\ 00119\(-\)3533 & A & 2.962362 & \(-\)35.546968 & 23.0062 & 126.0 & 0.0963 & 1.9 & q & 0.041 & 3.07 & 5.76 & — & Aa,Ab \\ \hline \end{tabular} \end{table} Table 1: Results of the SOAR Speckle Survey (fragment)

Figure 5: New compact triplets discovered by the SOAR speckle imaging of Gaia candidates. See the caption to Figure 4.

### SOAR Resolutions vs. Gaia Binarity Flags
The top panel of Figure 6 is a separation-contrast plot for the resolved subsystems. At the same time, it encodes the Gaia binarity flags (crosses with FDBL\(>\)2 are effectively resolved by Gaia, squares have single transits), while the colors code the RUWE. Essentially all pairs wider than 0\(\farcs\)2 (and some closer ones) are resolved by double transits. The widest and dimmest companions have small RUWEs (blue crosses) and are not revealed by this criterion. On the other hand, brighter companions with \(\Delta I<2\) mag at separations of \(\sim\)0\(\farcs\)5 have red and green symbols, indicating that an elevated RUWE was likely caused by the perturbing light of those companions rather than by the slow orbital motion with estimated periods on the order of centuries. Pairs closer than 0\(\farcs\)1 are detected only by RUWE. The lower panel of Figure 6 gives a complementary view of the interplay between \(\rho\), RUWE, and FDBL. 
One can note that _all_ subsystems identified by the FDBL flag (blue crosses) are resolved at SOAR. In contrast, a large fraction of the subsystems with elevated RUWEs remained unresolved (zero separation), either because they are too close or because the companions are too faint. Although some correlation between the parameters of the resolved pairs (\(\rho,\Delta I\)) and the Gaia binarity flags is incontestable, this is not a deterministic relation because additional factors (e.g. the number of Gaia transits) influence the Gaia binarity indicators. Figure 6 (top) shows a deficit of pairs with magnitude differences below 1 mag and separations above 0\(\farcs\)2. This "Gaia hole" is caused by missing parallaxes of close binaries, as mentioned above, so these stars are not present in the GCNS. The lower envelope gives approximate limits of the hole in the (\(\rho,\Delta I\)) space. It can be described crudely by a semi-circle centered at 0\(\farcs\)4 with a logarithmic width of 2.5\(\times\) and a height of 1 mag: \[[\log(\rho/0\farcs 4)/\log(2.5)]^{2}+(\Delta I)^{2}<1 \tag{1}\] (see the gray shading in Figure 6). However, the limits are fuzzy, apparently depending on the Gaia scanning law and source location (some binaries may be lucky in getting parallaxes, while other binaries with similar parameters are not). Note the crosses near (0\(\farcs\)4, 0) -- binaries where Gaia DR3 measured parallaxes of one or both components with comparable magnitudes. Knowing the shape of the Gaia hole and the binary statistics, one can estimate the number of stars missing from the GCNS. The resolution of inner subsystems at SOAR enables estimation of masses and mass ratios (from absolute magnitudes, using standard relations for main-sequence stars), as well as periods (assuming that the projected separation is statistically representative of the semimajor axis); both Equation (1) and this period estimate are illustrated in a short sketch at the end of Section 4. The methods are explained in the MSC paper (Tokovinin, 2018). All new hierarchies with resolved subsystems are added to the MSC, which holds additional parameters such as estimated masses, periods, astrometry, etc. This information is not duplicated here; only the new observations are reported in Table 1.

Figure 6: Comparison between SOAR resolutions of candidate subsystems and Gaia binarity flags. Top: magnitude difference \(\Delta I\) vs. separation for resolved pairs. Crosses indicate FDBL\(>\)2 (effectively resolved by Gaia), and the squares mark FDBL\(<\)2. The colors show RUWE: \(<\)2 (blue), between 2 and 5 (green), and \(>\)5 (red). The gray shade indicates the Gaia avoidance of binaries, the dashed line is the median detection limit. Bottom: RUWE vs. angular separation. The blue crosses and red squares distinguish targets by FDBL.

The updated MSC is publicly available through Vizier (catalog J/ApJS/235/6) and at [http://www.ctio.noirlab.edu/~atokovin/stars/](http://www.ctio.noirlab.edu/~atokovin/stars/).
## 4 Hierarchies with Inner Gaia Orbits
Another input to the statistics of nearby hierarchies is derived from the orbital solutions in the NSS catalog (Gaia Collaboration et al., 2022; Pourbaix et al., 2022). The NSS information is presented in 17 tables, separately for each solution type. Here, only the eight most frequent types are used, ignoring the rest (eclipsing binaries, circular orbits, etc.). The data were recovered from the Vizier catalog I/357 (Gaia Collaboration, 2022) and ingested into IDL structures. Note that the SB1 and SB2 tables in the NSS do not contain any astrometric information. 
The Gaia astrometry of spectroscopic binaries was recovered from the main DR3 catalog, linked to the NSS by the Gaia identifiers. Table 2 gives the short codes adopted here for the NSS solutions, their official self-explanatory names, the number \(N_{\rm mult}\) of such solutions found among wide binaries, and their total number \(N_{\rm GCNS}\) in the GCNS. The first four types provide orbital elements and thus are relevant for the statistics. Those 705 subsystems were matched to the MSC and added to it, if missing. The remaining solutions (660) give only accelerations or radial velocity (RV) trends; they do not constrain the inner mass ratios, while the periods likely exceed 1000 days.

The total number of NSS solutions for the GCNS objects is 10,388, or 3.1%. The number of solutions for the 50,243 members of wide pairs or triples is 1365, or 2.7%. By construction, the NSS catalog tried to avoid close companions, explaining its slightly lower rate of solutions for stars belonging to wide binaries. The total numbers of stars with RUWE\(>\)2 (candidates for astrometric orbits) are 40,336 and 6995 in the full GCNS and in our list of 50,243 stars, respectively. The numbers of astrometric (AORB and ASB1) orbits in Table 2, 3755 in the full GCNS and 395 among the wide-pair members, are much smaller: only 9.3% and 5.6% of the stars with RUWE\(>\)2, respectively. The speckle survey suggests that half of the RUWE-selected candidates can be resolved, so that their long periods are not yet covered by the NSS. Still, the estimated \(\sim\)19% completeness of astrometric orbits for the remaining half is quite low.

There are 318 matches between the NSS solutions and speckle targets, and 36 of those are resolved by SOAR at separations below 1\({}^{\prime\prime}\). The resolution rate of 11% is substantially lower than for the whole speckle survey (48%). Only 20 resolved subsystems have orbital elements in the NSS. However, the detailed comparison in Table 3 casts doubt on some orbits. Close companions perturb Gaia astrometry and the RVs measured by its slitless spectrograph, leading to suspicious orbits. For example, the orbit of 07530\(-\)0201 A with \(P=1.5\) days and amplitude \(K_{1}=1\) km s\({}^{-1}\), if true, would imply an unlikely substellar companion in the brown dwarf desert regime, so the period seems spurious. Suspicious orbits with periods close to a year or its harmonics could result from companion-induced effects that vary in a regular way throughout the year, following the Gaia scanning direction.
Some spectroscopic orbits with periods on the order of a month or shorter could be real, indicating that the companion resolved by SOAR (with estimated periods of a few years or decades) orbits an inner spectroscopic binary, while the resolved Gaia companion is on a still wider outer orbit (a quadruple of 3+1 hierarchy). However, the possibility that some of those spectroscopic orbits are wrong still remains, and follow-up RV monitoring is needed for their verification.

\begin{table} \begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{ Code} & \multicolumn{1}{c}{Solution} & \multicolumn{1}{c}{\(N_{\rm mult}\)} & \multicolumn{1}{c}{\(N_{\rm GCNS}\)} \\ \hline AORB & Orbital & 251 & 2355 \\ ASB1 & AstroSpectroSB1 & 144 & 1400 \\ SB1 & SB1 & 231 & 1388 \\ SB2 & SB2 & 79 & 311 \\ A7 & Acceleration7 & 331 & 2398 \\ A9 & Acceleration9 & 221 & 1970 \\ RV1 & FirstDegreeTrendSB1 & 53 & 277 \\ RV2 & SecondDegreeTrendSB1 & 55 & 289 \\ All & & 1365 & 10,388 \\ \hline \end{tabular} \end{table} Table 2: NSS Solutions

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multicolumn{1}{c}{ MSC} & \multicolumn{1}{c}{Comp.} & \multicolumn{1}{c}{Sol.} & \multicolumn{1}{c}{\(\rho\)} & \multicolumn{1}{c}{\(\Delta I\)} & \multicolumn{1}{c}{\(P\)} & \multicolumn{1}{c}{Comment} \\ & & & (\({}^{\prime\prime}\)) & (mag) & (d) & \\ \hline 01242\(-\)2157 & A & ASB1 & 0.057 & 0.2 & 1734 & Blended \\ 04097\(-\)5256 & A & AORB & 0.176 & 3.4 & 121.9 & Suspect \\ 05130\(-\)8125 & A & SB1 & 0.040 & 1.1 & 1192 & Blended \\ 05574\(-\)2458 & A & SB1 & 0.467 & 3.1 & 378 & Suspect \\ 07058\(-\)5849 & A & SB1 & 0.052 & 2.0 & 1310 & OK \\ 07530\(-\)0201 & A & SB1 & 0.199 & 2.3 & 1.5 & Wrong \\ 07543\(+\)0232 & A & SB1 & 0.364 & 2.3 & 36.6 & Quintuple\({}^{2}\) \\ 09336\(-\)7252 & A & SB2 & 0.139 & 2.9 & 37.5 & Quadruple\({}^{2}\) \\ 10356\(-\)4715 & A & ASB1 & 0.026 & 0.0 & 547.4 & Blended \\ 12022\(-\)4844 & A & SB1 & 0.095 & 3.1 & 52.0 & Quadruple\({}^{2}\) \\ 15229\(-\)6242 & B & AORB & 0.047 & 0.7 & 1421 & OK, moves \\ 15397\(-\)4956 & A & SB1 & 0.046 & 0.8 & 44.8 & Suspect \\ 16142\(+\)0349 & A & SB1 & 0.057 & 1.3 & 1034 & Blended \\ 19221\(-\)0444 & B & AORB & 0.086 & 1.6 & 2624 & OK \\ 19369\(-\)6949 & A & ASB1 & 0.023 & 0.0 & 735 & Blended \\ 20147\(-\)7252 & A & SB1 & 0.253 & 3.5 & 305.9 & Suspect \\ 21320\(-\)0129 & B & ASB1 & 0.030 & 0.7 & 1056 & Blended \\ 21460\(-\)5233 & A & SB1 & 0.088 & 2.5 & 28.6 & Suspect \\ 22170\(+\)1824 & B & SB1 & 0.366 & 2.0 & 11.9 & Quadruple\({}^{2}\) \\ 22377\(-\)0210 & A & SB1 & 0.049 & 1.7 & 7.0 & Quadruple \\ \hline \end{tabular} \end{table} Table 3: Resolved Subsystems with NSS Orbits

In eight resolved subsystems the periods estimated from the separations approximately match the NSS orbital periods. However, the amplitude of the RV variation or the astrometric semimajor axis is often reduced by the blending of comparable-brightness stars, as follows from the measured magnitude differences. The mass ratios derived from the blended NSS orbits are in fact lower limits. Several speckle measurements of 10356\(-\)4715 and 19369\(-\)6949 taken during one year show rapid motion compatible with their 2 yr orbits.

The uniform and impersonal coverage of Gaia orbits offers a definitive advantage for statistical studies. However, the automatic orbit calculation and the cadence imposed by the Gaia scanning law lead to a non-negligible fraction of wrong orbits, despite the efforts of the NSS creators to remove them.
## 5 Discussion

After the addition of the newly resolved subsystems and subsystems with NSS orbits, the MSC contains 2,754 hierarchies within 100 pc with estimated periods. For the following discussion, I select from the MSC 2,208 systems within 100 pc with primary masses from 0.5 to 1.5 \(M_{\odot}\) (excluding 77 systems containing known white dwarfs). Most of them are triples, while systems of four or more stars can be decomposed into elementary triples. Figure 7 plots the inner periods and mass ratios for this sample. The symbols distinguish the observing techniques and clearly separate the subsystems into groups. The upper-right corner of the plot is occupied by the 1,280 resolved subsystems (blue crosses), including those studied here. Their lowest mass ratios depend on the period (or separation) owing to the limitation of the observing method (speckle and Gaia resolutions). The 243 double-lined spectroscopic binaries (red squares) occupy the upper-left corner, as expected. Most of the 444 single-lined spectroscopic (red triangles) and 273 astrometric (green pluses) binaries have periods shorter than 3 yr (the duration of the Gaia DR3 observations), and their mass ratios are lower limits owing to blending. Orbital inclination also reduces the mass ratios of SB1s, but this is a smaller effect compared to the blending. Keep in mind that some NSS orbits are false.

The gaps between the three islands of points in Figure 7 are almost certainly caused by the observing techniques, rather than by a real dichotomy of the underlying population. Overlapping symbols correspond to simultaneous detections by several methods, but the number of such overlaps is quite modest. The density of points at short periods of \(P_{\rm in}<1000\) days is lower than in the area covered by the speckle detections, reflecting the small percentage of NSS orbits mentioned above. The plot gives a rough idea of the coverage of the parameter space and of the remaining incompleteness. For example, many subsystems with \(a_{\rm in}\sim 10\) au and \(q_{\rm in}<0.6\), below the speckle detection limit, are yet to be resolved by high-contrast imaging. Their orbital periods are longer than the duration of the Gaia mission.

Figure 8: Periods in the inner and outer subsystems at adjacent hierarchical levels. The symbols correspond to the detection methods of inner subsystems, as in Figure 7. The solid and dashed lines indicate period ratios of 4.7 and 100, respectively.

Figure 7: Periods and mass ratios of inner subsystems in known hierarchies within 100 pc. The SB1 and SB2 orbits are plotted by the red squares and triangles, respectively, astrometric orbits by green pluses, and resolved subsystems by blue crosses. The mass ratios of SB1s and astrometric binaries are lower limits. The upper axis gives orbital separations for a mass sum of 2 \(M_{\odot}\).

Let us focus on the upper-right corner of Figure 7 (\(P_{\rm in}>10^{4}\) days, \(q_{\rm in}>0.6\)), where the detection of subsystems by imaging is uniform. The observed distribution is thus representative of the real distribution of subsystems, and it has some interesting features worthy of comment. The concentration of points near \(q_{\rm in}\approx 1\) is a well-known manifestation of twin binaries (it is even more prominent at shorter periods). A statistically significant excess of wide twin binaries with separations up to \(10^{3}\) au has been detected by El-Badry et al. (2019). The recent study by Hwang et al.
(2022) shows that wide twins with separations from 400 to \(10^{3}\) au have extremely eccentric orbits, suggesting that they were formed as tighter 10-100 au pairs and later ejected to wide and eccentric orbits, presumably by dynamical interactions in unstable triples. In the light of this discovery, the abrupt decrease in the frequency of inner subsystems (including twins) at separations above \(\sim\)300 au, seen in Figure 7, appears natural (wider subsystems form rarely). This separation corresponds to 3\({}^{\prime\prime}\) at 100 pc distance, beyond the Gaia hole that ends at 1\({}^{\prime\prime}\). So, the paucity of wider inner subsystems is not caused by observational selection. Detailed examination of these data reveals that most inner pairs with \(q_{\rm in}\sim 1\) and separations from 10 to 100 au are missing from our list of candidate hierarchies because they fall in the Gaia hole (there are fewer points in this area of the plot). These hierarchies are known owing to historic ground-based efforts. One can also note a slight deficiency of wide inner subsystems with \(q_{\rm in}\approx 0.8\). If the significance of this local minimum in the distribution of \(q_{\rm in}\) is confirmed and proven not to result from observational biases, its explanation will present an interesting challenge to the theory of multiple-star formation. Figure 8 is a standard plot comparing inner and outer periods in nearby hierarchies, with symbols coding the detection techniques of inner subsystems in the same way as above. The Gaia resolution limit of \(\sim\)1\({}^{\prime\prime}\) corresponds to 100 au or a period of \(10^{5.5}\) days. The outer periods of the Gaia candidates studied here are located above this line, and one notes a reduced density of points below it, at shorter outer periods. This region of the parameter space where Gaia does not discover new hierarchies suffers from a larger incompleteness. Systematic discovery and characterization of compact hierarchies with outer separations below 100 au remains an outstanding observational challenge. Some points in Figure 8 for resolved triples (blue crosses) fall below the dynamical stability limit \(P_{\rm out}/P_{\rm in}>4.7\), depicted by the solid line. The periods are estimated only crudely from projected separations, explaining this apparent contradiction. Nevertheless, there is a substantial number of marginally stable or even unstable hierarchies. Marginally stable hierarchies with short periods \(P_{\rm in}<100\) days are especially interesting because their non-Keplerian motion can be directly observed (see an example in Borkovits et al., 2019). Unfortunately, Gaia is of little help for the study of these fascinating objects because it was not designed for such work. Figure 9 presents outer periods and mass ratios for hierarchies within 100 pc. Wide tertiary companions with \(P_{\rm out}>10^{6}\) days have separations above 300 au (3\({}^{\prime\prime}\) at a 100 pc distance), therefore their detection by Gaia is quite complete, unlike closer companions detected by high-resolution imaging. A mild decrease of the outer mass ratio with increasing outer period is thus a real feature of nearby hierarchies, rather than a selection effect. The median \(q_{\rm out}\) is 0.39 for \(P_{\rm out}\) between \(10^{6}\) and \(10^{7}\) days; it decreases to 0.36 and 0.34 in the next two decades of outer periods. 
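The separation-to-period conversions quoted above (100 au \(\leftrightarrow\) \(10^{5.5}\) days, 300 au \(\leftrightarrow\) \(10^{6}\) days) follow from Kepler's third law; a minimal sketch that reproduces them, using the same 2 \(M_{\odot}\) mass-sum convention as the upper axis of Figure 7 (function name ours):

```python
import math

def period_days(a_au: float, msum_msun: float = 2.0) -> float:
    """Kepler's third law, P[yr] = sqrt(a[au]**3 / M[Msun]), returned in days."""
    return math.sqrt(a_au**3 / msum_msun) * 365.25

# 100 au (1" at 100 pc) -> log10(P/d) ~ 5.4, the Gaia resolution line;
# 300 au (3" at 100 pc) -> log10(P/d) ~ 6.1, i.e. P_out > 10^6 d.
for a in (100.0, 300.0):
    print(f"a = {a:.0f} au : log10(P/d) = {math.log10(period_days(a)):.2f}")
```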
Figure 9: Periods and mass ratios in the outer subsystems of known hierarchies within 100 pc. The blue crosses denote wide (common PM) outer pairs, and the green squares correspond to other (mostly visual and speckle) discovery methods.

At outer periods below \(10^{5.5}\) days, covered mostly by imaging techniques, large outer mass ratios are rare; the points group near \(q_{\rm out}\sim 0.5\), as expected for triples composed of three similar-mass stars that are discovered more readily.

In this work, hierarchies are identified by searching for inner subsystems in wide binaries. Hwang (2023) did the opposite by looking for wide tertiary companions to close binaries with NSS orbital solutions. The wide-binary catalog of El-Badry et al. (2021) was used, and an outer separation range between 1 and 10 kau was considered. The sample was restricted to main-sequence stars within 500 pc with masses from 0.8 to 1.4 \(M_{\odot}\). Such field stars have a 5.35% fraction of wide companions, but this fraction was found to be larger by 2.28\(\pm\)0.10 times for eclipsing binaries and by 1.33\(\pm\)0.05 times for SBs. The enhanced frequency of tertiary companions to close solar-type binaries discovered in Gaia data by Hwang confirms the earlier results (Tokovinin et al., 2006; Tokovinin, 2018) and indicates that the formation of close and wide subsystems is somehow related.

For astrometric binaries, the frequency of tertiaries found by Hwang was only a 0.65\(\pm\)0.03 fraction of their frequency in the field. However, the presence of astrometric subsystems biases parallaxes and PMs, so many wide binaries with astrometric subsystems appear unbound and get excluded from El-Badry's catalog, which imposes the condition that the pairs be bound. This pitfall is avoided here by using a relaxed criterion for wide-binary selection. Our sample of wide pairs has a relative frequency of 5.76% for the same range of masses (0.8 to 1.4 \(M_{\odot}\)) and separations (from 1 to 10 kau) as in Hwang (2023), slightly larger than the 5.35% quoted in his paper. The number of astrometric subsystems with NSS solutions (AORB and ASB1) in these pairs is 38, or 2.5\(\pm\)0.5%, and the fraction of astrometric orbits in the full GCNS for this mass range is the same, 2.4\(\pm\)0.1%. I conclude that the depletion of astrometric orbits in wide pairs found by Hwang is not real, being caused by the inaccurate Gaia astrometry of stars with astrometric subsystems.

The GCNS hierarchies can clarify a long-standing issue concerning the frequency of 2+2 quadruples. Statistical modeling of the 67 pc sample of solar-type stars (Tokovinin, 2014) revealed that the presence of inner subsystems in both components of wide pairs is correlated; otherwise, the number of predicted 2+2 quadruples would be less than observed. Such a conclusion had been reached earlier based on the presence of spectroscopic subsystems in wide binaries (Tokovinin & Smekhov, 2002). On the other hand, a similar study by Halbwachs et al. (2017) found no correlation between spectroscopic subsystems in the components of 116 wide binaries. I selected 15,983 pairs of two stars wider than 3\({}^{\prime\prime}\) from the list of 24,606 systems described above and determined the presence of subsystems in each component using any of the Gaia binarity indicators FDBL, RUWE, and RVERR, all with thresholds of 2. The numbers of subsystems in the primary, secondary, and both components are 3542, 2562, and 653, respectively (relative frequency 0.222, 0.160, and 0.041).
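The independence test performed next on these counts fits in a few lines; a minimal numerical sketch (the Poisson error on the number of 2+2 pairs is our assumption, chosen because it reproduces the quoted \(\pm\)0.16% uncertainty):

```python
import math

n_pairs = 15983                            # wide pairs of two stars, sep > 3"
n_prim, n_sec, n_both = 3542, 2562, 653    # subsystems in A, in B, in both

f1, f2 = n_prim / n_pairs, n_sec / n_pairs
expected = f1 * f2                 # 2+2 frequency if A and B are independent
observed = n_both / n_pairs
excess = observed - expected       # ~0.0053, i.e. 0.53%

sigma = math.sqrt(n_both) / n_pairs   # Poisson error on the 2+2 count (assumed)
print(f"expected {expected:.4f}, observed {observed:.4f}, "
      f"excess {excess:.4f} +/- {sigma:.4f} ({excess / sigma:.1f} sigma)")
```

Running this reproduces the 3.3\(\sigma\) significance quoted in the following paragraph.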
If the presence of subsystems in both components is uncorrelated, the expected frequency of 2+2 quadruples is the product of the subsystem frequencies in primaries and secondaries, \(f_{2+2}=f_{1}f_{2}=0.0355\), while the observed frequency is 653/15,983=0.0408. The excess of 0.53%\(\pm\)0.16% is small but formally significant at the 3.3\(\sigma\) level. I repeated this test by setting a larger minimum separation or by using only two criteria, FDBL and RUWE, and obtained a comparable excess with significance above 2\(\sigma\). In a sample of 116 binaries, like that of Halbwachs et al. (2017), such a small effect is lost in the noise. Interestingly, Fezenko et al. (2022) found that the simultaneous presence of eclipsing subsystems in both components of Gaia wide binaries is significantly enhanced compared to their frequency in the field.

## 6 Summary and Outlook

The combination of Gaia DR3 data with the ground-based speckle survey doubles the number of known hierarchical systems within 100 pc, reaching now almost 3,000. The estimated total number of such hierarchies is about 20,000, so our current knowledge is still very incomplete; some insights on the parameters of missing hierarchies are given above. Several thousand candidate nearby hierarchies were extracted from Gaia, but for most of them the parameters of inner subsystems remain unknown, while the outer separations are typically above 100 au. The main results of this work are:

1. A list of 8,032 candidate hierarchical systems within 100 pc based on the GCNS has been created.

2. A subset of 1,243 candidate hierarchies brighter than \(G=12\) mag was observed by speckle interferometry at the 4.1 m telescope, and 506 close inner pairs were resolved.

3. New hierarchies are added to the MSC, doubling the number of known multiples within 100 pc. The resolved inner subsystems and those with NSS orbits occupy different regions of the period-mass ratio parameter space, with little overlap.

4. The amplitudes of Gaia SB1 and astrometric orbits are reduced by blending, hence the mass ratios derived from those orbits are lower limits.

Continued speckle monitoring of the newly discovered subsystems is needed for several reasons. Orbits of the closest and fastest inner pairs with estimated periods of a few years can be determined soon, filling the gap between visual and spectroscopic/astrometric orbits; the resolved Gaia orbital pairs from Table 3 are primary candidates. Re-observation of the remaining inner pairs within several years will define the direction and speed of their orbital motion. Its comparison with the motion in the outer pairs, accurately measured by Gaia, will give precious material for the statistical study of relative orbit orientation and eccentricity distribution; an example of such analysis for resolved Gaia triples can be found in Tokovinin (2022), see also Hwang et al. (2022). Of special interest will be a dynamical study of marginally stable compact triples with comparable separations, like those in Figure 5.

A large sample of hierarchies with quantified observational selection is a starting point for inferring the true underlying distributions of their parameters (e.g. Tokovinin, 2014). The limits of speckle detection are well known, but the SOAR survey covers only a tiny fraction of the GCNS population. The sensitivity of Gaia binarity indicators is still poorly understood, although speckle interferometry helps in this respect. The selection effects in the NSS catalog are even more severe.
Despite these obvious difficulties, the prospect of establishing unbiased statistics of binaries and hierarchies in the nearby field population is clear.

The research was funded by the NSF's NOIRLab. This work used the SIMBAD service operated by the Centre des Données Stellaires (Strasbourg, France), bibliographic references from the Astrophysics Data System maintained by SAO/NASA, and the Washington Double Star Catalog maintained at USNO. This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

Facilities: SOAR, Gaia
2309.02510
Geometric squeezing of rotating quantum gases into the lowest Landau level
The simulation of quantum Hall physics with rotating quantum gases is witnessing a revival due to recent experimental advances that enabled the observation of a Bose-Einstein condensate entirely contained in its lowest kinetic energy state, i.e. the lowest Landau level. We theoretically describe this experimental result, and show that it can be interpreted as a squeezing of the geometric degree of freedom of the problem, the guiding center metric. This "geometric squeezing" offers an unprecedented experimental control over the quantum geometry in Landau-level analogues, and at the same time opens a realistic path towards achieving correlated quantum phases akin to quantum Hall states with neutral atoms.
Valentin Crépel, Ruixiao Yao, Biswaroop Mukherjee, Richard J. Fletcher, Martin Zwierlein
2023-09-05T18:01:01Z
http://arxiv.org/abs/2309.02510v1
# Geometric squeezing of rotating quantum gases into the lowest Landau level ###### Abstract The simulation of quantum Hall physics with rotating quantum gases is witnessing a revival due to recent experimental advances that enabled the observation of a Bose-Einstein condensate entirely contained in its lowest kinetic energy state, _i.e._ the lowest Landau level. We theoretically describe this experimental result, and show that it can be interpreted as a squeezing of the geometric degree of freedom of the problem, the guiding center metric. This "geometric squeezing" offers an unprecedented experimental control over the quantum geometry in Landau-level analogues, and at the same time opens a realistic path towards achieving correlated quantum phases akin to quantum Hall states with neutral atoms.

## I Introduction

Quantum fluids and quantum gases under rotation exhibit a rich variety of phenomena, from Abrikosov vortex lattices [1; 2; 3; 4] to quantum analogues of hydrodynamic instabilities and turbulence [5; 6; 7; 8], with strong connections to other fields of physics such as rotating nuclei [9; 10], neutron stars [11; 12], and electrons in high magnetic fields [13; 14]. At the core of this rich phenomenology is the interplay between two of the most fundamental properties of quantum matter: macroscopic quantum coherence - manifest through superfluid behaviors - and its coupling to a gauge field. Here, the gauge field is not dynamical but externally imposed by the rotation due to the identical mathematical structure of the Coriolis and Lorentz forces [15; 16]. The observation of lattices of quantized vortices in Bose-Einstein condensates (BEC) [1] and later in strongly interacting Fermi gases [4] provided a striking demonstration of the superfluidity of quantum gases. Since these first demonstrations, one of the long-standing goals for these systems has been to increase the impact of the effective gauge field and reach the deeply degenerate regime, where quantum fluctuations are strong enough to coherently melt the vortex lattice [17]. This occurs when all atoms live in their lowest kinetic energy manifold, which corresponds to the lowest Landau level (LLL) in the analogy with charged particles in magnetic fields, and when the total angular momentum of the atomic ensemble becomes comparable to the number of atoms. In this regime, the neutral atom analogue of integer and fractional quantum Hall states of electrons could potentially be realized [18; 19; 20; 21]. Moving in this direction, condensates with larger angular momentum were produced at ENS [22] and JILA [23], leading to vortex arrays containing hundreds of vortices and rotation near the lowest Landau level. Both of these experiments observed a softening of the vortex lattice, either through a qualitative change in appearance of the vortex lattice [22] or by direct measurement of the Tkachenko mode frequency [23] that is related to the vortex lattice stiffness [24]. This softening provided a promising precursor to the melting of the lattice induced at zero temperature by quantum fluctuations, and a deterministic route towards achieving the quantum Hall regime by reduction of the atomic density [25]. To achieve such high angular momenta, the rotation frequency in these experiments was tuned as close as possible to the natural frequency of the underlying harmonic trapping potential [22; 23]. In fact, the physics of homogeneous electron gases is most directly realized when rotation and trapping frequency are equal [26].
Indeed, in this case the centrifugal force exactly compensates the harmonic confinement and the system is effectively in "flat land", where atoms are only subject to the effective gauge field imprinted by the Coriolis force. The lack of confinement in this regime stood as the main hurdle impeding further progress in the study of rapidly rotating quantum gases near the LLL regime [5; 27]. This difficulty shifted the focus to alternative ways to imprint an effective synthetic magnetic field on quantum gases -- using dressing by laser light [28; 29], imprinting flux in optical lattices [30; 31; 32] or employing synthetic dimensions [33; 34; 35; 36]. While these ideas have already led to elegant realizations of effective magnetic fields at the single-particle level [37; 38; 39], their implementation in the presence of interatomic interactions has so far been hindered by severe difficulties, such as heating in dressed or shaken optical lattices [40] or all-to-all non-local couplings along synthetic dimensions that have been observed to energetically disfavor quantum Hall states [41].

With the advent of single-atom-resolving microscopes for quantum gases [42; 43; 44; 45; 46] and the ability to imprint arbitrary confining potentials [47; 48; 49], the original idea of employing rotation as the most direct analogue of the Lorentz force on charged particles is witnessing a revival. In a new experimental platform [50; 8; 51], our group has been able to directly image vortices in-situ, without time-of-flight expansion. Additionally, we have developed an alternative way of spinning up a quantum gas, entirely without introducing vortices, which we term _geometric squeezing_. The present article aims to provide a theoretical account of this geometric squeezing and its consequences. In essence, this new method harnesses the lack of confinement and the ensuing dynamical instability when spinning the gas at the trapping frequency, which originally prevented the observation of condensates in the LLL regime, to elongate the atomic cloud. This stretching simultaneously decreases the overall density of the system and increases its moment of inertia, and hence its angular momentum. As a result, condensates with typical angular momenta exceeding \(1000\hbar\) per particle and contained entirely in the LLL, in the form of a single Landau gauge wavefunction, are produced. This method provides an ideal starting point for the study of interactions within the LLL, both in the mean-field regime where the particle number largely exceeds the number of vortices [8], and beyond mean field, as the number of atoms is reduced during geometric squeezing to become comparable to the number of flux lines, paving a new route for the realization of fractional quantum Hall states with neutral atoms [52; 53; 54; 55; 56; 57].

## II Model and Outline

We consider atoms of mass \(m\) in a three-dimensional harmonic potential with natural frequencies [\(\omega_{x}=\omega_{\perp}\sqrt{1+\varepsilon},\;\omega_{y}=\omega_{\perp}\sqrt{1-\varepsilon},\;\omega_{z}\)] rotating around the vertical axis at angular velocity \(\Omega\). For simplicity, we assume the axial dynamics completely frozen due to a strong vertical confinement \(\omega_{z}\gg\omega_{\perp}\). The weak in-plane anisotropy, characterized by \(\varepsilon\), imprints the rotation of the trap onto the atoms. We choose \(\omega_{\perp}\), \(\hbar\omega_{\perp}\) and \(\ell_{\perp}=\sqrt{\hbar/(m\omega_{\perp})}\) as respective units of frequency, energy and length.
In the frame co-rotating with the trap, the single-particle dynamics of the system is governed by the Hamiltonian \[\mathcal{H}=\frac{1}{2}\left[p_{x}^{2}+p_{y}^{2}+(1+\varepsilon)x^{2}+(1-\varepsilon)y^{2}\right]-\Omega L_{z}, \tag{1}\] with \(L_{z}=xp_{y}-yp_{x}\) the axial angular momentum [1]. The last term in Eq. 1 is responsible for the centrifugal and Coriolis fictitious forces. The latter has the same mathematical structure as the magnetic Lorentz force, which justifies the use of rotating gases to emulate the physics of charged particles in a magnetic field [15]. This is best seen by splitting \((-\Omega L_{z})\) into two contributions, a deconfining potential \(-\Omega^{2}(x^{2}+y^{2})/2\) and an effective vector potential \(\mathbf{A}=\Omega[-y,x]\) equivalent to an applied magnetic field along the vertical direction: \[\mathcal{H}=\frac{1}{2}\left[(\mathbf{p}-\mathbf{A})^{2}+(1-\Omega^{2}+\varepsilon)x^{2}+(1-\Omega^{2}-\varepsilon)y^{2}\right], \tag{2}\] which holds up to an overall constant.

The aim of the present article is to provide a theoretical description of the dynamical properties of Eq. 1, as experimentally observed in Refs. [8; 50; 51]. For that purpose, we first review the single-particle properties of the model, starting with its dynamical instability near \(\Omega=1\) (Sec. III), and then observe how the latter squeezes quantum states over time (Sec. IV). This squeezing can be simply understood from the unitary evolution imposed by the rotating saddle potential, which physically implements a transformation from symmetric to Landau gauge in our system (Sec. IV.2), and more intuitively explains the elongation of the quantum states and the reduction of the density that allows one to reach the LLL. As a result of squeezing, overlaps between neighboring quantum states, which define the quantum geometry of the system [58], also change, as formally captured by a squeezing transformation of the guiding centers (Sec. IV.3). This "geometric squeezing" provides a unique experimental control over the quantum geometry in Landau-level analogues. We finally connect this single-particle picture to a more realistic situation where interactions are accounted for using a hydrodynamic description of the superfluid (Sec. V) and full-fledged Gross-Pitaevskii numerical simulations of the condensate's dynamics (Sec. VI).

## III Classical Solution

In this section, we study the dynamical instability of the Hamiltonian Eq. 1 in more detail. We first locate the regime of instability, which is heralded by unbounded trajectories of the classical equations of motion (Sec. III.1). We then interpret these unbounded solutions as a guiding center drift following the isopotentials imprinted by the rotating saddle (Sec. III.2), and study the effects of this drift for a thermal phase-space distribution of particles (Sec. III.3).

### Dynamical instability

To put Eq. 1 in normal form and find its eigenmodes, we first decouple the position and momentum operators mixed by \(L_{z}=xp_{y}-yp_{x}\). This is achieved by the following rotations admixing \((x,p_{y})\) and \((y,p_{x})\) \[\begin{bmatrix}x^{\prime}\\ p_{y}^{\prime}\end{bmatrix}=\begin{bmatrix}c&s\\ -s&c\end{bmatrix}\begin{bmatrix}x\\ p_{y}\end{bmatrix},\quad\begin{bmatrix}y^{\prime}\\ p_{x}^{\prime}\end{bmatrix}=\begin{bmatrix}c&s\\ -s&c\end{bmatrix}\begin{bmatrix}y\\ p_{x}\end{bmatrix}, \tag{3}\] with \(c=\cos(\theta/2)\) and \(s=\sin(\theta/2)\) and \(\tan\theta=-2\Omega/\varepsilon\). Eq.
3 is a canonical transformation as it defines a new pair of conjugate variables \((x^{\prime},p_{x}^{\prime})\) and \((y^{\prime},p_{y}^{\prime})\). In terms of these new variables, the Hamiltonian can be split as \(\mathcal{H}=\mathcal{H}_{+}+\mathcal{H}_{-}\), where \[\mathcal{H}_{+}=\frac{p_{x}^{\prime 2}}{2m_{+}}+\frac{1}{2}k_{+}x^{\prime 2},\quad\mathcal{H}_{-}=\frac{p_{y}^{\prime 2}}{2m_{-}}+\frac{1}{2}k_{-}{y^{\prime}}^{2}, \tag{4}\] correspond to harmonic oscillators with masses and coupling constants given by \[m_{\pm}^{-1}=1\mp(\varepsilon/2)\pm\sqrt{\Omega^{2}+(\varepsilon/2)^{2}}, \tag{5}\] \[k_{\pm}=1\pm(\varepsilon/2)\pm\sqrt{\Omega^{2}+(\varepsilon/2)^{2}}. \tag{6}\] While \(k_{+}\) and \(m_{+}\) are always positive, \(k_{-}\) and \(m_{-}\) change sign at \(\Omega_{-}=\sqrt{1-\varepsilon}\) and \(\Omega_{+}=\sqrt{1+\varepsilon}\), respectively. When \(\Omega\in[\Omega_{-},\Omega_{+}]\), these coefficients have opposite signs and one of the system's eigenfrequencies \[\omega_{\pm}=\sqrt{k_{\pm}/m_{\pm}}=\left[1+\Omega^{2}\pm\sqrt{\varepsilon^{2}+4\Omega^{2}}\right]^{1/2}, \tag{7}\] becomes imaginary (see Fig. 1a), leading to a dynamical instability.

To illustrate this instability, let us integrate the classical equations of motion of the model. To that aim, we first obtain the time evolution operators of the two decoupled harmonic oscillators \[\begin{bmatrix}x^{\prime}(t)\\ p_{x}^{\prime}(t)\end{bmatrix}=U_{+}(t)\begin{bmatrix}x^{\prime}(0)\\ p_{x}^{\prime}(0)\end{bmatrix},\quad\begin{bmatrix}y^{\prime}(t)\\ p_{y}^{\prime}(t)\end{bmatrix}=U_{-}(t)\begin{bmatrix}y^{\prime}(0)\\ p_{y}^{\prime}(0)\end{bmatrix}, \tag{8}\] where standard calculations, repeated in App. A for completeness, yield \[U_{\pm}(t)=\begin{bmatrix}\cos\omega_{\pm}t&\frac{\sin\omega_{\pm}t}{m_{\pm}\omega_{\pm}}\\ -\frac{k_{\pm}\sin\omega_{\pm}t}{\omega_{\pm}}&\cos\omega_{\pm}t\end{bmatrix}. \tag{9}\] Note that this result is valid for both real and imaginary frequencies \(\omega_{\pm}\). The complete time evolution in terms of the original variables is then inferred from the rotations given in Eq. 3. Some classical trajectories computed with these methods are displayed in Fig. 1b, where the black dot indicates the initial position and \((p_{x},p_{y})|_{t=0}=(0,1)\) the initial momentum. These trajectories clearly distinguish the stable regime with bounded trajectories (red and pink) from the dynamically unstable region characterized by unbounded trajectories (purple).

### Guiding center drift

A clear separation of scales can be observed when the rotation frequency matches the original trap frequency \(\Omega=1\), where a slow drift along the first diagonal is superimposed on a much faster rotation of the particle (Fig. 1b). At this point, the centrifugal force exactly compensates the original confinement and the system is, in the rotating frame, equivalent to that of charged particles in a constant magnetic field subject to a saddle potential \(\varepsilon(x^{2}-y^{2})/2\) [59]. The fast rotation corresponds to the cyclotron motion with period \(2\pi/\omega_{+}\), while the drift corresponds to the guiding center motion along the isopotential lines of the saddle [60]. Besides a stronger emphasis on the behaviors within the unstable regime, the classical solutions derived in this section and their interpretation in terms of cyclotron and guiding center motion are not new. They were, for instance, discussed in the context of anisotropic perturbations to the Foucault pendulum to explain the weak ellipticity of trajectories observed in some experiments [61; 62].
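To make the normal-mode solution concrete, here is a minimal NumPy sketch of Eqs. (3)-(9) in the dimensionless units of the text (\(\omega_{\perp}=m=1\)); complex arithmetic handles real and imaginary \(\omega_{\pm}\) alike, and the function and variable names are ours:

```python
import numpy as np

def trajectory(r0, p0, Omega, eps, times):
    """Classical evolution under Eq. (1) via the normal modes of Eqs. (3)-(9).
    r0 = (x, y), p0 = (px, py); returns x(t), y(t) on the array `times`."""
    theta = np.arctan2(-2 * Omega, eps)            # tan(theta) = -2*Omega/eps
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    # canonical rotations of Eq. (3): (x, py) -> (x', py') and (y, px) -> (y', px')
    xp, pyp = c * r0[0] + s * p0[1], -s * r0[0] + c * p0[1]
    yp, pxp = c * r0[1] + s * p0[0], -s * r0[1] + c * p0[0]
    root = np.sqrt(Omega**2 + (eps / 2) ** 2)
    m_inv = np.array([1 - eps / 2 + root, 1 + eps / 2 - root])   # 1/m_pm, Eq. (5)
    k = np.array([1 + eps / 2 + root, 1 - eps / 2 - root])       # k_pm,   Eq. (6)
    omega = np.sqrt((k * m_inv).astype(complex))   # omega_pm of Eq. (7)
    xs, ys = [], []
    for t in times:
        cos, sin = np.cos(omega * t), np.sin(omega * t)
        # U_pm(t) of Eq. (9) acting on (x', px') and (y', py')
        x1 = cos[0] * xp + m_inv[0] * sin[0] / omega[0] * pxp
        px1 = -k[0] * sin[0] / omega[0] * xp + cos[0] * pxp
        y1 = cos[1] * yp + m_inv[1] * sin[1] / omega[1] * pyp
        py1 = -k[1] * sin[1] / omega[1] * yp + cos[1] * pyp
        # invert Eq. (3) to recover the rotating-frame coordinates
        xs.append((c * x1 - s * py1).real)
        ys.append((c * y1 - s * px1).real)
    return np.array(xs), np.array(ys)

# e.g. the unstable case of Fig. 1b: Omega = 1, eps = 0.1 (initial point ours)
x, y = trajectory((1.0, 0.0), (0.0, 1.0), 1.0, 0.1, np.linspace(0, 40, 2000))
```

For \(\Omega\in[\Omega_{-},\Omega_{+}]\) the computed trajectory grows without bound along the first diagonal, as described above.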
Figure 1: a) Eigenfrequencies of the quadratic Hamiltonian Eq. 1 for \(\varepsilon=0.1\) as a function of the rotation frequency \(\Omega\). The instability region \([\sqrt{1-\varepsilon},\sqrt{1+\varepsilon}]\) where \(\omega_{-}\) becomes imaginary is hatched. b) Classical trajectories starting from the initial position marked as a black dot and \((p_{x},p_{y})|_{t=0}=(0,1)\). The color of the curve encodes the value of \(\Omega\), which is also marked with a vertical dashed line in (a): red for \(\Omega=0.4\), purple for \(\Omega=1\), and pink for \(\Omega=1.6\).

### Generic phase space distributions

Over lengthscales larger than the cyclotron radius, the effects of the fast and short-range cyclotron motion can be averaged out and the guiding center dynamics alone remains. Here, we use the classical time evolution obtained above to isolate and study the effects of guiding center drift on a classical - or semi-classical - phase-space distribution. We assume that the ensemble of particles, prepared using \(\Omega=\varepsilon=0\), can be described by a phase-space distribution \(f_{0}(\mathbf{r},\mathbf{p})=f_{0}(E)\) that only depends on the local energy \(E=(\mathbf{r}^{2}+\mathbf{p}^{2})/2\) with \(\mathbf{r}=[x,y]^{T}\) and \(\mathbf{p}=[p_{x},p_{y}]^{T}\). Notably, this encompasses the Boltzmann, Fermi-Dirac and Bose-Einstein distributions, allowing us to describe classical, fermionic and bosonic ensembles at thermal equilibrium. However, our method is not limited to these cases and generically applies to all distributions that only depend on the classical local energy \(E\) of the problem.

The rotation \(\Omega\) and anisotropy \(\varepsilon\) are turned on at \(t=0\) to non-zero values, and the phase-space density \(f_{t}(\mathbf{r},\mathbf{p})\) at time \(t>0\) can be obtained by following the classical trajectories of all particles in the ensemble. A particle found at phase-space point \((\mathbf{r},\mathbf{p})\) at time \(t\) must have originated from the phase-space point \((\mathbf{r}(-t),\mathbf{p}(-t))\) at time \(t=0\), so we have \(f_{t}(\mathbf{r},\mathbf{p})=f_{0}(\mathbf{r}(-t),\mathbf{p}(-t))=f_{0}(E_{t})\), which only depends on the original energy \(E_{t}=[\mathbf{r}(-t)^{2}+\mathbf{p}(-t)^{2}]/2\) of the particle now found at \((\mathbf{r},\mathbf{p})\). Because Eq. 1 is quadratic, \(E_{t}\) also is a quadratic form in the variables \((\mathbf{r},\mathbf{p})\) that we formally write as \[E_{t}=\frac{1}{2}\begin{bmatrix}\mathbf{r}&\mathbf{p}\end{bmatrix}Q\begin{bmatrix}\mathbf{r}\\ \mathbf{p}\end{bmatrix},\quad Q=\begin{bmatrix}Q_{rr}&Q_{rp}\\ Q_{pr}&Q_{pp}\end{bmatrix}. \tag{10}\] We provide the explicit form of \(Q\) in App. B as determined from Eqs. 3 and 9. We are interested in the real-space density distribution \[\rho_{t}(\mathbf{r})=\int\mathrm{d}^{2}\mathbf{p}\,f_{t}(\mathbf{r},\mathbf{p})=\int\mathrm{d}^{2}\mathbf{p}\,f_{0}(E_{t}), \tag{11}\] which we compute using a linear transformation of the momenta consisting of a shift \(\tilde{\mathbf{p}}=\mathbf{p}+Q_{pp}^{-1}Q_{pr}\mathbf{r}\) followed by a rotation and dilatation \(\tilde{\mathbf{p}}_{\theta}=Q_{pp}^{1/2}\tilde{\mathbf{p}}\), with \(Q_{pp}^{1/2}\) the square root of the symmetric matrix \(Q_{pp}\). Relegating the lengthy but straightforward algebra to App.
B, this procedure yields \[\rho_{t}(\mathbf{r})=\int\mathrm{d}^{2}\tilde{\mathbf{p}}\,f_{0}\!\left(\tfrac{1}{2}\!\left[\mathbf{r}^{T}Q_{pp}^{-1}\mathbf{r}+\tilde{\mathbf{p}}^{T}Q_{pp}\tilde{\mathbf{p}}\right]\right)=\frac{1}{\sqrt{|\det Q_{pp}|}}\int\mathrm{d}^{2}\tilde{\mathbf{p}}_{\theta}\,f_{0}\!\left(\tfrac{1}{2}\!\left[\mathbf{r}^{T}Q_{pp}^{-1}\mathbf{r}+\tilde{\mathbf{p}}_{\theta}^{T}\tilde{\mathbf{p}}_{\theta}\right]\right)=\frac{\rho_{0}\left(Q_{pp}^{-1/2}\mathbf{r}\right)}{\sqrt{|\det Q_{pp}|}}. \tag{12}\] Here we used that the evolution is symplectic and \(Q(0)\) is the identity, so that the Schur complement \(Q_{rr}-Q_{rp}Q_{pp}^{-1}Q_{pr}\) generated by completing the square equals \(Q_{pp}^{-1}\). This result shows that the real-space density of the atomic ensemble keeps the same functional form in terms of a rotated and stretched coordinate, which results in elliptical equidensity lines. These ellipses are, up to an overall scale, entirely specified by the direction of their major axis, measured by its angle \(\phi(t)\) from the \(y\)-axis, and the principal axis lengths \(\lambda_{\pm}(t)\) given by the square roots of \(Q_{pp}\)'s eigenvalues. These parameters are derived in App. B using the explicit form of \(Q_{pp}\), and are given by \[\tan[2\phi(t)]=\frac{\Omega(c_{+}\tau_{+}-c_{-}\tau_{-})}{(\Omega_{R}+\Omega^{2})\tau_{+}^{2}+(\Omega_{R}-\Omega^{2})\tau_{-}^{2}}, \tag{13}\] \[\lambda_{\pm}^{2}(t)=1-\frac{\varepsilon^{2}(\tau_{+}^{2}-\tau_{-}^{2})}{4\Omega_{R}}\pm\frac{\varepsilon\Omega}{2\Omega_{R}}\left|\frac{c_{-}\tau_{-}-c_{+}\tau_{+}}{\sin[2\phi(t)]}\right|,\] where \(c_{\pm}=\cos(\omega_{\pm}t)\), \(\tau_{\pm}=\sin(\omega_{\pm}t)/\omega_{\pm}\), and \(\Omega_{R}^{2}=\Omega^{2}+(\varepsilon/2)^{2}\).

In the dynamical instability and small anisotropy regime (\(\Omega=1,\varepsilon\ll 1\)), \(\omega_{-}\) is imaginary such that the formulas for \(c_{-}\) and \(\tau_{-}\) can alternatively be written in terms of \(|\omega_{-}|\) with hyperbolic trigonometric functions. At long times, they therefore largely dominate in magnitude over \(c_{+}\) and \(\tau_{+}\), allowing us to make analytical progress. In particular, we find that the tilt \(\phi\simeq\arctan[-4/\varepsilon]/2\simeq-\pi/4+\varepsilon/8\simeq-\pi/4\) brings the major axis of the distribution along the first diagonal. Similarly, the behavior of the major and minor axis lengths can be studied by writing \(\lambda_{\pm}^{2}=\kappa_{\pm}e^{2|\omega_{-}|t}+\alpha_{\pm}\) with \(\alpha_{\pm}\) a function of time bounded by a constant, and \(\kappa_{\pm}\) the coefficient corresponding to the instability. Expanding at long times, we get \(\kappa_{-}=0\) and \(\kappa_{+}=1/(2\Omega_{R})\simeq 1\). This shows that the minor axis remains constant at long times while the major axis grows exponentially at a rate \(|\omega_{-}|=\varepsilon/2\). Altogether, the coefficients given in Eq. 13 describe an exponential squeezing of the original rotation-symmetric cloud along the first diagonal, as captured by the long time behavior \(\lambda_{+}/\lambda_{-}\propto e^{\varepsilon t/2}\). This is illustrated in Fig. 2, where we plot the density distribution \(\rho_{t}\) at different times starting from a Boltzmann distribution \(\rho_{0}(\mathbf{r})=n_{0}e^{-\beta\mathbf{r}^{2}/2}\) of inverse temperature \(\beta=1\), for which integration over momenta can be performed analytically. To make a closer connection with the guiding center drift discussed above, we overlay some isopotential lines of the rotating saddle, making clear that such drift is the fundamental reason behind the squeezing of the cloud.

## IV Squeezing quantum states

We now investigate the fate of a quantum state under the Hamiltonian Eq.
1, with a particular focus on the dynamically unstable regime identified above. Analogous to the classical case, single-particle quantum states stretch out over time along the isopotential lines of the imposed rotating saddle (Sec. IV.1). In contrast to the classical dynamics, however, the zero-point motion of the cyclotron harmonic oscillator imposes a minimum width on the density distribution even after an infinite evolution time. The quantum dynamics can be understood as physically effecting a transformation from the symmetric gauge to the Landau gauge (Sec. IV.2), which arises from the evolution under the potential imprinted by the rotating saddle. We finally observe that the dynamics of our model can be formally described by a squeezing transformation of the guiding centers, which defines the quantum geometry in Landau-level analogues (Sec. IV.3). As a result, the dynamical instability is a form of quantum "geometric squeezing".

### Explicit evolution of quantum states

#### iv.1.1 Decoupling cyclotron and guiding center motion

As in the classical case, we first decouple the normal modes of the Hamiltonian. While we could rely on the rotations used in Eq. 3 for that purpose, we notice that the decoupling can also be achieved by a simple gauge transformation. More precisely, we append the phase factor \[G=e^{i\kappa xy},\quad\kappa=\varepsilon/(2\Omega), \tag{14}\] on all single-particle states \(|\tilde{\psi}\rangle=G|\psi\rangle\), which are now ruled by the Hamiltonian \[\tilde{\mathcal{H}}=G\mathcal{H}G^{\dagger} \tag{15}\] \[=\frac{1}{2}[p_{x}^{2}+p_{y}^{2}+(1+\kappa^{2})(x^{2}+y^{2})]-\Omega L_{z}-\kappa(xp_{y}+yp_{x}).\] Introducing the cyclotron (\(a_{+}\)) and guiding center (\(a_{-}\)) bosonic operators, defined as \[a_{\pm}=\frac{1}{2}\left[\alpha(x\pm iy)+i\frac{p_{x}\pm ip_{y}}{\alpha}\right],\quad\alpha=(1+\kappa^{2})^{1/4}, \tag{16}\] the Hamiltonian separates into two independent parts \(\tilde{\mathcal{H}}=\tilde{\mathcal{H}}_{+}+\tilde{\mathcal{H}}_{-}\) that read \[\tilde{\mathcal{H}}_{\pm}=\frac{\mu_{\pm}}{2}(2a_{\pm}^{\dagger}a_{\pm}+1)\pm\frac{\kappa}{2}(a_{\pm}^{2}+a_{\pm}^{\dagger\,2}), \tag{17}\] with \(\mu_{\pm}=\sqrt{\omega_{\pm}^{2}+\kappa^{2}}\), which describes the independent squeezing of the cyclotron and guiding center harmonic oscillators.

Figure 2: Evolution of the real-space density for an ensemble originally described by a Boltzmann distribution with inverse temperature (\(\beta=1\)) as the rotation and anisotropy are switched on to \(\Omega=1\) and \(\varepsilon=0.1\) for \(t>0\). As a guide to the eye allowing one to visualize the guiding center drift, some isopotential lines of the rotating saddle potential \(\varepsilon(x^{2}-y^{2})\) are shown with solid lines using a gray scale, which goes from black (negative) to white (positive) values.

#### iv.1.2 Heisenberg evolution

Before looking at the real-space representation of time-evolved wavefunctions, it is instructive to consider the evolution of the cyclotron and guiding center operators defined by \[A_{\pm}(t)=\tilde{U}(t)a_{\pm}\tilde{U}^{\dagger}(t),\quad\tilde{U}(t)=e^{-it\tilde{\mathcal{H}}}. \tag{18}\] Using the Baker-Campbell-Hausdorff formula, we get \[A_{\pm}(t)=f_{\pm}(t)a_{\pm}+g_{\pm}(t)a_{\pm}^{\dagger}, \tag{19}\] \[f_{\pm}(t)=\cos\omega_{\pm}t+i\mu_{\pm}\frac{\sin\omega_{\pm}t}{\omega_{\pm}},\quad g_{\pm}(t)=\pm i\kappa\frac{\sin\omega_{\pm}t}{\omega_{\pm}}.\] When compared to Eq.
16, this explicit form of \(A_{+}(t)\) suggests the definition of a novel time-dependent complex coordinate \[\xi(t)=\alpha[f_{+}(t)(x+iy)+g_{+}(t)(x-iy)], \tag{20}\] which drastically simplifies its expression \[A_{+}(t)=\frac{1}{2}[\xi+2ip_{\bar{\xi}}], \tag{21}\] where we have introduced \(\bar{\xi}\) the complex conjugate of \(\xi\), and \((p_{\xi},p_{\bar{\xi}})\) the canonical momenta associated with \((\xi,\bar{\xi})\); their explicit representation is provided in App. C for completeness. From Eq. 21, the physical interpretation of \(\xi\) is clear: it defines the elliptic coordinate system most adapted to describe cyclotron orbits at any point in time. Note that we have defined \(\xi\) using \(A_{+}(t)\) to be sure that the method also applies in the regime of instability. Finally, we express the time-evolved guiding center operator using the new coordinate system \[A_{-}(t)=\frac{1}{2}\left[u(t)(\bar{\xi}+2ip_{\xi})-v(t)(\xi-2ip_{\bar{\xi}})\right], \tag{22}\] \[u=f_{-}f_{+}-g_{-}g_{+},\quad v=f_{-}g_{+}^{*}-g_{-}f_{+}^{*}.\]

#### iv.1.3 Quantum states

Using this new system of coordinates, we can now efficiently determine the time evolution of arbitrary quantum states. For this, it is sufficient to find the evolution of a complete set of vectors at the initial time (\(t=0\)). We consider two such sets: (\(i\)) the coherent states \(|\mathbf{\alpha}\rangle_{0}\) satisfying \(a_{\pm}|\mathbf{\alpha}\rangle_{0}=\alpha_{\pm}|\mathbf{\alpha}\rangle_{0}\), and (\(ii\)) the Fock states \(|\mathbf{n}\rangle_{0}\) diagonalizing the number operators \(a_{\pm}^{\dagger}a_{\pm}|\mathbf{n}\rangle_{0}=n_{\pm}|\mathbf{n}\rangle_{0}\). To obtain their time evolution, we rely on the fact that \(|\mathbf{\alpha}\rangle_{t}=\tilde{U}(t)|\mathbf{\alpha}\rangle_{0}\) and \(|\mathbf{n}\rangle_{t}=\tilde{U}(t)|\mathbf{n}\rangle_{0}\) can be determined, up to a global phase, as solutions of \[A_{\pm}(t)|\mathbf{\alpha}\rangle_{t}=\alpha_{\pm}|\mathbf{\alpha}\rangle_{t},\quad A_{\pm}^{\dagger}(t)A_{\pm}(t)|\mathbf{n}\rangle_{t}=n_{\pm}|\mathbf{n}\rangle_{t}. \tag{23}\]

As a first example, let us derive the real-space representation of the time-evolved vacuum state defined by \(A_{\pm}(t)|0,0\rangle_{t}=0\). These relations provide two differential equations, which can be solved using a Gaussian ansatz, yielding \[\phi_{t}(\xi,\bar{\xi})\equiv\langle x,y|0,0\rangle_{t}=\frac{1}{\sqrt{\pi|u|}}\exp\left[\frac{\delta\xi^{2}-|\xi|^{2}}{2}\right],\quad\delta=\frac{v}{u}, \tag{24}\] where we have kept the time dependence of \(\xi\), \(u\) and \(v\) implicit. The same approach can in fact be extended to any coherent state and leads to \[\langle x,y|\mathbf{\alpha}\rangle_{t}=\phi_{t}(\xi-\alpha_{+},\bar{\xi}-\bar{\alpha}_{+}-2\alpha_{-}/u)e^{-i\mathrm{Im}\,(\xi\bar{\alpha}_{+})}. \tag{25}\] As in the usual case, we observe that the coherent states are, up to a phase factor, shifted copies of the vacuum \(|0,0\rangle_{t}\) obtained above. Finally, we can use the algebraic relations in Eq. 23 to express the time-evolved Fock states as \[|\mathbf{n}\rangle_{t}=\frac{1}{\sqrt{n_{+}!n_{-}!}}\left(A_{+}^{\dagger}\right)^{n_{+}}\left(A_{-}^{\dagger}\right)^{n_{-}}|0,0\rangle_{t}, \tag{26}\] which provides, after lengthy calculations relegated to App. C, the explicit real-space representation of \(\langle x,y|\mathbf{n}\rangle_{t}\).

We finish this section by discussing more pictorially the time-evolution under \(\mathcal{H}\). In Fig.
3, we show the density of the vacuum state - which determines that of all other coherent states - and of a few Fock states as a function of time for \(\varepsilon=0.1\) and \(\Omega=1\). The most striking feature is a drastic change in aspect ratio as a function of time. As in the case of the classical distributions (Fig. 2), this can be understood as a result of particles flowing along the isopotential lines of the saddle potential, which are depicted on the upper right panel of Fig. 3. The main difference between the classical and quantum cases is the finite minor width that the quantum states still possess at long times. The latter is due to the zero-point motion of the cyclotron operator, which sets the fundamental limit \(\ell_{B}/\sqrt{2}\) on the width of the quantum states, with \(\ell_{B}=\sqrt{\hbar/(2m\Omega)}\) the magnetic length.

### Effecting a gauge transformation

Focusing on a single Landau level, say the LLL, within which the kinetic energy is quenched, the time evolution under the saddle potential can be interpreted as performing a gauge transformation that turns symmetric gauge wavefunctions into Landau gauge ones, thus capturing the elongation of the states. Intuitively, this comes from the fact that the gauge transformation \(U=e^{i\Omega xy}\) allows one to transform the symmetric gauge used in Eq. 2 into the Landau gauge, _i.e._ \(U\mathcal{H}U^{\dagger}\) has an effective gauge potential \(\mathbf{A}^{\prime}=2\Omega[0,x]\). Now, since the function \(xy\) is identical to \((x^{2}-y^{2})/2\) after rotation of the axes by \(\pi/4\), time-evolution under the saddle potential exactly reproduces the effect of the gauge transformation \(U\). Note that this argument discards all kinetic contributions, which is justified within a given Landau level, where the kinetic energy is quenched by the stiff cyclotron harmonic oscillator. As a result, the time evolution under the rotating saddle potential physically implements a gauge transformation within the LLL.

To see this more formally, let us project the original Hamiltonian (Eq. 1) for a rotation at the trap frequency \(\Omega=1\) onto its lowest energy level when \(\varepsilon=0\). This simply amounts to replacing \((x,y)\rightarrow(X,Y)\) with \((X,Y)\) the guiding center coordinates [60], which are defined by \[X=(x+p_{y})/\sqrt{2},\quad Y=(y-p_{x})/\sqrt{2}, \tag{27}\] and match the \((p^{\prime}_{y},y^{\prime})\) defined in Eq. 3 for \(\theta=-\pi/2\). After projection to the LLL, we thus get \(\mathcal{P}_{\rm LLL}\mathcal{H}\mathcal{P}_{\rm LLL}=\varepsilon(X^{2}-Y^{2})/2\) whose unitary evolution operator reads \(U(t)=e^{-i\varepsilon t(X^{2}-Y^{2})/2}\). Starting from the ground state in the symmetric gauge \(\langle x,y|0,0\rangle_{0}=\exp[-|z|^{2}/2]/\sqrt{\pi}\) with \(z=x+iy\), we evolve it using \(U(t)\) noting that \((X-iY)|0,0\rangle_{0}=0\) and \([X,Y]=-i\) to find [50] \[\langle x,y|0,0\rangle_{t}=\frac{\exp\left[-\frac{|z|^{2}+i\tanh(\varepsilon t/2)z^{2}}{2}\right]}{\sqrt{\pi\cosh(\varepsilon t/2)}}, \tag{28}\] which is the same as Eq. 24 for \(\Omega=1\) and \(\kappa\ll 1\) if we assume the cyclotron motion unperturbed (\(f_{+}=1\), \(g_{+}=0\)). For long times \(\varepsilon t\gg 1\), it simply becomes \[\langle x,y|0,0\rangle_{t}\simeq\frac{\exp[-i(x^{2}-y^{2})/2-(x-y)^{2}/2]}{\sqrt{\pi e^{\varepsilon t/2}}}, \tag{29}\] that indeed describes a Landau gauge wavefunction of length \(\sim e^{\varepsilon t/2}\) with its long axis rotated by \(\pi/4\) compared to the original system of coordinates (see Fig.
3). This simplified account of the dynamics provides two important insights. First, due to the quenched kinetic energy of the rapidly rotating quantum gas, the saddle potential effectively implements a gauge transformation through a unitary evolution. This evolution is coherent and does not introduce any heating or turbulence in the form of disordered vortex nucleation. Second, the peak density of the cloud decreases exponentially with time, allowing one to reach extremely dilute regimes that were previously out of reach [50], which opens a realistic path towards the long sought-after quantum Hall regime of rotating quantum gases.

Figure 3: Density \(|\langle x,y|n,m\rangle_{t}|^{2}\) of the time evolved Fock states. After a small transient evolution, which is longer for higher cyclotron index \(n\), the dynamics is well captured by isopotential flow of guiding centers. As a guide to the eye allowing one to visualize the guiding center drift, some isopotential lines of the rotating saddle potential \(\varepsilon(x^{2}-y^{2})\) are shown on the topmost right panel with solid lines using a gray scale, which goes from black (negative) to white (positive) values.

### Relation to quantum geometry

After a small transient evolution, the full quantum evolution seems to be explained by guiding center drift along the saddle, with almost no appreciable effects attributable to the cyclotron degree of freedom. This might, at first sight, seem odd since both cyclotron and guiding center motion are ruled by similar squeezing Hamiltonians (Eq. 17). A closer look at the squeezing parameter \[\gamma_{\pm}=g_{\pm}/f_{\pm}, \tag{30}\] of the two operators, plotted as a function of time in Fig. 4, reveals that the cyclotron operator is very stiff \(\omega_{+}\gg\kappa\) and only slightly breathes at its natural frequency without experiencing much squeezing. On the contrary, \(\omega_{-}\) is of the order of or even smaller than the squeezing amplitude \(\kappa\), leading to a much larger squeezing parameter \(\gamma_{-}\gg\gamma_{+}\) that reaches values close to unity for long times. Most of the observable features in the time evolution of the system are thus directly attributable to the guiding center dynamics.

This is a generic feature of well-separated Landau levels, which possess a stiff cyclotron mode and a much softer guiding center degree of freedom. In fact, it is known that, in the presence of an external perturbation, only the soft guiding center mode will adjust to accommodate the perturbation. It will do so in a purely geometric manner, by changing the local shape of the guiding center coherent states. This geometrical response of a quantum Hall system was first spotlighted by Haldane, who described the quantum geometric degree of freedom of the system in terms of a _guiding center metric_ \(g\) [58], which in our notation reads [63] \[g=\frac{1}{1-|\gamma_{-}|^{2}}\begin{bmatrix}|1+\gamma_{-}|^{2}&i(\gamma_{-}-\gamma_{-}^{*})\\ i(\gamma_{-}-\gamma_{-}^{*})&|1-\gamma_{-}|^{2}\end{bmatrix}. \tag{31}\] In the context of quantum Hall systems, geometric responses of the guiding-center metric have already been observed for an applied in-plane magnetic field [64] or an anisotropic band mass tensor [65]. In another context, a spontaneous symmetry breaking of a fractional quantum Hall system toward a nematic phase, _i.e._ a transition from \(\gamma=0\) to \(|\gamma|\simeq 1\), was predicted as a way to minimize the interaction energy [66].
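As an illustration, the guiding-center squeezing parameter of Eq. (30) and the metric of Eq. (31) are straightforward to evaluate from Eqs. (7) and (19); a minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def squeezing_and_metric(t, Omega=1.0, eps=0.1):
    """Guiding-center squeezing |gamma_-| and metric g of Eqs. (19), (30), (31)."""
    kappa = eps / (2 * Omega)
    w2 = 1 + Omega**2 - np.sqrt(eps**2 + 4 * Omega**2)   # omega_-^2, Eq. (7)
    mu = np.sqrt(w2 + kappa**2)                          # mu_- of Eq. (17)
    w = np.sqrt(complex(w2))                             # imaginary when unstable
    f = np.cos(w * t) + 1j * mu * np.sin(w * t) / w      # f_-(t), Eq. (19)
    g = -1j * kappa * np.sin(w * t) / w                  # g_-(t), Eq. (19)
    gam = g / f                                          # gamma_-, Eq. (30)
    metric = np.array([[abs(1 + gam) ** 2, 1j * (gam - gam.conjugate())],
                       [1j * (gam - gam.conjugate()), abs(1 - gam) ** 2]])
    return abs(gam), (metric / (1 - abs(gam) ** 2)).real

# |gamma_-| creeps toward unity at long times, signalling geometric squeezing
for t in (0.0, 20.0, 60.0):
    print(t, squeezing_and_metric(t)[0])
```

At \(t=0\) the metric reduces to the identity, and for \(\varepsilon t\gg 1\) it becomes strongly anisotropic, in line with the discussion above.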
Beyond these geometric responses due to the guiding center degree of freedom, the relation between quantum geometry and guiding center metric was observed to be more profound [58]. Indeed, in condensed-matter systems, bands with non-zero Chern number that analytically reproduce the physics of Landau levels were proved to only have one degree of freedom: their quantum metric, which also characterizes the guiding center metric of the Landau level they map onto [67]. Here, we have shown how the dynamical instability of rotating quantum gases could be used to modify the quantum geometry of the system by squeezing of the guiding centers. This "geometric squeezing", experimentally pioneered in Ref. [50], results in an unprecedented control over the quantum geometry of the system, and offers a new way to reach the lowest Landau level.

## V Hydrodynamic description of geometric squeezing

The previous analysis completely neglected the effects of interatomic interactions. However, for the experimental conditions of Ref. [50], the typical interaction energy at the initial time is much larger than the cyclotron gap, and the density profile of the condensate at short times is mostly governed by interaction effects. We now develop a hydrodynamic description of the condensate in the presence of these interactions and solve for its dynamics in the absence of quantum pressure.

### Hydrodynamic equations

The dynamics of a condensate within a rotating trap \(U(\mathbf{r})\) is amenable to a hydrodynamic description, obtained by rewriting the Gross-Pitaevskii (GP) equation for the wavefunction \(\Psi=\sqrt{\rho}\,e^{iS}\) as equivalent hydrodynamic equations for the density \(\rho=|\Psi|^{2}\) and the superfluid phase \(S\). As in Sec. II, we work in the rotating frame, where the GP equation reads [68] \[i\hbar\partial_{t}\Psi=\left[\frac{-\hbar^{2}\nabla^{2}}{2m}+U(\mathbf{r})-\Omega L_{z}\right]\Psi+g|\Psi|^{2}\Psi, \tag{32}\] with \(g\) the interaction coefficient, and substitute in \(\Psi=\sqrt{\rho}\,e^{iS}\). Taking the imaginary part yields \[\frac{\partial\rho}{\partial t}=-\nabla\cdot\left(\rho({\bf v}-{\bf\Omega}\times{\bf r})\right),\quad{\bf v}=(\hbar\nabla S)/m, \tag{33}\] which is the continuity equation in the rotating frame; while taking the real part yields \[-\hbar\frac{\partial S}{\partial t}=-\frac{\hbar^{2}}{2m}\frac{1}{\sqrt{\rho}}\nabla^{2}\sqrt{\rho}+\frac{\hbar^{2}}{2m}(\nabla S)^{2}+U(\mathbf{r})-\hbar\mathbf{\Omega}\cdot({\bf r}\times(\nabla S))+g\rho. \tag{34}\]

Figure 4: Amplitude of the squeezing parameters for the cyclotron \(\gamma_{+}\) and guiding center \(\gamma_{-}\) operators when rotation and anisotropy are switched on at \(t=0\) to \(\Omega=1\) and \(\varepsilon=0.1\). Except for a weak breathing at frequency \(\omega_{+}\) due to the cyclotron motion, most of the observable features, _e.g._ in Fig. 3, are attributable to the guiding center dynamics.

As noted in [5], for a condensate in a harmonic trap these equations can be solved analytically via a quadratic ansatz for the superfluid wavefunction. It is necessary, however, to neglect the quantum pressure term \(\sim\nabla\sqrt{\rho}\), which is valid when it is dominated by the mean-field term \(g\rho\). The smallest lengthscale across which the superfluid wavefunction may vary is set by the magnetic length, corresponding to the spatial extent of cyclotron orbits in the lowest Landau level.
This means that the relative importance of the quantum pressure and mean-field terms is simply given by the ratio of the chemical potential to the Landau level spacing, and loosely corresponds to the number of Landau levels admixed into the superfluid wavefunction. We therefore expect a classical hydrodynamic description to be valid when this number is much larger than unity.

### Analytic solution

Following [5], to which we refer for more details, we consider an anisotropic harmonic trap of the form \[U(\mathbf{r})=\frac{1}{2}m\omega_{\perp}^{2}\left(\left[1+\varepsilon\right]x^{2}+\left[1-\varepsilon\right]y^{2}+\left(\frac{\omega_{z}}{\omega_{\perp}}\right)^{2}z^{2}\right), \tag{35}\] and make a quadratic ansatz for the density and phase, \[\rho(\mathbf{r},t)=\rho_{0}(t)+\frac{m\omega_{\perp}^{2}}{g}\sum_{i,j=1}^{3}x_{i}A_{ij}(t)x_{j},\] \[S(\mathbf{r},t)=S_{0}(t)+m\omega_{\perp}\sum_{i,j=1}^{3}x_{i}B_{ij}(t)x_{j}. \tag{36}\] The time-evolution of the condensate wavefunction is thus encoded in the matrices \(A\) and \(B\), which evolve according to [5] \[\frac{1}{\omega_{\perp}}\frac{\mathrm{d}A}{\mathrm{d}t} = -2A\operatorname{Tr}B-2(AB+BA)+\frac{\Omega(t)}{\omega_{\perp}}(RA-AR)\] \[\frac{1}{\omega_{\perp}}\frac{\mathrm{d}B}{\mathrm{d}t} = -2B^{2}-W-A+\frac{\Omega(t)}{\omega_{\perp}}(RB-BR), \tag{37}\] where the matrices \(W\) and \(R\) are defined as \[W=\frac{1}{2}\begin{pmatrix}1+\varepsilon&0&0\\ 0&1-\varepsilon&0\\ 0&0&(\omega_{z}/\omega_{\perp})^{2}\end{pmatrix}, \tag{38}\] and \[R=\begin{pmatrix}0&1&0\\ -1&0&0\\ 0&0&0\end{pmatrix}. \tag{39}\] This formalism allows straightforward calculation of the condensate dynamics under arbitrary variation in the trap rotation frequency (a minimal numerical sketch is given at the end of this subsection). As an example, we show in Fig. 5 the evolution of the condensate \(e^{-1/2}\) radii along its major and minor axes, \(\sigma_{+}\) and \(\sigma_{-}\), for the experimental parameters of [50]. A cloud initially at equilibrium was prepared in an anisotropic trap with \(\varepsilon=0.125\) and \(\omega_{\perp}=2\pi\times 88.6\) Hz, whose rotation rate was smoothly increased from \(\Omega=0\) to \(\Omega=\omega_{\perp}\) and held for a variable time \(t\). While the long cloud axis grows exponentially \(\sim\exp(\zeta t)\) where \(\zeta=\varepsilon\omega_{\perp}/2\), the short axis falls more slowly. This is because the cloud size contains contributions from both the guiding centers, which are squeezed at a rate \(\zeta\), and from the cyclotron orbits, whose size depends upon the number of occupied Landau levels \(N_{\rm LL}\equiv\mu/(2\hbar\omega_{\perp})\). The squeezing of guiding centers means that for most of the experiment \(\sigma_{-}\) is generally dominated by cyclotron motion and its evolution is captured by a simple scaling model. The chemical potential is proportional to the atomic density \(\sim 1/(\sigma_{+}\sigma_{-}\sigma_{z})\), where \(\sigma_{z}\) is the axial extent of the condensate. The major width always increases as \(\sigma_{+}\propto\exp(\zeta t)\), while the short-axis sizes \(\sigma_{-},\sigma_{z}\propto\sqrt{\mu}\) when \(N_{\rm LL}\gg 1\). Combining these relations gives \(\mu\propto\exp(-\zeta t)/\mu\), i.e. \(\mu\propto\exp(-\zeta t/2)\), and we therefore predict a time-dependence \(\sigma_{-}\propto\exp(-\zeta t/4)\), which is shown by the dashed line in Fig. 5. The falling chemical potential guarantees that eventually \(\mu<2\hbar\omega_{\perp}\) and the condensate enters the LLL.
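As a concrete illustration of this formalism, Eq. 37 can be integrated with any standard ODE solver. The sketch below (SciPy, in units of \(\omega_{\perp}=1\)) starts from the non-rotating equilibrium \(B=0\), \(A=-W\), which follows from setting the time derivatives in Eq. 37 to zero at \(\Omega=0\); the ramp \(\Omega(t)\) and the value of \(\omega_{z}/\omega_{\perp}\) are illustrative placeholders, not the experimental sequence of Ref. [50].

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.125          # trap anisotropy
wz_ratio = 1.0       # omega_z / omega_perp (illustrative)

W = 0.5 * np.diag([1 + eps, 1 - eps, wz_ratio**2])          # Eq. (38)
R = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])   # Eq. (39)

def Omega(t):
    # smooth ramp from 0 to the trap frequency (illustrative placeholder)
    return min(t / 20.0, 1.0)

def rhs(t, y):
    # unpack the 3x3 matrices A and B and apply Eq. (37), with omega_perp = 1
    A, B = y[:9].reshape(3, 3), y[9:].reshape(3, 3)
    Om = Omega(t)
    dA = -2 * A * np.trace(B) - 2 * (A @ B + B @ A) + Om * (R @ A - A @ R)
    dB = -2 * B @ B - W - A + Om * (R @ B - B @ R)
    return np.concatenate([dA.ravel(), dB.ravel()])

# non-rotating equilibrium: B = 0 and dB/dt = -W - A = 0 imply A = -W
A0, B0 = -W, np.zeros((3, 3))
sol = solve_ivp(rhs, (0.0, 60.0),
                np.concatenate([A0.ravel(), B0.ravel()]), rtol=1e-8)

# the condensate radii follow from the eigenvalues of -A: the density
# vanishes on the ellipsoid x^T (-A) x = g * rho_0 / (m * omega_perp^2)
A_end = sol.y[:9, -1].reshape(3, 3)
print(np.linalg.eigvalsh(-(A_end + A_end.T) / 2))
```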
In the experiment of [50] and the GP simulation of Fig. 6, we find that \(\sigma_{-}\) saturates at the zero-point cyclotron orbit size imposed by Heisenberg uncertainty. However, since the hydrodynamic model neglects quantum pressure, it instead predicts that \(\sigma_{-}\to 0\).

Figure 5: Evolution of the major and minor radii of a condensate held in a rotating harmonic trap. While the long axis grows exponentially with a rate corresponding to the squeezing of guiding center coordinates (dotted line), the minor width falls more slowly (dashed line). This is due to the falling density leading to smaller Landau level occupation, and hence a smaller size of cyclotron orbits (see text).

## VI Gross-Pitaevskii simulations

A clear picture of how geometric squeezing enables us to reach the lowest Landau level emerges from the non-interacting treatment presented above: due to the quenched kinetic energy of the rapidly rotating quantum gas, the time evolution operator is mainly due to the anisotropic saddle potential that physically implements a symmetric-to-Landau gauge transformation (Sec. IV.2). This evolution squeezes the atomic cloud, which simultaneously decreases the peak density exponentially with time and increases its moment of inertia. This reduces the effects of interactions (Sec. V), forming the ideal conditions to reach the LLL. We perform full Gross-Pitaevskii numerical simulations of the dynamics of the condensate, in the presence of interactions. The formalism and numerical details are gathered in Sec. VI.1, while the results of the simulations are discussed in Sec. VI.2.

### Formalism

We now consider a BEC containing \(N\) atoms in the rotating anisotropic trap of Eq. 1 described by the macroscopically occupied mode \(\Psi\). Within the purely two-dimensional settings introduced in Sec. II, the GP equation takes the form [68]: \[i\partial_{t}\Psi(\mathbf{r},t)=\left[\mathcal{H}+g_{\rm 2d}|\Psi(\mathbf{r},t)|^{2}\right]\Psi(\mathbf{r},t), \tag{40}\] where \(g_{\rm 2d}=Na_{s}\sqrt{8\pi\omega_{z}}\) is the effective two-dimensional interaction parameter, with \(a_{s}\) the scattering length of the atoms [69]. To integrate the time-dependent GP equation, we use a time splitting pseudospectral method (TSSP) [70]. This method is most often used in the absence of rotation, and harnesses the fact that the kinetic term \(\propto\mathbf{p}^{2}\) is easily evolved in Fourier space while the interaction and potential terms are most easily treated in real space [71]. The TSSP uses a Trotter decomposition of the time-evolution operator, and the infinitesimal time evolution operator at each timestep is split to separate the kinetic term from the rest of the Hamiltonian. This enables us to Fourier transform \(\Psi\) before evolving it with the kinetic terms, leading to local time evolution operators that can be efficiently implemented numerically [72]. In the rotating frame, the angular momentum operator \(L_{z}=xp_{y}-yp_{x}\) explicitly mixes the coordinate and momentum operators, and the split-step method should be modified for best performance. More precisely, we first implement a one-dimensional Fourier transform to bring \(\Psi(x,y,t)\) into \(\Psi(k_{x},y,t)\) and evolve it under the \(\Omega yp_{x}\) part of the Hamiltonian. Fourier transforming both axes, we arrive at \(\Psi(x,k_{y},t)\), which is evolved under the \(\Omega xp_{y}\) piece of the Hamiltonian. We alternate the order of these evolutions, implementing \(yp_{x}\) first for even timesteps and \(xp_{y}\) first for odd ones, which further reduces systematic computational errors.
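A minimal sketch of this rotating-frame split-step scheme is given below (NumPy, dimensionless units \(\hbar=m=\omega_{\perp}=1\)). The grid size, timestep, interaction strength, fixed \(\Omega\), and Gaussian initial state are illustrative placeholders rather than the parameters of the simulations reported here; ground-state preparation and the \(\Omega(t)\) ramp are omitted for brevity.

```python
import numpy as np

N, L, dt = 256, 40.0, 1e-3            # grid points, box size, timestep (illustrative)
g2d, eps, Omega = 100.0, 0.125, 1.0   # interaction, anisotropy, rotation (illustrative)

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")       # psi is indexed as psi[ix, iy]
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")

V = 0.5 * ((1 + eps) * X**2 + (1 - eps) * Y**2)   # anisotropic trap of Eq. (1)

def rot_ypx(psi):
    # exp(-i dt Omega y p_x), diagonal in the mixed (k_x, y) representation
    return np.fft.ifft(np.exp(-1j * dt * Omega * Y * KX) * np.fft.fft(psi, axis=0), axis=0)

def rot_xpy(psi):
    # exp(+i dt Omega x p_y), diagonal in the mixed (x, k_y) representation
    return np.fft.ifft(np.exp(+1j * dt * Omega * X * KY) * np.fft.fft(psi, axis=1), axis=1)

def step(psi, even):
    """One timestep: local terms (half), kinetic, -Omega L_z split in two, local (half)."""
    psi = psi * np.exp(-0.5j * dt * (V + g2d * np.abs(psi)**2))
    psi = np.fft.ifft2(np.exp(-0.5j * dt * (KX**2 + KY**2)) * np.fft.fft2(psi))
    # -Omega L_z = Omega y p_x - Omega x p_y; alternate the order of the two pieces
    psi = rot_xpy(rot_ypx(psi)) if even else rot_ypx(rot_xpy(psi))
    psi = psi * np.exp(-0.5j * dt * (V + g2d * np.abs(psi)**2))
    return psi

psi = np.exp(-(X**2 + Y**2) / 4).astype(complex)      # placeholder initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N)**2)   # unit norm
for n in range(1000):
    psi = step(psi, even=(n % 2 == 0))
```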
### Evolution under geometric squeezing

An example of a GP simulation is shown in Fig. 6. We initially prepare a non-rotating and weakly interacting BEC at equilibrium in the anisotropic trap of Eq. 1, where \(\varepsilon=0.125\) resembles the experimental settings. We then ramp up the rotation frequency from \(\Omega(t=0)=0\) following a sequence similar to the experimental procedure of Ref. [50]. This gradually brings the BEC to rotate at the trap frequency \(\Omega=1\). The density profiles displayed in Fig. 6a clearly demonstrate that, as in the non-interacting case, the BEC elongates along the first diagonal in the rotating frame as a result of the guiding center flow following the isopotential lines of the rotating saddle. To be more quantitative, we plot in Fig. 6b the \(e^{-1/2}\) radii of the condensate wavefunction along its major (\(\sigma_{+}\)) and minor (\(\sigma_{-}\)) axes as a function of time. Consistent with the squeezing of the guiding centers, \(\sigma_{+}\) grows exponentially at a rate set by \(\varepsilon/2\). The minor axis is also exponentially reduced at early times, but eventually saturates around the value \(\ell_{B}/\sqrt{2}\), the width of a Landau gauge wavefunction. This clearly signals that, as the peak density of the BEC decreases, the chemical potential of the cloud becomes comparable to the cyclotron gap and all of the atoms eventually end up in the LLL, in the form of a macroscopically occupied Landau gauge wavefunction of minor width \(\ell_{B}/\sqrt{2}\). We finally note that oscillations in \(\sigma_{-}\) at frequency \(2\omega_{+}\), clearly visible in Fig. 6b, are also present in the non-interacting solutions of Sec. IV and are due to the breathing of the cyclotron degree of freedom, explicit in the oscillatory nature of \(\gamma_{+}(t)\) in Fig. 4.

Figure 6: a) Density of the condensate wavefunction obtained by numerical integration of the GP equation for \(\varepsilon=0.125\) and a ramp from \(\Omega(t=0)=0\) to \(\Omega(t=\infty)=1\) that matches the experimental sequence of Ref. [50]. b) Major (\(\sigma_{+}\), green) and minor (\(\sigma_{-}\), red) typical width of the condensate wavefunction as a function of time. Time is normalized by the squeezing rate, \(\zeta t\) with \(\zeta=\varepsilon/2\), such that the exponential increase of the major width at long time becomes universal (affine with a slope of one in the semi-logarithmic scale used here). Finally, the dotted horizontal line indicates the zero-point width \(\ell_{B}/\sqrt{2}\) of a Landau gauge orbital, to which \(\sigma_{-}\) saturates.

As another example, we show in Fig. 7 the evolution of a vortex lattice under the same protocol. At initial time, the vortex lattice is equilibrated in a 2D harmonic trap rotating at \(\Omega=0.8\). The anisotropy of the trap is switched on at \(t=0\) and the trap keeps rotating at \(\Omega\). In the rotating frame where the saddle potential is static, the vortex lattice elongates along the isopotential diagonal, with the vortices moving along the same direction. The flow of vortices at short times indicates that only the guiding center motion evolves and gets squeezed, leaving the cyclotron motion unchanged.
For longer squeezing times, the inter-vortex distance becomes comparable to the minor axis of the cloud, the vortex dynamics strongly couples to the overall squeezing of the atomic cloud, and a complex pattern formation ensues.

Figure 7: The density profile of the vortex lattice under geometric squeezing. Squeezing is manifested in the cloud profile and the vortex distribution.

## VII Conclusion

In this article, the "geometric squeezing" experimentally pioneered in Ref. [50] to bring rotating quantum gases into the lowest Landau level has been theoretically studied using a combination of analytical arguments and numerical simulations. At its core, this method uses novel experimental improvements to harness the dynamical instability that originally prevented the observation of quantum gases in the lowest Landau level. This instability is due to the unbounded trajectories of the guiding center flowing along the isopotential lines of the saddle potential imposed by the rotating anisotropic trap. This evolution elongates the atomic ensemble, thereby simultaneously decreasing its peak density and increasing its moment of inertia. This naturally leads to a dilute gas with low interaction energy and high angular momentum per particle, which is entirely contained within the lowest Landau level in the form of a macroscopically occupied Landau gauge wavefunction.

## Acknowledgement

This work was supported by the NSF through the Center for Ultracold Atoms and Grant PHY-2012110, and by ARO. M.Z. acknowledges funding from the Vannevar Bush Faculty Fellowship (ONR N00014-19-1-2631). R.J.F. acknowledges funding from the AFOSR Young Investigator Program (FA9550-22-1-0066). The Flatiron Institute is a division of the Simons Foundation.

## Appendix A Classical 1d harmonic oscillator

In this appendix, we derive the evolution operators corresponding to the classical equations of motion of a one-dimensional harmonic oscillator, which are used in Eq. 8 of the main text to obtain the trajectories shown in Fig. 1b. We thus start from a Hamiltonian \[H=\frac{p^{2}}{2m}+\frac{1}{2}kr^{2}, \tag{A1}\] where \(m\) and \(k\) are allowed an arbitrary sign, and we define the possibly complex eigenfrequency \(\omega=\sqrt{k/m}\). In terms of the reduced variables \[\tilde{r}=\sqrt{m\omega}r,\quad\tilde{p}=p/\sqrt{m\omega}, \tag{A2}\] Hamilton's equations of motion read \[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\tilde{r}\\ \tilde{p}\end{bmatrix}=\omega\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\begin{bmatrix}\tilde{r}\\ \tilde{p}\end{bmatrix} \tag{A3}\] and can be straightforwardly solved by exponentiation \[\begin{bmatrix}\tilde{r}(t)\\ \tilde{p}(t)\end{bmatrix}=\begin{bmatrix}\cos\omega t&\sin\omega t\\ -\sin\omega t&\cos\omega t\end{bmatrix}\begin{bmatrix}\tilde{r}(0)\\ \tilde{p}(0)\end{bmatrix}, \tag{A4}\] which we stress is valid both for a real and a purely imaginary \(\omega\). Going back to the original variables leads to \[\begin{bmatrix}r(t)\\ p(t)\end{bmatrix}=\begin{bmatrix}\cos\omega t&\frac{1}{m\omega}\sin\omega t\\ -\frac{k}{\omega}\sin\omega t&\cos\omega t\end{bmatrix}\begin{bmatrix}r(0)\\ p(0)\end{bmatrix}. \tag{A5}\]
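Since \(\omega\) may be purely imaginary, a compact way to evaluate the propagator of Eq. A5 numerically is to work with complex trigonometric functions, which automatically become hyperbolic for an inverted potential. The short sketch below does exactly this; the parameter values are illustrative.

```python
import numpy as np

def propagator(m, k, t):
    """Classical evolution matrix of Eq. (A5), mapping (r(0), p(0)) to (r(t), p(t))."""
    w = np.sqrt(complex(k) / m)          # purely imaginary when k/m < 0
    c, s = np.cos(w * t), np.sin(w * t)  # complex cos/sin cover both cases
    M = np.array([[c, s / (m * w)],
                  [-(k / w) * s, c]])
    return M.real                        # entries are real for real or imaginary omega

print(propagator(1.0, 1.0, 0.3))    # k > 0: rotation in phase space
print(propagator(1.0, -1.0, 0.3))   # k < 0: hyperbolic (cosh/sinh) flow
print(np.linalg.det(propagator(1.0, -1.0, 0.3)))   # = 1: the map is symplectic
```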
## Appendix B Evolution of phase space densities

In this appendix, we derive the parameters of the elliptic isodensity lines of the phase-space distributions studied in Sec. III.3. We recall that, following the classical equations of motion, the phase-space distribution at time \(t\) only depends on the following variable \[E_{t}=\frac{1}{2}\begin{bmatrix}\mathbf{r}&\mathbf{p}\end{bmatrix}Q\begin{bmatrix}\mathbf{r}\\ \mathbf{p}\end{bmatrix},\quad Q=\begin{bmatrix}Q_{rr}&Q_{rp}\\ Q_{pr}&Q_{pp}\end{bmatrix}, \tag{B1}\] with the explicit form of \(Q\) inferred from Eq. 3 and Eq. 9 \[Q=R^{T}\begin{bmatrix}U_{+}^{T}U_{+}&0\\ 0&U_{-}^{T}U_{-}\end{bmatrix}R,\quad R=\begin{bmatrix}c&0&0&s\\ 0&-s&c&0\\ 0&c&s&0\\ -s&0&0&c\end{bmatrix}.\] The shift of momenta \(\tilde{\mathbf{p}}=\mathbf{p}+Q_{pp}^{-1}Q_{pr}\mathbf{r}\) in the first line of Eq. 12 leads to \[\rho_{t}(x,y)=\int\mathrm{d}^{2}\tilde{\mathbf{p}}\,f_{0}\left(\mathbf{r}^{T}A^{-1}\mathbf{r}+\tilde{\mathbf{p}}^{T}Q_{pp}\tilde{\mathbf{p}}\right), \tag{B2}\] with \[A^{-1}=Q_{rr}-Q_{rp}Q_{pp}^{-1}Q_{pr}=[(Q^{-1})_{rr}]^{-1}, \tag{B3}\] where the second equality can either be checked by a lengthy but straightforward direct calculation, or derived using standard results on Schur complements. The matrix inverse \(Q^{-1}\) can be independently obtained as \[Q^{-1}=R^{T}\begin{bmatrix}(U_{+}^{T}U_{+})^{-1}&0\\ 0&(U_{-}^{T}U_{-})^{-1}\end{bmatrix}R, \tag{B4}\] which can be efficiently evaluated using the relation \[(U_{\pm}^{T}U_{\pm})^{-1}=-Y(U_{\pm}^{T}U_{\pm})Y,\quad Y=\begin{bmatrix}0&1\\ -1&0\end{bmatrix}. \tag{B5}\] Letting the \(Y\) matrices act on the rotation matrices, we find \[(Q^{-1})_{rr}=Q_{pp}\quad\Longrightarrow\quad A=Q_{pp}. \tag{B6}\] As explained in the main text, all parameters of the ellipse characterizing the real-space density at time \(t\) can therefore be obtained from the eigen-decomposition of \(Q_{pp}\). We now find this decomposition and derive the expression given in Eq. 13. For simplicity, we decompose \(Q_{pp}=a_{\mu}\sigma^{\mu}\) onto Pauli matrices \(\sigma^{\mu=0,1,2,3}\), and find from the definition of \(Q\) above the explicit form \[a_{0} =1-\frac{\varepsilon^{2}}{4\Omega_{R}}(\tau_{+}^{2}-\tau_{-}^{2}), \tag{B7}\] \[a_{1} =-\frac{\varepsilon\Omega}{2\Omega_{R}}(c_{-}\tau_{-}-c_{+}\tau_{+}), \tag{B8}\] \[a_{2} =0, \tag{B9}\] \[a_{3} =-\frac{\varepsilon}{2}\left[\left(1+\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{+}^{2}+\left(1-\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{-}^{2}\right], \tag{B10}\] where we recall that \(c_{\pm}=\cos(\omega_{\pm}t)\), \(\tau_{\pm}=\sin(\omega_{\pm}t)/\omega_{\pm}\) and \(\Omega_{R}^{2}=\Omega^{2}+(\varepsilon/2)^{2}\). The tilt of the ellipse, measured by the angle \(\phi\) from the \(y\)-axis, and the principal-axis lengths can be obtained from these coefficients as \[\tan(2\phi) = -\frac{a_{1}}{a_{3}}=\frac{\Omega}{\Omega_{R}}\frac{c_{+}\tau_{+}-c_{-}\tau_{-}}{\left(1+\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{+}^{2}+\left(1-\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{-}^{2}}, \tag{B11}\] \[\lambda_{\pm}^{2} = a_{0}\pm\sqrt{a_{1}^{2}+a_{3}^{2}}=1-\frac{\varepsilon^{2}}{4\Omega_{R}}(\tau_{+}^{2}-\tau_{-}^{2})\pm\frac{\varepsilon}{2}\sqrt{\left[\left(1+\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{+}^{2}+\left(1-\frac{\Omega^{2}}{\Omega_{R}}\right)\tau_{-}^{2}\right]^{2}+\frac{\Omega^{2}}{\Omega_{R}^{2}}(c_{+}\tau_{+}-c_{-}\tau_{-})^{2}},\] leading to the results quoted in Eq. 13 of the main text. There, for brevity, we gave the expression of \(\lambda_{\pm}^{2}\) using the relation \[\lambda_{\pm}^{2}=a_{0}\pm|a_{1}|\sqrt{1+\frac{1}{\tan^{2}(2\phi)}}=a_{0}\pm\frac{|a_{1}|}{|\sin(2\phi)|}.
\tag{B12}\]

## Appendix C Time evolved Fock states

In this appendix, we use the expression of \(A_{\pm}(t)\) in terms of the complex coordinates \(\xi,\bar{\xi}\) (Eqs. 21 and 22) to derive the explicit real-space representation of the time evolved Fock states algebraically defined in Eq. 26. For completeness and readability, we first provide a more detailed description of the new holomorphic system of coordinates introduced in Eq. 20. We have first used the complex representation \[\begin{cases}z&=x+iy\\ \bar{z}&=x-iy\\ x&=\frac{1}{2}(z+\bar{z})\\ y&=\frac{1}{2i}(z-\bar{z})\end{cases}\,\ \ \begin{cases}p_{z}&=\frac{1}{2}(p_{x}-ip_{y})\\ p_{\bar{z}}&=\frac{1}{2}(p_{x}+ip_{y})\\ p_{x}&=p_{z}+p_{\bar{z}}\\ p_{y}&=i(p_{z}-p_{\bar{z}})\end{cases}\,\] well-suited to describe right and left moving particles in a magnetic field. Then, we obtain elliptic cyclotron orbits, which motivates the definitions of the stretched complex coordinates \[\begin{cases}\xi&=\alpha(f_{+}z+g_{+}\bar{z})\\ \bar{\xi}&=\alpha(g_{+}^{*}z+f_{+}^{*}\bar{z})\end{cases}\,,\quad\begin{cases}\alpha p_{\xi}&=f_{+}^{*}p_{z}-g_{+}^{*}p_{\bar{z}}\\ \alpha p_{\bar{\xi}}&=f_{+}p_{\bar{z}}-g_{+}p_{z}\end{cases}\,,\] made to enable the simple expression \(A=\frac{1}{2}(\xi+2ip_{\bar{\xi}})\) and to best match the geometry of the cyclotron orbit at any point in time. To get the explicit form of the Fock states, we first recall the formula obtained for the vacuum state \[\phi_{t}(\xi,\bar{\xi})=\langle x,y|0,0\rangle_{t}=\frac{1}{\sqrt{\pi|u|}}\exp\left[\frac{\delta\xi^{2}-|\xi|^{2}}{2}\right]. \tag{C1}\] Using the identity \[A_{-}(t)\phi_{t}(\xi,\bar{\xi})=\phi_{t}(\xi,\bar{\xi})\left(\frac{\xi}{u}-u\partial_{\bar{\xi}}-v\partial_{\xi}\right), \tag{C2}\] multiple times offers the following expression for the guiding center Fock states in the lowest Landau level \[\langle x,y|0,n_{-}\rangle_{t}=\frac{\phi_{t}(\xi,\bar{\xi})}{\sqrt{n_{-}!}}\left(\frac{\xi}{u}-v\partial_{\xi}\right)^{n_{-}}\mathbbm{1}\,, \tag{C3}\] with \(\mathbbm{1}\) the function everywhere equal to one. We recognize the definition of Hermite's polynomials \(\{H_{n}\}_{n}\) and conclude \[\langle x,y|0,n_{-}\rangle_{t}=\frac{\phi_{t}(\xi,\bar{\xi})}{\sqrt{n_{-}!}}\left(\frac{\delta}{2}\right)^{\frac{n_{-}}{2}}H_{n_{-}}\left(\frac{\xi}{\sqrt{2uv}}\right)\,. \tag{C4}\] Similarly, we can combine the identities \[A_{+}^{\dagger}\phi_{t}(\xi,\bar{\xi})=\phi_{t}(\xi,\bar{\xi})[(\bar{\xi}-\delta\xi)-\partial_{\bar{\xi}}], \tag{C5}\] \[[(\bar{\xi}-\delta\xi)-\partial_{\xi}]H_{k}(a\xi)=H_{k}(a\xi)[(\bar{\xi}-\delta\xi)-\partial_{\xi}]-2akH_{k-1}(a\xi)\] to find, after careful calculation, the expression \[\langle x,y|n_{+},n_{-}\rangle_{t}=\frac{i^{n_{+}}\phi_{t}(\xi,\bar{\xi})}{\sqrt{n_{+}!n_{-}!}}\left(\frac{\delta}{2}\right)^{\frac{n_{-}+n_{+}}{2}}\sum\limits_{k=0}^{\min(n_{+},n_{-})}k!\binom{n_{+}}{k}\binom{n_{-}}{k}\left(\frac{2i}{v}\right)^{k}H_{n_{-}-k}\left(\frac{\xi}{\sqrt{2uv}}\right)H_{n_{+}-k}\left(\frac{\bar{\xi}-\delta\xi}{i\sqrt{2\delta}}\right)\,, \tag{C6}\] which generalizes the \(n_{+}=0\) solution of Refs. [73; 74].
2307.14806
Prevalence and Associated Factors of Human Papillomavirus Infection among Iraqi Women
Human papillomavirus (HPV) is a significant public health concern, as it is a leading cause of cervical cancer in women. However, data on the prevalence of HPV infection among Iraqi women is scarce. This study aimed to estimate the prevalence of HPV infection and its associated factors among Iraqi women aged 15-50 attending health centers. In this cross-sectional study, 362 female participants aged 15-50 were recruited from health centers in Iraq. Serological tests were used to screen for HPV infection. Sociodemographic information, obstetric history, and contraceptive use were collected. Pap smears were performed to assess cervical changes related to HPV infection. Of the 362 participants, 65 (17.96%) tested positive for HPV. The majority of HPV-positive women were aged 30-35 years, housewives, and belonged to lower social classes. Among HPV-positive women, 30% had abnormal Pap smears, with 55% diagnosed with cervical intraepithelial neoplasia grade 1 (CIN1), 25% with CIN2, and 15% with CIN3. Biopsy confirmed the diagnosis in 5% of cases. No significant association was found between HPV infection and contraceptive use. Most HPV-positive women were multiparous. This study reveals a considerable prevalence of HPV infection among Iraqi women attending health centers, particularly in the age group of 30-35 years and among housewives. These findings highlight the need for targeted public health interventions to increase HPV awareness, promote regular screening, and improve access to healthcare services for women, especially those from lower social classes. Further research is warranted to better understand the factors contributing to HPV transmission in Iraq and to develop effective prevention strategies.
Maitham G. Yousif, Fadhil G. Al-Amran, Alaa M. Sadeq, Nasser Ghaly Yousif
2023-07-27T12:25:42Z
http://arxiv.org/abs/2307.14806v1
# Prevalence and Associated Factors of Human Papillomavirus Infection among Iraqi Women

###### Abstract

Human papillomavirus (HPV) is a significant public health concern, as it is a leading cause of cervical cancer in women. However, data on the prevalence of HPV infection among Iraqi women is scarce. This study aimed to estimate the prevalence of HPV infection and its associated factors among Iraqi women aged 15-50 attending health centers. In this cross-sectional study, 362 female participants aged 15-50 were recruited from health centers in Iraq. Serological tests were used to screen for HPV infection. Sociodemographic information, obstetric history, and contraceptive use were collected. Pap smears were performed to assess cervical changes related to HPV infection. Of the 362 participants, 65 (17.96%) tested positive for HPV. The majority of HPV-positive women were aged 30-35 years, housewives, and belonged to lower social classes. Among HPV-positive women, 30% had abnormal Pap smears, with 55% diagnosed with cervical intraepithelial neoplasia grade 1 (CIN1), 25% with CIN2, and 15% with CIN3. Biopsy confirmed the diagnosis in 5% of cases. No significant association was found between HPV infection and contraceptive use. Most HPV-positive women were multiparous. This study reveals a considerable prevalence of HPV infection among Iraqi women attending health centers, particularly in the age group of 30-35 years and among housewives. These findings highlight the need for targeted public health interventions to increase HPV awareness, promote regular screening, and improve access to healthcare services for women, especially those from lower social classes. Further research is warranted to better understand the factors contributing to HPV transmission in Iraq and to develop effective prevention strategies.

**Keywords:** Human papillomavirus (HPV), prevalence, associated factors, Iraqi women, cervical cancer, serological tests.

*Corresponding author: Maitham G. Yousif

## Introduction

Human papillomavirus (HPV) is a group of more than 200 related viruses that infect human epithelial tissues, including the skin and mucous membranes [1]. Persistent infection with high-risk HPV types, such as HPV 16 and 18, is a significant risk factor for the development of cervical cancer, which ranks as the fourth most common cancer in women worldwide [2]. Additionally, there are potential associations with other viral and bacterial infections in pregnant women, as well as with comorbidities such as heart diseases. Several studies have shed light on the impact of viral infections on hematological changes, as seen in a longitudinal study by Yousif et al., where they investigated hematological changes among COVID-19 patients [2]. In another study by Had et al., the role of NF-κB and oxidative pathways in atherosclerosis was explored, highlighting potential implications for cardiovascular health [3]. Moreover, Hasan et al. reported on extended-spectrum beta-lactamase-producing Klebsiella pneumoniae in patients with urinary tract infections, which may have implications for overall health, including pregnant women [4]. In the context of cervical cancer, Yousif et al. studied the association between shorter survival and high expression of Notch-1, providing insights into potential biomarkers for cervical cancer prognosis [9]. Furthermore, Sadio et al.
investigated the correlation between high-sensitivity C-reactive protein levels and preeclampsia, a condition that pregnant women may face [10]. While the focus is often on viral infections, bacterial infections like those studied by Yousif et al., who characterized Staphylococcus aureus isolated from breast abscesses, can also have health implications for women [11]. The effect of doxorubicin-induced cardiotoxicity in rats, explored by Mohammad et al., may offer insights into the cardiac health of pregnant women facing cancer treatments [12]. As we delve into the complexity of infectious diseases, including the potential role of cytomegalovirus in breast cancer risk, as discussed by Yousif, it becomes evident that understanding the connections between infections and various health conditions is vital [8]. The incidence of cervical cancer varies significantly across countries, with higher rates reported in low- and middle-income countries due to limited access to screening and vaccination programs [13]. In Iraq, limited data are available on the prevalence of HPV infection and its association with cervical cancer among women. Previous studies have reported varying HPV prevalence rates, ranging from 5.6% to 18.5% among women with normal cytology and higher rates among those with cervical abnormalities [14, 15]. Sociodemographic factors, such as age, marital status, education level, and occupation, have been shown to influence the risk of HPV infection in women [16]. Understanding the distribution of HPV types, risk factors for infection, and the association with cervical cytological changes is essential for designing effective prevention strategies, including HPV vaccination and cervical cancer screening programs in Iraq. This study aimed to investigate the prevalence of HPV infection and its association with sociodemographic factors, contraceptive use, and cervical cytological changes among women aged 15-50 years attending health centers in Iraq. To the best of our knowledge, this is the first study in Iraq to assess HPV infection using both molecular and serological diagnostic methods, providing a comprehensive overview of the epidemiology of HPV in the country [17]. Our findings may contribute to the development of targeted public health interventions and inform future research on the burden of HPV infection and cervical cancer in the region.

## Materials and Methods

### Study Design and Population

A cross-sectional study was conducted to investigate the prevalence and associated factors of HPV infection among Iraqi women aged 15-50 attending health centers. The study population consisted of women seeking routine healthcare services or gynecological consultations at selected health centers between January and December 2022. A total of 362 women were included in the study using a convenience sampling technique. Informed consent was obtained from all participants before enrolment in the study. Ethical approval was granted by the local institutional review board.

### Data Collection

A structured questionnaire was administered to collect sociodemographic information, obstetric history, and data on contraceptive use. The questionnaire included questions on age, marital status, education level, occupation, social class, parity, and type of contraception used.

### Sample Collection and Processing

Cervical swabs were collected from each participant by a trained healthcare professional using a sterile cytobrush during a speculum-assisted pelvic examination.
The swabs were immediately placed in a transport medium and stored at 4 °C until further processing.

### Molecular Diagnosis

DNA extraction was performed from the cervical swabs using a commercial DNA extraction kit, following the manufacturer's instructions. The presence of HPV DNA was determined by polymerase chain reaction (PCR) using consensus primers targeting the L1 region of the HPV genome. The PCR products were analyzed by agarose gel electrophoresis and visualized under ultraviolet light.

### Serological Diagnosis

Blood samples were collected from each participant by venipuncture and were centrifuged to separate the serum. The serum samples were stored at -20 °C until further analysis. Enzyme-linked immunosorbent assay (ELISA) kits were used to detect the presence of HPV-specific IgG antibodies, according to the manufacturer's instructions. The optical density values were measured using a microplate reader, and the results were interpreted based on the provided cut-off values.

### Pap Smear Examination

Pap smears were performed on all participants to evaluate the cervical cytological changes associated with HPV infection. Slides were prepared by spreading the cervical cells onto glass slides, which were then fixed and stained using the Papanicolaou technique. The slides were examined under a light microscope by an experienced cytopathologist who was blinded to the HPV status of the participants. The results were reported according to the Bethesda System for Reporting Cervical Cytology.

### Statistical Analysis

Data were entered and analyzed using the Statistical Package for Social Sciences (SPSS) version 26. Descriptive statistics were computed for sociodemographic characteristics, HPV prevalence, and Pap smear results. The chi-square test was used to assess the association between HPV infection and categorical variables. A p-value of less than 0.05 was considered statistically significant.

## Results

Table 1 presents the sociodemographic characteristics of the study participants. A total of 362 women aged 15-50 years were included in the study, among which 65 (17.96%) tested positive for HPV. The highest prevalence of HPV infection was observed among women aged 30-34 years (22.45%). The majority of the HPV-positive women were housewives (59/65; 90.77%) and belonged to the low social class (52/65; 80%). A chi-square test was performed, revealing a significant association between age and HPV infection (χ² = 17.63, df = 6, p < 0.01).

Figure 1: Prevalence of different HPV types among HPV-positive women (N=65)

Cervical cytological changes among HPV-positive women (CIN: cervical intraepithelial neoplasia):

| Cytological change | Number | Percentage |
|---|---|---|
| Normal | 45 | 69.23% |
| CIN1 | 18 | 27.69% |
| CIN2 | 8 | 12.31% |
| CIN3 | 4 | 6.15% |

Among the HPV-positive women, 18 (27.69%) were diagnosed with CIN1, 8 (12.31%) with CIN2, and 4 (6.15%) with CIN3. The remaining 69.23% (n=45) of the HPV-positive women had normal cervical cytology. A chi-square test indicated a significant association between HPV infection and cytological changes (χ² = 28.36, df = 3, p < 0.001).
Table 4: Association between HPV infection and contraceptive use. There was no significant association between the type of contraceptive method used and HPV infection (p = 0.76).

| Contraceptive method | HPV Positive | HPV Negative |
|---|---|---|
| None | 15 | 67 |
| Barrier | 22 | 120 |
| Hormonal | 18 | 85 |
| Intrauterine device | 10 | 63 |
| Total | 65 | 297 |

Table 5: Association between HPV infection and parity. No significant association was observed between HPV infection and parity (p = 0.68).

| Parity | HPV Positive | HPV Negative |
|---|---|---|
| Nulliparous | 5 | 38 |
| 1-2 | 20 | 105 |
| 3-4 | 35 | 130 |
| 5 or more | 5 | 24 |
| Total | 65 | 297 |

Figure 2: Histopathological findings in HPV-positive women with abnormal cytology (N=20)

Among the 20 women with abnormal cytology, 11 (55%) were diagnosed with CIN1, 5 (25%) with CIN2, and 3 (15%) with CIN3. In our study, 30% of the HPV-positive women exhibited cervical cytological changes, with 55% having CIN1, 25% having CIN2, and 15% having CIN3. These findings highlight the importance of regular cervical cancer screening for the early detection and management of HPV-related cervical lesions [24]. It is worth noting that 5% of the cases were confirmed by biopsy, further emphasizing the role of histopathological examination in the definitive diagnosis of cervical abnormalities [25]. We did not find a significant association between HPV infection and contraceptive use in our study population. This is in contrast to some previous studies that reported a higher risk of HPV infection among users of hormonal contraceptives [26, 27]. The discrepancy in findings may be attributed to differences in study design, sample size, or population characteristics. Further research is needed to elucidate the relationship between contraceptive use and HPV infection in various settings. Our study showed that most of the HPV-positive women were multiparous, which is consistent with the literature suggesting an increased risk of HPV infection and cervical cancer among women with higher parity [28, 29]. The underlying mechanisms for this association are not fully understood but may involve hormonal and immunological factors, as well as cervical trauma during childbirth [30, 31]. In conclusion, our findings contribute to the understanding of HPV prevalence and its associated factors among women in Iraq. The high prevalence of HPV infection, particularly among women of lower socioeconomic status, highlights the need for targeted public health interventions, including HPV vaccination and cervical cancer screening programs. Further research is needed to investigate the long-term outcomes of HPV infection and cervical abnormalities in the Iraqi population and to evaluate the effectiveness of preventive measures.
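For reference, the chi-square tests described in the Statistical Analysis section can be reproduced from the published contingency tables in a few lines of SciPy. The sketch below uses the parity counts of Table 5; since the exact SPSS options used by the authors are not stated, the printed value illustrates the method rather than re-deriving the reported p-value.

```python
import numpy as np
from scipy.stats import chi2_contingency

# observed counts from Table 5: rows = parity group, columns = (HPV+, HPV-)
observed = np.array([[ 5,  38],    # nulliparous
                     [20, 105],    # 1-2
                     [35, 130],    # 3-4
                     [ 5,  24]])   # 5 or more

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```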
2306.12186
Cloud Behaviour on Tidally Locked Rocky Planets from Global High-resolution Modeling
Determining the behaviour of convection and clouds is one of the biggest challenges in our understanding of exoplanetary climates. Given the lack of in situ observations, one of the most preferable approaches is to use cloud-resolving or cloud-permitting models (CPM). Here we present CPM simulations in a quasi-global domain with high spatial resolution (4$\times$4 km grid) and explicit convection to study the cloud regime of 1 to 1 tidally locked rocky planets orbiting around low-mass stars. We show that the substellar region is covered by deep convective clouds and cloud albedo increases with increasing stellar flux. The CPM produces relatively less cloud liquid water concentration, smaller cloud coverage, lower cloud albedo, and deeper H2O spectral features than previous general circulation model (GCM) simulations employing empirical convection and cloud parameterizations. Furthermore, cloud streets--long bands of low-level clouds oriented nearly parallel to the direction of the mean boundary-layer winds--appear in the CPM and substantially affect energy balance and surface precipitation at a local level.
Jun Yang, Yixiao Zhang, Zuntao Fu, Mingyu Yan, Xinyi Song, Mengyu Wei, Jiachen Liu, Feng Ding, Zhihong Tan
2023-06-21T11:33:06Z
http://arxiv.org/abs/2306.12186v1
# Cloud Behaviour on Tidally Locked Rocky Planets from Global High-resolution Modeling

###### Abstract

Determining the behaviour of convection and clouds is one of the biggest challenges in our understanding of exoplanetary climates. Given the lack of in situ observations, one of the most preferable approaches is to use cloud-resolving or cloud-permitting models (CPM). Here we present CPM simulations in a quasi-global domain with high spatial resolution (4\(\times\)4 km grid) and explicit convection to study the cloud regime of 1:1 tidally locked rocky planets orbiting around low-mass stars. We show that the substellar region is covered by deep convective clouds and cloud albedo increases with increasing stellar flux. The CPM produces relatively less cloud liquid water concentration, smaller cloud coverage, lower cloud albedo, and deeper H\({}_{2}\)O spectral features than previous general circulation model (GCM) simulations employing empirical convection and cloud parameterizations. Furthermore, cloud streets--long bands of low-level clouds oriented nearly parallel to the direction of the mean boundary-layer winds--appear in the CPM and substantially affect energy balance and surface precipitation at a local level.
## Introduction

Clouds are critical for planetary climate and habitability since they can absorb and reflect stellar radiation and meanwhile absorb and re-emit thermal infrared radiation[1]. Clouds are also critical for the observational characterization of exoplanets because they can affect the amplitude of thermal phase curves and can mute the transmission and emission spectral features of atmospheric species[2, 3, 4, 5]. The strength of the climatic effects and the degree of the spectral muting depend on the cloud composition, coverage, thickness, altitude, and microphysical properties. Therefore, knowing which types of clouds can form and what characteristics they exhibit is important for understanding planetary habitability and the detectability of atmospheric species such as H\({}_{2}\)O. In this study, for the first time, a global-scale CPM is employed and modified to simulate the convection and clouds and their climatic effects on tidally locked rocky planets orbiting around low-mass stars. Tidally locked rocky planets are the primary targets for finding potentially habitable planets beyond the solar system, due to their frequent transits and large planet-to-star size ratios. Previous studies using global GCMs showed that there are mainly two types of clouds on planets of this kind: deep convective clouds over the substellar region and low-level clouds on the permanent nightside[6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. However, the grid sizes of GCMs are hundreds of kilometers, so small-scale processes such as convection and clouds on scales of meters to kilometers are not resolved, and their effects on radiation, momentum, moisture, and energy need to be parameterized. The parameterization schemes involve many empirical equations and parameters based on Earth, raising the question of their applicability to exoplanetary environments that are quite different from Earth. The System for Atmospheric Modeling (SAM)[17] in a quasi-global domain is used in this study (see Methods and Supplementary Tables 1-4 and Supplementary Figs. 1-7). In the model's dynamical core, the hydrostatic approximation is not made and the full vertical momentum equation is solved. The resolution is 4 km in the \(x\) direction by 4 km in the \(y\) direction (or 2 km by 2 km) with 48 vertical levels, and the model time step is 10 or 20 s. Two rocky planets, TRAPPIST-1e and K2-72e, are simulated, and both are assumed to be in synchronously rotating orbits.
The rotation period of TRAPPIST-1e is 6.1 Earth days, so it is either close to or exactly within the circulation regime of rapidly rotating planets [18, 19, 20, 21, 22].
Compared with the CPM, the two GCMs produce more low-level clouds over the nightside and at the high latitudes of the dayside. In CAM3 and ExoCAM, clouds are not resolved, and cloud coverages are empirically parameterized based on relative humidity, atmospheric stratification, and large-scale updrafts[20]. Because water vapor is trapped in the nightside inversion layer, near-surface relative humidity is high there (Supplementary Fig. 15), so the parameterized cloud coverages are large, in particular for low-level clouds (see also Fig. 1(C) in ref. 6). This bias also occurred when the models were used to simulate Earth's Arctic clouds[22]. Tests increasing the horizontal resolution from 4 to 2 km and raising the model top from 27 to 42 or 65 km in the quasi-global SAM experiments do not change the main conclusions (Supplementary Figs. 2 & 3). Moreover, small-domain sensitivity tests show that as the horizontal grid spacing in SAM is refined from 6.4 to 1.6, 0.4, or 0.1 km, the change in the substellar clouds is small, whereas the nightside cloud water path increases significantly but remains less than that in the two GCMs (Fig. 3 & Supplementary Figs. 4 & 10). This might be because mixing within the boundary layer is the main source of the nightside clouds, and a finer resolution better captures the turbulent motion at small scales. The overall effect of varying the resolution, however, is very weak, within 1.0 W m\({}^{-2}\) in longwave cloud radiative effect on the nightside, because this layer of clouds is close to the surface and its cloud water path is very small. Overall, the climates simulated by the CPM and the GCMs are broadly similar, although quantitative differences exist and are not small in some regional respects. These experiments confirm that the substellar region of tidally locked habitable planets should be covered by optically thick clouds, especially when the planets have slow rotation rates. The similarity between the two types of models also increases our confidence in using GCMs for the climate simulations of planets in tidally locked orbits, at least for the atmospheric composition employed in this study. **Stabilizing Cloud Feedback:** Previous studies using GCMs have shown that when the stellar flux is increased, more convective clouds form over the substellar region and the planetary albedo becomes higher. The clouds act to weaken the original warming caused by the increased stellar flux and to move the inner edge of the habitable zone closer to the host stars, especially for slowly rotating planets[6, 8, 11, 12]. This was termed a 'stabilizing cloud feedback'[6]. In order to simulate this feedback more accurately, we employ SAM to re-investigate the phenomenon; the results are shown in Fig. 4 & Supplementary Fig. 16. Figure 4B shows clearly that the planetary albedo increases with stellar flux in all three models. The rate of increase of the planetary albedo in SAM is close to that in CAM3 but higher than that in ExoCAM. However, the absolute value of SAM's planetary albedo at each stellar flux is only \(\sim\)1/2 to 3/4 of those in CAM3 and ExoCAM. This means that the background planetary albedo in SAM is relatively low but its rate of increase is high. The lower albedo in SAM arises mainly because its simulated cloud liquid water mass is smaller than in CAM3 and ExoCAM, although the cloud ice water mass is similar or even larger (Fig. 4C & D).
Among the three models, ExoCAM is the warmest, although its planetary albedo is in the middle. This is mainly because the greenhouse effect and the shortwave absorption by water vapor in ExoCAM are significantly stronger than in CAM3[23, 24, 9], owing to the updated radiative transfer module employed in ExoCAM[21]. This pushes ExoCAM into a runaway greenhouse state at a stellar flux of \(\sim\)1750 W m\({}^{-2}\), whereas SAM and CAM3 remain in habitable states even when the stellar flux reaches 2000 W m\({}^{-2}\). Note that the radiative transfer module used in the SAM experiments is the same as in CAM3. Both the planetary albedo and the stabilizing cloud feedback simulated in SAM are stronger than in the previous simulations using a limited-area cloud-resolving model by Lefevre et al. (2021)[25]. In a small domain, 250 km by 250 km at the substellar region, Lefevre et al. (2021)[25] found that the planetary albedo is about 6%-10% and that its rate of increase is about 3% (absolute value) per 400 W m\({}^{-2}\) increase in stellar flux. Here, we find that the planetary albedo is about 20% and that its rate of increase is about 8% (absolute value) per 400 W m\({}^{-2}\) increase in stellar flux. One possible reason is that the simulated domain in Lefevre et al. (2021)[25] was too small to capture the strong large-scale near-surface convergence towards the substellar region, which is critical for the formation of large-area dense convective clouds on the dayside. **Cloud streets:** Between the deep convective clouds over the substellar area and the west/east terminators, the global CPM simulates a special type of low-level cumulus cloud. Unlike the nightside low-level clouds, these clouds exhibit a distinctive structure: parallel bands of cloud separated by parallel bands of clear-sky air (Fig. 5). The direction of the cloud bands is nearly the same as that of the mean winds in the planetary boundary layer. These clouds are similar to those observed in Earth's convective boundary layer during cold-air outbreaks, in which cold, dry air flows rapidly towards relatively warm neighboring oceans or land[26, 27, 28]. This cloud distribution looks like straight streets, so the clouds are called 'cloud streets'. Cloud streets form through the coupling between the large-scale circulation and small-scale convection (Supplementary Fig. 17). When cool air is advected from the nightside to the much warmer dayside, the cool air is denser and starts to descend, while the warm air in the boundary layer is lighter and starts to ascend. Modified by vertical wind shear, wavelike structures develop in a direction nearly parallel to the mean wind[29, 30]. In the ascending regions, the air cools, moist condensation occurs, and clouds form; these are separated by descending regions of clear, relatively dry air. The convection, together with water vapor and clouds, is trapped in the boundary layer by a strong temperature inversion (Supplementary Fig. 17F), which is maintained by large-scale downwelling and adiabatic compression. Below the inversion, relative humidity is high, \(\sim\)70-100%, but above the inversion, relative humidity is lower than 30% (Supplementary Fig. 17C). The characteristics of the cloud streets simulated here are somewhat different from those on Earth.
From Fig. 5 and Supplementary Fig. 17, the spacing of the cloud bands ranges from \(\sim\)10 to 100 km, generally larger than that on Earth, \(\sim\)2 to 20 km[30]. The convection depths are similar, \(\sim\)2 km. This means that the aspect ratios (i.e., cloud-street wavelength divided by convection depth) are \(\sim\)5-50, greater than the terrestrial values of 2 to 20, for which the most frequent values are \(\sim\)3 to 4[30]. This may be because the vertical shear of the horizontal winds near the top of the boundary layer is stronger than on Earth (Supplementary Fig. 17E). The larger horizontal winds result from the dramatic day-night surface temperature contrast on the tidally locked planets, \(\sim\)30-70 K (Supplementary Figs. 8 & 16). Under stronger vertical wind shear, the absolute value of the Richardson number (\(R_{i}\), an index of instability) is smaller, so the maximum growth rate of an unstable perturbation occurs at a smaller wavenumber, i.e., at a greater wavelength[31]. The cloud streets have significant effects on the energy balance and surface precipitation, but only in local regions, as shown in Supplementary Fig. 18. In the updraft regions of the cloud streets, there is more cloud water and cloud-top temperatures are lower, so the planetary albedo is higher, the downward shortwave radiation reaching the surface is smaller, the outgoing longwave radiation to space is smaller, and the surface precipitation is larger than in the downdraft regions between the cloud streets. The clear-sky regions between updrafts allow the atmosphere to emit thermal radiation to space more easily than if the whole region were covered by uniform clouds. In this view, the clear-sky downdraft regions act like a number of small 'radiator fins'. This is similar to the effect of the large radiator fin of the dry subtropics on Earth, which acts to stabilize the tropical climate[32]. In areal extent, the cloud-street region covers about 10% to 30% of the dayside (Fig. 1), indicating that the cloud streets can have significant local climatic effects, although they are less important than the substellar deep convective clouds. Cloud streets were not found in previous GCM simulations of tidally locked planets. In GCMs[6, 9], the coupling between the large-scale circulation and grid-scale convection was considered, but horizontal resolutions (hundreds of kilometers) were too coarse to resolve cloud streets, and the GCMs' convection processes were empirically parameterized. Patterns similar to cloud streets can be vaguely seen in previous high-resolution simulations, such as Fig. 1 of Zhang et al. (2017)[33] and Fig. 12 of Sergeev et al. (2020)[34], but neither study recognized these clouds as cloud streets, and their structure, underlying mechanism, and climatic effects were not analyzed. For more details on the convection and precipitation, please see Supplementary Figs. 19-21 and the Supporting Information. **Observational Characteristics:** The different concentrations and spatial patterns of the clouds simulated by the models can influence the predicted observational characteristics, as shown in Fig. 6. For TRAPPIST-1e, the phase curves obtained from the three models are similar in shape, with the dayside infrared emission higher than that of the nightside, generally following the distribution of surface temperature. The peak of each phase curve exhibits a westward shift relative to the substellar point (Fig. 6A).
This westward shift arises because equatorial superrotation transports clouds towards the east side of the substellar point, where the infrared emission from the cloud tops is small owing to the low cloud-top temperatures. The degree of the westward shift is \(\sim\)5\({}^{\rm o}\), 20\({}^{\rm o}\), and 40\({}^{\rm o}\) in CAM3, ExoCAM, and SAM, respectively. For K2-72e, all three models show a hump-like shape in the thermal phase curves, with a minimum value at an orbital phase angle between -20\({}^{\rm o}\) and 90\({}^{\rm o}\) and one maximum value on each side (Fig. 6B). ExoCAM shows a minimum at an orbital phase angle of about 0\({}^{\rm o}\) (viewing the dayside). This is because the liquid cloud water path in ExoCAM is much higher than in SAM and CAM3 (Supplementary Fig. 10F & Table 3): the clouds absorb more thermal emission from the surface, and the cloud-top temperature is relatively low. Figures 6C & E show the transmission spectra of TRAPPIST-1e for the morning terminator and the evening terminator, respectively. For the evening terminator, all three models exhibit very weak signals, less than 2-3 ppm (parts per million) at the 1.4 \(\mu\)m wavelength of the H\({}_{2}\)O molecular absorption feature (which does not overlap with CO\({}_{2}\) absorption). This is because dense clouds cover the evening terminator (Fig. 2; see also refs. 3 & 35). For the morning terminator, the atmospheric signal at 1.4 \(\mu\)m simulated in SAM, \(\sim\)6-10 ppm, is deeper than those in CAM3, ExoCAM, and the model LMDG used by Fauchez et al. (2019)[2], \(\sim\)2-3 ppm, because the morning terminator in SAM has fewer clouds (Fig. 2A). Figures 6D & F show the transmission spectra of K2-72e. Its transmission spectral features are shallower than those of TRAPPIST-1e, e.g., \(\sim\)3 versus \(\sim\)10 ppm at 1.4 \(\mu\)m for the morning terminator. This is due to two factors: 1) the star K2-72 is much larger than TRAPPIST-1, 0.33 versus 0.12 of the Sun's radius, and the transit depth is inversely proportional to the stellar projected area; and 2) the atmospheric scale height of K2-72e is \(\sim\)78% of that of TRAPPIST-1e, due to the larger gravity (1.29 versus 0.93 of Earth's gravity), although K2-72e is \(\sim\)20 K warmer than TRAPPIST-1e. Moreover, K2-72e is \(\sim\)217 light years away from Earth whereas TRAPPIST-1e is \(\sim\)40 light years away, which influences the signal-to-noise ratio (SNR) of observations. Therefore, detecting the H\({}_{2}\)O molecule on K2-72e would be harder than on TRAPPIST-1e.

## Discussion

In this study, for the first time, a quasi-global CPM is employed to simulate the clouds and climate of tidally locked habitable planets with Earth-like atmospheric compositions. These simulations show dense convective clouds over the substellar region, a stabilizing cloud feedback under increasing stellar flux, and a special type of cloud--cloud streets. In the time and global mean, the simulated surface temperature and planetary albedo in the CPM are broadly similar to those found in GCM experiments, but the GCM simulations may have overestimated the cloud liquid water amount and cloud coverage, especially on the nightside, and may also have overestimated the effect of clouds on the H\({}_{2}\)O features in transmission spectra. This work improves our understanding of convection and clouds and of the interactions among convection, clouds, and circulation on tidally locked rocky planets.
It also opens the door to global-scale cloud-resolving simulations for exoplanets. Our results may be used to benchmark and improve convection and cloud parameterizations in GCMs, an important subject for future exoplanet model development. Further work is required to use high-resolution models to examine the inner edge (higher insolation) and the outer edge (denser CO\({}_{2}\)) of the habitable zone for both locked and non-locked planets. This would have a number of implications for designing telescopes and for finding habitable exoplanets. Finally, we emphasize that even in high-resolution cloud simulations there are considerable uncertainties. One source is microscale processes, such as the ice aggregation rate, the evaporation of rain droplets, and the partitioning among cloud water, cloud ice, and rain[17, 37]. These processes must still be parameterized, and the parameterization scheme is not unique. Another source is model resolution. For faithful simulations of stratus and stratocumulus clouds, a minimum horizontal grid spacing of \(\mathcal{O}\left(100\right)\) m and a minimum vertical grid spacing of \(\mathcal{O}\left(10\right)\) m are required[38]. For large-domain simulations, these requirements are far beyond present computational capabilities.

### Methods

**Exoplanets TRAPPIST-1e and K2-72e**: Two confirmed terrestrial planets, TRAPPIST-1e[39] and K2-72e[40], are simulated in this study. The planetary parameters are listed in Supplementary Tables 1 & 2. TRAPPIST-1e is somewhat smaller than Earth (0.91 Earth radii) and its orbital period (= rotation period) is \(\sim\)6.1 Earth days; K2-72e is larger than Earth (1.29 Earth radii) and its orbital period (= rotation period) is \(\sim\)24.2 Earth days. Stellar spectra of \(\sim\)2500 and \(\sim\)3400 K are used for TRAPPIST-1e and K2-72e, respectively. The atmosphere is assumed to be Earth-like: 10\({}^{4}\) kg m\({}^{-2}\) of N\({}_{2}\) plus 355 ppmv CO\({}_{2}\) and variable H\({}_{2}\)O. There are no other greenhouse gases, aerosols, O\({}_{2}\), or O\({}_{3}\). The surface gravities of the two planets are 0.93 and 1.29 of Earth's value (9.81 m s\({}^{-2}\)), respectively. Due to the different gravities, the mean surface pressure is \(\sim\)0.93 bar for TRAPPIST-1e but \(\sim\)1.29 bar for K2-72e. The atmospheric circulation of TRAPPIST-1e is in or close to the regime of rapid rotation[19], while K2-72e is in the slow-rotation regime. Another interesting, slowly rotating planet is TOI-700d, which has a rotation period of \(\sim\)37.4 Earth days[41]. The effective temperatures of the host stars are similar, \(\sim\)3480 and \(\sim\)3360 K for TOI-700 and K2-72, respectively. We choose K2-72e rather than TOI-700d because the stellar flux received by K2-72e is greater, \(\sim\)1510 vs \(\sim\)1183 W m\({}^{-2}\), although TOI-700d is somewhat closer to Earth, \(\sim\)101.4 vs \(\sim\)217.1 light years away. The greater stellar flux can trigger more convective clouds[6, 9]; this makes it easier to compare the differences between SAM and the GCMs. **Cloud-permitting simulations:** The model we employ here is the System for Atmospheric Modeling (SAM)[17], version 6.11.6. The model is based on the anelastic dynamical equations with bulk microphysics and without cumulus parameterization. Prognostic quantities in the model are the three velocity components, the liquid/ice water static energy, non-precipitating water (water vapor, cloud water, and cloud ice), and precipitating water (rain, snow, and graupel).
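Schematically, the prognostic state carried at each grid cell can be summarized as follows; this sketch is for orientation only, and the field names are ours rather than SAM's internal variable names.

```python
from dataclasses import dataclass

@dataclass
class GridCellState:
    """Illustrative prognostic state of one grid cell (names are ours, not SAM's)."""
    u: float     # zonal velocity (m s^-1)
    v: float     # meridional velocity (m s^-1)
    w: float     # vertical velocity (m s^-1)
    h_li: float  # liquid/ice water static energy (J kg^-1)
    q_n: float   # non-precipitating water: vapor + cloud water + cloud ice (kg kg^-1)
    q_p: float   # precipitating water: rain + snow + graupel (kg kg^-1)
```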
Sub-grid momentum, moisture, and energy fluxes are evaluated using a Smagorinsky-type prognostic closure. Cloud microphysical processes on scales of microns to millimeters, such as evaporation and sublimation in the atmosphere and precipitation formation via collisions, must still be parameterized. In this study, the experiments use the computationally efficient single-moment microphysics scheme. The model uses a Cartesian geometry rather than a spherical geometry, because spherical geometry is not yet supported in SAM. The horizontal resolution is 4 km in latitude by 4 km in longitude, or 2 km by 2 km (Supplementary Table 2). The computational domain is global in extent, from 90\({}^{\rm o}\)S to 90\({}^{\rm o}\)N in latitude and from 0\({}^{\rm o}\) to 360\({}^{\rm o}\) in longitude, but in Cartesian geometry. We therefore call the experiments 'quasi-global cloud-permitting', rather than 'global cloud-resolving' over a sphere or 'near-global cloud-resolving' in a Cartesian geometry from 46\({}^{\rm o}\)S to 46\({}^{\rm o}\)N (as used in, e.g., Bretherton & Khairoutdinov (2015)[42]). Because of this, the atmospheric circulation simulated in the polar regions in this framework is not credible. However, to facilitate viewing of the results, the zonal and meridional coordinates of the figures shown in this paper are given in degrees of longitude and latitude, respectively. The nightside covers the longitudes smaller than 90\({}^{\rm o}\) or larger than 270\({}^{\rm o}\), where the stellar flux is zero. Due to the different planetary radii, one degree of longitude corresponds to \(\sim\)103 km for TRAPPIST-1e but \(\sim\)143 km for K2-72e, and the same holds for one degree of latitude. For the limitations of the Cartesian geometry, please see the discussion in the Supporting Information. A latitudinally dependent Coriolis parameter is used in the simulations, \(f=2\Omega\sin\varphi\), where \(\Omega\) is the planetary rotation rate and \(\varphi\) is the latitude; the Coriolis force (as well as the beta effect) is therefore included. For TRAPPIST-1e, the planetary rotation rate is 0.16 of modern Earth's, and for K2-72e it is 0.04 of modern Earth's. The Coriolis parameter is a key factor in these climate simulations[43]. **Two steps for each experiment**: There are two steps for each experiment. First, a quasi-global experiment with a 40-km resolution is run, initialized with no wind, a horizontally uniform surface temperature (300 K), and saturated humidity profiles. The surface is coupled to a slab ocean with a constant depth of 1.0 m. Each 40-km experiment is run for 210 Earth days, by which time it has reached an approximate statistical equilibrium state (Supplementary Fig. 1). This spin-up procedure greatly reduces the computation time required for the next step. Note that the ocean depth is 1 m in the 40-km experiments: because there is neither a seasonal nor a diurnal cycle on the synchronously rotating planets, the depth of the slab ocean has very little influence on the climate. In the sensitivity test of Yang et al. (2013)[6], changing the slab ocean depth from 1 m to 50 m affects the global-mean surface temperature by \(\sim\)1 K (see their online Table 1). Second, the instantaneous variables (including air temperature, winds, water vapor, clouds, etc.) of the final quasi-equilibrium fields of the 40-km experiment are linearly interpolated to a 4-km grid, also in a quasi-global domain. Then, the model is run for 30 or 50 Earth days.
In the 4-km experiments, the sea surface temperatures (SSTs) are fixed to the time-mean values of the last 30 Earth days of the 40-km experiment. The time step of the experiments is 10 or 20 s, but radiative fluxes and heating rates are updated every 15 min. The setup of these two steps is similar to that used in Bretherton & Khairoutdinov (2015)[42]. Due to computational resource limits, the 4-km experiments did not use a slab ocean or run for longer than 50 Earth days. The results are shown in Figs. 1-2, Supplementary Figs. 1-2, and online Supplementary Videos 1-3. We have added one 2-km experiment, also with fixed SSTs (from the 4-km experiment), and several regional experiments (described below) with higher resolutions. All these experiments showed that the main conclusions are likely robust, except that the nightside clouds depend strongly on the horizontal resolution (see Fig. 3 & Supplementary Fig. 4). In the experiments, the timescale of convection is less than a few hours, the timescale of the global atmospheric overturning circulation is about 30-100 Earth days, and the radiative timescale is about 100 Earth days. The model time required to reach equilibrium is mainly determined by the longest of these, the radiative timescale. So, the simulation length of 50 Earth days (at 4-km resolution) is not long enough to let the system reach perfect statistical equilibrium. However, owing to the antecedent 210-day run at 40-km resolution in the first step, the 4-km experiments require a shorter time to reach quasi-equilibrium. Bretherton & Khairoutdinov (2015)[42] stated that "The spin-up on the 4 km grid mainly just fills in small-scale variance unresolved by the 20 km (_40 km in this study_) grid, with some adjustments in statistics of cloud and humidity-related fields on larger scales." Therefore, the short run of 50 Earth days is roughly suitable for the aims of this study; of course, longer experiments would be better for examining the time-mean and transient characteristics of the system. Another timescale is related to the thermal inertia of the ocean and atmosphere. For a well-mixed slab ocean, the strength of its thermal inertia is determined by \(\rho_{o}C_{p}^{o}H\), where \(\rho_{o}\) is the seawater density, \(C_{p}^{o}\) is the specific heat capacity of seawater, and \(H\) is the slab ocean depth. For the atmosphere, the strength of its thermal inertia is approximately determined by \(\rho_{o}C_{p}^{o}H_{eq}\), where \(H_{eq}\) is the equivalent water mixed-layer depth of the atmosphere. The value of \(H_{eq}\) can be calculated as \(P_{s}/(g\rho_{o})\times C_{p}^{a}/C_{p}^{o}\), where \(P_{s}\) is the mean surface pressure, \(g\) is the planetary surface gravity, and \(C_{p}^{a}\) is the specific heat capacity of the atmosphere. For an Earth-like atmosphere, \(P_{s}\) is \(\sim\)1.0 bar, \(g\) is \(\sim\)10 m s\({}^{-2}\), and \(C_{p}^{a}/C_{p}^{o}\) is approximately 1004/4218 \(\approx\) 0.24. So, the equivalent water depth of the atmosphere is about 2.4 m. If the slab ocean depth is 50 m, the equivalent mixed-layer depth governing the thermal inertia of the atmosphere-ocean system is roughly 52.4 m. This means that in an experiment with a slab ocean depth of 50 m, the timescale for the system to reach equilibrium is about 21.8 (= 52.4/2.4) times that of an experiment with fixed SST. If the slab ocean depth is 1 m, the equivalent mixed-layer depth of the atmosphere-ocean system decreases to 3.4 m.
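For concreteness, the value of \(H_{eq}\) quoted above follows directly from its definition, using the numbers given in the text:

\[H_{eq}=\frac{P_{s}}{g\rho_{o}}\times\frac{C_{p}^{a}}{C_{p}^{o}}\approx\frac{10^{5}\ \mathrm{Pa}}{(10\ \mathrm{m\ s^{-2}})\times(10^{3}\ \mathrm{kg\ m^{-3}})}\times\frac{1004}{4218}\approx 10\ \mathrm{m}\times 0.24\approx 2.4\ \mathrm{m}.\]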
Thus, in an experiment with a slab ocean depth of 1 m, the timescale for the system to reach equilibrium is about 1.4 (= 3.4/2.4) times that of an experiment with fixed SST. If one further considers that the atmosphere and ocean do not respond perfectly synchronously to an external forcing, the two ratios (21.8 and 1.4) would likely be larger. These analyses suggest that the equilibration timescale with a very shallow slab ocean does not differ significantly from that of a fixed-SST experiment. Therefore, to improve the simulations in the future, a slab ocean should be employed instead of fixed SSTs, provided there are sufficient computational resources. **Vertical resolution and the top of the model:** The model has 48 vertical levels from the surface to about 27 km with variable grid spacing, which is suitable for resolving the troposphere. Boundary conditions are periodic in the zonal direction, with free-slip rigid walls at the northern and southern boundaries; there is no cross-pole mass or energy transport in SAM. A sponge layer in the upper 9 km of the model is included in order to minimize the reflection of gravity waves from the model top. To test the effects of varying the vertical resolution and model top, we performed several additional experiments, increasing the number of vertical levels from 48 to 72 and raising the model top from 27 to 42 or 65 km. The results are shown in Supplementary Fig. 3. Overall, these two parameters do not influence the conclusions of this study, although they do affect some details. **Small-domain high-resolution experiments**: We set up a series of small-domain simulations using several different grid sizes to further examine the robustness of our experiments. These simulations use the same version of SAM as the quasi-global simulations but run in rectangular domains with periodic boundary conditions on both the zonal and meridional sides, and they are forced by the large-scale temperature, wind, and moisture profiles from the quasi-global 4-km simulations at certain locations. We choose the substellar point (SP) and the antistellar point (AP) as the locations for the small-domain simulations. We performed four groups of simulations, two for TRAPPIST-1e and two for K2-72e. Each group consists of four simulations with horizontal grid sizes of 6.4, 1.6, 0.4, and 0.1 km. The number of grid points in the small domain (\(\mathrm{Nx}\times\mathrm{Ny}\times\mathrm{Nz}\)) is \(32\times 32\times 48\), the same for all the small-domain experiments. For all these simulations, the vertical grid is identical to that of the quasi-global simulations. For the SP experiments, the incident stellar radiation is uniform over the domain; the stellar flux and the incidence angle are the same as at the corresponding location in the quasi-global experiments. For the AP experiments, there is no incident stellar radiation. The Coriolis force is not considered, since both the SP and AP are located on the equator. To represent the large-scale forcing from the quasi-global simulation, we apply 1) nudging of the horizontal velocity, temperature, and specific humidity with a relaxation timescale of two hours, and 2) a large-scale vertical-velocity forcing from the reference profile.
The total large-scale forcing tendency can be expressed as

\[\left(\frac{\partial\lambda}{\partial t}\right)_{l.s.}=-\frac{\lambda-\lambda_{\mathrm{ref}}}{\tau}-w_{\mathrm{ref}}\frac{\partial\lambda}{\partial z},\]

where \(\lambda\) represents the liquid/ice static energy, the specific humidity, or the wind component in the x or y direction; \(\lambda_{\mathrm{ref}}\) is the corresponding reference profile; \(w_{\mathrm{ref}}\) is the reference vertical velocity; and \(\tau\) is equal to two hours. The reference profile is the time- and horizontally averaged profile from the quasi-global simulation over a square with a side length of 160 km centered on the SP or AP. We use the data between Days 230 and 239 for time averaging. Although it is possible to use reference profiles that vary with time, we use fixed time-mean profiles for simplicity and for noise reduction. The (uniform) surface temperature is also fixed to the mean value over the same region. This method is similar to that used in refs. 44 & 45. We run each simulation for 30 days, starting from the corresponding reference profiles. Figure S4 shows that the total cloud water content reaches equilibrium in no more than 10 Earth days in all the simulations. For the two SP simulations, the total cloud water path is insensitive to the grid size, with an overall relative error of approximately 10%. These values are consistent with our quasi-global simulations (see also Fig. 3). These results suggest that a cloud-permitting resolution of 4 km on the dayside can produce almost the same results as a resolution as fine as 0.1 km. For the AP simulations, a coarse resolution results in an underestimated cloud water path (Fig. 3 & Supplementary Fig. 4).
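For illustration, the forcing tendency defined above can be evaluated on a single column as in the following sketch; this is not taken from SAM's source code, and the function and variable names are our own.

```python
import numpy as np

def large_scale_forcing_tendency(lam, lam_ref, w_ref, z, tau=7200.0):
    """Evaluate (d lambda / dt)_l.s. = -(lam - lam_ref)/tau - w_ref * d(lam)/dz.

    lam, lam_ref : profiles of the nudged quantity on heights z (liquid/ice static
                   energy, specific humidity, or a horizontal wind component)
    w_ref        : reference large-scale vertical velocity profile (m s^-1)
    z            : level heights (m); tau is the two-hour relaxation timescale (s)
    """
    lam, lam_ref, w_ref = map(np.asarray, (lam, lam_ref, w_ref))
    nudging = -(lam - lam_ref) / tau   # relaxation toward the reference profile
    dlam_dz = np.gradient(lam, z)      # vertical derivative of the profile
    subsidence = -w_ref * dlam_dz      # advection by the reference vertical velocity
    return nudging + subsidence
```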
**Stabilizing cloud feedback experiments:** Besides the simulations of the two planets TRAPPIST-1e and K2-72e, a series of experiments was performed with the three models (SAM, CAM3, and ExoCAM) to examine the response of the tidally locked climate as the orbit moves closer and closer to the host star. Six experiments are performed for each model. The stellar temperature is 3300 K, and the planetary radius and gravity are the same as Earth's values. The atmospheric composition is 10\({}^{4}\) kg m\({}^{-2}\) of N\({}_{2}\) with variable water vapor; there is no CO\({}_{2}\), N\({}_{2}\)O, CH\({}_{4}\), O\({}_{2}\), or O\({}_{3}\). These experiments are useful for assessing the possible effect of cloud feedback on the location of the inner edge of the habitable zone. Note that the greenhouse gas concentrations (such as CO\({}_{2}\)) can influence the exact level of surface temperature but cannot affect the trend under increasing stellar flux. For example, Yang et al. (2013)[6] showed that increasing the CO\({}_{2}\) concentration from zero to 355 ppmv increases the global-mean surface temperature by 7.5 K (see the online version of their full Table 1). For SAM, a coarser resolution (than 4 km by 4 km) is used in this group of experiments, due to computational resource limitations. Again, there are two steps for each experiment: the first step uses a relatively low resolution of 40 km by 40 km, and the second step uses a higher resolution of 20 km by 20 km. The initial profiles of the second step are obtained from the quasi-equilibrium fields of the first step through linear interpolation. So, these experiments should be called cloud-permitting simulations rather than cloud-resolving simulations. In both steps, the atmosphere is coupled to a 1-m slab ocean. The time series of the results are shown in Supplementary Fig. 5. Each experiment was run for 350 Earth days in total; all the experiments reached quasi-equilibrium. For CAM3 and ExoCAM, the surface is coupled to a 50-m slab ocean everywhere; each experiment is run for \(\sim\)60 Earth years, and the last 5 years are used for analysis. The main results are shown in Fig. 4 and Supplementary Fig. 16. The behavior of convection and precipitation in extremely hot climates is discussed in the Supporting Information and shown in Supplementary Figs. 19 & 20. Note that the treatment of cloud particle sizes is the same in the three models, which makes the comparisons between them a little more straightforward. The effective droplet radius for liquid water clouds is specified to be 14 \(\upmu\)m over ocean. For ice water clouds, the effective droplet radius is a non-monotonic function of air temperature[20]. However, the partition between liquid and ice clouds differs between SAM and CAM3/ExoCAM; see the Supporting Information for a discussion. **Testing the method of quasi-global cloud-permitting simulation:** In recent years, the method described above has been successfully used in simulations of clouds, circulation, and their interactions on an aqua-Earth[42, 46]. We have also performed two experiments for a global-scale aqua-Earth. The domain is a zonally periodic, 20000-km-long, equator-centered channel with latitudinally varying surface temperatures, spanning from 60\({}^{\circ}\)S to 60\({}^{\circ}\)N (the first experiment) or from 90\({}^{\circ}\)S to 90\({}^{\circ}\)N (the second experiment). The horizontal resolution is 10.4 km in longitude by 13.9 km in latitude. The surface is covered by ocean everywhere. The Q\({}_{\rm OBS}\) surface temperature distribution used in the Aqua-Planet Experiment project[47] is specified in the simulations: it is zonally and hemispherically symmetric and decreases with latitude from 300 K at the equator to \(\sim\)273 K at 60\({}^{\circ}\)S and 60\({}^{\circ}\)N. In the regions between 60\({}^{\circ}\)S-90\({}^{\circ}\)S and 60\({}^{\circ}\)N-90\({}^{\circ}\)N of the second experiment, the surface temperature is a constant 273 K. There is no polar sea ice or snow in these two experiments. The surface gravity is the same as Earth's value, and the mean surface air pressure is \(\sim\)1.0 bar (N\({}_{2}\)) with a CO\({}_{2}\) concentration of 369 ppmv. The rotation period is one Earth day, and a latitude-dependent Coriolis parameter and a Cartesian geometry are used. The simulation results (Supplementary Figs. 6 & 7) and some discussion are given in the Supporting Information. Overall, the results suggest that the method used here is suitable, although not perfect. **Global GCM simulations:** To compare the results of SAM with other models, two AGCMs (CAM3 and ExoCAM) were employed to run the corresponding experiments for TRAPPIST-1e and K2-72e. CAM3, developed at the National Center for Atmospheric Research (USA)[20], solves the primitive equations on a rotating sphere. The radiative transfer module used in CAM3 is the same as that in SAM. Clouds, convection, condensation, precipitation, and boundary-layer mixing are parameterized in the model. The horizontal resolution is 3.75\({}^{\rm o}\) in latitude by 3.75\({}^{\rm o}\) in longitude, with 26 vertical levels from near the surface to \(\sim\)3 hPa.
The surface is coupled to a 50-m-deep slab ocean everywhere. Sensitivity tests with a slab ocean depth of 1.0 m show that the ocean depth does not influence the equilibrium surface temperatures; this is because there is neither a diurnal nor a seasonal cycle in the experiments. All other parameters, such as the planetary orbits, are the same as in the SAM experiments. ExoCAM is similar to CAM3, except that its radiative transfer module is more accurate and it can simulate moist and runaway greenhouse states, which are directly linked to the inner edge of the habitable zone[21]. For planets in the middle range of the habitable zone, ExoCAM behaves similarly to CAM3[24]. ExoCAM was run at a horizontal resolution of 4\({}^{\rm o}\) in latitude by 5\({}^{\rm o}\) in longitude. Each experiment was run for 60 Earth years, and the last 5 years are used for analysis. The results are shown in Figs. 2 & 4 and Supplementary Figs. 8-14. **Radiative transfer modules:** In SAM, the longwave and shortwave radiative heating rates are calculated interactively using the CAM radiation scheme[20]. The radiative transfer is roughly adequate as long as the surface temperature is below \(\sim\)320 K and the CO\({}_{2}\) concentration is below \(\sim\)10\({}^{5}\) ppmv, and there are moderate differences in H\({}_{2}\)O and CO\({}_{2}\) radiative forcing between CAM and line-by-line radiative transfer models[23, 48]. So, SAM can reliably simulate the climate of planets in the middle range of the habitable zone, but not that of planets near the inner or outer edge of the habitable zone. ExoCAM has an updated, more accurate radiative transfer scheme[21, 9]. The differences in the radiative transfer scheme between CAM3 (also used in SAM) and ExoRT (used in ExoCAM) were addressed in refs. 23, 24, & 9. ExoRT has a stronger greenhouse effect and larger shortwave absorption by water vapor, as shown in Figs. 3 & 7 of Yang et al. (2016)[23]. When the surface temperature is below 300 K, the differences between the two modules are small. At higher temperatures, the differences are large: in the longwave, the difference in outgoing longwave radiation can reach \(\sim\)20 W m\({}^{-2}\), and in the shortwave, the difference in the shortwave energy absorbed by water vapor can reach tens of W m\({}^{-2}\). If SAM's radiative transfer module were replaced with ExoRT, the surface would become much warmer, as suggested by the simulations of Kopparapu et al. (2017; see their Figure 2)[9]. In all the experiments, the surface is covered by ocean everywhere (an 'aqua-planet'). No sea ice is included, because a sea-ice module has not yet been incorporated into SAM. The local surface albedo is between \(\sim\)2% and \(\sim\)30%, depending on the solar zenith angle and the stellar spectrum. The solar zenith angle is a function of both latitude and longitude, but it has neither a seasonal nor a diurnal cycle, owing to the permanent day and night of the simulated planets. **Thermal phase curve and transmission spectra calculations:** A thermal phase curve is the temporal variation of the disk-integrated broadband infrared emission of the planet as a function of its orbital phase angle, as seen by a distant observer[49]. It is mainly determined by the surface/air temperature, atmospheric composition, clouds/hazes, and horizontal energy transport. A transmission spectrum is the apparent radius of the planet as a function of wavelength, measured when the stellar light travels through the planetary limbs[4].
It is mainly determined by the stellar spectrum, atmospheric composition, atmospheric scale height, and the clouds/hazes at the terminators (i.e., the longitudes around 90\({}^{\circ}\) and 270\({}^{\circ}\), as shown in Fig. 2). The Planetary Spectrum Generator (PSG) developed by NASA[50] is used in this study. First, profiles of air temperature, pressure, water vapor concentration, cloud mixing ratios, and cloud particle sizes at each latitude, obtained from the climate models, are used as input to PSG to calculate the transit spectrum. Then, the average over all latitudes is calculated for the east terminator and for the west terminator, following Song & Yang (2021)[35]. A transit spectrum of the full disk-integrated atmosphere (if required) can simply be taken as the average of the east-terminator and west-terminator spectra. The relative transit depth is approximately equal to \(2R_{p}\delta R/R_{s}^{2}\), where \(R_{p}\) is the planetary radius of the solid surface, \(\delta R\) is the transit atmospheric thickness in altitude, and \(R_{s}\) is the radius of the host star. For TRAPPIST-1 and K2-72, the stellar radius is \(0.84\times 10^{8}\) m and \(2.30\times 10^{8}\) m, respectively. For each planet, we calculate the spectrum at 30 instantaneous moments and then average them. For the transmission spectra, wavelengths between 0.6 and 5.0 \(\mu\)m are calculated at a resolving power of 300. These wavelengths are the best choice for the atmospheric characterization of terrestrial planets, owing to the relatively small expected instrumental noise[36]. Only the range between 0.6 and 1.7 \(\mu\)m is shown in Fig. 6, because this range is the best for detecting the H\({}_{2}\)O molecule in the atmosphere; the 1.4 \(\mu\)m feature does not overlap with a CO\({}_{2}\) feature. In addition, the wavelengths around 6 \(\mu\)m are also appropriate for detecting H\({}_{2}\)O[5]. There are some overlaps between CO\({}_{2}\) and H\({}_{2}\)O at \(\sim\)2.0 and 2.7 \(\mu\)m, and between CO\({}_{2}\) and N\({}_{2}\)-N\({}_{2}\) at 4.3 \(\mu\)m. Moreover, previous studies have already clearly shown that detecting CO\({}_{2}\) on tidally locked habitable planets using _JWST_ is possible[2, 3, 4, 36], so in this study we focus on the detectability of H\({}_{2}\)O only. For the temporal variability of the transmission spectra, please see refs. 3 & 36. By default, we did not extrapolate the model top to a lower pressure in the calculations. To assess the effect of doing so, we performed one test extrapolating the model top to 0.1 Pa, using the 'Intermediate' method proposed in Suissa et al. (2020)[5]. We find that the effect of extrapolating the model top is small, within 1 ppm. This result is consistent with the finding of Suissa et al. (2020)[5]: for a non-runaway planet, additional layers will not have a great effect, as long as the original model top is higher than the cloud deck.
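As a back-of-the-envelope illustration of the transit-depth scaling \(2R_{p}\delta R/R_{s}^{2}\) given above (not the full PSG calculation), the following sketch estimates the depth contributed by a single atmospheric annulus; the 8-km annulus thickness is an assumed value for illustration only.

```python
R_EARTH = 6.371e6  # Earth radius (m)

def transit_depth_ppm(rp_in_earth_radii, rs_m, dR_m):
    """Relative transit depth 2*Rp*dR/Rs**2 of an atmospheric annulus, in ppm."""
    rp = rp_in_earth_radii * R_EARTH
    return 2.0 * rp * dR_m / rs_m**2 * 1e6

# TRAPPIST-1e (0.91 Earth radii, R_s = 0.84e8 m) and K2-72e (1.29 Earth radii,
# R_s = 2.30e8 m), each with an assumed 8-km-thick annulus:
print(transit_depth_ppm(0.91, 0.84e8, 8.0e3))  # ~13 ppm
print(transit_depth_ppm(1.29, 2.30e8, 8.0e3))  # ~2.5 ppm
```

The larger stellar disk and smaller scale height of K2-72e thus yield a several-times-shallower signal, consistent with the comparison in the text.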
**Computation hours:** All the experiments were performed on the Tianhe-2 supercomputer system at the National Supercomputer Center of China. For the SAM simulations of TRAPPIST-1e and K2-72e, the numbers of horizontal grid points are 4.3\(\times\)10\({}^{7}\) and 8.3\(\times\)10\({}^{7}\), respectively. These values are about 3-4 orders of magnitude larger than those in the global GCMs. Due to the high resolution and the short time step, about 5 \(\times\) 10\({}^{6}\) core-hours were spent on this project, about 2-3 orders of magnitude more than required for GCM experiments. Due to the absence of seasonal and diurnal cycles for 1:1 tidally locked planets, and due to the shallow ocean or fixed surface temperatures employed, the system does not require a very long time to reach quasi-equilibrium. If an ocean depth of 50 m or 500-5000 m were used, the required integration time would be tens to thousands of Earth years or more, far beyond the resources of commonly used modern supercomputers. **Data Availability:** All model output data used in this paper can be found in the public storage: [https://doi.org/10.5281/zenodo.7226615](https://doi.org/10.5281/zenodo.7226615) **Code Availability:** SAM can be downloaded from [http://rossby.msrc.sunysb.edu/~marat/SAM.html](http://rossby.msrc.sunysb.edu/~marat/SAM.html), ExoCAM can be downloaded from [https://github.com/storyofthewolf/ExoCAM](https://github.com/storyofthewolf/ExoCAM), CAM3 can be downloaded from [https://www.cesm.ucar.edu/models/atm-cam/](https://www.cesm.ucar.edu/models/atm-cam/), and the transmission spectra module PSG can be found at [https://psg.gsfc.nasa.gov/index.php](https://psg.gsfc.nasa.gov/index.php). Code modifications for SAM and CAM3 can be found in the public storage of Harvard Dataverse: [https://doi.org/10.7910/DVN/EM1NPX](https://doi.org/10.7910/DVN/EM1NPX) **Acknowledgments** We thank the two referees for their time and effort in reviewing this manuscript. We thank Marat F. Khairoutdinov for the release of the model SAM, Eric T. Wolf for the release of the model ExoCAM, and Geronimo Villanueva for the release of the tool PSG. We are grateful for helpful discussions with Daniel D.B. Koll, Bolei Yang, Huanzhou Yang, Dorian S. Abbot, Nadir Jeevanjee, Cheng Li, and Shizuo Fu. We thank Ji Nie for his help in installing the model SAM on Tianhe-2. ZF was supported by the National Natural Science Foundation of China (NSFC) under grant 42175065, and JY was supported by NSFC under grants 42075046, 41888101, and 42161144011. In total, about 5 \(\times\) 10\({}^{6}\) core-hours were used in the experiments. This corresponds to a CO\({}_{2}\) emission of \(\sim\)8000 kg, assuming a power of 3.7 W per core and a carbon emission intensity of \(\sim\)0.7 kg per kWh. **Author Contributions:** JY led this project; JY, ZF, and YZ designed the experiments; YZ modified the model SAM and performed most of the SAM experiments; MY did the six quasi-global cloud feedback experiments using SAM; JY did the CAM3 experiments; YZ did the ExoCAM experiments; XS calculated the observational characteristics; JL did the episodic deluge analyses; YZ, MY, XS, and MW plotted the figures; all authors discussed the results; JY wrote the draft, and all authors improved the manuscript. **Competing Interests:** The authors declare no competing interests. **Additional Information**: Supplementary information is available for this paper at xxx. **Correspondence and requests for materials** should be addressed to JY, Email: [email protected]. **Figure captions:** Figure 2: **Time-mean vertically-integrated cloud water amount (including both liquid and ice) simulated by the three models.** Left panels are for TRAPPIST-1e, and right panels are for K2-72e. The global-mean values are listed in the top-left corner of each panel. The red cross marks the substellar point. The blue dashed line is the morning terminator and the blue dotted line is the evening terminator, used for the transmission spectra calculations (Fig. 6 below). SAM has a smaller total cloud water path.
Figure 3: **Vertical profiles of cloud water content (ice + liquid) in the small-domain simulations** with different horizontal resolutions: 6.4, 1.6, 0.4, and 0.1 km. (A) and (B) are for TRAPPIST-1e, and (C) and (D) are for K2-72e. (A) & (C) are for the region around the substellar point (SP), and (B) and (D) are for the region around the antistellar point (AP). The black lines represent the quasi-global (QG) experiments with a grid spacing of 4 km. The SP profiles of cloud water content match the QG experiments well, but the AP profiles at different resolutions differ significantly near the surface. Figure 4: **Stabilizing cloud feedback simulated by the three models.** Time- and global-mean (A) surface temperature, (B) planetary albedo, (C) vertically-integrated cloud liquid water amount, and (D) vertically-integrated cloud ice water amount, as a function of the stellar flux at the substellar point. Black is for SAM, blue is for CAM3, and red is for ExoCAM. For SAM and CAM3, the stellar fluxes are 1000, 1200, 1400, 1600, 1800, and 2000 W m\({}^{-2}\), and the corresponding planetary rotation periods are 28.47, 24.83, 22.12, 20.01, 18.32, and 16.93 Earth days, respectively (following Kepler's third law). For ExoCAM, the stellar fluxes are 1000, 1200, 1400, 1600, 1650, and 1700 W m\({}^{-2}\), and the corresponding planetary rotation periods are 28.47, 24.83, 22.12, 20.01, 19.56, and 19.12 Earth days, respectively. For SAM, the horizontal grid size is 20 km by 20 km. In these experiments, all three models are coupled to a slab ocean but with zero ocean heat transport. Figure 5: **Cloud streets in the SAM experiments with a resolution of 4 km.** The position and size of the selected region are indicated in Fig. 1 by the box with the same boundary colors. Left panels are for TRAPPIST-1e, and right panels are for K2-72e. The blue vector shows the direction and strength of the mean winds in the planetary boundary layer. The orange line in panel B marks the region used to analyze the formation of the cloud streets shown in Supplementary Fig. 17. Figure 6: **The effects of cloud permitting on the observational characteristics of the planets**. Left is for TRAPPIST-1e, and right is for K2-72e. Panels A & B show broadband thermal infrared emission phase curves; the \(x\) axis is the orbital phase angle, and the \(y\) axis is the ratio of planetary thermal emission to stellar thermal emission (in 5-50 \(\upmu\)m) in units of parts per million (ppm). An observer views the whole dayside at a phase angle of 0\({}^{\circ}\) (the secondary eclipse) and sees the whole nightside at \(\pm\)180\({}^{\circ}\) (transit). The observer inclination is assumed to be 90\({}^{\circ}\). Note the different ranges of the \(y\) axis between (A) and (B). Although K2-72e is warmer than TRAPPIST-1e, its thermal infrared contrast is lower, because the star K2-72 is \(\sim\)7.6 times TRAPPIST-1 in projected area. Panels C, D, E, & F show transmission spectra between 0.6 and 1.7 \(\upmu\)m. Panels C & D are for the morning terminator (90\({}^{\circ}\) longitude in Fig. 2), and panels E & F are for the evening terminator (270\({}^{\circ}\) longitude). The minimum value of the relative transit depth has been subtracted for ease of display.

## References

* [1] Hartmann, D. L. Global Physical Climatology. _Elsevier Science Press_, pp. 473 (2015). * [2] Fauchez, T. J., Turbet, M., Villanueva, G. L., et al.
Impact of Clouds and Hazes on the Simulated JWST Transmission Spectra of Habitable Zone Planets in the TRAPPIST-1 System. _ApJ_ 887, 2, 194 (2019). * [3] Fauchez, T. J., Villanueva, G. L., Sergeev, D. E., et al. The TRAPPIST-1 Habitable Atmosphere Intercomparison (THAI). III: Simulated Observables--The Return of the Spectrum. _Planet. Sci. J._ 3, 9, 213 (2022). * [4] Komacek, T. D., Fauchez, T. J., Wolf, E. T. & Abbot, D. S. Clouds Will Likely Prevent the Detection of Water Vapor in JWST Transmission Spectra of Terrestrial Exoplanets. _ApJ_ 888, 2, L20 (2020). * [5] Suissa, G., Mandell, A. M., Wolf, E. T., et al. Dim Prospects for Transmission Spectra of Ocean Earths around M Stars. _ApJ_ 891, 1, 58 (2020). * [6] Yang, J., Cowan, N. B. & Abbot, D. S. Stabilizing Cloud Feedback Dramatically Expands the Habitable Zone of Tidally Locked Planets. _ApJL_ 771, L45 (2013). * [7] Turbet, M., Leconte, J., Selsis, F., et al. The Habitability of Proxima Centauri b-II. Possible Climates and Observability. _A&A_ 596, A112 (2016). * [8] Kopparapu, R. K., Wolf, E. T., Haqq-Misra, J., et al. The Inner Edge of the Habitable Zone for Synchronously Rotating Planets around Low-Mass Stars Using General Circulation Models. _ApJ_ 819, 1, 84 (2016). * [9] Kopparapu, R. K., Wolf, E. T., Arney, G., et al. Habitable Moist Atmospheres on Terrestrial Planets near the Inner Edge of the Habitable Zone around M Dwarfs. _ApJ_ 845, 1, 5 (2017). * [10] Boutle, I. A., Mayne, N. J., Drummond, B., et al. Exploring the climate of Proxima B with the Met Office Unified Model. _A&A_ 601, A120 (2017). * [11] Bin, J., Tian, F. & Liu, L. New Inner Boundaries of the Habitable Zones around M Dwarfs. _EPSL_ 492, 121-129 (2018). * [12] Way, M. J., Del Genio, A. D., Aleinov, I., et al. Climates of Warm Earth-Like Planets. I: 3-D Model Simulations. _ApJS_ 239, 2, 24 (2018). * [13] Hammond, M. & Pierrehumbert, R. T. Wave-Mean Flow Interactions in the Atmospheric Circulation of Tidally Locked Planets. _ApJ_ 869, 1, 65 (2018). * [14] Del Genio, A. D., Way, M. J., Amundsen, D. S., et al. Habitable Climate Scenarios for Proxima Centauri B with a Dynamic Ocean. _Astrobiology_ 19, 1, 99-125 (2019). * [15] Wei, M., Zhang, Y. & Yang, J. Small Sensitivity of the Simulated Climate of Tidally Locked Aquaplanets to Model Resolution. _ApJ_ 898, 156 (2020). * [16] Sergeev, D. E., Fauchez, T. J., Turbet, M., et al. The TRAPPIST-1 Habitable Atmosphere Intercomparison (THAI). Part II: Moist Cases--The Two Waterworlds. _Planet. Sci. J._ 3, 9, 212 (2022a). * [17] Khairoutdinov, M. F. & Randall, D. A. Cloud Resolving Modeling of the ARM Summer 1997 IOP: Model Formulation, Results, Uncertainties, and Sensitivities. _JAS_ 60, 4, 607-625 (2003). * [18] Haqq-Misra, J., Wolf, E. T., Joshi, M., et al. Demarcating Circulation Regimes of Synchronously Rotating Terrestrial Planets within the Habitable Zone. _ApJ_ 852, 2, 67 (2018). * [19] Sergeev, D. E., Lewis, N. T., Lambert, F. H., et al. Bistability of the Atmospheric Circulation on TRAPPIST-1e. _Planet. Sci. J._ 3, 214 (2022b). * [20] Collins, W. D., Rasch, P. J., Boville, B. A., et al. Description of the NCAR Community Atmosphere Model (CAM 3.0). _NCAR Tech. Note NCAR/TN-464+STR_, doi:10.5065/D63N21CH, pp. 226 (2004). * [21] Wolf, E. T., Kopparapu, R., Haqq-Misra, J. & Fauchez, T. J. ExoCAM: A 3D Climate Model for Exoplanet Atmospheres. _Planet. Sci. J._ 3, 7 (2022). * [22] de Boer, G., Chapman, W., Kay, J. E., et al. A Characterization of the Present-Day Arctic Atmosphere in CCSM4. _J. Climate_ 25, 8, 2676-2695 (2012).
* [23] Yang, J., Leconte, J., Wolf, E. T., et al. Differences in Water Vapor Radiative Transfer among 1D Models Can Significantly Affect the Inner Edge of the Habitable Zone. _ApJ_ 826, 2, 222 (2016). * [24] Yang, J., Leconte, J., Wolf, E. T., et al. Simulations of Water Vapor and Clouds on Rapidly Rotating and Tidally Locked Planets: A 3D Model Intercomparison. _ApJ_ 875, 1, 46 (2019). * [25] Lefevre, M., Turbet, M. & Pierrehumbert, R. 3D Convection-Resolving Model of Temperate, Tidally Locked Exoplanets. _ApJ_ 913, 2, 101 (2021). * [26] Brown, R. A. Longitudinal Instabilities and Secondary Flows in the Planetary Boundary Layer: A Review. _Reviews of Geophysics_ 18, 683-697 (1980). * [27] Brummer, B. & Pohlmann, S. Wintertime roll and cell convection over Greenland and Barents Sea regions: A climatology. _JGR-Atmos._ 105, D12, 15559-15566 (2000). * [28] Gryschka, M. & Raasch, S. Roll convection during a cold air outbreak: A large eddy simulation with stationary model domain. _GRL_ 32, L14805 (2005). * [29] Etling, D. & Brown, R. A. Roll Vortices in the Planetary Boundary Layer: A Review. _Boundary-Layer Meteorology_ 65, 3, 215-248 (1993). * [30] Atkinson, B. W. & Zhang, J. W. Mesoscale Shallow Convection in the Atmosphere. _Reviews of Geophysics_ 34, 403-431 (1996). * [31] Asai, T. Stability of a Plane Parallel Flow with Variable Vertical Shear and Unstable Stratification. _J. Meteorol. Soc. Jpn._ 48, 2, 129-139 (1970). * [32] Pierrehumbert, R. T. Thermostats, Radiator Fins, and the Local Runaway Greenhouse. _JAS 52_, 10, 1784-1806 (1995). * [33] Zhang, X., Tian, F., Wang, Y., Dudhia, J. & Chen, M. Surface Variability of Short-wavelength Radiation and Temperature on Exoplanets around M Dwarfs. _ApJ_ 837, 2, L27 (2017). * [34] Sergeev, D. E., Lambert, F. H., Mayne, N. J., et al. Atmospheric Convection Plays a Key Role in the Climate of Tidally Locked Terrestrial Exoplanets: Insights from High-Resolution Simulations. _ApJ_ 894, 2, 84 (2020). * [35] Song, X. & Yang, J. Asymmetry and Variability in the Transmission Spectra of Tidally Locked Habitable Planets. _Front. Astron. Space_ 8, 134 (2021). * [36] Pidhorodetska, D., Fauchez, T. J., Villanueva, G. L., et al. Detectability of Molecular Signatures on TRAPPIST-1e through Transmission Spectroscopy Simulated for Future Space-based Observatories. _ApJL_ 898, L33 (2020). * [37] Loftus, K. & Wordsworth R. The physics of falling raindrops in diverse planetary atmospheres. _JGR-Planets_ 126, e2020JE006653 (2021). * [38] Schneider, T., Kaul, C. M. & Pressel, K. G. Possible Climate Transitions from Breakup of Stratocumulus Decks under Greenhouse Warming. _Nature Geoscience_ 12, 3, 163-167 (2019). * [39] Grimm, S. L., Demory, B.-O., Gillon, M., et al. The Nature of the TRAPPIST-1 Exoplanets. _A&A_ 613, A68 (2018). * [40] Dressing, C. D., Vanderburg, A., Schlieder, J. E., et al. Characterizing K2 Candidate Planetary Systems Orbiting Low-Mass Stars II. Planetary Systems Observed During Campaigns 1-7. _Astron. J._ 154, 5, 207 (2017). * [41] Gilbert, E. A., Barclay, T., Schlieder, J. E., et al. The First Habitable-Zone Earth-Sized Planet from TESS. I. Validation of the TOI-700 System. _Astron. J._ 160, 3, 116 (2020). * [42] Bretherton, C. S. & Khairoutdinov, M. F. Convective Self-aggregation Feedbacks in Near-Global Cloud-Resolving Simulations of an Aquaplanet. _JAMES_ 7, 4, 1765-1787 (2015). * [43] Vallis, G. F. Essentials of Atmospheric and Oceanic Dynamics. Cambridge: Cambridge University Press, doi:10.1017/9781107588431, pp. 356 (2019). 
* [44] Soong, S.-T. & Ogura, Y. Response of tradewind cumuli to large-scale processes. _JAS_ 37, 2035-2050 (1980). * [45] Li, X., Sui, C.-H., Lau, K.-M., et al. Large-scale forcing and cloud-radiation interaction in the tropical deep convective regime. _JAS_ 56, 3028-3042 (1999). * [46] Khairoutdinov, M. F. & Emanuel, K. Intraseasonal Variability in a Cloud-Permitting Near-Global Equatorial Aquaplanet Model. _JAS_ 75, 12, 4337-4355 (2018). * [47] Neale, R. B. & Hoskins, B. J. A Standard Test for AGCMs Including Their Physical Parametrizations: I: The Proposal. _Atmospheric Science Letters_ 1, 101-107 (2000). * [48] Goldblatt, C., McDonald, V. L. & McCusker, K. E. Earth's Long-Term Climate Stabilized by Clouds. _Nature Geoscience_ 14, 143-150 (2021). * [49] Koll, D. D. B. & Abbot, D. S. Deciphering Thermal Phase Curves of Dry, Tidally Locked Terrestrial Planets. _ApJ_ 802, 21 (2015). * [50] Villanueva, G. L., Smith, M. D., Protopapa, S., et al. Planetary Spectrum Generator: An Accurate Online Radiative Transfer Suite for Atmospheres, Comets, Small Bodies and Exoplanets. _J. Quant. Spectrosc. R.A._ 217, 86-104 (2018).

**This supporting information file includes:**
Text 1: Previous simulations
Text 2: Results of the SAM benchmark simulation for Earth
Text 3: Different definitions for cloud fraction between different models
Text 4: The sensitivity to vertical resolution and model top
Text 5: Temperature limit for ice/liquid cloud formation
Text 6: More details on the cloud streets
Text 7: Atmospheric circulation, Cartesian geometry, and the limitations of this study
Text 8: Convection and precipitation in extremely hot climates
Text 9: Gibbs phenomenon in the CAM3 experiment of K2-72e
References (27 in total)
Supplementary Tables 1-4
Supplementary Figures 1 to 21
Legends for Supplementary Videos 1 to 3

**Other online supplementary materials for this manuscript include the following:** Supplementary Videos 1 to 3

**Text 1. Previous simulations.** Previous GCM inter-comparisons for tidally locked rocky planets have shown that there are large differences among GCMs, and that the difference in global-mean surface temperature can be as large as 30 K when the models are run under the same boundary conditions (Yang et al., 2019; Fauchez et al., 2020; Sergeev et al., 2022). Moreover, even in simulations of present and future climates on Earth, clouds are the largest source of model uncertainty, since different GCMs employ different convection and cloud schemes (e.g., Cess et al., 1990; Zelinka et al., 2020). Given the lack of in-situ cloud observations for exoplanets, one of the best (although not perfect) solutions to this problem is to use cloud-resolving models (CRMs) or cloud-permitting models (CPMs), which have fine spatial resolution and explicitly calculate convection and clouds without cumulus parameterization. In the past five years, CRMs and CPMs have been employed to simulate convection and clouds on tidally locked planets, but only in limited-area domains, such as 1000 by 1000 km (Zhang et al., 2017), an equatorial strip (Koll and Cronin, 2017), 6000 by 6000 km (Sergeev et al., 2020), 250 by 250 km (Lefevre et al., 2021), 72 by 72 km (Seeley and Wordsworth, 2021), and a 2D idealized configuration along the equator (Song et al., 2022).
While this small-domain approach is useful in examining the clouds in local regions, it is unable to investigate the clouds from a global view, to correctly represent the interactions between convection/clouds and large-scale dynamics, or to properly examine the effect of clouds on planetary climate and habitability. Moreover, in the simulations of Zhang et al. (2017), Koll and Cronin (2017), Lefevre et al. (2021), and Song et al. (2022), the important effect of the Coriolis force was not included. In order to overcome these shortcomings, here we carry out global-scale CPM simulations for tidally locked rocky planets orbiting M dwarfs.

**Text 2. Results of the SAM benchmark simulation for Earth.** The results of the experiment testing the method of quasi-global cloud-permitting simulation are shown in Supplementary Fig. 6. Snapshots of cloud water path (liquid plus ice), surface precipitation, precipitable water (the sum of water vapor and liquid and ice clouds), and vertical velocity at the level of 5 km are shown in Supplementary Fig. 6(A-D). These figures show that the key characteristics of the Intertropical Convergence Zone (ITCZ) and the mid-latitude baroclinic zone can be properly simulated. Supplementary Figs. 6(E-H) show the zonal-mean air temperature, specific humidity, zonal winds, and atmospheric mass streamfunction. These figures suggest that the model can suitably simulate the atmospheric stratification, the water vapor field, the zonal jets in mid-latitudes, the tropical trade winds, the Hadley cells, and the Ferrel cells. Because the surface is ocean everywhere and there is neither a seasonal cycle nor a diurnal cycle, the model cannot properly simulate the location of the ITCZ, the zonal asymmetry of clouds such as the stratocumulus clouds over eastern oceans, or the stationary waves related to land-sea contrasts on Earth. For modern Earth, the mean location of the ITCZ is north of the equator, whereas in our aqua-planet simulation the ITCZ is right at the equator. Supplementary Fig. 6(H) shows that there is another thin cell above the Hadley cell in each hemisphere; to our knowledge, the underlying reason is not yet clear, but it may be related to the lack of a seasonal cycle in the experiment. Moreover, because two solid walls are set at the latitudes of 60\({}^{\circ}\)S and 60\({}^{\circ}\)N, some clouds are advected from lower latitudes and then trapped near the walls (Supplementary Fig. 6A & B). Meanwhile, an unrealistic overturning circulation near the wall is simulated in each hemisphere (Supplementary Fig. 6H). When the simulated region is extended from 90\({}^{\circ}\)S to 90\({}^{\circ}\)N, the simulated atmospheric circulation is closer to that on Earth (Supplementary Fig. 7). The edges of the Hadley cells are at 30\({}^{\circ}\)S and 30\({}^{\circ}\)N, the Ferrel cells are right at the middle latitudes (\(\sim\)30\({}^{\circ}\)S-60\({}^{\circ}\)S and \(\sim\)30\({}^{\circ}\)N-60\({}^{\circ}\)N), and the two polar cells are right at the high latitudes (\(\sim\)60\({}^{\circ}\)S-90\({}^{\circ}\)S and \(\sim\)60\({}^{\circ}\)N-90\({}^{\circ}\)N).

**Text 3. Different definitions for cloud fraction between different models.** In SAM, cloud fraction is a diagnostic variable; that is, it is not calculated within the model source code. The cloud fraction shown in Supplementary Fig. 9 is diagnosed from the model output of cloud water content. 
At a given level, it is defined as 100% when the cloud water content is greater than 0.01 g kg\({}^{-1}\), and 0% otherwise (Khairoutdinov & Randall, 2003). The threshold of 0.01 g kg\({}^{-1}\) is used to exclude optically very thin clouds. For a given column from the surface to the top of the model, the cloud fraction is 100% when the vertically-integrated cloud water content is higher than 0.02 kg m\({}^{-2}\) (note that this differs from the limit for a given level), and 0% otherwise. This implies that at each time step and in each grid cell, the cloud fraction can only be either 0% or 100%, never in between. For a long-term mean, however, it can take any value between 0% and 100%. For example, if a given grid cell is 100% at one time step and 0% at the next, the average over these two steps is 50%. Moreover, the cloud overlap between different levels in SAM is 100%, because a cloud (if it exists) occupies the entire grid cell. Within a grid cell there is no partially cloudy or partially clear sky, because the clouds are resolved in SAM. In contrast, each grid cell in GCMs such as CAM3 and ExoCAM can contain both cloudy and clear skies. In GCMs, cloud fraction is also a diagnostic variable, but it is empirically parameterized based on relative humidity, convective mass flux, atmospheric stratification, and other variables (Collins et al., 2004). For a given level, it can be 0%, 100%, or any value in between. In GCMs, the overlap of clouds between different vertical levels can be minimum, maximum, random, or an arbitrary combination, depending on the parameterization scheme used.

**Text 4. The sensitivity to vertical resolution and model top.** In Lefevre et al. (2021), the convective plumes reach 21 km when the sea surface temperature (SST) is \(\sim\)320 K, 19 km at \(\sim\)310 K, and \(\sim\)17 km at \(\sim\)300 K (see their Figure 2(a)). In our SAM simulations, the maximum SST is about 310 K and it occurs in only one experiment (see Supplementary Fig. 19(A)); in the other experiments the maximum SSTs are below 310 K or close to 300 K (Supplementary Fig. 8). In all our experiments, the sponge layer is between 18 and 27 km, so it does not significantly influence our results. In order to further clarify this, we have run two additional experiments in which the model top is raised and the vertical resolution is increased. In one experiment, the top of the model is at 42 km, the number of vertical levels is increased from 48 to 72, and the sponge layer is between 28 and 42 km. In the other experiment, the top of the model is at 65 km, the number of vertical levels is 72, and the sponge layer is between 43 and 65 km. The results are shown in Supplementary Fig. 3. The results show no essential changes, although there are small differences in detail.

**Text 5. Temperature limits for ice/liquid cloud formation.** Different models use different temperature limits to distinguish between ice cloud and liquid cloud. In the three models (CAM3, ExoCAM, and SAM), the total cloud water is decomposed into ice and liquid clouds by assuming that the ice cloud fraction is \(f_{ice}=(T-T_{max})/(T_{min}-T_{max})\), where \(T\) is air temperature, \(T_{max}\) is the maximum air temperature for ice cloud formation, and \(T_{min}\) is the minimum air temperature for ice cloud formation. When \(T\) is higher than \(T_{max}\), all clouds are in the liquid phase, and when \(T\) is lower than \(T_{min}\), all clouds are in the ice phase. 
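For concreteness, the following minimal Python sketch implements the two diagnostics just described: the threshold-based SAM cloud fraction of Text 3 and the linear ice/liquid partition above. The function names and array inputs are ours and purely illustrative (they are not taken from the SAM source code); the model-specific limits quoted in the next paragraph are used as example parameters.

```python
import numpy as np

def level_cloud_fraction(qc_g_per_kg):
    """Text 3: a grid cell at a given level is fully cloudy (100%) when its
    cloud water content exceeds 0.01 g/kg, otherwise clear (0%)."""
    return np.where(qc_g_per_kg > 0.01, 1.0, 0.0)

def column_cloud_fraction(cwp_kg_per_m2):
    """Text 3: a column is cloudy when its vertically-integrated cloud water
    exceeds 0.02 kg/m^2 (a different limit than the per-level one)."""
    return np.where(cwp_kg_per_m2 > 0.02, 1.0, 0.0)

def ice_cloud_fraction(T, T_max, T_min):
    """Text 5: linear ramp f_ice = (T - T_max) / (T_min - T_max), clamped so
    that all condensate is liquid above T_max and all ice below T_min."""
    return np.clip((T - T_max) / (T_min - T_max), 0.0, 1.0)

T = np.array([270.0, 260.0, 250.0, 240.0])                 # air temperature in K
print(ice_cloud_fraction(T, T_max=273.15, T_min=253.15))   # SAM-style limits (0 / -20 C)
print(ice_cloud_fraction(T, T_max=263.15, T_min=233.15))   # CAM3/ExoCAM-style (-10 / -40 C)
```

Time-averaging the 0%/100% fields returned by the first two functions yields the fractional long-term means discussed in Text 3.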
Different models use different temperature limits. In CAM3 and ExoCAM, \(T_{max}\) is equal to -10\({}^{\circ}\)C and \(T_{min}\) is equal to -40\({}^{\circ}\)C. However, in SAM, \(T_{max}\) is equal to 0\({}^{\circ}\)C and \(T_{min}\) is equal to -20\({}^{\circ}\)C. In order to test these two parameters, we use CAM3 to run one experiment in which \(T_{max}\) is changed from -10\({}^{\circ}\)C to 0\({}^{\circ}\)C and \(T_{min}\) is changed from -40\({}^{\circ}\)C to -20\({}^{\circ}\)C. The results are shown in Supplementary Table 3 below, labelled as CAM3_TLC. We find that the vertically-integrated liquid cloud water path decreases but the vertically-integrated ice cloud water path increases, consistent with the change of the temperature limits. The planetary albedo changes from 47.0% to 41.0% and becomes closer to that of SAM. Other variables such as surface precipitation, cloud longwave radiative effect, and surface temperature also become closer to those in SAM. These results suggest that the temperature limits for liquid and ice clouds are two of the key parameters in explaining the differences between SAM and CAM3/ExoCAM; of course, other parameters should also influence the simulation results.

**Text 6. More details on the cloud streets:** The cloud streets are parallel bands of low-level cumulus clouds in the convective boundary layer, and the cloud bands are oriented nearly parallel to the mean winds. On Earth, all clouds are generally classified into ten mutually exclusive cloud genera, and the cloud streets belong to the genus of stratocumulus (Houze 2014). One necessary condition for the formation of cloud streets on tidally locked planets is the large day-night surface temperature contrast. Because the contrast decreases with increasing stellar flux (Haqq-Misra et al. 2018), there should be few or no cloud streets on planets close to the inner edge of the habitable zone. For planets close to the outer edge of the habitable zone with dense CO\({}_{2}\) or other greenhouse gases, the day-night surface temperature contrast is also smaller, but it remains larger than \(\sim\)20 K even under a 20 bar CO\({}_{2}\) atmosphere (see Fig. 1 in Wordsworth et al. (2011)), allowing the possible formation of cloud streets. In our experiments, the relatively cloudless regions between the cloud streets (see Supplementary Fig. 17 below) can be viewed as radiator fins. Radiator fins are clear-sky dry columns that have relatively large outgoing longwave radiation to space. This can be compared with either cloudy-sky columns or clear-sky wet columns: both have lower outgoing longwave radiation than adjacent clear-sky dry columns. Pierrehumbert (1995) called the dry columns of the subtropics "radiator fins"; see Fig. 7 in Pierrehumbert (1995) for a schematic representation of the radiator fin. On Earth, the lifetime of cloud streets typically ranges from hours to several days, mainly determined by the duration of cold air outbreaks (e.g., Brummer & Pohlmann, 2000; Gryschka & Raasch 2005). On tidally locked planets, the cloud streets persist throughout the simulations, so they are clearly visible even in the time-mean field of cloud water amount shown in Fig. 2A & B. This is due to the fact that the favorable conditions for the formation of the cloud streets, such as the strong night-to-day cold advection and the temperature inversion, never cease on tidally locked planets. 
Moreover, due to the large distance of the cold advection from the nightside to the dayside, the cloud bands along the mean winds are long, \(\sim\)10\({}^{3}\) to 10\({}^{4}\) km (see Fig. 1 in the main text). Besides the cloud streets, one can also see gravity-wave clouds (Supplementary Videos 1-3). The gravity-wave cloud bands are nearly perpendicular to the direction of the large-scale mean winds, and the gravity waves propagate downstream along the mean winds. Moreover, the atmospheric composition is N\({}_{2}\)-dominated in this study, which is heavier than H\({}_{2}\)O in molecular weight and thereby promotes convection (e.g., Li & Ingersoll 2015). If the atmosphere were H\({}_{2}\)- or He-dominated, a moist parcel would be heavier than a dry parcel at the same pressure and temperature, and therefore moist convection would likely be inhibited and both cloud streets and deep convective clouds would be much harder to form.

**Text 7. Atmospheric circulation, Cartesian geometry, and the limitations of this study.** SAM broadly captures the large-scale atmospheric circulation. Strong upwelling occupies the substellar region and the rest of the planet is mainly dominated by weak downwelling. The upwelling and downwelling are linked by strong convergence in the planetary boundary layer towards the substellar region and strong divergence in the free troposphere towards the nightside and the high latitudes of the dayside (Supplementary Figs. 11 & 12). This circulation is called a global-scale overturning or 'Walker' circulation (e.g., Showman et al. 2013; Hammond & Lewis 2021). In the horizontal plane of the free troposphere, the atmosphere is characterized by an equatorial superrotation flowing from west to east (Supplementary Fig. 12). The superrotation is maintained by equatorward momentum transports by coupled Rossby-Kelvin waves, ultimately driven by the uneven stellar radiation distribution between the permanent dayside and nightside (e.g., Showman & Polvani 2011). Note that there are dramatic differences in the spatial pattern of the convergence/divergence between SAM and CAM3/ExoCAM (Supplementary Fig. 11), so that the distributions of cloud water on the dayside are quite different (see Fig. 2 in the main text). Supplementary Fig. 8 shows the surface temperatures simulated by SAM and the two general circulation models CAM3 and ExoCAM. For TRAPPIST-1e, the spatial pattern of the surface temperatures is similar among the three models. In the global mean, the surface temperatures are similar: 243, 240, and 243 K in SAM, CAM3, and ExoCAM, respectively. For K2-72e, the spatial pattern of the surface temperatures differs significantly: in SAM, the west side of the substellar point is significantly warmer than the east side (Supplementary Fig. 8B), whereas the pattern is almost perfectly symmetric around the substellar point in CAM3 and ExoCAM. The strong zonal asymmetry in SAM can also be seen in the spatial distributions of vertical velocity and horizontal winds of K2-72e, as shown in Supplementary Figs. 11-12. This is likely due to the interaction among the resolved clouds, radiative transfer, and the equatorial superrotation. The deep convective clouds are transported to the east side of the substellar point by the equatorial superrotation, so the clouds and planetary albedo exhibit strong zonal tilts towards the east (see Fig. 2B in the main text). 
Therefore, much more shortwave radiation reaches the surface west of the substellar point than east of it (Supplementary Fig. 13). As a result, the west side is warmer than the east side. This zonal asymmetry is much weaker in CAM3 and ExoCAM. However, the global-mean surface temperatures have relatively small differences: 263, 253, and 268 K in SAM, CAM3, and ExoCAM, respectively. The dayside and the low and middle latitudes of the nightside in SAM are slightly warmer than in CAM3, but the high latitudes in SAM are somewhat cooler. In the night-side mean, the surface temperature of TRAPPIST-1e is 219.4 and 215.9 K in SAM and CAM3, respectively, and the surface temperature of K2-72e is 241.4 and 227.5 K in SAM and CAM3, respectively. One of the reasons for the differences is the water vapor concentration on the night side. The mean night-side water vapor concentrations of TRAPPIST-1e are 2.6 and 1.9 kg m\({}^{-2}\) in SAM and CAM3, respectively. For K2-72e, these two values are 8.3 and 4.1 kg m\({}^{-2}\), respectively (Supplementary Fig. 14). The differences in water vapor concentration act to make the night-side surface in SAM relatively warmer than in CAM3. Part of the water vapor is from local evaporation, and the rest is from horizontal transport from the dayside (Supplementary Table 4). In SAM, the simulated equatorial superrotation is stronger than that obtained in CAM3 and ExoCAM, especially for K2-72e (Supplementary Fig. 12). Several factors can cause this. (1) Latent heat release in SAM (as can be seen from the surface precipitation shown in Supplementary Fig. 10J) is larger than in CAM3/ExoCAM. The global-mean surface precipitation rates on K2-72e are 2.4, 1.1, and 0.9 mm day\({}^{-1}\) in SAM, CAM3, and ExoCAM, respectively. More latent heat release (or, more accurately, larger gradients in the spatial pattern of the latent heat release) can drive larger perturbations in the geopotential field of the atmosphere and subsequently induce stronger waves and equatorial superrotation. (2) The Rossby deformation radius of the atmosphere of TRAPPIST-1e is at the edge between the fast and Rhines rotation regimes, so a change in convection scheme, initial condition, boundary-layer friction, or another factor can push the atmospheric circulation from one regime to the other (e.g., Sergeev et al., 2020, 2022). Convection schemes also have strong effects on slowly rotating planets such as Proxima Centauri b (Sergeev et al., 2020). (3) The models employ different damping schemes near the top of the model (numerical dissipation or sponge), and this can influence the circulation in the upper atmosphere (Turbet et al., 2022). (4) The models use different dynamical cores (spectral Eulerian in CAM3, finite volume in ExoCAM, and finite difference in SAM), which can also influence the simulated atmospheric circulation (e.g., Lee & Richardson, 2010). (5) SAM in these simulations employs a global Cartesian geometry rather than the realistic global spherical geometry used in CAM3 and ExoCAM. This can influence the atmospheric circulation, especially at high latitudes. From Supplementary Fig. 12, one can see that in the free troposphere there is one cyclone in the region between longitudes 0\({}^{\circ}\) and 90\({}^{\circ}\) in each hemisphere for both CAM3 and ExoCAM. This is consistent with previous theoretical derivations, 2D shallow water model studies, and 3D GCM simulations (e.g., Showman & Polvani 2011; Tsai et al. 
2014; Hammond & Pierrehumbert 2018; Wang & Yang 2021). SAM tends to produce the cyclones, but the spatial pattern is not obvious. This is likely because a global-scale Cartesian geometry rather than a global spherical geometry is used in the simulations (see Methods), so the simulated high-latitude atmospheric circulation in SAM has its own flaws. How strongly does the Cartesian geometry influence the simulation results in this study? We answer this question to the best of our understanding. (1) The Cartesian geometry has two weaknesses: the grid area at high latitudes is larger than that of the spherical geometry, and the momentum equations do not include the metric terms (or curvature effects). The former can influence the atmospheric circulation at high latitudes. The magnitude of the metric terms is always small as long as the winds are smaller than the order of 100-1000 m s\({}^{-1}\) (Chapter 2 in Holton & Hakim (2012)). In our simulations of both the aqua-Earth and the tidally locked rocky planets, the wind speed is below this limit. (2) The planetary rotation is central for many dynamical phenomena such as Hadley cells, Ferrel cells, baroclinic instability, Rossby waves, Kelvin waves, and wave-mean-flow interactions, but the sphericity of the planet is not always important (Vallis 2019). Since the pattern of the Coriolis parameter in the quasi-global simulations is the same as the real pattern, the key behaviors of the atmospheric circulation should be correctly simulated. (3) Local convective-scale motion can be well captured by the Cartesian geometry, provided the large-scale forcing is correctly imposed. This is because the radii of Earth and other terrestrial planets are much larger than the characteristic length of convection. This is also the reason why the Cartesian geometry is so widely used in small-domain cloud-resolving simulations. (4) Conservation properties are not sacrificed in the Cartesian geometry. Key thermodynamical and dynamical properties, including momentum, mass, and moist static energy, are conserved. This means that the basic laws of fluid dynamics are well represented in our quasi-global simulations. (5) Our simulations are able to capture the key features of the large-scale circulation on tidally locked planets, including the superrotation and the global 'Walker circulation' (see Supplementary Figs. 11 & 12). These features originate mainly from the planetary rotation and the zonal variation of the stellar forcing. The distribution of the incident radiation in our simulations matches that in the spherical geometry, both featuring strong heating near the substellar point and zero incident radiation on the night side. (6) The Cartesian geometry cannot properly represent the circulation at high latitudes. We therefore focus this study on the clouds and the circulation at low and middle latitudes, and acknowledge that the circulation at high latitudes in our simulations might not be correct. (7) Hammond and Pierrehumbert (2018) compared the effect of changing the geometry from a Cartesian beta-plane to a sphere in a linear shallow-water model for tidally locked planets, and found that "the beta-plane approximation produces much the same results as the spherical geometry", although small detailed differences exist (see Fig. 6 versus Fig. 10 in their paper). Overall, the use of the Cartesian geometry rather than the spherical geometry is a limitation of this study, but in principle it is not a fatal problem and will not affect the main conclusions. 
Meanwhile, our group is working to repeat the simulations with two other models, WRF and MPAS, which use spherical geometry. We will know more about this in the near future. Another limitation of this study is that there is no cross-pole flow in our SAM experiments, due to the use of rigid-wall boundary conditions at the two poles. Previous studies have shown that there are significant cross-pole flows on tidally locked planets, as shown in Figures 10-12 of Joshi et al. (1997), Figures 6-8 of Haqq-Misra & Kopparapu (2015), and Figure 7 of Kopparapu et al. (2016). How much does this shortcoming of SAM influence the results? In order to answer this question, we calculate the day-to-night mass transports. The total day-to-night transport in ExoCAM is contributed by two parts, cross-pole flows and cross-terminator flows. In SAM, all the day-to-night transport is from cross-terminator flows. In Supplementary Table 4, we compare the net day-to-night transports of water vapor and clouds between SAM and ExoCAM. For the water vapor transport, SAM is smaller than ExoCAM in the simulation of TRAPPIST-1e, 10.6 versus 19.1 kg m\({}^{-1}\) s\({}^{-1}\), but larger in the experiment of K2-72e, 33.8 versus 30.6 kg m\({}^{-1}\) s\({}^{-1}\). The former is at least partially related to the lack of cross-pole flow in the SAM experiment. For the latter, although there is no cross-pole flow in SAM, the zonal flows in the SAM experiment of K2-72e are relatively stronger than those in ExoCAM (see Supplementary Figs. 12B & 12F), which can make the day-to-night water vapor transport in SAM relatively greater. For the cloud water transport from dayside to nightside, SAM is smaller than ExoCAM in the simulation of TRAPPIST-1e, 0.1 versus 0.6 kg m\({}^{-1}\) s\({}^{-1}\), but larger in the experiment of K2-72e, 0.8 versus 0.3 kg m\({}^{-1}\) s\({}^{-1}\). These trends are the same as those for water vapor, and the reasons should be similar, because these two tracers (clouds and water vapor) are transported by the same winds. Moreover, the dayside cloud water amount of TRAPPIST-1e in the SAM experiment is less than that in ExoCAM (Fig. 2 in the main text); this can also reduce the magnitude of the day-to-night cloud transport in SAM. Therefore, the lack of cross-pole winds in SAM is one of the reasons for the relatively low night-side cloud cover (Supplementary Fig. 9) in the simulation for TRAPPIST-1e, but not for K2-72e. The key reason is likely the much weaker cloud formation on the night side in SAM; this can be seen in the supplementary videos online.

**Text 8. Convection and precipitation in extremely hot climates.** Using three different cloud-resolving models (DAM, CM1, and SAM), Seeley & Wordsworth (2021) investigated the behaviour of convection in extremely hot climates above 320 K. Three domain sizes were employed: 72 km \(\times\) 72 km, 144 km \(\times\) 144 km, and 512 km \(\times\) 512 km. They found that in extremely hot climates the system enters a regime called "episodic deluges": periodic, short, and strong precipitation events separated by relatively long dry spells. The strength of precipitation can reach tens of mm day\({}^{-1}\) or even over 100 mm day\({}^{-1}\). The period of the oscillations is about 1 to 4 Earth days. In our quasi-global cloud-permitting simulations for tidally locked planets, we find a similar phenomenon (Supplementary Fig. 19). Supplementary Fig. 19(b) shows that the strength of precipitation can reach 100-200 mm day\({}^{-1}\) in a short time, and during the dry spells the precipitation is close to zero. 
However, the oscillations are much less regular than those found in the small-domain simulations of Seeley & Wordsworth (2021), which makes it hard to determine the exact period in our experiments. Moreover, it is important to point out that this oscillation behavior occurs only over a small area in our simulations, not over the whole dayside or a wide region around the substellar point. Seeley & Wordsworth (2021) suggested that positive net lower-tropospheric radiative heating (LTRH) is the main mechanism that stabilizes the lower troposphere and decouples the surface and the upper troposphere during the dry spells, and that the trigger of intense convection and precipitation is related to evaporative cooling at the base of elevated convection. LTRH causes an inhibition layer in the lower troposphere. When the evaporative cooling erodes the inhibition layer, deep convection starting from near the surface is allowed and strong rainfall occurs. In our simulations, the mechanism is likely different from that shown in Seeley & Wordsworth (2021). As shown in Supplementary Fig. 20(C), the radiative (shortwave plus longwave) heating rate is positive in the whole troposphere in the selected region. This is due to the concentrated stellar radiation on the dayside. The stellar temperature is 3300 K, so more of the stellar energy lies at near-infrared wavelengths, where water vapor and clouds can absorb more stellar energy. Note that in Seeley & Wordsworth (2021), the radiative heating rate is positive only in the near-surface layers (see their Figure 2b & c). Moreover, during the deluges in our experiment, the convection top reaches only \(\sim\)8 km, rather than the whole troposphere (Supplementary Fig. 20). Supplementary Fig. 20(D) shows that there are two separate layers in the atmosphere: one between the near surface and the level of \(\sim\)8 km, and the other between 10 and 20 km. These two convection layers seem to have no direct connection at any time during the simulation. This is quite different from what was found in Seeley & Wordsworth (2021): in their simulations, the convection during the deluge phase occupies the whole troposphere, from near the surface to the tropopause. All the above differences suggest that the underlying mechanisms are likely quite different. The apparent reason is that large-scale circulation is included in our simulations but not in Seeley & Wordsworth (2021). Further thorough analyses are required, and a separate article would be needed to clearly address them.

**Text 9. Gibbs phenomenon in the CAM3 experiment of K2-72e.** The oscillation of cloud fraction in the CAM3 simulation (Supplementary Fig. 9(D)) can be explained by the Gibbs phenomenon, which refers to the overshoot/undershoot of a partial-sum expansion of a function near a discontinuity as compared to the original function (Navarra et al., 1994; Raeen, 2008). In the CAM3 simulations, we use the spectral Eulerian dynamical core and the T31 truncation (Collins et al., 2004). The spectral method has the problem of producing artificial oscillations near discontinuities, especially when the horizontal resolution is not high. Here, we demonstrate the Gibbs phenomenon by applying the T31 truncation to a simple 2D field that is 1 on the dayside and -1 on the nightside. The field truncated at T31 is shown in Supplementary Fig. 21(C), and ripple-like oscillations can be seen in the truncated field. The wavelength of the Gibbs oscillation is the circumference of the planet divided by 32. 
This value is approximately the wavelength of the shortest resolved zonal wave at the equator (Laprise, 1992). The truncated pattern resembles the oscillations in the spatial pattern of low-level cloud fraction (Supplementary Fig. 21(A)). We have also run an additional CAM3 experiment using the spectral Eulerian dynamical core with a T42 truncation. The resulting low-level cloud fraction is shown in Supplementary Fig. 21(B). The T42-truncated pattern (Supplementary Fig. 21(D)) also resembles the oscillations in the spatial pattern of the simulated low-level cloud fraction.
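The essence of this demonstration can be reproduced with a one-dimensional truncation. The following minimal Python sketch (our illustration; it uses a plain Fourier series along a latitude circle, not the full spherical-harmonic transform of CAM3) truncates a \(\pm\)1 day/night step function at zonal wavenumber 31 and exhibits the Gibbs ripples:

```python
import numpy as np

# A +1 (dayside) / -1 (nightside) step function along a latitude circle.
lon = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
step = np.where(np.cos(lon) > 0.0, 1.0, -1.0)

# Truncate the zonal Fourier series at wavenumber 31, mimicking T31.
coeffs = np.fft.rfft(step)
coeffs[32:] = 0.0
truncated = np.fft.irfft(coeffs, n=step.size)

# The reconstruction overshoots the jump by roughly 9% of its size
# (the Gibbs phenomenon), with ripples spaced at roughly the
# circumference divided by 32.
print(f"max overshoot above +1: {truncated.max() - 1.0:.2f}")
```

The resulting overshoot and ripple spacing mirror the oscillations seen in the truncated 2D field of Supplementary Fig. 21.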
2301.00525
Blowing-up Hermitian Yang--Mills connections
We investigate hermitian Yang--Mills connections for pullback vector bundles on blow-ups of K\"ahler manifolds along submanifolds. Under some mild assumptions on the graded object of a simple and semi-stable vector bundle, we provide a necessary and sufficient numerical criterion for the pullback bundle to admit a sequence of hermitian Yang--Mills connections for polarisations that make the exceptional divisor sufficiently small, and show that those connections converge to the pulled-back hermitian Yang--Mills connection of the graded object.
Andrew Clarke, Carl Tipler
2023-01-02T04:27:45Z
http://arxiv.org/abs/2301.00525v2
# Blowing-up Hermitian Yang-Mills connections ###### Abstract. We investigate hermitian Yang-Mills connections for pullback vector bundles on blow-ups of Kahler manifolds along submanifolds. Under some mild assumptions on the graded object of a simple and semi-stable vector bundle, we provide a necessary and sufficient numerical criterion for the pullback bundle to admit a sequence of hermitian Yang-Mills connections for polarisations that make the exceptional divisor sufficiently small, and show that those connections converge to the pulled-back hermitian Yang-Mills connection of the graded object. 2010 Mathematics Subject Classification: Primary: 53C07, Secondary: 53C55, 14J60 ## 1. Introduction A cornerstone of gauge theory is the Hitchin-Kobayashi correspondence ([17, 20, 30, 12]). This celebrated generalisation of the Narasimhan-Seshadri theorem asserts that a holomorphic vector bundle over a Kahler manifold carries an Hermite-Einstein metric if and only if it is polystable in the sense of Mumford and Takemoto ([22, 29]). The interplay between the differential-geometric side, hermitian Yang-Mills connections (HYM for short), which originated from physics, and the algebro-geometric side, the stability notion motivated by moduli constructions, has had many applications and has become a very fertile source of inspiration. Given that HYM connections are canonically attached to polystable vector bundles, it is natural to investigate how they behave under natural operations on vector bundles, such as pullbacks. In this paper, we address the problem of pulling back HYM connections along blow-ups. While the similar problem for extremal Kahler metrics has seen many developments in the past ten years [1, 2, 3, 28, 26, 8], relatively little seems to be known about the behaviour of HYM connections under blow-ups [6, 9]. In this paper, under some mild assumptions, we solve the problem for pullbacks of _semi-stable_ vector bundles on blow-ups along smooth centers. Let \(\pi:X^{\prime}\to X\) be the blow-up of a polarised Kahler manifold \((X,[\omega])\) along a submanifold \(Z\subset X\), and \(E^{\prime}=\pi^{*}E\) the pullback of a holomorphic vector bundle \(E\to X\). For \(0<\varepsilon\ll 1\), \(L_{\varepsilon}:=\pi^{*}[\omega]-\varepsilon[Z^{\prime}]\) defines a polarisation on \(X^{\prime}\), where \(Z^{\prime}=\pi^{-1}(Z)\) is the exceptional divisor. There are obstructions for \(E^{\prime}\) to admit HYM connections with respect to \(\omega_{\varepsilon}\in c_{1}(L_{\varepsilon})\), with \(0<\varepsilon\ll 1\). In particular, \(E\) should be _simple_ and _semi-stable_ with respect to \([\omega]\) (see Section 2.3). In the latter case, \(E\) admits a Jordan-Holder filtration by semi-stable sheaves with polystable graded object \(\operatorname{Gr}(E)\) (see Section 2.2 for definitions). A further obstruction then comes from subsheaves of \(E\) arising from \(\operatorname{Gr}(E)\). While those sheaves have the same slope as \(E\), their pullbacks to \(X^{\prime}\) could destabilise \(E^{\prime}\). Our main result asserts that those are actually the only obstructions for \(E^{\prime}\) to carry a HYM connection, under some mild assumptions on \(\operatorname{Gr}(E)\). Recall that a semi-stable holomorphic vector bundle \(E\to(X,[\omega])\) is said to be _sufficiently smooth_ if its graded object \(\operatorname{Gr}(E)\) is locally free. 
Let \(\mathfrak{E}_{[\omega]}\) denote the set of all subbundles of \(E\) arising in a Jordan-Holder filtration for \(E\), or equivalently of the same slope as \(E\) with respect to \([\omega]\). For \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\), denote by \(\mu_{L_{\varepsilon}}(\mathcal{F})=\frac{c_{1}(\pi^{*}\mathcal{F})\cdot L_{\varepsilon}^{n-1}}{\operatorname{rank}(\mathcal{F})}\) the slope of \(\pi^{*}\mathcal{F}\) on \((X^{\prime},L_{\varepsilon})\). **Theorem 1.1**.: _Let \(E\to X\) be a simple sufficiently smooth semi-stable holomorphic vector bundle on \((X,[\omega])\). Assume that the stable components of \(\operatorname{Gr}(E)\) are pairwise non-isomorphic. Then, there exists \(\varepsilon_{0}>0\) and a sequence of HYM connections \((A_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) on \(\pi^{*}E\) with respect to \((\omega_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) if and only if_ \[\forall\;\mathcal{F}\in\mathfrak{E}_{[\omega]},\,\mu_{L_{\varepsilon}}( \mathcal{F})\underset{\varepsilon\to 0}{<}\mu_{L_{\varepsilon}}(E). \tag{1.1}\] _In that case, if \(A\) denotes a HYM connection on \(\operatorname{Gr}(E)\) with respect to \(\omega\), then \((A_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) can be chosen so that \(A_{\varepsilon}\underset{\varepsilon\to 0}{\longrightarrow}\pi^{*}A\) in any Sobolev norm._ In the statement, the expression \(\mu_{L_{\varepsilon}}(\mathcal{F})\underset{\varepsilon\to 0}{<}\mu_{L_{ \varepsilon}}(E)\) means that the first non-zero term in the \(\varepsilon\)-expansion for \(\mu_{L_{\varepsilon}}(E)-\mu_{L_{\varepsilon}}(\mathcal{F})\) is strictly positive. **Remark 1.2**.: Simplicity, semi-stability and Condition (1.1) are necessary to produce the connections \((A_{\varepsilon})\) from Theorem 1.1. The other two assumptions on \(\operatorname{Gr}(E)\) are technical. Assuming \(\operatorname{Gr}(E)\) to be locally free enables one to see \(E\) as a smooth complex deformation of \(\operatorname{Gr}(E)\) and to work with the various connections on the same underlying complex vector bundle. We should warn the reader, though, that if one drops this assumption, Condition (1.1) might not be enough to ensure semi-stability of \(\pi^{*}E\) on \((X^{\prime},L_{\varepsilon})\) (see the extra conditions in [23, Theorem 1.10]). On the other hand, the assumption that \(\operatorname{Gr}(E)\) has no pairwise isomorphic components is purely technical, and ensures that its automorphism group, which will provide obstructions in the perturbative theory, is abelian. We now list some corollaries of Theorem 1.1. First, the stable case: **Corollary 1.3**.: _Let \(E\to X\) be a stable holomorphic vector bundle on \((X,[\omega])\) and let \(A\) be a HYM connection on \(E\). Then, there exists \(\varepsilon_{0}>0\) and a sequence of HYM connections \((A_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) on \(\pi^{*}E\) with respect to \((\omega_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) such that \(A_{\varepsilon}\underset{\varepsilon\to 0}{\rightarrow}\pi^{*}A\) in any Sobolev norm._ For the semi-stable case, Condition (1.1) reduces to a finite number of intersection product computations. One interesting feature comes from the second term in the expansion of \(\mu_{L_{\varepsilon}}(E)\): up to a positive binomial factor, it is the opposite of the slope of the restriction of \(E\) to \(Z\). The following formula is proved in [23, Section 4.1], where \(m=\dim(Z)\): \[\mu_{L_{\varepsilon}}(E)=\mu_{L}(E)-\binom{n-1}{m-1}\mu_{L_{|Z}}(E_{|Z}) \varepsilon^{n-m}+O(\varepsilon^{n-m+1}). 
\tag{1.2}\] We then have: **Corollary 1.4**.: _Let \(E\to X\) be a simple sufficiently smooth semi-stable holomorphic vector bundle on \((X,[\omega])\). Assume that the stable components of \(\operatorname{Gr}(E)\) are pairwise non-isomorphic. Denote by \(A\) an HYM connection on \(\operatorname{Gr}(E)\). If_ \[\forall\;\mathcal{F}\in\mathfrak{E}_{[\omega]},\,\mu_{L_{|Z}}(E_{|Z})<\mu_{L_{ |Z}}(\mathcal{F}_{|Z}), \tag{1.3}\] _then there exists \(\varepsilon_{0}>0\) and a sequence of HYM connections \((A_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) on \(\pi^{*}E\) with respect to \((\omega_{\varepsilon})_{\varepsilon\in(0,\varepsilon_{0})}\) converging to \(\pi^{*}A\) in any Sobolev norm._ Condition (1.3) was checked on explicit examples in [23, Section 4.5] to produce stable perturbations of tangent sheaves by blow-ups, and our result provides information on the associated connections and their asymptotic behaviour. Note that by the Mehta-Ramanathan theorem [21], if \([\omega]=c_{1}(L)\) is integral, and if \(Z\) is a generic intersection of divisors in linear systems \(|L^{k}|\), then \(E_{|Z}\) is semi-stable as soon as \(E\) is. In that case, Condition (1.3) cannot be satisfied, and it seems unlikely that Condition (1.1) will hold true. Hence, blowing up such subvarieties tends to destabilise a semi-stable bundle. In general, we expect that it should not be too hard to obtain stability of sufficiently smooth pulled-back bundles under Condition (1.1) by purely algebraic methods. However, we emphasize that the Hitchin-Kobayashi correspondence does not provide any information on the asymptotic behaviour of the associated HYM connections, which is then the main content of Theorem 1.1. Nevertheless, we state the following corollary, which extends [23, Theorem 1.10] to a non-equivariant situation: **Corollary 1.5**.: _Let \(E\to X\) be a simple sufficiently smooth semi-stable holomorphic vector bundle on \((X,[\omega])\). Assume that the stable components of \(\operatorname{Gr}(E)\) are pairwise non-isomorphic. Then, there exists \(\varepsilon_{0}>0\) such that \(\pi^{*}E\to(X^{\prime},L_{\varepsilon})\) is_ 1. _stable if and only if for all_ \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\)_,_ \(\mu_{L_{\varepsilon}}(\mathcal{F})\underset{\varepsilon\to 0}{<}\mu_{L_{ \varepsilon}}(E)\)_,_ 2. _semi-stable if and only if for all_ \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\)_,_ \(\mu_{L_{\varepsilon}}(\mathcal{F})\underset{\varepsilon\to 0}{\leq}\mu_{L_{ \varepsilon}}(E)\)_,_ 3. _unstable otherwise._ Finally, we comment on previous related works. Theorem 1.1 extends results from [6, 9], where blow-ups of HYM connections at points are considered. In the present paper, we consider blow-ups along any smooth subvariety, and also cover the semi-stable situation, which is technically more involved due to the presence of automorphisms of the graded object that obstruct the linear theory. While [9] is a gluing construction, as in the similar problem of producing extremal Kahler metrics on blow-ups [2, 3, 28, 26, 8], one of the key features of our approach is to apply the implicit function theorem directly to reduce to (an \(\varepsilon\)-dependent family of) finite-dimensional GIT problems on a Kuranishi space parametrising small deformations of \(\operatorname{Gr}(E)\), as in [27, 8]. We then use the new technology developed in [24] to control the perturbations of the associated moment maps when \(\omega_{\varepsilon}\) varies. This is where our hypothesis on \(\operatorname{Aut}(\operatorname{Gr}(E))\) being abelian is used. 
The main new technical input comes from the fact that the underlying smooth manifold \(X\) is fixed in [24], while it varies with the blow-up, which requires a careful analysis of the operator introduced to apply the implicit function theorem. **Outline:** In Section 2, we recall basic material about HYM connections and stability. We then carry out the analysis of the linear theory on the blow-up in Section 2.3. Relying on this, in Section 3 we explain how to reduce the problem to finding zeros of finite-dimensional moment maps. Then, we conclude the proof of Theorem 1.1 and its corollaries in Section 4. **Acknowledgments:** The authors benefited from visits to LMBA and Gothenburg University; they would like to thank these welcoming institutions for providing stimulating work environments. The idea of this project emerged from discussions with Lars Martin Sektnan, whom we thank for sharing his ideas and insight. CT is partially supported by the grants MARGE ANR-21-CE40-0011 and BRIDGES ANR-FAPESP ANR-21-CE40-0017. ## 2. Preliminaries In Sections 2.1 and 2.2 we introduce the notions of HYM connections and slope stability, together with some general results, and refer the reader to [18] and [16]. From Section 2.3 onwards we specialise the discussion to blow-ups. In particular, in Section 2.3.2, we provide various asymptotic expressions for the linearisation of the HYM equation on the blow-up. Those results will be used in Section 3. ### The hermitian Yang-Mills equation Let \(E\to X\) be a holomorphic vector bundle over a compact Kahler manifold \(X\). A hermitian metric on \(E\) is _Hermite-Einstein_ with respect to a Kahler metric with Kahler form \(\omega\) if the curvature \(F_{h}\in\Omega^{2}\left(X,\operatorname{End}E\right)\) of the corresponding Chern connection satisfies \[\Lambda_{\omega}\left(iF_{h}\right)=c\operatorname{Id}_{E} \tag{2.1}\] for some real constant \(c\). Equivalently, if \(h\) is some hermitian metric on the smooth complex vector bundle underlying \(E\), a hermitian connection \(A\) on \((E,h)\) is said to be _hermitian Yang-Mills_ if it satisfies \[\left\{\begin{array}{rcl}F_{A}^{0,2}&=&0,\\ \Lambda_{\omega}\left(iF_{A}\right)&=&c\operatorname{Id}_{E}.\end{array}\right.\] The first equation of this system implies that the \((0,1)\)-part of \(A\) determines a holomorphic structure on \(E\), while the second implies that \(h\) is Hermite-Einstein for this holomorphic structure. We will try to find hermitian Yang-Mills connections within the complex gauge group orbit, which we now define. The (hermitian) _complex gauge group_ is \[\mathscr{G}^{\mathbb{C}}(E,h)=\Gamma\left(\operatorname{GL}\left(E,\mathbb{C }\right)\right)\cap\Gamma\left(\operatorname{End}_{H}(E,h)\right),\] where \(\operatorname{End}_{H}(E,h)\) stands for the hermitian endomorphisms of \((E,h)\). Note that if \(\bar{\partial}\) is the Dolbeault operator defining the holomorphic structure on \(E\), then \(f\circ\bar{\partial}\circ f^{-1}\) defines a holomorphic structure on \(E\) biholomorphic to the original one. Let \(d_{A}=\partial_{A}+\bar{\partial}_{A}\) be the Chern connection of \((E,h)\) with respect to the original complex structure (that is, \(\bar{\partial}_{A}=\bar{\partial}\)). 
Then the Chern connection \(A^{f}\) of \(h\) with respect to \(f\circ\bar{\partial}\circ f^{-1}\) is \[d_{A^{f}}=(f^{*})^{-1}\circ\partial_{A}\circ(f^{*})+f\circ\bar{\partial}\circ f ^{-1}.\] Solving the hermitian Yang-Mills equation is equivalent to solving \[\Psi(s)=c\operatorname{Id}_{E}\] where \[\begin{array}{rcl}\Psi:&\operatorname{Lie}(\mathscr{G}^{\mathbb{C}}(E,h))& \longrightarrow&\operatorname{Lie}(\mathscr{G}^{\mathbb{C}}(E,h))\\ s&\longmapsto&i\Lambda_{\omega}(F_{A^{\operatorname{exp}(s)}}),\end{array}\] and where \(\operatorname{Lie}(\mathscr{G}^{\mathbb{C}}(E,h)):=i\Gamma(\operatorname{ End}_{H}(E,h))\) is the tangent space to \(\mathscr{G}^{\mathbb{C}}(E,h)\) at the identity. For a connection \(A\) on \(E\), the Laplace operator \(\Delta_{A}\) is \[\Delta_{A}=i\Lambda_{\omega}\left(\bar{\partial}_{A}\partial_{A}-\partial_{A} \bar{\partial}_{A}\right). \tag{2.2}\] If \(A_{\operatorname{End}E}\) denotes the connection induced by \(A\) on \(\operatorname{End}E\), then: **Lemma 2.1**.: _If \(A\) is the Chern connection of \((E,\overline{\partial},h)\), the differential of \(\Psi\) at the identity is_ \[d\Psi_{\operatorname{Id}_{E}}=\Delta_{A_{\operatorname{End}\,E}}.\] _If moreover \(A\) is assumed to be hermitian Yang-Mills, then the kernel of \(\Delta_{A_{\operatorname{End}\,E}}\) acting on \(\Gamma(\operatorname{End}(E))\) is given by the Lie algebra \(\mathfrak{aut}(E)\) of the space of automorphisms \(\operatorname{Aut}(E)\) of \((E,\overline{\partial})\)._ The last statement about the kernel follows from the Kahler identities and the Akizuki-Nakano identity, which imply \(\Delta_{A_{\operatorname{End}\,E}}=\partial_{A}^{*}\partial_{A}+\bar{\partial} _{A}^{*}\bar{\partial}_{A}\), the two terms of which are equal if \(A\) is hermitian Yang-Mills. The operator \(\Delta_{A_{\operatorname{End}\,E}}\) being elliptic and self-adjoint, \(\mathfrak{aut}(E)\) will then appear as a cokernel in the linear theory for perturbations of hermitian Yang-Mills connections. ### Slope stability We recall some basic facts about slope stability, as introduced in [22, 29], and refer the interested reader to [16] for a detailed treatment. We denote by \(L:=[\omega]\) the polarisation of the \(n\)-dimensional Kahler manifold \(X\). **Definition 2.2**.: For \(\mathcal{E}\) a torsion-free coherent sheaf on \(X\), the slope \(\mu_{L}(\mathcal{E})\in\mathbb{Q}\) (with respect to \(L\)) is given by the intersection formula \[\mu_{L}(\mathcal{E})=\frac{\deg_{L}(\mathcal{E})}{\operatorname{rank}( \mathcal{E})}, \tag{2.3}\] where \(\operatorname{rank}(\mathcal{E})\) denotes the rank of \(\mathcal{E}\) while \(\deg_{L}(\mathcal{E})=c_{1}(\mathcal{E})\cdot L^{n-1}\) stands for its degree. Then, \(\mathcal{E}\) is said to be _slope semi-stable_ (resp. _slope stable_) with respect to \(L\) if for any coherent subsheaf \(\mathcal{F}\) of \(\mathcal{E}\) with \(0<\operatorname{rank}(\mathcal{F})<\operatorname{rank}(\mathcal{E})\), one has \[\mu_{L}(\mathcal{F})\leq\mu_{L}(\mathcal{E})\ (\text{resp. }\mu_{L}( \mathcal{F})<\mu_{L}(\mathcal{E})).\] A direct sum of slope stable sheaves of the same slope is said to be _slope polystable_. In this paper, we will often omit "slope" and simply refer to stability of a sheaf, the polarisation being implicit. We will make the standard identification of a holomorphic vector bundle \(E\) with its sheaf of sections, and thus talk about slope stability notions for vector bundles as well. 
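As a simple worked illustration of Definition 2.2 (our example, not taken from the references): let \(E=\mathcal{O}(a)\oplus\mathcal{O}(b)\) on \((\mathbb{P}^{n},L=\mathcal{O}(1))\) with \(a\geq b\), and let \(H\) denote the hyperplane class. Then \(c_{1}(E)=(a+b)H\), so \[\deg_{L}(E)=(a+b)\,H\cdot H^{n-1}=a+b,\qquad\mu_{L}(E)=\frac{a+b}{2}.\] The line subbundle \(\mathcal{O}(a)\subset E\) has slope \(a\geq\frac{a+b}{2}\), with equality if and only if \(a=b\). Hence \(E\) is semi-stable if and only if \(a=b\), in which case it is polystable, being a direct sum of (stable) line bundles of the same slope.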
In that case slope stability relates nicely to differential geometry via the Hitchin-Kobayashi correspondence: **Theorem 2.3** ([17, 20, 30, 12]).: _There exists a Hermite-Einstein metric on \(E\) with respect to \(\omega\) if and only if \(E\) is polystable with respect to \(L\)._ We will be mostly interested in semi-stable vector bundles. A _Jordan-Holder filtration_ for a torsion-free sheaf \(\mathcal{E}\) is a filtration by coherent subsheaves: \[0=\mathcal{F}_{0}\subset\mathcal{F}_{1}\subset\ldots\subset \mathcal{F}_{\ell}=\mathcal{E}, \tag{2.4}\] such that the corresponding quotients, \[\mathcal{G}_{i}=\frac{\mathcal{F}_{i}}{\mathcal{F}_{i-1}}, \tag{2.5}\] for \(i=1,\ldots,\ell\), are stable with slope \(\mu_{L}(\mathcal{G}_{i})=\mu_{L}(\mathcal{E})\). In particular, the graded object of this filtration \[\operatorname{Gr}(\mathcal{E}):=\bigoplus_{i=1}^{l}\mathcal{G}_{i} \tag{2.6}\] is polystable. From [16, Section 1], we have the standard existence and uniqueness result: **Proposition 2.4**.: _Any semi-stable coherent torsion-free sheaf \(\mathcal{E}\) on \((X,L)\) admits a Jordan-Holder filtration, and the graded object \(\operatorname{Gr}(\mathcal{E})\) of such filtrations is unique up to isomorphism._ When \(E\) is locally free and semi-stable, we say that it is _sufficiently smooth_ if \(\operatorname{Gr}(E)\) is locally free. In that case, we denote by \(\mathfrak{E}_{[\omega]}\) the set of holomorphic subbundles of \(E\) built out of successive extensions of some of the stable components of \(\operatorname{Gr}(E)\). Equivalently, \(\mathfrak{E}_{[\omega]}\) is the set of holomorphic subbundles of \(E\) arising in a Jordan-Holder filtration for \(E\). Finally, we recall that a necessary condition for \(E\) to be stable is simplicity, that is, \(\operatorname{Aut}(E)=\mathbb{C}^{*}\cdot\operatorname{Id}_{E}\). ### Geometry of the blow-up We now consider an \(m\)-dimensional complex submanifold \(Z\subset X\) of codimension \(r=n-m\geq 2\) and the blow-up map \[\pi:\operatorname{Bl}_{Z}(X)\to X.\] We will denote by \(X^{\prime}=\operatorname{Bl}_{Z}(X)\) the blown-up manifold and by \(Z^{\prime}=\pi^{-1}(Z)\) the exceptional divisor. We denote by \[L_{\varepsilon}:=\pi^{*}L-\varepsilon[Z^{\prime}]\] a polarisation on \(X^{\prime}\), for \(0<\varepsilon\ll 1\). Let \(E\to X\) be a holomorphic vector bundle, and denote by \(E^{\prime}=\pi^{*}E\) the pulled-back bundle. For any holomorphic subbundle \(F\subset E\), the intersection numbers \(\mu_{L_{\varepsilon}}(\pi^{*}E)-\mu_{L_{\varepsilon}}(\pi^{*}F)\) admit expansions in \(\varepsilon\), with first term given by \(\mu_{L}(E)-\mu_{L}(F)\). For that reason, given the Hitchin-Kobayashi correspondence in Theorem 2.3, semi-stability of \(E\) on \((X,L)\) is a necessary condition for its pullback \(E^{\prime}\) to admit an HYM connection with respect to a Kahler metric in \(L_{\varepsilon}\), for all \(0<\varepsilon\ll 1\). Another necessary condition is simplicity of \(E^{\prime}\), which, by Hartogs' theorem, is equivalent to simplicity of \(E\). Then, natural candidates to test for stability of \(E^{\prime}\) are given by the pullbacks of elements in \(\mathfrak{E}_{[\omega]}\), and Condition (1.1) is clearly necessary for \(E^{\prime}\) to be stable in the polarisations we consider, and thus to admit an HYM connection. Hence, we will assume \(E\) to be simple, semi-stable, and to satisfy (1.1). We now turn back to the differential geometry of the blow-up. #### 2.3.1. 
Decomposition of spaces of sections We have a commutative diagram: \[\begin{array}{ccc}Z^{\prime}&\stackrel{{\iota}}{{ \longrightarrow}}&X^{\prime}\\ \downarrow&&\downarrow\\ Z&\stackrel{{\iota_{0}}}{{\longrightarrow}}&X\end{array}\] where \(\iota_{0}\) and \(\iota\) denote the inclusions, while the vertical arrows are given by the projection map \(\pi\). We then have a pullback map on sections \[\pi^{*}:\Gamma(X,\operatorname{End}(E))\longrightarrow\Gamma(X^{\prime}, \operatorname{End}(\pi^{*}E))\] as well as a restriction map: \[\iota^{*}:\Gamma(X^{\prime},\operatorname{End}(\pi^{*}E))\longrightarrow \Gamma(Z^{\prime},\operatorname{End}(\iota^{*}\pi^{*}E)).\] Our goal now is to fit those maps into a short exact sequence that will, in the end, split the space \(\Gamma(X^{\prime},\operatorname{End}(\pi^{*}E))\). If \(N_{Z}=TX_{|Z}/TZ\) denotes the normal bundle of \(Z\) in \(X\), then \(Z^{\prime}\simeq\mathbb{P}(N_{Z})\), and we can fix a \((1,1)\)-form \(\lambda\in c_{1}(\mathcal{O}_{\mathbb{P}(N_{Z})}(1))\) that restricts to Kahler metrics on the fibers of \(\mathbb{P}(N_{Z})\to Z\). We also fix a Kahler form \(\omega\in c_{1}(L)\) on \(X\), and consider its restriction to \(Z\). We then have a Kahler \(\mathbb{CP}^{r-1}\)-fibration: \[\pi:(Z^{\prime},\lambda)\longrightarrow(Z,\omega).\] By averaging along the fibers as described in [25, Section 2.3], we obtain a splitting \[\Gamma(Z^{\prime},\operatorname{End}(\iota^{*}\pi^{*}E))=\pi^{*}(\Gamma(Z, \operatorname{End}(\iota^{*}_{0}E)))\oplus\Gamma_{0}(Z^{\prime},\operatorname{ End}(\iota^{*}\pi^{*}E)). \tag{2.7}\] We will omit the \(\iota^{*}\) and \(\pi^{*}\) to simplify notation. Using the projection on the second factor \[p_{0}:\Gamma(Z^{\prime},\operatorname{End}(E))\to\Gamma_{0}(Z^{\prime}, \operatorname{End}(E))\] in (2.7), we deduce a short exact sequence: \[0\longrightarrow\Gamma(X,\operatorname{End}(E))\xrightarrow{\pi^{*}}\Gamma(X ^{\prime},\operatorname{End}(E))\xrightarrow{p_{0}\circ\iota^{*}}\Gamma_{0}(Z ^{\prime},\operatorname{End}(E))\longrightarrow 0.\] We can actually split this sequence by means of a linear extension operator \[\iota_{*}:\Gamma_{0}(Z^{\prime},\operatorname{End}(E))\longrightarrow\Gamma( X^{\prime},\operatorname{End}(E))\] such that \[p_{0}\circ\iota^{*}\circ\iota_{*}=\operatorname{Id}.\] This can be done using bump functions and a standard partition of unity argument. The outcome is an isomorphism: \[\begin{array}{ccc}\Gamma(X^{\prime},\operatorname{End}(E))&\longrightarrow& \Gamma(X,\operatorname{End}(E))\oplus\Gamma_{0}(Z^{\prime},\operatorname{End}( E))\\ s&\longmapsto&(s-\iota_{*}\circ p_{0}\circ\iota^{*}s\,,\ p_{0}\circ\iota^{*}s), \end{array} \tag{2.8}\] with inverse map \((s_{X},s_{Z})\mapsto(\pi^{*}s_{X}+\iota_{*}s_{Z})\). This splits the Lie algebra of gauge transformations, and will be used to identify the contributions coming from \(X\) and from \(Z^{\prime}\) in the \(\varepsilon\)-expansion of the linearisation, which we describe in the next section. From now on, by abuse of notation, we will consider the spaces \(\Gamma(X,\operatorname{End}(E))\) and \(\Gamma_{0}(Z^{\prime},\operatorname{End}(E))\) as subspaces of \(\Gamma(X^{\prime},\operatorname{End}(\pi^{*}E))\), and write \(s=s_{X}+s_{Z}\) for the decomposition of an element \(s\in\Gamma(X^{\prime},\operatorname{End}(E))\). #### 2.3.2. 
Decomposition of the Laplace operator We extend \(\lambda\) to a closed \((1,1)\)-form over \(X^{\prime}\) as in [31, Section 3.3] and consider the family of Kahler metrics on \(X^{\prime}\): \[\omega_{\varepsilon}=\pi^{*}\omega+\varepsilon\lambda\in c_{1}(L_{\varepsilon }),\,0<\varepsilon\ll 1.\] Let \(A\) be a Hermitian connection on \(E\), which we pull back to \(X^{\prime}\) and extend to the bundle \(\operatorname{End}(\pi^{*}E)\). We will now study the Laplace operator \[\Delta_{\varepsilon}s=i\Lambda_{\varepsilon}(\bar{\partial}_{A}\partial_{A}- \partial_{A}\bar{\partial}_{A})s\] acting on the various components of \(s=s_{X}+s_{Z}\in\Gamma(X^{\prime},\operatorname{End}(E))\), where \(\Lambda_{\varepsilon}\) is the Lefschetz operator for the metric \(\omega_{\varepsilon}\). For this, we need to introduce an elliptic operator on \(Z^{\prime}\). The _vertical Laplace operator_, denoted \[\Delta_{\mathcal{V}}:\Gamma_{0}\left(Z^{\prime},\operatorname{End}(E)\right) \to\Gamma_{0}\left(Z^{\prime},\operatorname{End}(E)\right),\] is defined by the following procedure. Let \(\sigma\in\Gamma_{0}(Z^{\prime},\operatorname{End}(E))\). Over a point \(x\in Z\), take the restriction \(\sigma_{x}\) of \(\sigma\) to \(Z^{\prime}_{x}=\pi^{-1}(x)\), and consider \(\sigma_{x}\) as a map to \(\mathbb{C}^{p}\) with components \(\sigma^{i}_{x}\) in a trivialisation \(\pi^{*}\operatorname{End}(E)_{x}\cong\mathbb{C}^{p}\) of the restriction of \(\pi^{*}\operatorname{End}(E)\) to the fibre \(Z^{\prime}_{x}\) of \(Z^{\prime}\to Z\). Define \[\left(\Delta_{\mathcal{V}}\left(\sigma\right)\right)^{i}_{x}=\Delta_{\left( \lambda\right)_{|Z^{\prime}_{x}}}\left(\sigma^{i}_{x}\right),\] for \(\Delta_{\lambda}\) the Laplacian of the Kahler form \(\lambda\) on \(Z^{\prime}_{x}\), and glue these together to form a section of \(\pi^{*}\operatorname{End}(E)\). As in [25, Section 4.1], one easily obtains that this construction is independent of the trivialisation chosen, and sends smooth sections to smooth sections. In the following lemma, the superscript \(l\) (or \(l+2\)) stands for the Sobolev completion with respect to some \(L^{2,l}\) Sobolev norm; those norms can be produced from the metrics \(\omega\), \(\lambda\), and any metric \(h\) on \(E\), together with the covariant derivatives given by \(A\). **Lemma 2.5**.: _[_25_, Section 4.1]_ _The vertical Laplacian_ \[\Delta_{\mathcal{V}}:\Gamma_{0}\left(Z^{\prime},\operatorname{End}(E)\right)^{ l+2}\to\Gamma_{0}\left(Z^{\prime},\operatorname{End}(E)\right)^{l}\] _is invertible._ In the following statements, if \(\mathcal{A}\) denotes a second-order operator acting on sections, then in an expression of the form \[\mathcal{A}(\sigma)=\sigma_{0}+\varepsilon\sigma_{1}+\ldots+ \varepsilon^{d-1}\sigma_{d-1}+\mathcal{O}(\varepsilon^{d})\] the term \(\mathcal{O}(\varepsilon^{d})\) will stand for \(\sigma_{d}\cdot\varepsilon^{d}\), where \(\sigma_{d}\) is a section whose \(L^{2,l}\) Sobolev norm is bounded by the \(L^{2,l+2}\) Sobolev norm of \(\sigma\). 
**Lemma 2.6**.: _If \(s_{Z}=\iota_{*}\sigma_{Z}\) for \(\sigma_{Z}\in\Gamma(Z^{\prime},\operatorname{End}(E))\), then_ \[(p_{0}\circ\iota^{*})\Delta_{\varepsilon}(\iota_{*}\sigma_{Z})= \varepsilon^{-1}\Delta_{\mathcal{V}}\sigma_{Z}+\mathcal{O}(1).\] Proof.: We introduce the operator \(D\) given by \[Ds_{Z}=i(\bar{\partial}_{A}\partial_{A}-\partial_{A}\bar{\partial}_{A})s_{Z}.\] The Laplacian \(\Delta_{\varepsilon}\) satisfies, on \(X^{\prime}\), \[\Delta_{\varepsilon}s_{Z}\,\omega_{\varepsilon}^{n}=nDs_{Z}\wedge \omega_{\varepsilon}^{n-1},\] or equivalently \[\Delta_{\varepsilon}s_{Z}=\frac{n\ Ds_{Z}\wedge(\omega+\varepsilon \lambda)^{n-1}}{(\omega+\varepsilon\lambda)^{n}}.\] We note that \(\omega\) is a Kahler form on \(X\), but on \(X^{\prime}\) it is degenerate along the fibre directions of the exceptional divisor \(Z^{\prime}\). Then \((\iota^{*}\omega)^{m+1}=0\in\Omega^{2(m+1)}(Z^{\prime})\), and at \(x\in Z^{\prime}\subseteq X^{\prime}\), \(\omega^{m+2}=0\). Then, expanding \((\omega+\varepsilon\lambda)^{n-1}\) and \((\omega+\varepsilon\lambda)^{n}\) gives \[\iota^{*}\Delta_{\varepsilon}s_{Z}=(n-m-1)\varepsilon^{-1}\frac{ Ds_{Z}\wedge\omega^{m+1}\wedge\lambda^{n-m-2}}{\omega^{m+1}\wedge\lambda^{n-m- 1}}+\mathcal{O}(1).\] Restricting to \(Z^{\prime}\), the connection \(1\)-forms of \(A\) vanish, so \(\iota^{*}Ds_{Z}=i\partial\bar{\partial}\sigma_{Z}\), acting on the coefficient functions of \(\sigma_{Z}\). On the other hand, by considering a convenient orthonormal frame at \(x\in Z^{\prime}\), we see that \(\iota^{*}\Delta_{\varepsilon}\iota_{*}\sigma_{Z}=\varepsilon^{-1}\Delta_{ \mathcal{V}}\sigma_{Z}+\mathcal{O}(1)\). In the next lemma, we denote by \(\Delta_{\varepsilon}s_{Z}=(\Delta_{\varepsilon}s_{Z})_{X}+(\Delta_{ \varepsilon}s_{Z})_{Z}\) the decomposition according to (2.8). **Lemma 2.7**.: _For \(s_{Z}=\iota_{*}\sigma_{Z}\) with \(\sigma_{Z}\in\Gamma(Z^{\prime},\operatorname{End}(E))\), we have_ \[(\Delta_{\varepsilon}s_{Z})_{X}=\mathcal{O}(1).\] Proof.: By definition, \((\Delta_{\varepsilon}s_{Z})_{X}=\pi^{*}\phi\) for some \(\phi\in\Gamma(X,\operatorname{End}(E))\). As we also have \[(\Delta_{\varepsilon}s_{Z})_{X} = (\operatorname{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\Lambda_{ \varepsilon}Ds_{Z},\] we deduce that the section \(\phi\) is the continuous extension of \(\pi_{*}(\operatorname{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\Lambda_{\varepsilon} Ds_{Z}\) across \(Z\subseteq X\). On \(X^{\prime}\setminus Z^{\prime}\) we have \[\Lambda_{\varepsilon}Ds_{Z} = n\frac{Ds_{Z}\wedge(\omega^{n-1}+\mathcal{O}(\varepsilon))}{ \omega^{n}+\mathcal{O}(\varepsilon)}=\mathcal{O}(1).\] As \(\pi_{*}(\operatorname{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\) is \(\mathcal{O}(1)\), the result follows. By the previous two lemmas, in the decomposition \[s=s_{X}+s_{Z},\] the term \(\Delta_{\varepsilon}s_{Z}\) lies in the subspace \(\Gamma_{0}(Z^{\prime},\operatorname{End}(E))\subseteq\Gamma(X^{\prime}, \operatorname{End}(E))\) up to higher-order terms in \(\varepsilon\). For \(s_{X}\in\Gamma(X,\operatorname{End}(E))\), \[\Delta_{\varepsilon}s_{X}=(\Delta_{\varepsilon}s_{X})_{X}+(\Delta_{ \varepsilon}s_{X})_{Z}\] where \((\Delta_{\varepsilon}s_{X})_{Z}=\iota_{*}(p_{0}\circ\iota^{*})\Delta_{ \varepsilon}s_{X}\). We first consider \(\iota^{*}\Delta_{\varepsilon}s_{X}\). 
**Lemma 2.8**.: _For \(s_{X}=\pi^{*}\sigma_{X}\in\Gamma(X,\operatorname{End}(E))\subseteq\Gamma(X^{ \prime},\operatorname{End}(E))\),_ \[\iota^{*}\Delta_{\varepsilon}s_{X}=(m+1)\frac{Ds_{X}\wedge\omega^{m}\wedge \lambda^{n-m-1}}{\omega^{m+1}\wedge\lambda^{n-m-1}}+\mathcal{O}(\varepsilon).\]

Proof.: Firstly, \(s_{X}=\pi^{*}\sigma_{X}\), and the connection \(A\) is pulled back from \(X\), so \(Ds_{X}\) is basic for the projection to \(X\) and \(Ds_{X}\wedge\omega^{m+1}=0\) at points in \(Z^{\prime}\). Secondly, we note that \(\omega^{m+1}\wedge\lambda^{n-m-1}\) is a volume form on \(X^{\prime}\), in a neighbourhood of \(Z^{\prime}\). Then, the result follows similarly to the previous lemma.

For the final term \((\Delta_{\varepsilon}s_{X})_{X}\), we introduce \(\Delta_{X}\), the Laplace operator of \(A\) on \(\operatorname{End}(E)\to(X,\omega)\): \[\begin{array}{ccc}\Delta_{X}:&\Gamma\left(X,\operatorname{End}(E)\right)& \to&\Gamma\left(X,\operatorname{End}(E)\right)\\ &\sigma&\mapsto&i\Lambda_{\omega}(\bar{\partial}_{A}\partial_{A}-\partial_{A} \bar{\partial}_{A})\sigma.\end{array}\]

**Lemma 2.9**.: _For \(s_{X}=\pi^{*}\sigma_{X}\in\Gamma(X,\operatorname{End}(E))\subseteq\Gamma(X^{ \prime},\operatorname{End}(E))\),_ \[(\Delta_{\varepsilon}s_{X})_{X}=\pi^{*}(\Delta_{X}\sigma_{X})+\mathcal{O}( \varepsilon).\]

Proof.: There is \(\phi\in\Gamma(X,\operatorname{End}(E))\) such that \((\Delta_{\varepsilon}s_{X})_{X}=\pi^{*}\phi\). The element \(\phi\) can be identified as the lowest order term in the asymptotic expansion in \(\varepsilon\) of \((\Delta_{\varepsilon}\pi^{*}\sigma_{X})_{X}\). However, we have at \(x\in X^{\prime}\setminus Z^{\prime}\): \[\Delta_{\varepsilon}\pi^{*}\sigma_{X}=n\frac{D\pi^{*}\sigma_{X}\wedge(\omega+ \varepsilon\lambda)^{n-1}}{(\omega+\varepsilon\lambda)^{n}}=n\pi^{*}\frac{D \sigma_{X}\wedge\omega^{n-1}}{\omega^{n}}+\mathcal{O}(\varepsilon)\] so we see that the lowest order term in the expansion of \((\Delta_{\varepsilon}\pi^{*}\sigma_{X})_{X}\) is \(\Delta_{X}\sigma_{X}\).

Summarizing the above calculations, with respect to the decomposition \(s=s_{X}+s_{Z}\) produced by (2.8), the operator \(\Delta_{\varepsilon}\) takes the form \[\left(\begin{array}{cc}\Delta_{X}&0\\ \mathcal{L}&\varepsilon^{-1}\Delta_{\mathcal{V}}\end{array}\right) \tag{2.9}\] plus higher order terms, for some second order operator \(\mathcal{L}\). In the next section, we will apply the previous lemmas and the resulting form for \(\Delta_{\varepsilon}\) to the pullback of a HYM connection \(A_{0}\) on the graded object \(\operatorname{Gr}(E)\) of \(E\).

## 3. The perturbation argument

The goal of this section is to reduce the problem of finding a zero for the operator \(s\mapsto i\Lambda_{\omega_{\varepsilon}}(F_{A^{\operatorname{exp}(s)}})-c_{ \varepsilon}\mathrm{Id}\) in a gauge group orbit to a finite dimensional problem. The ideas here go back to [13, 27], and our framework will be that of [5].

### Kuranishi slice

We start from a simple semi-stable and sufficiently smooth holomorphic vector bundle \(E\) on \((X,L)\), with \(L=[\omega]\). Denote by \(\operatorname{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\) the associated polystable graded object, with stable components \(\mathcal{G}_{i}\). We let \(\overline{\partial}_{0}\) be the Dolbeault operator of \(\operatorname{Gr}(E)\).
The automorphism group \(G:=\operatorname{Aut}(\operatorname{Gr}(E))\) is a reductive Lie group with Lie algebra \(\mathfrak{g}:=\mathfrak{aut}(\operatorname{Gr}(E))\) and compact form \(K\subset G\), with \(\mathfrak{k}:=\operatorname{Lie}(K)\). The Dolbeault operator \(\overline{\partial}_{E}\) on \(E\) is given by \[\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma\] where \(\gamma\in\Omega^{0,1}(X,\operatorname{Gr}(E)^{*}\otimes\operatorname{Gr}(E))\) can be written \[\gamma=\sum_{i<j}\gamma_{ij}\] for (possibly vanishing) \(\gamma_{ij}\in\Omega^{0,1}(X,\mathcal{G}_{j}^{*}\otimes\mathcal{G}_{i})\). Elements \[g:=g_{1}\operatorname{Id}_{\mathcal{G}_{1}}+\ldots+g_{\ell}\operatorname{Id }_{\mathcal{G}_{\ell}}\in G,\] for \((g_{i})\in(\mathbb{C}^{*})^{\ell}\), act on \(\overline{\partial}_{E}\) and produce isomorphic holomorphic vector bundles in the following way: \[g\cdot\overline{\partial}_{E}=\overline{\partial}_{0}+\sum_{i<j}g_{i}g_{j}^{- 1}\gamma_{ij}. \tag{3.1}\] In particular, for \(g=(t^{\ell},t^{\ell-1},\ldots,t)\), letting \(t\to 0\), we can see \(E\) as a small complex deformation of \(\operatorname{Gr}(E)\). Our starting point to produce HYM connections on \(E^{\prime}=\pi^{*}E\) over \(X^{\prime}\) will then be the HYM connection \(A_{0}\) on \(\operatorname{Gr}(E)\to X\) given by the Chern connection of \((\overline{\partial}_{0},h_{0})\), where \(h_{0}\) is a Hermite-Einstein metric on the polystable bundle \(\operatorname{Gr}(E)\).

Rather than working with the single bundle \(E\), we will consider the family of bundles given by the \(G\)-action on Dolbeault operators. This will require the following proposition, whose proof follows as in [19] (see also [5, 10] for a detailed treatment). We introduce the notation \[V:=H^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\] for the space of harmonic \((0,1)\)-forms with values in \(\operatorname{End}(\operatorname{Gr}(E))\), where the metrics used to compute adjoints are \(\omega\) on \(X\) and \(h_{0}\) on \(\operatorname{Gr}(E)\). Note that the \(G\)-action on \(E\) induces a linear representation \(G\to\operatorname{GL}(V)\).

**Proposition 3.1**.: _There exists a holomorphic \(K\)-equivariant map_ \[\Phi:B\to\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\] _from a ball around the origin \(B\subset V\) such that:_
1. \(\Phi(0)=0\)_;_
2. \(Z:=\{b\in B\,|\;(\overline{\partial}_{0}+\Phi(b))^{2}=0\}\) _is a complex subspace of_ \(B\)_;_
3. _if_ \((b,b^{\prime})\in Z^{2}\) _lie in the same_ \(G\)_-orbit, then_ \(\overline{\partial}_{0}+\Phi(b)\) _and_ \(\overline{\partial}_{0}+\Phi(b^{\prime})\) _induce isomorphic holomorphic bundle structures;_
4. _The_ \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E))\)_-orbit of any small complex deformation of_ \(\operatorname{Gr}(E)\) _intersects_ \(\Phi(Z)\)_._

Here, \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E))=\Gamma\left(\operatorname{GL} \left(\operatorname{Gr}(E),\mathbb{C}\right)\right)\) stands for the full gauge group of \(\operatorname{Gr}(E)\). The space \(Z\) corresponds to the space of integrable Dolbeault operators in the image of \(\Phi\), and \(\Phi(B)\) is a _slice_ for the gauge group action on the set of Dolbeault operators nearby \(\overline{\partial}_{0}\).
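As a simple illustration of this degeneration picture, suppose \(\ell=2\), so that \(E\) is an extension \(0\to\mathcal{G}_{1}\to E\to\mathcal{G}_{2}\to 0\) with \(\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma_{12}\). Applying (3.1) with \(g=(t,1)\) gives \[g\cdot\overline{\partial}_{E}=\overline{\partial}_{0}+t\,\gamma_{12},\] so as \(t\to 0\) the \(G\)-orbit of \(\overline{\partial}_{E}\) accumulates on \(\overline{\partial}_{0}\), exhibiting \(E\) as a small complex deformation of \(\operatorname{Gr}(E)=\mathcal{G}_{1}\oplus\mathcal{G}_{2}\) whose gauge orbit, by point (4), meets the slice \(\Phi(Z)\).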
We will then lift the slice to the space \(\Omega^{0,1}(X^{\prime},\operatorname{End}(\pi^{*}\operatorname{Gr}(E)))\) on the blown-up manifold \(X^{\prime}\), and denote by \(\widehat{\Phi}\) the map \[\pi^{*}\circ\Phi:B\to\Omega^{0,1}(X^{\prime},\operatorname{End}(\operatorname{ Gr}(E))),\] where to ease notation we omitted \(\pi^{*}\) for the pulled back bundle. The map \(\widehat{\Phi}\) might no longer provide a slice for the gauge-group action on \(X^{\prime}\), but what matters for us is that its image will contain all elements in the \(G\)-orbit of \(\pi^{*}\overline{\partial}_{E}\) close to \(\pi^{*}\overline{\partial}_{0}\).

### Perturbing the slice

The next step will be to perturb \(\widehat{\Phi}\) to reduce our problem to a finite dimensional one. The strategy to do this in family with respect to the parameter \(\varepsilon\) was inspired by [7, 8, 24]. Given the metrics \(\omega\) on \(X\setminus Z\), \(\lambda\) on \(Z^{\prime}\), and \(h=\pi^{*}h_{0}\) on \(E\), together with the covariant derivatives given by \(\nabla_{A_{0}}\), we can introduce \(L^{2,l}\) Sobolev norms on spaces of sections. We will denote by \(\mathcal{E}^{l}\) the \(L^{2,l}\) Sobolev completion of any space of sections \(\mathcal{E}\). In what follows, \(l\in\mathbb{N}^{*}\) will be assumed large enough for elements in \(\mathcal{E}^{l}\) to admit as much regularity as required.

**Proposition 3.2**.: _Up to shrinking \(B\), there is \(\varepsilon_{0}>0\) and a continuously differentiable map_ \[\check{\Phi}:[0,\varepsilon_{0})\times B\to\Omega^{0,1}(X^{\prime}, \operatorname{End}(\operatorname{Gr}(E)))^{l}\] _such that for all \((\varepsilon,b)\in[0,\varepsilon_{0})\times B\), if \(\check{A}_{\varepsilon,b}\) is the Chern connection of \((\pi^{*}\overline{\partial}_{0}+\check{\Phi}(\varepsilon,b),h)\):_
1. \(\pi^{*}\overline{\partial}_{0}+\check{\Phi}(\varepsilon,b)\) _and_ \(\pi^{*}\overline{\partial}_{0}+\widehat{\Phi}(b)\) _induce isomorphic holomorphic structures._
2. \(\Lambda_{\varepsilon}iF_{\check{A}_{\varepsilon,b}}\in\mathfrak{k}\)_._

**Remark 3.3**.: By elliptic regularity, elements in the image of \(\check{\Phi}\) will actually be smooth. However, regularity of the map \(\check{\Phi}\) is with respect to the \(L^{2,l}\) Sobolev norm.

We will use the implicit function theorem to prove Proposition 3.2, and will need the following lemma, where we still denote by \(A_{0}\) its pullback to \(\pi^{*}\operatorname{Gr}(E)\), and use the notation \(A_{0}^{s_{X}+\varepsilon s_{Z}}\) for \(A_{0}^{\exp(s_{X}+\varepsilon s_{Z})}\).

**Lemma 3.4**.: _The map_ \[\Psi:[0,\varepsilon_{0})\times\Gamma(X,\operatorname{End}_{H}(E) )^{l+2}\times\Gamma_{0}(Z^{\prime},\operatorname{End}_{H}(E))^{l+2} \longrightarrow \Omega^{0}(X^{\prime},\operatorname{End}_{H}(E))^{l},\] \[(\varepsilon\,,\,s_{X}\,,\,s_{Z}) \mapsto \Lambda_{\varepsilon}F_{A_{0}^{s_{X}+\varepsilon s_{Z}}}-c_{ \varepsilon}\mathrm{Id}\] _is continuously differentiable._

Above, the topological constants \(c_{\varepsilon}\) are given by \[c_{\varepsilon}=\frac{2\pi n}{\operatorname{vol}_{\omega_{\varepsilon}}(X^{ \prime})}\frac{\left(c_{1}(E)\cup[\omega_{\varepsilon}]^{n-1}\right)[X^{ \prime}]}{\operatorname{rank}(E)}.\]

Proof.: Note first that for \(\varepsilon=0\), \(\Psi(0,s_{X},s_{Z})=\pi^{*}(\Lambda_{\omega}F_{A_{0}^{s_{X}}}-c_{0}\operatorname {Id}_{E})\) and is well defined.
Then, recall that if \(f=\exp(s)\) for \(s\in\Gamma(X^{\prime},\operatorname{End}_{H}(E))\), the curvature of \(f\cdot A_{0}\) is given by \[F_{A_{0}^{s}}=F_{f\cdot A_{0}}=F_{A_{0}}+(\bar{\partial}\partial-\partial\bar{ \partial})s+(\partial s-\bar{\partial}s)\wedge(\partial s-\bar{\partial}s),\] where \(\partial\) and \(\bar{\partial}\) stand for the \((1,0)\) and \((0,1)\) components of \(d_{A_{0}}\) (see e.g. [5, Section 1]). In particular, taking \(s=s_{X}+\varepsilon s_{Z}\), \[F_{A_{0}^{s}} = F_{A_{0}}+(\bar{\partial}\partial-\partial\bar{\partial})s_{X}+ \varepsilon(\bar{\partial}\partial-\partial\bar{\partial})s_{Z}+(\partial s_{ X}-\bar{\partial}s_{X})\wedge(\partial s_{X}-\bar{\partial}s_{X})\] \[+\varepsilon(\partial s_{X}-\bar{\partial}s_{X})\wedge(\partial s _{Z}-\bar{\partial}s_{Z})+\varepsilon(\partial s_{Z}-\bar{\partial}s_{Z}) \wedge(\partial s_{X}-\bar{\partial}s_{X})\] \[+\varepsilon^{2}(\partial s_{Z}-\bar{\partial}s_{Z})\wedge( \partial s_{Z}-\bar{\partial}s_{Z}).\] That is, ignoring the first term \(F_{A_{0}}\), there are six remaining terms that we denote \(F_{A^{s}}^{i}\), for \(i=1,\ldots,6\). For each term we consider the factors coming from \(Z^{\prime}\) and from \(X\) (using (2.8)) in \(\Lambda_{\varepsilon}F_{A^{s}}^{i}\) and can conclude that \(\Psi\) is smooth. For example, for the term \(F_{A^{s}}^{2}=\varepsilon(\bar{\partial}\partial-\partial\bar{\partial})s_{Z}\), \[\Lambda_{\varepsilon}F_{A^{s}}^{2} = n\frac{\varepsilon Ds_{Z}\wedge(\omega+\varepsilon\lambda)^{n- 1}}{(\omega+\varepsilon\lambda)^{n}},\] \[\iota^{*}\Lambda_{\varepsilon}F_{A^{s}}^{2} = n\frac{\varepsilon Ds_{Z}\wedge\left(\binom{n-1}{m+1}\omega^{ m+1}\wedge(\varepsilon\lambda)^{n-m-2}+\mathcal{O}(\varepsilon^{n-m-1}) \right)}{\binom{n}{m+1}\omega^{m+1}\wedge(\varepsilon\lambda)^{n-m-1}+ \mathcal{O}(\varepsilon^{n-m})},\] \[= (n-m-1)\frac{Ds_{Z}\wedge\left(\omega^{m+1}\wedge\lambda^{n-m-2} +\mathcal{O}(\varepsilon)\right)}{\omega^{m+1}\wedge\lambda^{n-m-1}+\mathcal{O }(\varepsilon)},\] noting that here \(\mathcal{O}(\varepsilon)\) denotes a polynomial in \(\varepsilon\) whose coefficients are \(2n\)-forms on a neighbourhood of \(Z^{\prime}\), such that \(\mathcal{O}(0)=0\). We also note that \(\omega^{m+1}\wedge\lambda^{n-m-1}\) is a volume form on a neighbourhood of \(Z^{\prime}\). We conclude that \[(\Lambda_{\varepsilon}F_{A^{s}}^{2})_{Z}=\iota_{*}(p_{0}\circ\iota^{*}) \Lambda_{\varepsilon}F_{A^{s}}^{2}\] is a smooth function of \((\varepsilon,s_{Z})\) with values in \(\Gamma_{0}(Z^{\prime},\mathrm{End}(E))\). The \(X\)-component of \(\Lambda_{\varepsilon}F_{A^{s}}^{2}\), \[(\Lambda_{\varepsilon}F_{A^{s}}^{2})_{X} = (\mathrm{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\Lambda_{\varepsilon} F_{A^{s}}^{2},\] is of the form \(\pi^{*}\phi\) for some \(\phi\in\Gamma(X,\mathrm{End}(E))\). The section \(\phi\) is given as the continuous extension of \(\pi_{*}(\mathrm{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\Lambda_{\varepsilon}F_{A^{ s}}^{2}\) across \(Z\subseteq X\). On \(X^{\prime}\setminus Z^{\prime}\) we have \[\Lambda_{\varepsilon}F_{A^{s}}^{2} = n\frac{Ds_{Z}\wedge(\omega^{n-1}+\mathcal{O}(\varepsilon))}{ \omega^{n}+\mathcal{O}(\varepsilon)},\] which depends smoothly on \(s_{Z}\) and \(\varepsilon\). As \(\pi_{*}(\mathrm{Id}-\iota_{*}(p_{0}\circ\iota^{*}))\) is linear, \(\phi\) depends smoothly on these variables too.
Using that \(s_{X}\) is a pulled back section, at points in \(Z^{\prime}\) we have \(Ds_{X}\wedge\omega^{m+1}=0\), from which we deduce \(\iota^{*}\Lambda_{\varepsilon}F_{A^{s}}^{1}=\mathcal{O}(1)\) and \(\iota^{*}\Lambda_{\varepsilon}F_{A^{s}}^{3}=\mathcal{O}(1)\). This shows, as for \((\Lambda_{\varepsilon}F_{A^{s}}^{2})_{Z}\), that \((\Lambda_{\varepsilon}F_{A^{s}}^{1})_{Z}\) and \((\Lambda_{\varepsilon}F_{A^{s}}^{3})_{Z}\) are \(\mathcal{C}^{1}\). The other terms \(F_{A^{s}}^{i}\) can be dealt with in a similar manner.

Proof of Proposition 3.2.: For \(b\in B\), we will denote by \(A_{b}\) the Chern connection associated to \((\pi^{*}\overline{\partial}_{0}+\widehat{\Phi}(b),h)\), where \(h=\pi^{*}h_{0}\). Note that in particular \(A_{0}\) is the pullback of a HYM connection on \(\mathrm{Gr}(E)\). The aim is to apply the implicit function theorem to perturb \(A_{b}\) along gauge orbits in order to satisfy point (2) of the statement. The key will be to consider small perturbations along the exceptional divisor. Recall the splitting from Section 2.3.1 induced by the operator \(\iota_{*}\): \[i\Gamma(X^{\prime},\mathrm{End}_{H}(\mathrm{Gr}(E),h))=i\Gamma(X,\mathrm{End}_{ H}(\mathrm{Gr}(E),h))\oplus i\Gamma_{0}(Z^{\prime},\mathrm{End}_{H}( \mathrm{Gr}(E),h)),\] that we will simply denote \[\Gamma(X^{\prime})=\Gamma(X)\oplus\Gamma_{0}(Z^{\prime}).\] For \((s_{X},s_{Z})\in\Gamma(X)\oplus\Gamma_{0}(Z^{\prime})\), and \(\varepsilon\) small enough, we define \[A_{b}(\varepsilon,s_{X},s_{Z})=A_{b}^{s_{X}+\varepsilon s_{Z}},\] where \(s_{X}+\varepsilon s_{Z}\) stands for \(\pi^{*}s_{X}+\varepsilon\,\iota_{*}s_{Z}\in\Gamma(X^{\prime})\). By the regularity of \(\widehat{\Phi}\), the assignment \((b,\varepsilon,s_{X},s_{Z})\mapsto A_{b}(\varepsilon,s_{X},s_{Z})-A\) (resp. \((b,\varepsilon,s_{X},s_{Z})\mapsto F_{A_{b}(\varepsilon,s_{X},s_{Z})}\)) is smooth from \(B\times[0,\varepsilon_{0})\times\Gamma(X^{\prime})^{l}\) to \(\Omega^{1}(X^{\prime},\operatorname{End}(E))^{l-1}\) (resp. \(\Omega^{2}(X^{\prime},\operatorname{End}(E))^{l-2}\)), for any \(\varepsilon_{0}\) small enough. Arguing as in Lemma 3.4, using the fact that the perturbations along \(Z^{\prime}\) are \(\mathcal{O}(\varepsilon)\), we deduce that the operator \[\begin{array}{rcc}\widehat{\Psi}:&B\times[0,\varepsilon_{0})\times\Gamma(X ^{\prime})^{l}&\to&\Gamma(X^{\prime})^{l-2}\\ &(b,\varepsilon,s_{X},s_{Z})&\mapsto&\Lambda_{\varepsilon}iF_{A_{b}( \varepsilon,s_{X},s_{Z})}-c_{\varepsilon}\operatorname{Id}_{E}\end{array}\] is a \(\mathcal{C}^{1}\) map. As \(A_{0}\) is HYM on \(\operatorname{Gr}(E)\to X\), we have \(\widehat{\Psi}(0)=0\). By the various lemmas of Section 2.3.2, its differential in the \((s_{X},s_{Z})\) direction at zero is given by the map \[\begin{array}{rcc}\Gamma(X)^{l}\times\Gamma_{0}(Z^{\prime})^{l}&\to&\Gamma( X)^{l-2}\times\Gamma_{0}(Z^{\prime})^{l-2}\\ (s_{X},s_{Z})&\mapsto&\left[\begin{array}{cc}\Delta_{X}s_{X}&0\\ *&\Delta_{\mathcal{V}}s_{Z}\end{array}\right]\end{array}\] which, from Lemma 2.1 and Lemma 2.5, has cokernel \(i\mathfrak{k}\times\{0\}\).
Then, by a standard projection argument onto some orthogonal complement of \(i\mathfrak{k}\), we can apply the implicit function theorem and obtain a \(\mathcal{C}^{1}\) map \((\varepsilon,b)\mapsto s(\varepsilon,b)\) such that \(\widehat{\Psi}(b,\varepsilon,s(\varepsilon,b))\) lies in \(\mathfrak{k}\), and conclude the proof by setting \[\check{\Phi}(\varepsilon,b)=(A_{b}(\varepsilon,s(\varepsilon,b)))^{0,1}-A^{0, 1}.\]

We will now explain that for each \(\varepsilon\in[0,\varepsilon_{0})\), the map \[\begin{array}{rcc}\mu_{\varepsilon}:&B&\to&\mathfrak{k}\\ b&\mapsto&\Lambda_{\varepsilon}iF_{\hat{A}_{\varepsilon,b}}-c_{\varepsilon} \operatorname{Id}_{E}\end{array} \tag{3.2}\] is a moment map for the \(K\)-action on \(B\), for suitable symplectic forms \(\Omega_{\varepsilon}\) on \(B\). Recall from [4, 11] that for \(\varepsilon\in(0,\varepsilon_{0})\), the gauge action of \(\mathscr{G}^{\mathbb{C}}(\pi^{*}\operatorname{Gr}(E),h)\) on the affine space \(\overline{\partial}_{0}+\Omega^{0,1}(X^{\prime},\operatorname{End}( \operatorname{Gr}(E)))\) is hamiltonian for the symplectic form given, for \((a,b)\in\Omega^{0,1}(X^{\prime},\operatorname{End}(\operatorname{Gr}(E)))^{2}\), by \[\Omega_{\varepsilon}^{D}(a,b)=\int_{X^{\prime}}\operatorname{trace}(a\wedge b^ {*})\wedge\frac{\omega_{\varepsilon}^{n-1}}{(n-1)!}, \tag{3.3}\] with equivariant moment map \(\overline{\partial}\mapsto\Lambda_{\varepsilon}F_{A_{\overline{\partial}}}\), where \(A_{\overline{\partial}}\) stands for the Chern connection of \((\overline{\partial},h)\). Here, we identified the Lie algebra of \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E),h)\) with its dual by means of the invariant pairing \[\langle s_{1},s_{2}\rangle_{\varepsilon}:=\int_{X^{\prime}}\operatorname{ trace}(s_{1}\cdot s_{2}^{*})\ \frac{\omega_{\varepsilon}^{n}}{n!}. \tag{3.4}\] Note that the above expressions admit continuous extensions for \(\varepsilon=0\) when we restrict to the \(\mathscr{G}^{\mathbb{C}}(\operatorname{Gr}(E),h_{0})\) action on \(\overline{\partial}_{0}+\Omega^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E )))\) and integrate over \((X,\omega)\).

**Remark 3.5**.: We used above the Chern correspondence, for \(h\) fixed, between Dolbeault operators and hermitian connections to express the infinite dimensional moment map picture on the space of Dolbeault operators.

**Proposition 3.6**.: _Up to shrinking \(\varepsilon_{0}\) and \(B\), for all \(\varepsilon\in[0,\varepsilon_{0})\), the map \(\overline{\partial}_{0}+\check{\Phi}(\varepsilon,\cdot)\) is a \(K\)-equivariant map from \(B\) to \(\overline{\partial}_{0}+\Omega^{0,1}(X^{\prime},\operatorname{End}(\operatorname {Gr}(E)))\) whose image is a symplectic submanifold for \(\Omega^{D}_{\varepsilon}\)._

Proof.: The equivariance follows easily from Proposition 3.1 and from the construction of \(\check{\Phi}\) in the proof of Proposition 3.2. For \(\varepsilon=0\), the map \(\check{\Phi}(0,\cdot)\) is obtained by perturbing \(\widehat{\Phi}=\pi^{*}\circ\Phi\). But \(\Phi\) is complex analytic with, by construction, injective differential at the origin (see e.g. the original proof [19] or [10]). So is \(\widehat{\Phi}\), and thus \(\widehat{\Phi}(B)\) is a complex subspace of \(\Omega^{0,1}(X^{\prime},\operatorname{End}(\pi^{*}\operatorname{Gr}(E)))\).
We deduce that, up to shrinking \(B\), \(\widehat{\Phi}\) induces an embedding of \(B\) such that the restriction of \(\Omega^{D}_{0}\) to \(\widehat{\Phi}(B)\) is non-degenerate (recall that \(\Omega^{D}_{0}\) is a Kähler form on the space of Dolbeault operators on \(X\)). As \(\check{\Phi}(\varepsilon,\cdot)\) is obtained by a small and continuous perturbation of \(\widehat{\Phi}\), and as being a symplectic embedding is an open condition, the result follows.

From this result, we deduce that the map \(\mu_{\varepsilon}\) defined in (3.2) is a moment map for the \(K\)-action on \(B\) with respect to the pulled back symplectic form \[\Omega_{\varepsilon}:=\check{\Phi}(\varepsilon,\cdot)^{*}\Omega^{D}_{ \varepsilon},\] where we use the pairing \(\langle\cdot,\cdot\rangle_{\varepsilon}\) defined in (3.4) to identify \(\mathfrak{k}\) with its dual. From the discussion of Section 3.1, \(E\) is obtained as a small complex deformation of \(\operatorname{Gr}(E)\), and thus by Proposition 3.1, \(\overline{\partial}_{E}\) is gauge equivalent to an element \(\overline{\partial}_{b}:=\overline{\partial}_{0}+\Phi(b)\). Then, from properties of the maps \(\Phi\) and \(\check{\Phi}\), for all \(\varepsilon\in[0,\varepsilon_{0})\) and for all \(g\in G\), \(\pi^{*}\overline{\partial}_{E}\) will be gauge equivalent to \(\pi^{*}\overline{\partial}_{0}+\check{\Phi}(\varepsilon,g\cdot b)\), provided \(g\cdot b\in B\). As a zero of \(\mu_{\varepsilon}\) corresponds to a HYM connection on \((X^{\prime},\omega_{\varepsilon})\), we are left with the problem of finding a zero for \(\mu_{\varepsilon}\) in the \(G\)-orbit of \(b\).

## 4. Proof of the main theorem

We carry on with the notation from the last section, and our goal now is to prove Theorem 1.1. This is where we will need to assume that in \(\operatorname{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\), all stable components \(\mathcal{G}_{i}\) are non-isomorphic. This implies that \[\mathfrak{g}=\mathfrak{aut}(\operatorname{Gr}(E))=\bigoplus_{i=1}^{\ell} \mathbb{C}\cdot\operatorname{Id}_{\mathcal{G}_{i}}\] and thus its compact form \(\mathfrak{k}\) is abelian, with \(K\) a compact torus.

### The local convex cone associated to the \(K\)-action

In order to prove the existence of a zero of \(\mu_{\varepsilon}\) in \(\mathcal{Z}:=G\cdot b\cap B\), we start by describing, at least locally, the images of \(\mathcal{Z}\) by the maps \((\mu_{\varepsilon})_{\varepsilon\in[0,\varepsilon_{0})}\). In this section, relying on [24], we will see that those images all contain translations of (a neighbourhood of the apex of) the same convex cone. By simplicity of \(E\), the stabiliser of \(b\) under the \(K\)-action is reduced to the \(S^{1}\)-action induced by gauge transformations of the form \(e^{i\theta}\operatorname{Id}_{E}\). As those elements fix all the points in \(B\), elements in \(S^{1}\cdot\operatorname{Id}_{E}\) will play no role in the arguments that follow. Hence, we will work instead with the quotient torus \(K_{0}:=K/S^{1}\cdot\operatorname{Id}_{E}\). Note that the constants \(c_{\varepsilon}\) that appear in the maps \(\mu_{\varepsilon}\) in (3.2) are chosen so that \(\langle\mu_{\varepsilon},\operatorname{Id}_{E}\rangle_{\varepsilon}=0\).
As the \(\mu_{\varepsilon}\) take values in \(\mathfrak{k}\), this is equivalent to saying that \(\operatorname{trace}(\mu_{\varepsilon})=0\). Hence, setting \(\mathfrak{k}_{0}\subset\mathfrak{k}\) to be the set of trace-free elements in \(\bigoplus_{i=1}^{\ell}i\mathbb{R}\cdot\operatorname{Id}_{\mathcal{G}_{i}}\), we will consider the family of moment maps \(\mu_{\varepsilon}:B\to\mathfrak{k}_{0}\) for the \(K_{0}\)-action, and we may, and will, assume that the stabiliser of \(b\) is trivial. Then, by using the inner product \(\langle\cdot,\cdot\rangle_{\varepsilon}\) to identify \(\mathfrak{k}_{0}\simeq\mathfrak{k}_{0}^{*}\), we can see the maps \(\mu_{\varepsilon}\) as taking values in \(\mathfrak{k}_{0}^{*}\): \[\mu_{\varepsilon}^{*}:B\to\mathfrak{k}_{0}^{*}.\] There is a weight decomposition of \(V\) under the abelian \(K\)-action \[V:=\bigoplus_{m\in M}V_{m} \tag{4.1}\] for \(M\subset\mathfrak{k}_{0}^{*}\) the lattice of characters of \(K_{0}\). In the matrix block decomposition of \(V=H^{0,1}(X,\operatorname{End}(\operatorname{Gr}(E)))\) induced by \(\operatorname{Gr}(E)=\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\), using the product hermitian metric \(h_{0}\), we have \[V=\bigoplus_{1\leq i,j\leq\ell}H^{0,1}(X,\mathcal{G}_{i}^{*}\otimes\mathcal{ G}_{j}).\] The action of \(g\in K_{0}\) on \(V_{ij}:=H^{0,1}(X,\mathcal{G}_{i}^{*}\otimes\mathcal{G}_{j})\) is, by Equation (3.1): \[g\cdot\gamma_{ij}=g_{i}g_{j}^{-1}\gamma_{ij}. \tag{4.2}\] Thus, in the weight space decomposition (4.1), \(V_{ij}\) is the eigenspace with weight \[m_{ij}:=(0,\ldots,0,1,0,\ldots,0,-1,0,\ldots,0) \tag{4.3}\] where \(+1\) appears in the \(i\)-th position and \(-1\) in the \(j\)-th position. If we decompose \(b\) accordingly as \[b=\sum_{ij}b_{ij}, \tag{4.4}\] where \(b_{ij}\in V_{ij}\) is non-zero, as \(\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma\) with \(\gamma\) upper triangular, or equivalently as \(E\) is obtained as successive extensions of the stable components \(\mathcal{G}_{i}\)'s, only indices \((i,j)\) with \(i<j\) will appear in (4.4). From now on, we will restrict our setting to \[B\cap\bigoplus_{b_{ij}\neq 0}V_{ij},\] which we still denote by \(B\). That is, we only consider weight spaces that appear in the decomposition of \(b\). Similarly, we use the notation \(V\) for \(\bigoplus_{b_{ij}\neq 0}V_{ij}\). To sum up, we are in the following setting:
* \((R_{1})\) The compact torus \(K_{0}\) acts effectively and holomorphically on the complex vector space \(V\);
* \((R_{2})\) There is a continuous family of symplectic forms \((\Omega_{\varepsilon})_{0\leq\varepsilon<\varepsilon_{0}}\) on \(B\subset V\) around the origin, with respect to which the \(K_{0}\)-action is hamiltonian;
* \((R_{3})\) The point \(b\in B\) has trivial stabiliser, \(0\) in its \(K_{0}^{\mathbb{C}}\)-orbit closure, and for every weight \(m_{ij}\in M\) appearing in the weight space decomposition of \(V\), \(b_{ij}\neq 0\);
* \((R_{4})\) The restriction of the symplectic form \(\Omega_{0}\) to the \(K_{0}^{\mathbb{C}}\)-orbit of \(b\) is non-degenerate.
This last point follows as in the proof of Proposition 3.6.
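For instance, if \(\ell=3\) and all the \(b_{ij}\), \(i<j\), are non-zero, the weights appearing in the decomposition of \(b\) are, by (4.3), \[m_{12}=(1,-1,0),\qquad m_{13}=(1,0,-1),\qquad m_{23}=(0,1,-1),\] and they satisfy the relation \(m_{13}=m_{12}+m_{23}\); this additivity of the weights is what drives the combinatorial arguments of the next sections.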
We set \[\overline{\mathcal{Z}}:=B\cap(\overline{K_{0}^{\mathbb{C}}\cdot b}).\] We also introduce \[\sigma:=\sum_{b_{ij}\neq 0}\mathbb{R}_{+}\cdot m_{ij}\subset\mathfrak{k}_{0}^{*}\] with \(\{m_{ij},\,b_{ij}\neq 0\}\) the set of weights that appear in the decomposition of \(b\in V\), and for \(\eta>0\) \[\sigma_{\eta}:=\sum_{b_{ij}\neq 0}[0,\eta)\cdot m_{ij}\subset\mathfrak{k}_{0}^{*}.\] Note that by the local version of Atiyah and Guillemin-Sternberg's convexity theorem, there exists \(\eta>0\) such that \(\mu_{\varepsilon}^{*}(0)+\sigma_{\eta}\subset\mu_{\varepsilon}^{*}(B)\) for all \(\varepsilon\) small enough (see the equivariant Darboux Theorem [14, Theorem 3.2] combined with the local description of linear hamiltonian torus actions [14, Section 7.1]). By [24, Proposition 4.6], the properties \((R_{1})-(R_{4})\) listed above actually imply:

**Proposition 4.1**.: _Up to shrinking \(B\) and \(\varepsilon_{0}\), there exists \(\eta>0\) such that for all \(\varepsilon\in[0,\varepsilon_{0})\),_ \[\mu_{\varepsilon}^{*}(0)+\operatorname{Int}(\sigma_{\eta})\subset\mu_{ \varepsilon}^{*}(\mathcal{Z})\] _and_ \[\mu_{\varepsilon}^{*}(0)+\sigma_{\eta}\subset\mu_{\varepsilon}^{*}(\overline{ \mathcal{Z}}).\]

**Remark 4.2**.: The fact that the interior of \(\mu_{\varepsilon}^{*}(0)+\sigma_{\eta}\) is included in the image of the \(K_{0}^{\mathbb{C}}\)-orbit of \(b\) by \(\mu_{\varepsilon}^{*}\) is not stated explicitly in [24], but follows from the discussion at the beginning of the proof of [24, Proposition 4.6].

### Solving the problem

From Proposition 4.1, to prove the existence of a zero of \(\mu_{\varepsilon}\) in \(\mathcal{Z}\), it is enough to show that \(-\mu_{\varepsilon}^{*}(0)\in\operatorname{Int}(\sigma_{\eta})\), which reduces to \(-\mu_{\varepsilon}^{*}(0)\in\operatorname{Int}(\sigma)\) for small enough \(\varepsilon\). Arguing as in [24, Lemma 4.8], \(\sigma\) and its dual \[\sigma^{\vee}:=\{v\in\mathfrak{k}_{0}\mid\langle m,v\rangle\geq 0\ \forall m\in\sigma\}\] are strongly convex rational polyhedral cones of dimension \(\ell-1\). Note that here the pairing \(\langle\cdot,\cdot\rangle\) is the natural duality pairing. By duality, \(\sigma=(\sigma^{\vee})^{\vee}\), and we are left with proving \[-\mu_{\varepsilon}^{*}(0)\in\operatorname{Int}((\sigma^{\vee})^{\vee}).\] The cone \(\sigma^{\vee}\) can be written \[\sigma^{\vee}=\sum_{\underline{a}\in\mathcal{A}}\mathbb{R}_{+}\cdot v_{ \underline{a}}\] for a finite set of generators \(\{v_{\underline{a}}\}_{\underline{a}\in\mathcal{A}}\subset\mathfrak{k}_{0}\). Hence, our goal now is to show that for all \(\underline{a}\in\mathcal{A}\), \(\langle\mu_{\varepsilon}^{*}(0),v_{\underline{a}}\rangle<0\), which by construction is equivalent to \[\langle\mu_{\varepsilon}(0),v_{\underline{a}}\rangle_{\varepsilon}<0, \tag{4.5}\] under the assumption that for any \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\), \[\mu_{L_{\varepsilon}}(\mathcal{F})\underset{\varepsilon\to 0}{<}\mu_{L_{ \varepsilon}}(E). \tag{4.6}\] We will then study in more detail Equations (4.5) and (4.6). In order to simplify the notation, in what follows, we will assume that all the stable components of \(\operatorname{Gr}(E)\) have rank one, so that \(\operatorname{trace}(\operatorname{Id}_{\mathcal{G}_{i}})=1\) for \(1\leq i\leq\ell\). The general case can easily be adapted, and is left to the reader.
#### 4.2.1. Condition (4.5): generators of the dual cone

We will give here a more precise form for the generators \(\{v_{\underline{a}}\}_{\underline{a}\in\mathcal{A}}\) of \(\sigma^{\vee}\). Recall from [15, Section 1.2] the method for finding such generators: as \(\sigma\) is \((\ell-1)\)-dimensional, each of its facets is generated by \(\ell-2\) elements amongst its generators \((m_{ij})\). Then, a generator \(v_{\underline{a}}\) for \(\sigma^{\vee}\) will be an "inward pointing normal" to such a facet. Hence, if \[v_{\underline{a}}=\sum_{i=1}^{\ell}a_{i}\operatorname{Id}_{\mathcal{G}_{i}}\] is a generator of \(\sigma^{\vee}\), there exists a set \(\mathcal{S}:=\{m_{ij}\}\) of \(\ell-2\) generators of \(\sigma\) such that \[\forall\;m_{ij}\in\mathcal{S},\;\langle m_{ij},v_{\underline{a}}\rangle=0.\] Moreover, \(v_{\underline{a}}\in\mathfrak{k}_{0}\) should be trace-free, and as we assume here \(\operatorname{rank}(\mathcal{G}_{i})=1\) for all stable components, this gives \[\sum_{i=1}^{\ell}a_{i}=0.\]

**Lemma 4.3**.: _Up to scaling \(v_{\underline{a}}\), there exists a partition \(\{1,\ldots,\ell\}=I^{-}\cup I^{+}\) such that for all \(i\in I^{-}\), \(a_{i}=-\frac{1}{\sharp I^{-}}\) and for all \(i\in I^{+}\), \(a_{i}=\frac{1}{\sharp I^{+}}\), where \(\sharp\) stands for the cardinality of a set._

Proof.: The key is to observe that if \(m_{ij},m_{jk}\in\mathcal{S}\), then \(m_{ik}\notin\mathcal{S}\). Indeed, by (4.3), \(m_{ij}+m_{jk}=m_{ik}\), and those are generators of the cone. Equivalently, if \(m_{ij},m_{ik}\in\mathcal{S}\), then \(m_{jk}\notin\mathcal{S}\). We then assign an oriented graph \(G_{\underline{a}}\) to \(v_{\underline{a}}\). The vertices are labelled \(a_{1}\) to \(a_{\ell}\), and we draw an oriented edge from \(a_{i}\) to \(a_{j}\) if \(a_{i}=a_{j}\) and \(i<j\). For each \(m_{ij}\in\mathcal{S}\), \(\langle m_{ij},v_{\underline{a}}\rangle=0\) gives \(a_{i}=a_{j}\). Hence, \(G_{\underline{a}}\) has at least \(\ell-2\) edges. To prove the result, it is enough to show that \(G_{\underline{a}}\) has \(2\) connected components. Indeed, we can then set \(I^{-}=\{i\,|\,a_{i}<0\}\) and \(I^{+}=\{i\,|\,a_{i}>0\}\). All elements \(a_{i}\) for \(i\in I^{-}\) will correspond to the same connected component and be equal, and similarly with \(i\in I^{+}\). As \(\sum_{i=1}^{\ell}a_{i}=0\), we obtain the result by rescaling.

Proving that \(G_{\underline{a}}\) has two connected components is then routine. It has \(\ell\) vertices and \(\ell-2\) oriented edges, with the rule that if there is an edge from \(a_{i}\) to \(a_{j}\) and an edge from \(a_{i}\) to \(a_{k}\), then there is no edge from \(a_{j}\) to \(a_{k}\). We consider the number of edges that start from \(a_{1}\). If there are \(\ell-2\) of those, then the connected component of \(a_{1}\) has at least \(\ell-1\) vertices, and we are left with at most \(1\) singleton for the other component. The fact that \(v_{\underline{a}}\) is trace-free imposes that there are at least \(2\) connected components, and we are done in that case. Then, if there are \(\ell-2-k\) edges from \(a_{1}\), its connected component has at least \(\ell-1-k\) elements, and we are left with at most \(k+1\) vertices and \(k\) edges for the other components. But it is easy to show, by induction on \(k\), that the rule stated above implies that there will be at most \(1\) connected component for such a graph with \(k+1\) vertices and \(k\) edges, and we are done.
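To make Lemma 4.3 concrete, consider again the \(\ell=3\) example with all \(b_{ij}\neq 0\). Writing \(v=(a_{1},a_{2},a_{3})\) in the basis \((\operatorname{Id}_{\mathcal{G}_{1}},\operatorname{Id}_{\mathcal{G}_{2}},\operatorname{Id}_{\mathcal{G}_{3}})\), the conditions \(\langle m_{ij},v\rangle=a_{i}-a_{j}\geq 0\) give \[\sigma^{\vee}=\{(a_{1},a_{2},a_{3})\,|\,a_{1}\geq a_{2}\geq a_{3},\ a_{1}+a_{2}+a_{3}=0\},\] whose two generators, normalized as in the lemma, are \[v=\left(\tfrac{1}{2},\tfrac{1}{2},-1\right)\ (I^{+}=\{1,2\},\,I^{-}=\{3\}),\qquad v=\left(1,-\tfrac{1}{2},-\tfrac{1}{2}\right)\ (I^{+}=\{1\},\,I^{-}=\{2,3\}).\]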
We can now translate condition (4.5): by Lemma 4.3, it is equivalent to \[\frac{\sum_{i\in I^{+}}\langle\mu_{\varepsilon}(0),\operatorname{Id}_{ \mathcal{G}_{i}}\rangle_{\varepsilon}}{\sharp I^{+}}<\frac{\sum_{i\in I^{-}} \langle\mu_{\varepsilon}(0),\operatorname{Id}_{\mathcal{G}_{i}}\rangle_{ \varepsilon}}{\sharp I^{-}}. \tag{4.7}\]

#### 4.2.2. Condition (4.6): one-parameter degenerations

We will associate to each generator \(v_{\underline{a}}\) of \(\sigma^{\vee}\) a subsheaf \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\). Geometrically, the idea is that \(v_{\underline{a}}\in\mathfrak{k}_{0}\) generates a one-parameter subgroup of \(K_{0}\) and a degeneration of \(E\) to \(\mathcal{F}\oplus E/\mathcal{F}\), to which is assigned the Hilbert-Mumford weight \(\mu_{L_{\varepsilon}}(\mathcal{F})-\mu_{L_{\varepsilon}}(E)<0\). We let \(v_{\underline{a}}=\sum_{i=1}^{\ell}a_{i}\operatorname{Id}_{\mathcal{G}_{i}} \in\sigma^{\vee}\) be a generator as above, and define \[\mathcal{F}_{\underline{a}}=\bigoplus_{i\in I^{+}}\mathcal{G}_{i},\] as a _smooth_ complex vector bundle, and will show that \(\overline{\partial}_{E}(\mathcal{F}_{\underline{a}})\subset\Omega^{0,1}(X^{ \prime},\mathcal{F}_{\underline{a}})\). This implies that \(\mathcal{F}_{\underline{a}}\in\mathfrak{E}_{[\omega]}\) as a _holomorphic_ vector bundle, with Dolbeault operator the restriction of \(\overline{\partial}_{E}\). Recall that \(\overline{\partial}_{E}=\overline{\partial}_{0}+\gamma=\overline{\partial}_{ 0}+\sum_{b_{ij}\neq 0}\gamma_{ij}\), that is, by choice of \(b\), the weights that appear in the weight decomposition of \(\gamma\) are the same as those that appear in the decomposition of \(b\). In the matrix block decomposition given by \(\bigoplus_{i=1}^{\ell}\mathcal{G}_{i}\), the operator \(\overline{\partial}_{0}\) is diagonal, and thus sends \(\mathcal{F}_{\underline{a}}\) to \(\Omega^{0,1}(X^{\prime},\mathcal{F}_{\underline{a}})\). We need to show that for each \(j\in I^{+}\), \(\gamma(\mathcal{G}_{j})\subset\Omega^{0,1}(X^{\prime},\mathcal{F}_{\underline {a}})\). As \(v_{\underline{a}}\in\sigma^{\vee}\), it satisfies, for every generator \(m_{ij}\) of \(\sigma\): \[\langle m_{ij},v_{\underline{a}}\rangle\geq 0,\] that is, for all \((i,j)\) with \(i<j\) and \(b_{ij}\neq 0\), \[a_{i}-a_{j}\geq 0.\] As \(j\in I^{+}\), this implies \(a_{i}\geq a_{j}>0\). Hence, if \(i<j\) is such that \(b_{ij}\neq 0\), then \(i\in I^{+}\). Equivalently, for \(i<j\), \(i\in I^{-}\) implies \(\gamma_{ij}=0\), and thus we see that \(\gamma(\mathcal{G}_{j})\subset\Omega^{0,1}(X^{\prime},\mathcal{F}_{\underline {a}})\), and hence \(\overline{\partial}_{E}(\mathcal{F}_{\underline{a}})\subset\Omega^{0,1}(X^{ \prime},\mathcal{F}_{\underline{a}})\). Then we have \(\mathcal{F}_{\underline{a}}\in\mathfrak{E}_{[\omega]}\) and Condition (4.6) gives \[\mu_{L_{\varepsilon}}(\mathcal{F}_{\underline{a}})\underset{\varepsilon\to 0}{< }\mu_{L_{\varepsilon}}(E),\] which, by the see-saw property of slopes (see e.g. [25, Corollary 3.5]), gives \[\mu_{L_{\varepsilon}}(\mathcal{F}_{\underline{a}})\underset{\varepsilon\to 0}{< }\mu_{L_{\varepsilon}}(E/\mathcal{F}_{\underline{a}})\] and thus (recall we assume \(\operatorname{rank}(\mathcal{G}_{i})=1\)): \[\frac{\sum_{i\in I^{+}}\mu_{L_{\varepsilon}}(\mathcal{G}_{i})}{\sharp I^{+}} \underset{\varepsilon\to 0}{<}\frac{\sum_{i\in I^{-}}\mu_{L_{\varepsilon}}( \mathcal{G}_{i})}{\sharp I^{-}}. \tag{4.8}\]
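Continuing the \(\ell=3\) illustration above, the generator \(v=(\tfrac{1}{2},\tfrac{1}{2},-1)\) yields \(\mathcal{F}_{\underline{a}}=\mathcal{G}_{1}\oplus\mathcal{G}_{2}\), and (4.8) reads \[\frac{\mu_{L_{\varepsilon}}(\mathcal{G}_{1})+\mu_{L_{\varepsilon}}(\mathcal{G}_{2})}{2}\underset{\varepsilon\to 0}{<}\mu_{L_{\varepsilon}}(\mathcal{G}_{3}),\] while \(v=(1,-\tfrac{1}{2},-\tfrac{1}{2})\) yields \(\mathcal{F}_{\underline{a}}=\mathcal{G}_{1}\) and \[\mu_{L_{\varepsilon}}(\mathcal{G}_{1})\underset{\varepsilon\to 0}{<}\frac{\mu_{L_{\varepsilon}}(\mathcal{G}_{2})+\mu_{L_{\varepsilon}}(\mathcal{G}_{3})}{2}.\]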
#### 4.2.3. Conclusion

Recall that Equation (4.8) means that in the \(\varepsilon\)-expansion of \(\frac{\sum_{i\in I^{+}}\mu_{L_{\varepsilon}}(\mathcal{G}_{i})}{\sharp I^{+}} -\frac{\sum_{i\in I^{-}}\mu_{L_{\varepsilon}}(\mathcal{G}_{i})}{\sharp I^{-}}\), the first non-zero term is strictly negative. By Chern–Weil theory, using the fact that \(A_{0}\) and \(\check{A}_{\varepsilon,0}\) are gauge-equivalent by point (1) of Proposition 3.2, we have \[\begin{array}{rcl}\mu_{L_{\varepsilon}}(\mathcal{G}_{i})&=&c_{1}(\mathcal{G }_{i})\cdot[\omega_{\varepsilon}]^{n-1}\\ &=&\frac{1}{2\pi}\langle\mu_{\varepsilon}(0),\operatorname{Id}_{\mathcal{G}_{ i}}\rangle_{\varepsilon}+\frac{c_{\varepsilon}}{2\pi}\langle\operatorname{Id}_{E}, \operatorname{Id}_{\mathcal{G}_{i}}\rangle_{\varepsilon}.\end{array}\] Hence Inequality (4.8) implies Inequality (4.7), for \(\varepsilon\) small enough, which establishes the existence of \(b_{\varepsilon}\in\mathcal{Z}\) such that \(\mu_{\varepsilon}(b_{\varepsilon})=0\). Then, by construction, the associated connections \(\check{A}_{\varepsilon,b_{\varepsilon}}\) provide HYM connections with respect to \(\omega_{\varepsilon}\) on bundles gauge equivalent to \(E\), where the gauge equivalences are given by elements in the finite dimensional Lie group \(\operatorname{Aut}(\operatorname{Gr}(E))\).

To conclude the proof of Theorem 1.1, it then remains to show that the connections \(\check{A}_{\varepsilon,b_{\varepsilon}}\) converge to \(\pi^{*}A_{0}=\check{A}_{0,0}\) in any \(L^{2,l}\) Sobolev norm. By construction of \(\check{A}_{\varepsilon,b}\) in Proposition 3.2, it is enough to prove that \(b_{\varepsilon}\) converges to \(0\) when \(\varepsilon\to 0\). Recall from [14, Theorem 3.2 and Section 7.1] that \(B\) can be chosen so that \(\mu_{\varepsilon}^{*}\) is given by \[\mu_{\varepsilon}^{*}(b^{\prime})=\mu_{\varepsilon}^{*}(0)+\sum_{ij}||b^{ \prime}_{ij}||_{\varepsilon}^{2}\cdot m_{ij}, \tag{4.9}\] for some norm \(||\cdot||_{\varepsilon}\) that depends continuously on \(\varepsilon\). As \(\mu_{\varepsilon}(0)\underset{\varepsilon\to 0}{\rightarrow}\mu_{0}(0)=0\), the equation \(\mu_{\varepsilon}^{*}(b_{\varepsilon})=0\) implies that for all \((i,j)\), \(||(b_{\varepsilon})_{ij}||_{\varepsilon}\underset{\varepsilon\to 0}{\rightarrow}0\). As the norms \(||\cdot||_{\varepsilon}\) vary continuously, they are mutually bounded, and thus \(b_{\varepsilon}\underset{\varepsilon\to 0}{\rightarrow}0\), which concludes the proof of Theorem 1.1.

#### 4.2.4. Proof of the corollaries

We now comment on the various corollaries stated in the introduction. First, Corollary 1.3 is a direct application of Theorem 1.1, where \(E=\operatorname{Gr}(E)\) has a single stable component. Corollary 1.4 also follows directly, using Formula (1.2). What remains is to show Corollary 1.5. The only remaining case to study is when for all \(\mathcal{F}\in\mathfrak{E}_{[\omega]}\), \(\mu_{L_{\varepsilon}}(\mathcal{F})\underset{\varepsilon\to 0}{\leq}\mu_{L_{ \varepsilon}}(E)\), with at least one equality. In that situation, the discussion in the last two sections shows that \(-\mu_{\varepsilon}(0)\in\sigma\) lies in the boundary of \(\sigma\). Hence, by Proposition 4.1, there is a boundary point \(b^{\prime}\in\overline{\mathcal{Z}}\) in the orbit closure of \(b\) with \(\mu_{\varepsilon}(b^{\prime})=0\). This point corresponds to a HYM connection on a vector bundle that is then polystable for the holomorphic structure given by \(\check{A}_{\varepsilon,b^{\prime}}^{0,1}\), with respect to \(L_{\varepsilon}\).
As this bundle corresponds to a boundary point in the complex orbit of \(b\), it admits a small complex deformation to \(\pi^{*}E\). As semi-stability is an open condition, we deduce that \(\pi^{*}E\) is itself semi-stable for \(L_{\varepsilon}\).
2308.03527
Exploring ChatGPT's Empathic Abilities
Empathy is often understood as the ability to share and understand another individual's state of mind or emotion. With the increasing use of chatbots in various domains, e.g., children seeking help with homework, individuals looking for medical advice, and people using the chatbot as a daily source of everyday companionship, the importance of empathy in human-computer interaction has become more apparent. Therefore, our study investigates the extent to which ChatGPT based on GPT-3.5 can exhibit empathetic responses and emotional expressions. We analyzed the following three aspects: (1) understanding and expressing emotions, (2) parallel emotional response, and (3) empathic personality. Thus, we not only evaluate ChatGPT on various empathy aspects and compare it with human behavior but also show a possible way to analyze the empathy of chatbots in general. Our results show that in 91.7% of the cases, ChatGPT was able to correctly identify emotions and produce appropriate answers. In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases. The empathic capabilities of ChatGPT were evaluated using a set of five questionnaires covering different aspects of empathy. Even though the results show that the scores of ChatGPT are still worse than the average of healthy humans, it scores better than people who have been diagnosed with Asperger syndrome / high-functioning autism.
Kristina Schaaff, Caroline Reinig, Tim Schlippe
2023-08-07T12:23:07Z
http://arxiv.org/abs/2308.03527v3
# Exploring ChatGPT's Empathic Abilities

###### Abstract

Empathy is often understood as the ability to share and understand another individual's state of mind or emotion. With the increasing use of chatbots in various domains, e.g., children seeking help with homework, individuals looking for medical advice, and people using the chatbot as a daily source of everyday companionship, the importance of empathy in human-computer interaction has become more apparent. Therefore, our study investigates the extent to which ChatGPT based on GPT-3.5 can exhibit empathetic responses and emotional expressions. We analyzed the following three aspects: (1) understanding and expressing emotions, (2) parallel emotional response, and (3) empathic personality. Thus, we not only evaluate ChatGPT on various empathy aspects and compare it with human behavior but also show a possible way to analyze the empathy of chatbots in general. Our results show that in 91.7% of the cases, ChatGPT was able to correctly identify emotions and produce appropriate answers. In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases. The empathic capabilities of ChatGPT were evaluated using a set of five questionnaires covering different aspects of empathy. Even though the results indicate that the empathic abilities of ChatGPT are still below the average of healthy humans, the scores are better than those of people who have been diagnosed with Asperger syndrome / high-functioning autism.

empathy, chatbot, ChatGPT, emotions
ChatGPT can produce highly realistic text almost indistinguishable from human writing [18].

### _Definition of Empathy_

Empathy is a crucial component of effective communication, especially in social interactions, as it allows humans to understand and share another person's feelings [19]. However, there is no consensus on the definition of empathy [20]. One possible definition is to distinguish between _cognitive_ and _affective empathy_ [20]. _Cognitive empathy_ is the ability to understand and identify another individual's thoughts, feelings, and perspectives without necessarily experiencing the same emotions, i.e., the capability of mental _perspective taking_. It involves the capacity to recognize and interpret social cues, facial expressions, body language, and verbal communication to comprehend and infer the mental and emotional states of others [21]. _Affective empathy_, on the other hand, facilitates a deeper connection and understanding with others and involves a more visceral and personal connection to another's emotions [22]. _Affective empathy_ can be divided into _parallel emotional response_ and _reactive emotional response_. Parallel emotional responses involve responding with the same emotion as the other individual, while reactive emotional responses go beyond matching emotions, such as sympathy or compassion [12]. In our work, we evaluated ChatGPT's understanding and expression of emotions. Next, we analyzed its parallel emotional response. Finally, we covered the remaining aspects of empathy with standardized empathy questionnaires.

### _Measuring Empathy in Chatbots_

While standardized metrics exist to measure empathy in individuals, there are currently no standardized or valid methods for measuring empathy in chatbots [23].
A possible solution is to evaluate a chatbot's level of empathy by human evaluation, such as A/B tests or human ratings [24]. In A/B tests, the annotator chooses which response is more empathic, often used when comparing the level of empathy between two models. In human ratings, the annotator chooses the level of empathy based on a scale. Another way is to conduct a feature- or system-level evaluation instead. The feature-level evaluation involves assessing each component and capability of a chatbot to provide an incremental understanding of its empathic behavior, e.g., by testing the chatbot on its level of emotional communication. On the other hand, the system-level evaluation focuses on measuring the chatbot's overall perception of empathy, e.g., by conducting self-assessment empathy tests [23]. In our work, we conduct a feature-level evaluation of ChatGPT's performance in showing parallel emotional responses, further explained in Sections III and IV. Furthermore, we perform system-level evaluations by conducting four standardized empathy tests and one autism test. Several studies have found that individuals with autism may have difficulty with the cognitive component of empathy, such as _perspective taking_ and understanding others' mental states, while still being able to experience emotions and show affective empathy, such as feeling concern or compassion for others [25].

## III Understanding and Expressing Emotions

To analyze ChatGPT's ability to understand and generate emotions, our first goal was to evaluate its proficiency in rephrasing neutral sentences to express a particular emotion.

### _Experimental Setup_

For our analyses, we instructed ChatGPT to rephrase neutral sentences into six emotional sentences of the following categories: _joy_, _anger_, _fear_, _love_, _sadness_, and _surprise_. These emotions were selected following the basic emotions from the _Junto's Wheel of Emotions_ [26] for consistency with the experiments described in Section IV, where we trained a classifier using the CARER dataset [27], which contains these categories. To check ChatGPT's ability to handle neutral sentences from different domains, we used 10 sentences drawn from self-produced everyday sentences, Wikipedia, and the Amendments to the United States Constitution. We instructed ChatGPT to rephrase each neutral sentence six times--each time with a different emotion category--resulting in a total of 60 emotional sentences. Figure 1 illustrates how ChatGPT rephrased the sentence "We are celebrating my grandmother's 80th birthday today." to express _joy_ and _anger_.

Fig. 1: ChatGPT's Rephrasing of a Neutral Prompt into _Joy_ and _Anger_.

### _Experiment and Results_

To evaluate whether the prompts generated by ChatGPT match the intended emotion category, we asked three people to label each of the 60 ChatGPT-produced prompts with the most suitable emotion category out of the six categories provided. Based on the human labels, we produced the reference emotion categories using majority voting as follows: If two annotators agreed on the same emotion for one prompt, we took this emotion as the final emotion category. In case our three annotators assigned completely different emotion categories to one prompt, they discussed their decisions until they agreed on one emotion category.
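A minimal sketch of this majority-voting rule (function and label names are chosen for illustration, not taken from the authors' code):

```python
from collections import Counter

def majority_label(labels):
    """Reduce three annotator labels to one reference emotion.

    Returns the emotion chosen by at least two annotators; returns None
    when all three disagree, signalling that the annotators must discuss
    the prompt until they settle on a single category.
    """
    emotion, count = Counter(labels).most_common(1)[0]
    return emotion if count >= 2 else None

print(majority_label(["joy", "joy", "surprise"]))   # -> joy
print(majority_label(["joy", "fear", "surprise"]))  # -> None (discussion needed)
```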
In only 1.7% of the prompts, three different emotion categories were assigned. In the next step, we compared the results from the manual annotation of the prompt generated by ChatGPT to the intended emotion category. The results of the experiment are illustrated in Figure 2. The green line indicates the average classification accuracy over our six emotion categories. We see that when it comes to expressing emotions, ChatGPT can express the desired emotion with an accuracy of 91.7%. The reference was labeled differently in only 5 out of the 60 generated sentences. _Anger_, _fear_ and _love_ were produced with an accuracy of 100%, _surprise_ with 91%, _joy_ with 83%, and _sadness_ with 80%. ## IV Parallel Emotional Responding Empathic behavior in a conversation consists of two components [28]: First the emotion category of the conversational partner is identified (_cognitive empathy_). After that, a response is generated that addresses the emotion category of the conversational partner (_affective empathy_). A _parallel emotional response_ is defined as an emotional response where one individual shows the same emotion as another individual in response to a particular situation or stimulus [12]. This response can be observed when individuals share a similar emotional experience, leading to the concurrent manifestation of the same emotion in both individuals [12]. For instance, feeling _joy_ when another person expresses _joy_ is a common example of a parallel emotional response. In the following study, we focus on the analysis of ChatGPT's ability to generate parallel emotional responses. ### _Experimental Setup_ For these experiments, we used 20.3k initial prompts from conversations in the _EmpatheticDialogues_ dataset from Facebook Research1[29]--named as _Speaker_ prompts--to trigger ChatGPT with initial emotional prompts and generate a response, which we then classified and evaluated. Figure 3 demonstrates an initial joyful prompt from _EmpatheticDialogues_ and ChatGPT's parallel emotional response. Footnote 1: [https://github.com/facebookresearch/EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues) Since it was impossible to manually classify the large number of 20.3k ChatGPT-generated responses into emotion categories, we used an emotion classification system, based on the Bidirectional Encoder Representations from Transformers (BERT) [30]. The system was fine-tuned on the 16k training and 2k validation sentences of the CARE dataset2[27]. CARER consists of tweets labeled with our 6 emotion categories _love_, _joy_, _anger_, _fear_, _sadness_, and _surprise_. Our fine-tuning reached convergence after 8 epochs, resulting in a performance of 63% when applied to our 60 manually annotated prompts used in Section III-B. Analyzing this system demonstrated that the category _love_ reduced the system performance by 15% absolute. Therefore, we removed the prompts labeled with _love_ in the CARER training and validation sets and re-trained the system resulting in an accuracy of 78%. To contribute to the improvement of empathic chatbots, we share _ChatGPTsEmpatheticDialogues_--our corpus, which consists of _EmpatheticDialogues_' initial prompts, ChatGPT's responses, and the corresponding emotion categories--with the research community3. 
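A minimal sketch of this fine-tuning step, assuming the Hugging Face `datasets`/`transformers` stack and the public copy of CARER linked in Footnote 2; the batch size and other hyperparameters are illustrative, as only the 8 epochs and the removal of the _love_ class come from the text above:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# CARER ("emotion"): 16k train / 2k validation tweets with six emotion labels.
dataset = load_dataset("dair-ai/emotion")
# Label ids in this dataset: 0 sadness, 1 joy, 2 love, 3 anger, 4 fear, 5 surprise.
dataset = dataset.filter(lambda ex: ex["label"] != 2)  # drop the "love" class

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# Keep the original six-way label space so the remaining label ids stay valid.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)

args = TrainingArguments(output_dir="emotion-bert", num_train_epochs=8,
                         per_device_train_batch_size=32)
Trainer(model=model, args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"]).train()
```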
Footnote 2: [https://huggingface.co/datasets/dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion)

Footnote 3: [https://github.com/iu-ai-research/ChatGPTsEmpatheticDialogues](https://github.com/iu-ai-research/ChatGPTsEmpatheticDialogues)

### _Experiment and Results_

In our analysis, we focused on parallel emotional responses, i.e., the percentage of cases in which ChatGPT reacts with the same emotion category as the initial prompt. Table II illustrates the distribution of emotional responses to each emotion category based on our classification system's output. The results indicate that the emotional responses are strongly biased towards replying with _joy_. In 96.1% of the cases, ChatGPT's emotional response to a prompt categorized as _joy_ was _joy_. Moreover, for _anger_ and _surprise_, we observe even more responses categorized as _joy_ than as the original emotion category. About half of the initial prompts with _sadness_ and _fear_ are answered with the same emotion category. Overall, ChatGPT responds with the same emotion category as the initial prompt in 70.7% of the 20,237 responses. Figure 4 visualizes the distribution of all emotion categories produced by ChatGPT. With 40% of responses categorized as _joy_, we observe a strong tendency of ChatGPT to reply in a positive way.

\begin{table} \begin{tabular}{l r} \hline **Agreement** & **Percentage** \\ \hline all annotators agree & 71.7\% \\ two annotators agree & 26.6\% \\ all annotators disagree & 1.7\% \\ \hline \end{tabular} \end{table} TABLE I: Annotator Agreement for the 6 Emotion Categories.

Fig. 3: Joyful Prompt from _EmpatheticDialogues_ and ChatGPT's Parallel Emotional Response.

Fig. 2: Accuracy of Understanding and Expressing Emotions.

## V How Empathic is ChatGPT's Personality?

To learn more about ChatGPT's empathic capabilities, we conducted system-level evaluations using psychologically acknowledged questionnaires to evaluate ChatGPT's empathy level in different aspects. We used five standardized questionnaires: the _Interpersonal Reactivity Index_, the _Empathy Quotient_, the _Toronto Empathy Questionnaire_, the _Perth Empathy Scale_, and the _Autism Spectrum Quotient_. In this section, we will describe the content of each questionnaire and how we used it to gather further insights about ChatGPT's empathic capabilities. To get ChatGPT's answer to each question in the questionnaires, we used the questions as initial prompts and then evaluated ChatGPT's responses as follows: For each of ChatGPT's answers, we had our three annotators decide which possible answer in the questionnaire it matched, using the same rules for majority voting as described in Section III-B. We had to perform this procedure as ChatGPT did not directly provide us with the responses expected in the questionnaire, such as _strongly agree_ or _strongly disagree_. As an alternative to manually matching ChatGPT's answers and the answers in the questionnaire, we tried the following sentence vector-based approach: We converted ChatGPT's answer and the answers in the questionnaire to sentence embeddings using Sentence-BERT4 [31], and then mapped ChatGPT's answer to the answer with the smallest distance in the semantic vector space. However, we had to discard this approach as it did not perform well, with an accuracy of 38.5%, i.e., only 70 of 182 tested answers could be mapped correctly.
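For illustration, the discarded embedding-based matching can be sketched as follows with the `sentence-transformers` library (footnote 4); the concrete model checkpoint and the use of cosine similarity are assumptions, as the exact configuration is not specified above.

```python
# Sketch of the discarded approach: map ChatGPT's free-form answer to the
# questionnaire option with the closest sentence embedding.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

options = ["strongly agree", "agree", "disagree", "strongly disagree"]
chatgpt_answer = "I would say this statement describes me quite well."

option_emb = model.encode(options, convert_to_tensor=True)
answer_emb = model.encode(chatgpt_answer, convert_to_tensor=True)

# Highest cosine similarity = smallest distance in the semantic vector space.
scores = util.cos_sim(answer_emb, option_emb)[0]
print(options[int(scores.argmax())])
```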
Footnote 4: [https://github.com/UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers)

TABLE II: Emotional Responses by ChatGPT (normalized).

Fig. 4: Distribution of ChatGPT's Emotional Responses.

In the following paragraphs, we will present the results of the questionnaires.

### _Interpersonal Reactivity Index_

The _Interpersonal Reactivity Index_ (IRI) is a widely utilized self-report measure for assessing empathy in individuals [32]. We chose to have ChatGPT complete the IRI since it has been used in various research and clinical settings to understand empathy better and to develop interventions to improve empathy skills, e.g., [33] or [34]. In addition, the questionnaire covers the categories of _fantasy_, _personal distress_, _perspective taking_, and _empathic concern_.

#### V-A1 Experimental Setup

The IRI comprises 28 questions that evaluate four components of empathy, each measured by its own subscale within the overall questionnaire: _perspective taking_, _empathic concern_, _personal distress_, and _fantasy_. _Perspective taking_ refers to an individual's ability to understand the perspectives of others, _empathic concern_ to the ability to feel compassion and concern for others, _personal distress_ to the tendency to experience anxiety or discomfort in response to others' negative experiences, and _fantasy_ to the tendency to imagine oneself in fictional situations. The level of agreement with each statement is rated on a 5-point Likert scale ranging from _does not describe me well_ to _describes me very well_. The score for each subscale of the IRI is obtained by summing the responses to the questions that belong to that subscale, resulting in a score that ranges from 0 to 28. Higher scores on the _perspective taking_ and _empathic concern_ subscales indicate greater empathy, while lower scores on the _personal distress_ subscale suggest better emotional regulation.

#### V-A2 Experiment and Results

Figure 5 visualizes ChatGPT's performance on the four IRI subscales together with the mean performance of males and females reported in [32]. To allow comparison with the other questionnaires, the figure displays not the absolute scores but the percentage achieved relative to the maximum possible subscale score of 28. Comparing ChatGPT's absolute score for _fantasy_ reveals interesting results: while the score of 17 is significantly higher than the mean score of healthy males (15.73, \(SD=5.6\), \(t(578)=5.46\), \(p<.001\)), it is significantly lower than the score of healthy females (18.75, \(SD=5.17\), \(t(581)=-8.17\), \(p<.001\)). For _perspective taking_, the absolute score of 16 is significantly lower than the
mean score of males (16.78, \(SD=4.72\), \(t(578)=3.98\), \(p<.001\)) and females (17.96, \(SD=4.85\), \(t(581)=-9.75\), \(p<.001\)), demonstrating that ChatGPT has a lower ability than healthy humans to take the perspective of others and understand their feelings. ChatGPT's score for _empathic concern_ (11) is much lower than the mean scores of males (19.04, \(SD=4.21\), \(t(578)=-45.95\), \(p<.001\)) and females (21.67, \(SD=3.83\), \(t(581)=-67.21\), \(p<.001\)), indicating that ChatGPT has a significantly lower level of emotional response to others. Finally, the absolute score for _personal distress_ (9) is also significantly lower than the mean of healthy males (9.46, \(SD=4.55\), \(t(578)=-2.43\), \(p<.05\)) and females (12.28, \(SD=5.01\), \(t(581)=-15.79\), \(p<.001\)). Taken together, in almost all dimensions of the IRI, ChatGPT scores significantly lower than healthy humans.

### _Empathy Quotient_

The _Empathy Quotient_ (EQ) is a self-report questionnaire that has been specifically designed to assess an individual's ability to comprehend and respond to others' emotions [25]. We chose to have ChatGPT complete the EQ since the questionnaire was administered to two groups: one group with Asperger syndrome / high-functioning autism (AS/HFA) and another group of healthy humans. In the evaluation study, a clear threshold was identified to differentiate between the two groups, which allows us to assign the score we achieve with ChatGPT to one of those groups.

#### V-B1 Experimental Setup

The EQ comprises 60 questions, each with four possible responses: _strongly agree_, _agree_, _disagree_, and _strongly disagree_. The questions cover a variety of topics related to _social interaction_, _emotional recognition_, and _communication_. Scores on the EQ can range from 0 to 80, with higher scores indicating a greater capacity for empathy.

#### V-B2 Experiment and Results

Figure 6 displays ChatGPT's EQ performance (green line) compared to healthy males' and females' average EQ scores. The bars indicate the standard deviations. The mean scores reported by [25] are 41.8 (\(SD=11.2\)) for healthy males and 47.2 (\(SD=10.2\)) for healthy females. According to [25], more than 80% of people diagnosed with AS/HFA obtained a score below 30. Thus, with an EQ score of 34, ChatGPT scores significantly lower than males (\(t(70)=-5.62\), \(p<.001\)) and females (\(t(125)=-14.53\), \(p<.001\)). Moreover, it scores higher than an average person with AS/HFA.

### _Toronto Empathy Questionnaire_

Another tool to measure self-reported empathy is the _Toronto Empathy Questionnaire_ (TEQ) [35]. We decided to use the TEQ, as the questionnaire tries to establish a general agreement between previous questionnaires such as the IRI, the Autism Spectrum Quotient, and many more.

#### V-C1 Experimental Setup

The TEQ consists of 16 questions that measure different components of empathy, including _affective empathy_ (the ability to experience and understand the emotions of others) and _cognitive empathy_ (the ability to understand the thoughts and perspectives of others). The TEQ also includes questions assessing an individual's tendency to take another person's perspective and willingness to help others. Scores on the TEQ can range from 0 to 64, with higher scores indicating a greater capacity for empathy.

#### V-C2 Experiment and Results

Figure 7 illustrates how ChatGPT compares to the scores from a validation study, differentiated by males and females. The bars indicate the standard deviations. ChatGPT achieved a total score of 41.
In the validation study with 65 students from the University of Toronto, the students achieved a mean score of 46.96 (\(SD=7.47\)) [35]. Also on the TEQ, females scored higher than males (48.33 vs. 43.63). ChatGPT's score of 41 is only slightly lower than the score of males (\(t(18)=-1.45\), \(p=.165\)) and significantly lower than the score of females (\(t(45)=-7.20\), \(p<.001\)).

Fig. 5: ChatGPT's IRI Results Compared to Males/Females.

Fig. 6: ChatGPT's EQ Results Compared to Males/Females.

Fig. 7: ChatGPT's TEQ Results Compared to Males/Females.

### _Perth Empathy Scale_

The _Perth Empathy Scale_ (PES) is a recently published self-report questionnaire consisting of 20 questions to assess empathy in adults and adolescents [36]. In contrast to other existing scales, it covers both the cognitive and affective components of empathy and both the positive and negative dimensions of affective empathy. We selected this questionnaire because the split into positive and negative empathy provides additional information not covered by the other analyzed questionnaires.

#### V-D1 Experimental Setup

Each category of the PES includes five sentences, and together the questions cover 10 emotions: five of the basic emotions described by [37] (i.e., _happiness_, _sadness_, _anger_, _scared_, _disgust_), the self-conscious emotions of _embarrassment_ and _pride_, and the positive emotions of _amusement_, _calmness_, and _enthusiasm_. Respondents rate their level of agreement or disagreement with each statement on a 5-point Likert scale ranging from _never_ to _always_. The PES yields a general empathy score from 0 to 100, calculated by adding the scores from the four subscales. The questions for affective empathy ask whether the emotions belong to someone else, while the questions for cognitive empathy ask about someone else's feelings, reflecting a self-other distinction. The higher the total score, the higher the level of empathy of an individual.

#### V-D2 Experiment and Results

ChatGPT scored 40 out of 100 possible points on the PES, which is significantly below the scores of healthy individuals (males: 64.1 (\(SD=10.92\)), \(t(187)=-30.26\), \(p<.001\); females: 66.9 (\(SD=11.27\)), \(t(450)=-50.63\), \(p<.001\)) [36]. Figure 8 shows how the score is distributed amongst the subscales of the PES and how ChatGPT compares to the mean scores of healthy humans. As a validation study showed that positive and negative cognitive empathy are highly correlated, both values are summed up in the figure. As observed in Section IV-B, we detect a higher tendency toward positive empathy than toward negative empathy.

### _Autism Spectrum Quotient_

As noted in Section II, several studies have found that individuals with autism may have difficulties with the cognitive component of empathy, such as _perspective taking_, while still being able to show affective empathy [25]. Therefore, we decided to additionally analyze the _Autism Spectrum Quotient_ (AQ)--a questionnaire that measures autistic traits in individuals who may or may not have a formal diagnosis of autism [38]. The AQ has been shown to be inversely correlated with the EQ [25].

#### V-E1 Experimental Setup

On the AQ, respondents rate their level of agreement with each statement on a 4-point Likert scale ranging from _definitely agree_ to _definitely disagree_.
The AQ measures five different skills: _communication_ (verbal and nonverbal communication), _social_ (social interaction and understanding social cues), _imagination_ (imaginative and flexible thinking), _local details_ (tendency to focus on details and a preference for structured and predictable environments), and _attention switching_ (changing focus from one topic to another). The AQ score ranges from 0 to 50, with 10 points for each skill. In contrast to the previously presented questionnaires, a higher score on the AQ indicates a lower level of empathy.

#### V-E2 Experiment and Results

As shown in Figure 9, in our experiment, ChatGPT achieved a total score of 19, which is only slightly higher than the mean score of healthy males (17.8, \(SD=6.8\), \(t(75)=1.54\), \(p=.128\)) but significantly higher than the mean score of healthy females (15.4, \(SD=5.7\), \(t(97)=6.25\), \(p<.001\)). Moreover, people diagnosed with AS/HFA show a mean score of 35.8 [38]. As for the previous questionnaires, ChatGPT's scores are worse than those of average healthy humans but are still far from the mean score a person diagnosed with AS/HFA would achieve. Figure 10 illustrates how the scores are distributed amongst the respective skills compared to the average male and female scores. In the figure, the achieved scores are shown as a percentage of the maximum possible score.

Fig. 8: ChatGPT's PES Results Compared to Males/Females.

Fig. 9: ChatGPT's AQ Compared to Males/Females/AS/HFA.

The scores for ChatGPT show that, especially for _social skills_, the score is higher than for healthy adults, which is in line with the score for _empathic concern_ on the IRI. This can be seen as an indicator that ChatGPT has difficulties fully connecting to other people's feelings and feeling concerned or compassionate for them. Moreover, its _imagination skills_ are worse than the average of healthy humans. When it comes to _attention switching_, ChatGPT performs considerably well, while its ability to focus on details and its _communication skills_ are quite similar to those of healthy humans.

## VI Conclusion and Future Work

In the studies presented, we investigated the empathic capabilities of ChatGPT. In our first study, we demonstrated that ChatGPT is able to rephrase a sentence to express a particular emotion with an accuracy of 91.7%. This shows that ChatGPT has the potential to be used as a tool for expressing emotions on demand, which can help in the interaction with humans--be it in a learning environment or when being used as a source of everyday companionship. In a second study, we additionally demonstrated that ChatGPT can generate parallel emotional responses with 70.7% accuracy, meaning that it is able to respond with the same emotion as the initial prompt in many cases. Furthermore, our results show that ChatGPT has a strong tendency to reply with _joy_. In our last study, we used five questionnaires to test the empathic capabilities of ChatGPT. The questionnaires indicated that ChatGPT is able to interpret the emotions of others and take their perspective, but still falls short of the level of empathy shown by healthy humans. All scores from the questionnaires in comparison to healthy males and females are summarized in Table III. While [9] concluded that GPT-3 shows a significant lack of empathy based on the psychopathy section of the SD-3 [10] questionnaire, in our empathy-focused studies we demonstrated that ChatGPT expresses empathy in several aspects.
With our research, we show a possible way to proceed with analyzing chatbots in the future. Further research should focus on developing more sophisticated models that can more accurately grasp the emotional context of a conversation, as well as on the development of methods to measure the emotional capabilities of a chatbot. In addition, studies should be conducted to explore how ChatGPT can be used as a tool to support people more compassionately. Finally, it is important to consider the ethical implications of using chatbots such as ChatGPT. This is particularly important because they often interact with people who may not be aware that they are interacting with a computer program. Developing methods for assessing the ethical implications of using chatbots can help ensure that they are used ethically and that potential harm is minimized.

## Ethical Impact Statement

For our data annotation, individuals were asked to label the text data we collected. The participants who supported us were not dependent on the authors and participated voluntarily and free of charge. There was no conflict of interest between the supporters and the authors. For privacy reasons, the names of the supporters are not disclosed. The collected corpus is made freely available to the community. The collected text data of the corpus are extracts from the _EmpatheticDialogues_ dataset from Facebook Research [29] and text produced by ChatGPT. Usually, ChatGPT produces text appropriate for the general public, but it cannot be ruled out that the content is not suitable for everyone. The text data contains emotional sentences in various forms. But this is the essence of a corpus that can be used to evaluate a chatbot's output realistically.

Fig. 10: ChatGPT's AQ Results Compared to Males/Females.

\begin{table}
\begin{tabular}{l l r r r r r r}
\hline \hline
 & & Total & \multicolumn{2}{c}{Mean (SD)} & & & \\
 & & ChatGPT & Males & Females & Range & \(\Delta_{ChatGPT:Males}\) & \(\Delta_{ChatGPT:Females}\) \\
\hline
IRI & Fantasy & 17 & 15.73 (5.60) & 18.75 (5.17) & 0-28 & 8\% higher & 10\% lower \\
 & Perspective Taking & 16 & 16.78 (4.72) & 17.96 (4.85) & 0-28 & 5\% lower & 12\% lower \\
 & Empathic Concern & 11 & 19.04 (4.21) & 21.67 (3.83) & 0-28 & 73\% lower & 97\% lower \\
 & Personal Distress & 9 & 9.46 (4.55) & 12.28 (5.01) & 0-28 & 5\% lower & 36\% lower \\
EQ & & 34 & 41.8 (11.2) & 47.2 (10.2) & 0-80 & 23\% lower & 39\% lower \\
TEQ & & 41 & 43.63 (7.93) & 48.33 (6.90) & 0-64 & 6\% lower & 18\% lower \\
PES & General Cognitive Empathy & 22 & 37.6 (7.13) & 38.4 (7.06) & 0-50 & 71\% lower & 75\% lower \\
 & Positive Affective Empathy & 11 & 15.4 (4.05) & 16.1 (3.94) & 0-25 & 40\% lower & 46\% lower \\
 & Negative Affective Empathy & 7 & 11.1 (3.39) & 12.4 (3.71) & 0-25 & 59\% lower & 77\% lower \\
 & General Empathy & 40 & 64.1 (10.92) & 66.9 (11.27) & 0-100 & 60\% lower & 67\% lower \\
AQ & & 19 & 17.8 (6.8) & 15.4 (5.7) & 0-50 & 7\% higher & 23\% higher \\
\hline \hline
\end{tabular}
\end{table}
TABLE III: ChatGPT's Empathy Scores Compared to the Mean Scores of Males and Females. (Note: In contrast to the scores of the other questionnaires, a higher AQ score refers to a lower level of empathy.)
2310.06615
BMO estimates for Hodge-Maxwell systems with discontinuous anisotropic coefficients
We prove up to the boundary $\mathrm{BMO}$ estimates for linear Maxwell-Hodge type systems for $\mathbb{R}^{N}$-valued differential $k$-forms $u$ in $n$ dimensions \begin{align*} \left\lbrace \begin{aligned} d^\ast \left( A(x) du \right) &= f &&\text{ in } \Omega, \\ d^\ast \left( B(x) u\right) &= g &&\text{ in } \Omega, \end{aligned} \right. \end{align*} with $\nu\wedge u$ prescribed on $\partial\Omega,$ where the coefficient tensors $A,B$ are only required to be bounded measurable and in a class of `small multipliers of BMO'. This class neither contains nor is contained in $C^{0}.$ Since the coefficients are allowed to be discontinuous, the usual Korn's freezing trick can not be applied. As an application, we show BMO estimates hold for the time-harmonic Maxwell system in dimension three for a class of discontinuous anisotropic permeability and permittivity tensors. The regularity assumption on the coefficient is essentially sharp.
Dharmendra Kumar, Swarnendu Sil
2023-10-10T13:31:40Z
http://arxiv.org/abs/2310.06615v3
# BMO estimates for Hodge-Maxwell systems with discontinuous anisotropic coefficients

###### Abstract

We prove up to the boundary BMO estimates for linear Maxwell-Hodge type systems for \(\mathbb{R}^{N}\)-valued differential \(k\)-forms \(u\) in \(n\) dimensions \[\left\{\begin{aligned} d^{*}\left(A(x)du\right)&=f&\text{ in }\Omega,\\ d^{*}\left(B(x)u\right)&=g&\text{ in }\Omega,\end{aligned}\right.\] with \(\nu\wedge u\) prescribed on \(\partial\Omega\), where the coefficient tensors \(A,B\) are only required to be bounded measurable and in a class of 'small multipliers of BMO'. This class neither contains nor is contained in \(C^{0}.\) Since the coefficients are allowed to be discontinuous, the usual Korn's freezing trick can not be applied. As an application, we show BMO estimates hold for the time-harmonic Maxwell system in dimension three for a class of discontinuous anisotropic permeability and permittivity tensors. The regularity assumption on the coefficient is essentially sharp.

+ Footnote †: _Key words and phrases._ Boundary regularity, elliptic system, Campanato method, Hodge Laplacian, Maxwell system, BMO estimate. 2020 _Mathematics subject classification._ 35J57, 35B65, 35Q60

## 1 Introduction

As far as systems of PDEs for a differential \(k\)-form \(u\) on a bounded domain \(\Omega\subset\mathbb{R}^{n}\) are concerned, one of the most important first order systems, if not the most, are the so-called Hodge systems \[\left\{\begin{aligned} du&=f&\text{ in }\Omega,\\ d^{*}u&=g&\text{ in }\Omega,\\ \nu\wedge u&=\nu\wedge u_{0}&\text{ on }\partial\Omega,\end{aligned}\right.\qquad\text{and}\qquad\left\{\begin{aligned} du&=f&\text{ in }\Omega,\\ d^{*}u&=g&\text{ in }\Omega,\\ \nu\lrcorner u&=\nu\lrcorner u_{0}&\text{ on }\partial\Omega.\end{aligned}\right.\] These systems, which are dual to each other by Hodge duality, are classical and have been studied extensively in a variety of contexts, e.g. the Poincare lemma, Cauchy-Riemann operators, Dirac operators, Gaffney-Friedrichs inequalities, div-curl lemmas etc, just to name a few. Elliptic regularity results for these systems can be derived from the Hodge-Morrey decomposition, which is itself a consequence of the regularity results for the second order Hodge Laplacian systems ( see [4], [9], [7] etc ). Similar remarks are valid for the Hodge-Maxwell systems \[\left\{\begin{aligned} d^{*}du&=f&\text{ in }\Omega,\\ d^{*}u&=g&\text{ in }\Omega,\\ \nu&\wedge u&=\nu\wedge u_{0}&\text{ on }\partial\Omega,\end{aligned}\right.\qquad\text{and}\qquad\left\{\begin{aligned} dd^{*}u&=f&\text{ in }\Omega,\\ du&=g&\text{ in }\Omega,\\ \nu&\lrcorner u&=\nu\lrcorner u_{0}&\text{ on }\partial\Omega.\end{aligned}\right.\] However, their natural 'rotated' analogue, \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du\right)&=f&\text{ in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)&=g&\text{ in }\Omega,\\ \nu&\wedge u&=\nu\wedge u_{0}&\text{ on }\partial\Omega,\end{aligned}\right. \tag{1}\] where \(A,B\) are bounded measurable, uniformly elliptic matrices, has received hardly any attention. This is rather surprising, as this is really the system that is relevant for applications. Indeed, one can already see this for the case of the time-harmonic Maxwell system itself.
The time-harmonic Maxwell system in a bounded domain in \(\mathbb{R}^{3}\) can be written as a second order system in \(E\) as follows \[\left\{\begin{aligned} \operatorname{curl}(\mu^{-1}\operatorname{curl}E)&=\omega^{2}\varepsilon E-i\omega J_{e}+\operatorname{curl}\left(\mu^{-1}J_{m}\right)&\text{ in }\Omega,\\ \operatorname{div}(\varepsilon E)&=\frac{i}{\omega}\operatorname{div}J_{e}&\text{ in }\Omega,\\ \nu&\times E&=\nu\times E_{0}&\text{ on }\partial\Omega,\end{aligned}\right.\] where the first equation is obtained by substituting \(H=\frac{1}{i\omega}\,\mu^{-1}\left(J_{m}-\operatorname{curl}E\right)\) from the first order system into \(\operatorname{curl}H=i\omega\varepsilon E+J_{e}\), and the second follows by taking the divergence of the latter equation. This has the same form as (1), with \(A=\mu^{-1}\) and \(B=\varepsilon\). Regularity results for the system (1) for a vector-valued differential \(k\)-form, along with a variety of related linear systems, are studied systematically in [10], where the matrices \(A,B\) are assumed to be sufficiently regular. The techniques employed there are perturbative in nature, i.e. regularity results are derived by using Korn's freezing trick to freeze the coefficients at a point and then deriving the regularity estimates for the case when \(A,B\) are constant matrices. In the present article, we are interested in deriving BMO estimates for the system \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du\right)&=d^{*}f&\text{ in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)&=d^{*}g&\text{ in }\Omega,\\ \nu&\wedge u&=\nu\wedge u_{0}&\text{ on }\partial\Omega,\end{aligned}\right. \tag{2}\] when \(A,B\) are allowed to be discontinuous, so that the techniques of [10] can no longer be applied. For \(0\)-forms, the system (2) reduces to \[\left\{\begin{aligned} \operatorname{div}\left(A\left(x\right)\nabla u\right)&=\operatorname{div}f&\text{ in }\Omega,\\ u&=u_{0}&\text{ on }\partial\Omega.\end{aligned}\right.\] Acquistapace in [1] derived BMO estimates for \(Du\), assuming \(f\) in BMO, when the coefficients of \(A\) are in the class \(L^{\infty}\cap\operatorname{V}\!\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\), where \(\operatorname{V}\!\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\) is a space of Campanato-Spanne-Sarason-Janson type, consisting of functions which are 'small multipliers of BMO'. This space neither contains nor is contained in \(C^{0}.\) See Section 3.4 and [12], [8], [6] for more on this. The hypothesis on the coefficient is essentially sharp, as Acquistapace discusses at length in [1]. The main results of the present article are Theorem 24, Theorem 27 and Theorem 29. Theorem 24 should be viewed as a generalization of Acquistapace's results to system (2) for vector-valued \(k\)-forms in the case when \(B\) is assumed to be the identity. Theorem 27 states that when \(A,B\) are uniformly Legendre-elliptic and the coefficients of \(A,B\) are in the class \(L^{\infty}\cap\operatorname{V}\!\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\), then \(du\) and \(u\) are in BMO as soon as \(f,g,u_{0},du_{0}\) are BMO. We derive the regularity result for systems with a linear zeroth order term in Theorem 29 as a consequence of our estimates. As an application, we have the following theorem for the time-harmonic Maxwell system. **Theorem 1**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be open, bounded with \(C^{2}\) boundary._
_Let \(E_{0}\in L^{2}\left(\Omega;\mathbb{R}^{3}\right)\) be such that \(\operatorname{curl}E_{0}\in L^{2}\left(\Omega;\mathbb{R}^{3}\right).\) Let \(J_{e},J_{m}\in L^{2}\left(\Omega;\mathbb{R}^{3}\right)\) and suppose \(J_{e},J_{m},E_{0},\operatorname{curl}E_{0}\in\operatorname{BMO}\left(\Omega;\mathbb{R}^{3}\right).\) Let \(\varepsilon,\mu\in L^{\infty}\cap\operatorname{V}\!\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathbb{R}^{3\times 3}\right)\) be uniformly Legendre-elliptic with ellipticity constants \(\gamma_{1},\gamma_{2}>0,\) respectively. Let \(E,H\in L^{2}\left(\Omega;\mathbb{R}^{3}\right)\) be a weak solution to_ \[\left\{\begin{aligned} \operatorname{curl}H&=i\omega\varepsilon E+J_{e}&\text{ in }\Omega,\\ \operatorname{curl}E&=-i\omega\mu H+J_{m}&\text{ in }\Omega,\\ \nu\times E&=\nu\times E_{0}&\text{ on }\partial\Omega.\end{aligned}\right. \tag{3}\] _Then \(E,H\in\operatorname{BMO}\left(\Omega;\mathbb{R}^{3}\right)\) and there exists a constant \(C>0,\) depending only on \(\gamma_{1},\gamma_{2},\omega,\Omega\) and the corresponding norms and moduli of \(\varepsilon,\mu,\) such that we have_ \[\left[\left(E,H\right)\right]_{\operatorname{BMO}\left(\Omega\right)}\leq C\left(\left\|\left(E,H,J_{e},J_{m},E_{0},\operatorname{curl}E_{0}\right)\right\|_{L^{2}\left(\Omega\right)}+\left\|\left(J_{e},J_{m},E_{0},\operatorname{curl}E_{0}\right)\right\|_{\operatorname{BMO}\left(\Omega\right)}\right).\] The hypotheses of the theorem are satisfied when \(\varepsilon,\mu\) are Dini-continuous, so the result holds for such coefficient tensors as well. As far as we are aware, unless the tensors \(\varepsilon,\mu\) are assumed to be isotropic, the BMO estimate for this system was known only for Holder continuous \(\varepsilon,\mu,\) which can be easily deduced from [10]. To the best of our knowledge, our result is new even for anisotropic Dini-continuous coefficients. The crux of the difficulty in adapting Acquistapace's technique to our setup is twofold. The second order part of our system is \(d^{*}\left(A\left(x\right)du\right).\) As adding a closed form to a solution \(u\) would yield another solution unless \(u\) is a \(0\)-form, this operator has an infinite dimensional kernel and thus is neither elliptic nor Fredholm. To extract the most information about the regularity of solutions locally, it is necessary to use the 'gauge invariance' in a clever way. The novelty of the present contribution lies primarily in this judicious exploitation of 'gauge freedom'. At the technical level, this is realized via switching weak formulations as needed, so as to have the corresponding Poincare inequalities available at different stages of the argument. The other technical point is that though \(d\) commutes with the pullback via diffeomorphisms, \(d^{*}\) does not. Thus, the usual 'flattening the boundary' step involves an additional 'second order' term \(\operatorname{div}\left(S\nabla u\right).\) In [10], a similar term which appeared was tackled using the regularity of the coefficients. Here we needed to achieve the same without using any regularity for \(A.\) The rest of the article is organized as follows. We detail our notations in Section 2. Section 3 describes the function spaces used and collects some facts about these spaces. Section 4 proves our main up to the boundary estimates. Section 5 is concerned with proving our main results using these estimates.

## 2 Notations

We now fix the notations; for further details we refer to [4].
Let \(n\geq 2,\)\(N\geq 1\) and \(0\leq k\leq n\) be integers.

* We write \(\Lambda^{k}\mathbb{R}^{n}\) to denote the vector space of all alternating \(k\)-linear maps \(f:\underbrace{\mathbb{R}^{n}\times\cdots\times\mathbb{R}^{n}}_{k\text{-times}}\to\mathbb{R}.\) For \(k=0,\) we set \(\Lambda^{0}\mathbb{R}^{n}=\mathbb{R}.\) Note that \(\Lambda^{k}\mathbb{R}^{n}=\{0\}\) for \(k>n\) and, for \(k\leq n,\)\(\dim\left(\Lambda^{k}\mathbb{R}^{n}\right)=\binom{n}{k}.\)
* For any two finite dimensional vector spaces \(X,Y,\) we use the notation \(\mathcal{L}\left(X,Y\right)\) to denote the vector space of all linear maps from \(X\) to \(Y.\)
* Since we will be dealing with vector-valued forms throughout, we introduce some shorthand notation to avoid clutter. The integers \(n\geq 2,\)\(N\geq 1\) will remain fixed but arbitrary in what follows. The only relevant point is the degree of the form. To this end, for any integer \(0\leq k\leq n,\) we denote \[\Lambda^{k}:=\Lambda^{k}\mathbb{R}^{n}\otimes\mathbb{R}^{N}.\]
* \(\wedge,\,\lrcorner\,,\,\langle\ ;\ \rangle\) and, respectively, \(*\) denote the exterior product, the interior product, the scalar product and, respectively, the Hodge star operator, extended componentwise in the obvious fashion to vector-valued forms.
* Let \(\{e_{1},\cdots,e_{n}\}\) be the standard basis of \(\mathbb{R}^{n}.\) The dual basis \(\left\{e^{1},\cdots,e^{n}\right\}\) is a basis for \(\Lambda^{1}\mathbb{R}^{n}\) and \(\left\{e^{i_{1}}\wedge\cdots\wedge e^{i_{k}}:1\leq i_{1}<\cdots<i_{k}\leq n\right\}\) is a basis of \(\Lambda^{k}\mathbb{R}^{n}.\) An element \(\xi\in\Lambda^{k}\) will therefore be written as \[\xi=\sum_{j=1}^{N}\sum_{I\in\mathcal{T}^{k}}\xi_{I,j}\;e^{I}\otimes e_{j}=\sum_{j=1}^{N}\sum_{I\in\mathcal{T}^{k}}\xi_{I,j}\left(e^{i_{1}}\wedge\cdots\wedge e^{i_{k}}\right)\otimes e_{j},\] where \(\mathcal{T}^{k}=\left\{I=(i_{1},\cdots,i_{k})\in\mathbb{N}^{k}:1\leq i_{1}<\cdots<i_{k}\leq n\right\}.\)
* Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded and with \(C^{1}\) boundary. \(\nu\) will always denote the outward unit normal field to \(\partial\Omega\), which will be identified, with abuse of notation, with the \(1\)-form with the same components. An \(\mathbb{R}^{N}\)-valued differential \(k\)-form \(\omega\) on \(\Omega\) is a measurable function \(\omega:\Omega\rightarrow\varLambda^{k}.\) The usual Lebesgue, Sobolev and Holder spaces are defined componentwise in the usual way and are denoted by their usual symbols. For any measurable subset \(A\subset\mathbb{R}^{n}\) with \(\left|A\right|<\infty\), we use the notation \(\left(\cdot\right)_{A}\) to denote the integral average over the set \(A,\) i.e. \[\left(f\right)_{A}:=\frac{1}{\left|A\right|}\int_{A}f\left(x\right)\ \mathrm{d}x:=\fint_{A}f\left(x\right)\ \mathrm{d}x\qquad\text{ for any }f\in L^{1}\left(A\right).\] This notation is also extended componentwise to vector-valued functions.
* Two important differential operators on differential forms are the following. **Definition 2**.: _An \(\mathbb{R}^{N}\)-valued differential \(\left(k+1\right)\)-form \(\varphi\in L^{1}_{loc}(\Omega;\varLambda^{k+1})\) is called the exterior derivative of \(\omega\in L^{1}_{loc}\left(\Omega;\varLambda^{k}\right),\) denoted by \(d\omega\), if_ \[\int_{\Omega}\eta\wedge\varphi=(-1)^{n-k}\int_{\Omega}d\eta\wedge\omega,\qquad\text{ for all }\eta\in C^{\infty}_{c}\left(\Omega;\varLambda^{n-k-1}\right).\] _The Hodge codifferential of \(\omega\in L^{1}_{loc}\left(\Omega;\varLambda^{k}\right)\) is an \(\mathbb{R}^{N}\)-valued \(\left(k-1\right)\)-form, denoted \(d^{*}\omega\in L^{1}_{loc}\left(\Omega;\varLambda^{k-1}\right),\) defined as_ \[d^{*}\omega:=(-1)^{nk+1}*d*\omega.\] See [4] for the properties of these operators.
* We shall use the following two ellipticity conditions for matrix fields. **Definition 3**.: _A linear map \(A:\varLambda^{k}\otimes\mathbb{R}^{n}\rightarrow\varLambda^{k}\otimes\mathbb{R}^{n}\) is said to satisfy the **Legendre-Hadamard condition** if there exists a constant \(\gamma>0\) such that_ \[\left\langle A(a\otimes b)\ ;\ a\otimes b\right\rangle\geq\gamma\left|a\right|^{2}\left|b\right|^{2},\qquad\text{ for every }a\in\mathbb{R}^{n},b\in\varLambda^{k}.\] **Definition 4**.: _A bounded measurable map \(A\in L^{\infty}\left(\Omega;\mathcal{L}(\varLambda^{k},\varLambda^{k})\right)\) is called **uniformly Legendre elliptic** if there exists a constant \(\gamma>0\) such that we have_ \[\gamma\left|\xi\right|^{2}\leq\left\langle A(x)\xi\ ;\ \xi\right\rangle\leq\left\|A\right\|_{L^{\infty}}\left|\xi\right|^{2}\quad\text{ for every }\xi\in\varLambda^{k}\text{ and for a.e. }x\in\Omega.\]
* For any \(x\in\mathbb{R}^{n}\) and any \(\rho>0\), \(B_{\rho}\left(x\right)\) denotes the open ball of radius \(\rho>0\) around \(x.\) If \(x\in\partial\mathbb{R}^{n}_{+}\), \(B^{+}_{\rho}\left(x\right)\) will denote the half-ball centered around \(x\) in the upper half space, i.e. \[B^{+}_{\rho}\left(x\right)=\{y\in\mathbb{R}^{n}:y_{n}>0,\left|y-x\right|<\rho\}.\] Let \(\Gamma_{\rho}\left(x\right)\) and \(C_{\rho}\left(x\right)\) denote the flat part and the curved part, respectively, of the boundary of the half ball \(B_{\rho}^{+}\left(x\right).\) Also, for any open set \(\Omega\subset\mathbb{R}^{n},\) we denote the sets \[\Omega_{\rho}\left(x\right):=\Omega\cap B_{\rho}\left(x\right)\text{ \ \ \ and \ \ }\Omega_{\rho}^{+}\left(x\right):=\Omega\cap B_{\rho}^{+}\left(x\right)\text{ (when }x\in\partial\mathbb{R}_{+}^{n}).\] We suppress writing the center when \(x=0\in\mathbb{R}^{n}.\)
* We reserve the notation \(\theta\) to denote the function \(\theta:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+},\) defined as \[\theta\left(r\right):=\sup_{0<\rho\leq r}\rho\left(1+\left|\log\rho\right|\right).\]

## 3 Function spaces

### 3.1 Gaffney and Poincare type inequalities

Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded, \(C^{2}.\) The spaces \(W_{T}^{d,2}\left(\Omega;\varLambda^{k}\right)\) and \(W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right)\) are defined as ( see [4] ) \[W_{T}^{d,2}\left(\Omega;\varLambda^{k}\right)=\left\{\omega\in L^{2}\left(\Omega;\varLambda^{k}\right):d\omega\in L^{2}\left(\Omega;\varLambda^{k+1}\right)\text{ and }\nu\wedge\omega=0\text{ on }\partial\Omega\right\},\] \[W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right)=\left\{\omega\in W^{1,2}\left(\Omega;\varLambda^{k}\right):\nu\wedge\omega=0\text{ on }\partial\Omega\right\}.\] The subspaces \(W_{d^{*},T}^{1,2}(\Omega;\varLambda^{k})\) and
\(\mathcal{H}_{T}^{k}\left(\Omega;\varLambda^{k}\right)\) are defined as \[W_{d^{*},T}^{1,2}(\Omega;\varLambda^{k})=\left\{\omega\in W_{T}^{1,2}(\Omega;\varLambda^{k}):d^{*}\omega=0\text{ in }\Omega\right\},\] \[\mathcal{H}_{T}^{k}\left(\Omega;\varLambda^{k}\right)=\left\{\omega\in W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right):d\omega=0\text{ and }d^{*}\omega=0\text{ in }\Omega\right\}.\] For half-balls, we need the following subspace. \[W_{T,\text{flat}}^{1,2}(B_{R}^{+}\left(x_{0}\right);\varLambda^{k})=\left\{\psi\in W^{1,2}(B_{R}^{+}\left(x_{0}\right);\varLambda^{k}):e_{n}\wedge\psi=0\text{ on }\Gamma_{R}\left(x_{0}\right),\psi=0\text{ on }C_{R}\left(x_{0}\right)\right\}.\] The following Gaffney inequality follows from the standard Gaffney inequality by a contradiction argument ( see Step 1 of the proof of Theorem 6.7 in [4] ). **Proposition 5** (Gaffney inequality).: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded, \(C^{2}.\) There exists a constant \(C=C\left(\Omega,n,N,k\right)>0\) such that_ \[\left\|\nabla u\right\|_{L^{2}\left(\Omega\right)}^{2}\leq C\left(\left\|du\right\|_{L^{2}\left(\Omega\right)}^{2}+\left\|d^{*}u\right\|_{L^{2}\left(\Omega\right)}^{2}\right)\text{ \ \ \ for all }u\in W_{T}^{1,2}\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}. \tag{4}\] We shall also need the following Poincare type inequality. **Proposition 6** (Poincare inequality).: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded, \(C^{2}.\) There exists a constant \(C=C\left(\Omega,n,N,k\right)>0\) such that_ \[\left\|u\right\|_{W^{1,2}\left(\Omega;\varLambda^{k}\right)}^{2}\leq C\left\|du\right\|_{L^{2}\left(\Omega;\varLambda^{k+1}\right)}^{2}\qquad\text{ for all }u\in W_{d^{*},T}^{1,2}\left(\Omega;\varLambda^{k}\right)\cap\left(\mathcal{H}_{T}^{k}\left(\Omega;\varLambda^{k}\right)\right)^{\perp}.\]
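To see why the orthogonality to \(\mathcal{H}_{T}^{k}\) in Proposition 6 cannot be dropped, we record a standard example for illustration: on the planar annulus \[\Omega=\left\{x\in\mathbb{R}^{2}:1<\left|x\right|<2\right\},\qquad u=d\log\left|x\right|=\frac{x_{1}\,dx^{1}+x_{2}\,dx^{2}}{\left|x\right|^{2}},\] we have \(du=0\) and \(d^{*}u=0\) in \(\Omega\) (as \(\log\left|x\right|\) is harmonic), and \(\nu\wedge u=0\) on \(\partial\Omega,\) since \(u\) is proportional to \(\nu\) on both boundary circles. Hence \(u\in\mathcal{H}_{T}^{1}\left(\Omega;\varLambda^{1}\right)\setminus\{0\}\) with \(du=0,\) so no inequality of the form \(\left\|u\right\|_{W^{1,2}}\leq C\left\|du\right\|_{L^{2}}\) can hold without the orthogonality condition.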
### 3.2 Weak formulations and gauge fixing

Our results depend crucially on our ability to switch weak formulations. Note that \(C_{c}^{\infty}\left(\Omega\right)\) is not dense in \(W_{d^{*},T}^{1,2}\left(\Omega\right)\) and hence it is far from obvious that weak formulations in \(W_{d^{*},T}^{1,2}\) can be localized.

**Proposition 7**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded, with \(C^{2}\) boundary. Let \(F\in L^{2}\left(\Omega;\varLambda^{k+1}\right)\) and \(A\in L^{\infty}\left(\Omega;\mathcal{L}\left(\varLambda^{k+1};\varLambda^{k+1}\right)\right).\) The following are all equivalent._

1. _\(u\in W^{1,2}\left(\Omega;\varLambda^{k}\right)\) satisfies_ \[\int_{\Omega}\langle A(x)du;d\phi\rangle=\int_{\Omega}\langle F;d\phi\rangle\quad\text{ for all }\phi\in W_{d^{*},T}^{1,2}\left(\Omega;\varLambda^{k}\right).\]
2. _\(u\in W^{1,2}\left(\Omega;\varLambda^{k}\right)\) satisfies_ \[\int_{\Omega}\langle A(x)du;d\psi\rangle=\int_{\Omega}\langle F;d\psi\rangle\quad\text{ for all }\psi\in W_{0}^{1,2}\left(\Omega;\varLambda^{k}\right).\]
3. _\(u\in W^{1,2}\left(\Omega;\varLambda^{k}\right)\) satisfies_ \[\int_{\Omega}\langle A(x)du;d\psi\rangle=\int_{\Omega}\langle F;d\psi\rangle\quad\text{ for all }\psi\in W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right).\]

Proof.: **(a) \(\Rightarrow\) (b)** Given \(\psi\in W_{0}^{1,2}\left(\Omega;\varLambda^{k}\right),\) we first find \(\alpha\in W^{2,2}\left(\Omega;\varLambda^{k}\right)\) which solves the system \[\left\{\begin{aligned} d^{*}d\alpha&=-d^{*}\psi&&\text{ in }\Omega,\\ d^{*}\alpha&=0&&\text{ in }\Omega,\\ \nu\wedge\alpha&=0&&\text{ on }\partial\Omega.\end{aligned}\right.\] Set \(\phi:=\psi+d\alpha.\) Then \(d\phi=d\psi\) and \(d^{*}\phi=d^{*}\psi+d^{*}d\alpha=0\) in \(\Omega.\) Moreover, since \(\nu\wedge\alpha=0\) on \(\partial\Omega\) implies \(\nu\wedge d\alpha=0\) on \(\partial\Omega,\) we have \(\nu\wedge\phi=0\) on \(\partial\Omega.\) Thus \(\phi\in W_{d^{*},T}^{1,2}\left(\Omega;\varLambda^{k}\right)\) and the result follows. **(b) \(\Rightarrow\) (c)** Given any \(\phi\in W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right),\) using Theorem 8.16 in [4], we find \(\psi\in W_{0}^{1,2}\left(\Omega;\varLambda^{k}\right)\) by solving the system \[\left\{\begin{aligned} d\psi&=d\phi&&\text{ in }\Omega,\\ \psi&=0&&\text{ on }\partial\Omega.\end{aligned}\right.\] Since \(W_{d^{*},T}^{1,2}\subset W_{T}^{1,2},\) **(c) \(\Rightarrow\) (a)** is trivial and the proof is finished.

**Remark 8**.: _Proposition 7 holds precisely because the equation is gauge-invariant, i.e. invariant under translation by the kernel of \(d\), and because of the following equality_ \[d\left(W_{d^{*},T}^{1,2}\right)=d\left(W_{0}^{1,2}\right)=d\left(W_{T}^{1,2}\right).\]

### 3.3 Existence results

We record here an existence result that we will need.

**Proposition 9**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded and Lipschitz. Let \(A:\Lambda^{k+1}\to\Lambda^{k+1}\) be Legendre elliptic with constant \(\gamma>0.\) For any \(f\in L^{2}\left(\Omega;\Lambda^{k}\right)\) and any \(F\in L^{2}\left(\Omega;\Lambda^{k}\otimes\mathbb{R}^{n}\right),\) there exists a unique weak solution to the following_ \[\left\{\begin{aligned} d^{*}\left(Adu\right)+dd^{*}u&=f+\operatorname{div}F&&\text{ in }\Omega,\\ u&=0&&\text{ on }\partial\Omega.\end{aligned}\right.\]

Proof.: Define the linear map \(\tilde{A}:\Lambda^{k}\otimes\mathbb{R}^{n}\to\Lambda^{k}\otimes\mathbb{R}^{n}\) by the pointwise condition \[\left\langle\tilde{A}\left(a_{1}\otimes b_{1}\right);a_{2}\otimes b_{2}\right\rangle:=\left\langle A\left(a_{1}\wedge b_{1}\right);a_{2}\wedge b_{2}\right\rangle+\left\langle a_{1}\lrcorner b_{1};a_{2}\lrcorner b_{2}\right\rangle\] for every \(a_{1},a_{2}\in\mathbb{R}^{n},\)\(b_{1},b_{2}\in\Lambda^{k},\) extended by linearity. Using the algebraic identity ( see Proposition 2.16 in [4] ) \(a\lrcorner(a\wedge b)+a\wedge(a\lrcorner b)=\left|a\right|^{2}b,\) it is easy to check that the constant tensor \(\tilde{A}\) is Legendre-Hadamard elliptic. Now standard arguments establish the existence and uniqueness of \(u\in W_{0}^{1,2}\left(\Omega;\Lambda^{k}\right)\) such that \[\int_{\Omega}\left\langle\tilde{A}\nabla u,\nabla\phi\right\rangle=\int_{\Omega}\left\langle f,\phi\right\rangle+\int_{\Omega}\left\langle F,\nabla\phi\right\rangle\qquad\text{ for all }\phi\in W_{0}^{1,2}\left(\Omega;\Lambda^{k}\right).\] But this completes the proof, as we have \[\int_{\Omega}\left\langle\tilde{A}\nabla u,\nabla v\right\rangle=\int_{\Omega}\left\langle Adu,dv\right\rangle+\int_{\Omega}\left\langle d^{*}u,d^{*}v\right\rangle\qquad\text{ for all }u,v\in W_{0}^{1,2}\left(\Omega;\Lambda^{k}\right).\] This can be proved by Fourier transform when \(u,v\in C_{c}^{\infty}\left(\Omega;\Lambda^{k}\right)\) and the general case follows by density.
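For the reader's convenience, we record the ellipticity check that is asserted to be easy in the proof above; it amounts to the following computation: for every \(a\in\mathbb{R}^{n}\) and \(b\in\Lambda^{k},\) \[\left\langle\tilde{A}\left(a\otimes b\right);a\otimes b\right\rangle=\left\langle A\left(a\wedge b\right);a\wedge b\right\rangle+\left|a\lrcorner b\right|^{2}\geq\gamma\left|a\wedge b\right|^{2}+\left|a\lrcorner b\right|^{2}\geq\min\left(\gamma,1\right)\left|a\right|^{2}\left|b\right|^{2},\] where the last step uses \(\left|a\wedge b\right|^{2}+\left|a\lrcorner b\right|^{2}=\left|a\right|^{2}\left|b\right|^{2},\) which is obtained by taking the scalar product of the identity \(a\lrcorner\left(a\wedge b\right)+a\wedge\left(a\lrcorner b\right)=\left|a\right|^{2}b\) with \(b\) and using that \(a\wedge\cdot\) and \(a\lrcorner\cdot\) are adjoint to each other.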
### 3.4 Mean Oscillation spaces

Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded and Lipschitz. For \(1\leq p<\infty\) and \(0\leq\lambda\leq n+2,\)\(\mathscr{L}^{p,\lambda}\left(\Omega\right)\) denotes the Campanato space ( see [2] ) of all \(f\in L^{p}\left(\Omega\right)\) such that \[\left[f\right]_{\mathscr{L}^{p,\lambda}\left(\Omega\right)}^{p}:=\sup_{\rho>0,x\in\Omega}\rho^{-\lambda}\fint_{\Omega_{\rho}\left(x\right)}\left|f\left(y\right)-\left(f\right)_{\Omega_{\rho}\left(x\right)}\right|^{p}\,\mathrm{d}y<\infty,\] endowed with the norm \(\left\|f\right\|_{\mathscr{L}^{p,\lambda}\left(\Omega\right)}:=\left\|f\right\|_{L^{p}\left(\Omega\right)}+\left[f\right]_{\mathscr{L}^{p,\lambda}\left(\Omega\right)}.\) The space \(\mathscr{L}^{1,n}\left(\Omega\right)\) is the space \(\mathrm{BMO}\left(\Omega\right)\) and \(\left[f\right]_{\mathrm{BMO}\left(\Omega\right)}=\left[f\right]_{\mathscr{L}^{1,n}\left(\Omega\right)}.\) By the John-Nirenberg inequality, \(\mathscr{L}^{p,n}\left(\Omega\right)\simeq\mathrm{BMO}\left(\Omega\right)\) for any \(1\leq p<\infty,\) with equivalent seminorms ( see [5] ). We record two estimates about integral averages of \(\mathrm{BMO}\) functions. **Proposition 10** (Proposition 1.15, [1]).: _Let \(\Omega\subset\mathbb{R}^{n}\) be open and bounded and let \(\Omega^{\prime}\subset\subset\Omega\) be open. Let \(f\in\mathrm{BMO}\left(\Omega\right).\) Then there exists a constant \(C=C\left(n\right)>0\) such that for any \(0<\sigma_{0}<\min\left\{2,\mathrm{dist}\left(\Omega^{\prime},\partial\Omega\right)/16\right\},\) we have_ \[\left|\left(f\right)_{B_{r}\left(x\right)}\right|\leq C\left\{\left(1+\left|\log r\right|\right)\left[f\right]_{\mathrm{BMO}\left(\Omega\right)}+\sigma_{0}^{-\frac{n}{2}}\left\|f\right\|_{L^{2}\left(\Omega\right)}\right\},\] _for all \(0<r<\sigma_{0}\) and for all \(x\in\Omega^{\prime}.\)_ **Proposition 11** (Proposition 1.16, [1]).: _Let \(x_{0}\in\partial\mathbb{R}_{+}^{n}\) and \(R>0.\) Let \(f\in\mathrm{BMO}\left(B_{R}^{+}\left(x_{0}\right)\right).\) Then there exists a constant \(C=C\left(n\right)>0\) such that for any \(0<\sigma_{0}<R/8,\) we have_ \[\left|\left(f\right)_{B_{r}^{+}\left(x\right)}\right|\leq C\left\{\left(1+\left|\log r\right|\right)\left[f\right]_{\mathrm{BMO}\left(B_{R}^{+}\left(x_{0}\right)\right)}+\sigma_{0}^{-\frac{n}{2}}\left\|f\right\|_{L^{2}\left(B_{R}^{+}\left(x_{0}\right)\right)}\right\},\] _for all \(0<r<\sigma_{0}\) and for all \(x\in\Gamma_{R/2}\left(x_{0}\right).\)_ We will crucially use the following generalized Campanato type spaces.
**Definition 12**.: _We define the space \(\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)\) as the vector space of all functions \(f\in L^{2}\left(\Omega\right)\) such that_ \[\left[f\right]_{\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)}:=\sup_{\rho>0,x\in\Omega}\left(1+\left|\log\rho\right|\right)\left[\fint_{\Omega_{\rho}\left(x\right)}\left|f\left(y\right)-\left(f\right)_{\Omega_{\rho}\left(x\right)}\right|^{2}\ \mathrm{d}y\right]^{\frac{1}{2}}<\infty,\] _equipped with the norm_ \[\left\|f\right\|_{\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)}:=\left\|f\right\|_{L^{2}\left(\Omega\right)}+\left[f\right]_{\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)}.\] _For any \(f\in\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right),\) the mean oscillation 'modulus' of \(f\) is defined as the function \(\Theta^{f}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+},\) defined by_ \[\Theta^{f}\left(r\right):=\sup_{0<\rho\leq r,x\in\Omega}\left(1+\left|\log\rho\right|\right)\left[\fint_{\Omega_{\rho}\left(x\right)}\left|f\left(y\right)-\left(f\right)_{\Omega_{\rho}\left(x\right)}\right|^{2}\ \mathrm{d}y\right]^{\frac{1}{2}}.\] _We denote by \(\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)\) the subspace defined by_ \[\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right):=\left\{f\in\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right):\lim_{r\to 0}\Theta^{f}\left(r\right)=0\right\}.\]

**Remark 13**.: _Let us now record a few facts about these spaces._

1. _\(\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\) is Banach and \(\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\) is a proper subspace and_ \[\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)=\overline{C^{\infty}\left(\overline{\Omega}\right)}^{\left\|\cdot\right\|_{\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)}}.\] _Moreover, if \(f\in L^{\infty}\left(\Omega\right)\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right),\) then there exists a sequence \(\left\{f_{s}\right\}_{s\in\mathbb{N}}\subset C^{\infty}\left(\overline{\Omega}\right)\) such that_ \[\left\|f_{s}\right\|_{L^{\infty}\left(\Omega\right)}\leq\left\|f\right\|_{L^{\infty}\left(\Omega\right)}\qquad\text{for all }s\in\mathbb{N},\] \[\Theta^{f_{s}}\left(\sigma\right)\leq c\left(\Theta^{f}\left(\sigma\right)+\left\|f\right\|_{L^{\infty}\left(\Omega\right)}\sigma^{n}\right)\qquad\text{for all }s\in\mathbb{N},\] _for some constant \(c=c\left(n\right)>0\) and_ \[\lim_{s\rightarrow\infty}\left(\left\|f_{s}-f\right\|_{\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)}+\left\|f_{s}-f\right\|_{L^{p}\left(\Omega\right)}\right)=0\qquad\text{for all }1\leq p<\infty.\] _See Proposition 1.2 and Remark 1.5 in [1] for proofs._
2. _The space of 'multipliers' of \(\mathrm{BMO}\left(\Omega\right)\) is \(L^{\infty}\left(\Omega\right)\cap\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right),\) i.e._ \[\mathcal{M}\left(\mathrm{BMO}\left(\Omega\right)\right)\simeq L^{\infty}\left(\Omega\right)\cap\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)\] _with equivalent norms. See Theorem 2 in [6]._
3. _The sets \(C\left(\overline{\Omega}\right)\setminus\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)\) and \(L^{\infty}\left(\Omega\right)\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right)\setminus C\left(\overline{\Omega}\right)\) are both non-empty. See Proposition 1.9 in [1]._
4. _The set of Dini continuous functions in \(\overline{\Omega}\) is a proper subset of \(C\left(\overline{\Omega}\right)\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right).\) See Proposition 1.10 in [1]._

**Proposition 14** (Proposition 1.13, [1]).: _Let \(\Omega\subset\mathbb{R}^{n}\) be open and bounded and let \(\Omega^{\prime}\subset\subset\Omega\) be open. Let \(f\in\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega\right).\) Then for any \(1\leq p<\infty,\) there exists a constant \(C=C\left(n,p\right)>0\) such that for any \(0<\sigma_{0}<\mathrm{dist}\left(\Omega^{\prime},\partial\Omega\right)/16,\) we have_ \[\left(1+\left|\log r\right|\right)\left[\fint_{B_{r}\left(x\right)}\left|f-\left(f\right)_{B_{r}\left(x\right)}\right|^{p}\right]^{\frac{1}{p}}\leq C\Theta^{f}\left(\sigma_{0}\right),\] _for all \(0<r<\sigma_{0}\) and for all \(x\in\Omega^{\prime}.\)_

**Proposition 15**.: _Let \(x_{0}\in\partial\mathbb{R}_{+}^{n}\) and \(R>0.\) Let \(f\in\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(B_{R}^{+}\left(x_{0}\right)\right).\) Then for any \(1\leq p<\infty,\) there exists a constant \(C=C\left(n,p\right)>0\) such that for any \(0<\sigma_{0}<R/8,\) the following hold._

1. _For all \(0<r<\sigma_{0}\) and for all \(x\in B_{R/2}^{+}\left(x_{0}\right)\) such that \(B_{r}\left(x\right)\subset\subset B_{R}^{+}\left(x_{0}\right),\) we have_ \[\left(1+\left|\log r\right|\right)\left[\fint_{B_{r}\left(x\right)}\left|f-\left(f\right)_{B_{r}\left(x\right)}\right|^{p}\right]^{\frac{1}{p}}\leq C\Theta^{f}\left(\sigma_{0}\right).\]
2. _For all \(0<r<\sigma_{0}\) and for all \(x\in\Gamma_{R/2}\left(x_{0}\right),\) we have_ \[\left(1+\left|\log r\right|\right)\left[\fint_{B_{r}^{+}\left(x\right)}\left|f-\left(f\right)_{B_{r}^{+}\left(x\right)}\right|^{p}\right]^{\frac{1}{p}}\leq C\Theta^{f}\left(\sigma_{0}\right).\]

The spaces we defined here, along with their claimed properties, extend componentwise to functions that take values in finite dimensional vector spaces.

**Lemma 16**.: _Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic. Define \(A^{-1}:\Omega\rightarrow\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\) by \(A^{-1}\left(x\right)=\left[A\left(x\right)\right]^{-1}\) for a.e. \(x\in\Omega.\) Then \(A^{-1}\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\) and is uniformly Legendre elliptic._

Proof.: By equivalence of norms on finite dimensional spaces, \(A^{-1}\) is bounded and uniformly Legendre elliptic.
Note the estimate in the operator norms \[\left\|A^{-1}\left(x\right)-\left(A\right)_{\Omega_{\rho}\left(x_{0}\right)}^{-1}\right\|_{\mathrm{op}}\leq\left\|A^{-1}\left(x\right)\right\|_{\mathrm{op}}\left\|A\left(x\right)-\left(A\right)_{\Omega_{\rho}\left(x_{0}\right)}\right\|_{\mathrm{op}}\left\|\left(A\right)_{\Omega_{\rho}\left(x_{0}\right)}^{-1}\right\|_{\mathrm{op}}.\] From this, by minimality of integral averages and again by equivalence of norms, for any \(x\in\Omega\) and any \(\rho>0\), we have \[\fint_{\Omega_{\rho}\left(x\right)}\left|A^{-1}\left(y\right)-\left(A^{-1}\right)_{\Omega_{\rho}\left(x\right)}\right|^{2}\ \mathrm{d}y\leq\fint_{\Omega_{\rho}\left(x\right)}\left|A^{-1}\left(y\right)-\left(A\right)_{\Omega_{\rho}\left(x\right)}^{-1}\right|^{2}\ \mathrm{d}y\leq\frac{c}{\gamma^{4}}\fint_{\Omega_{\rho}\left(x\right)}\left|A\left(y\right)-\left(A\right)_{\Omega_{\rho}\left(x\right)}\right|^{2}\ \mathrm{d}y.\] The claimed result follows.

The following lemma is easy to establish ( see [2] ).

**Lemma 17**.: _Let \(\Omega_{1},\Omega_{2}\subset\mathbb{R}^{n}\) be bounded open subsets and let \(\Phi:\overline{\Omega_{1}}\rightarrow\overline{\Omega_{2}}\) be a \(C^{2}\) diffeomorphism. Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega_{2};\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right).\) Define_ \[\tilde{A}\left(y\right):=\left|\det D\Phi(y)\right|A\left(\Phi(y)\right)\qquad\text{ for }y\in\Omega_{1}.\] _Then \(\tilde{A}\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega_{1};\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right).\)_

## 4 Up to the boundary estimates

**Theorem** (Up to the boundary estimate).: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(C^{2}\) boundary. Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic with constant \(\gamma>0\) and let \(F\in L^{2}\cap\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right).\) Let \(u\in W^{1,2}\left(\Omega;\Lambda^{k}\right),\) with \(\nabla u\in\mathrm{BMO}\left(\Omega\right),\) be a weak solution of_ \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du\right)&=d^{*}F&&\text{ in }\Omega,\\ d^{*}u&=0&&\text{ in }\Omega,\\ \nu\wedge u&=0&&\text{ on }\partial\Omega,\end{aligned}\right.\] _and let \(x_{0}\in\partial\Omega.\) Then there exist a radius \(R>0,\) an admissible boundary coordinate system \(\Phi\) around \(x_{0}\) and constants \(C,c_{1}>0,\) depending only on \(x_{0},n,k,N,\gamma,\Omega,\) such that we have_ \[\left[\nabla u\right]_{\mathrm{BMO}\left(\Phi\left(B_{R/2}^{+}\right)\right)}^{2}\leq C\left\{\left(\left[\Theta^{A}\left(2c_{1}\sigma\right)\right]^{2}+\left[\theta\left(\sigma\right)\right]^{2}\left(\left\|A\right\|_{L^{\infty}}^{2}+1\right)\right)\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}^{2}+\sigma^{-n}\kappa\right\}, \tag{6}\] _for every \(0<\sigma<R/8,\) where_ \[\kappa:=\left(\left\|A\right\|_{L^{\infty}\left(\Omega\right)}^{2}+\left[A\right]_{\mathscr{L}^{2,n}_{\left(1+\left|\log r\right|\right)^{-1}}\left(\Omega\right)}^{2}+1\right)\left\|\nabla u\right\|_{L^{2}\left(\Omega\right)}^{2}+\left\|u\right\|_{L^{2}\left(\Omega\right)}^{2}+\left\|F\right\|_{L^{2}\left(\Omega\right)}^{2}+\left[F\right]_{\mathrm{BMO}\left(\Omega\right)}^{2}.\]

Proof.: For any \(x_{0}\in\partial\Omega,\) we use Lemma 20 to deduce the existence of a radius \(0<R_{0}<1,\) a neighborhood \(U\) of \(x_{0}\) in \(\mathbb{R}^{n}\) and an admissible boundary coordinate system \(\Phi\in\mathrm{Diff}^{2}(\overline{B_{R_{0}}};\overline{U}),\) satisfying \[\Phi(0)=x_{0},\ D\Phi(0)\in\mathbb{SO}\left(n\right),\ \Phi(B_{R_{0}}^{+})=\Omega\cap U\text{ and }\Phi(\Gamma_{R_{0}})=\partial\Omega\cap U,\] such that \(\omega=\Phi^{\ast}\left(u\right)\in W^{1,2}(B_{R_{0}}^{+};\varLambda^{k})\) satisfies \(e_{n}\wedge\omega=0\) on \(\Gamma_{R_{0}}\) and \[\int_{B_{R_{0}}^{+}}\left\langle\tilde{A}\left(x\right)d\omega;d\psi\right\rangle+\int_{B_{R_{0}}^{+}}\left\langle d^{\ast}\omega;d^{\ast}\psi\right\rangle-\int_{B_{R_{0}}^{+}}\left\langle\tilde{F};d\psi\right\rangle+\int_{B_{R_{0}}^{+}}\left\langle\mathrm{P}\omega+\mathrm{R}\nabla\omega;\psi\right\rangle+\int_{B_{R_{0}}^{+}}\left\langle\mathrm{Q}\omega;\nabla\psi\right\rangle+\int_{B_{R_{0}}^{+}}\left\langle\mathrm{S}\nabla\omega;\nabla\psi\right\rangle=0\] for all \(\psi\in W^{1,2}_{T,\mathrm{flat}}(B_{R_{0}}^{+};\varLambda^{k}),\) where
\(\tilde{A},\tilde{F},\mathrm{P},\mathrm{Q},\mathrm{R},\mathrm{S}\) are as in Lemma 20. Now let \(0<R<R_{0}.\) We are going to choose \(R\) later ( see (17) and (22) ). The constants in all the estimates that we derive from here onward may depend on \(R_{0},\) but do not depend on \(R.\) Fix \(0<\sigma<R/16.\) Let \(y=\left(y^{\prime},y_{n}\right)\in B_{R/2}^{+}.\) With abuse of notation, we denote the point \(\left(y^{\prime},0\right)\in\partial\mathbb{R}_{+}^{n}\) by \(y^{\prime}.\) Exactly one of the following two cases occurs:

* **(I)** \(y_{n}>\sigma,\) in which case \(B_{\sigma}\left(y\right)\subset\subset B_{R}^{+};\)
* **(II)** \(0\leq y_{n}\leq\sigma,\) in which case \(B_{\sigma}\left(y\right)\cap B_{R}^{+}\subset B_{2\sigma}^{+}\left(y^{\prime}\right)\subset B_{3R/4}^{+}.\)

**Case (I):** By Proposition 9, we can find \(\alpha\in W^{1,2}_{0}\left(B_{\sigma}\left(y\right),\varLambda^{k}\right),\) the unique weak solution to the following Dirichlet BVP \[\int_{B_{\sigma}\left(y\right)}\left\langle\left(\tilde{A}\right)_{B_{\sigma}\left(y\right)}d\alpha;d\phi\right\rangle+\int_{B_{\sigma}\left(y\right)}\left\langle d^{\ast}\alpha;d^{\ast}\phi\right\rangle=\int_{B_{\sigma}\left(y\right)}\left\langle\tilde{F};d\phi\right\rangle-\int_{B_{\sigma}\left(y\right)}\left\langle G;\nabla\phi\right\rangle-\int_{B_{\sigma}\left(y\right)}\left\langle g;\phi\right\rangle \tag{7}\] for all \(\phi\in W_{0}^{1,2}\left(B_{\sigma}\left(y\right),\Lambda^{k}\right)\), where \(g:=\mathrm{P}\omega+\mathrm{R}\nabla\omega\) and \[G:=\mathrm{Q}\omega+\mathrm{S}\nabla\omega+\left[\tilde{A}\left(x\right)-\left(\tilde{A}\right)_{B_{\sigma}\left(y\right)}\right]d\omega=:G_{1}+G_{2}+G_{3}.\] Using Proposition 5 and (7), we have \[\int_{B_{\sigma}\left(y\right)}\left|\nabla\alpha\right|^{2}\leq C\left(\int_{B_{\sigma}\left(y\right)}\left|d\alpha\right|^{2}+\int_{B_{\sigma}\left(y\right)}\left|d^{*}\alpha\right|^{2}\right)\leq C\left(\int_{B_{\sigma}\left(y\right)}\left\langle\left(\tilde{A}\right)_{B_{\sigma}\left(y\right)}d\alpha;d\alpha\right\rangle+\int_{B_{\sigma}\left(y\right)}\left|d^{*}\alpha\right|^{2}\right)=C\left(\int_{B_{\sigma}\left(y\right)}\left\langle\tilde{F};d\alpha\right\rangle-\int_{B_{\sigma}\left(y\right)}\left\langle G;d\alpha\right\rangle-\int_{B_{\sigma}\left(y\right)}\left\langle g;\alpha\right\rangle\right). \tag{8}\] Now we estimate each term, starting with the easy ones. Using the Poincare-Sobolev, Holder and Young inequalities with \(\varepsilon>0\), we deduce \[\left|\int_{B_{\sigma}\left(y\right)}\left\langle g;\alpha\right\rangle\right|\leq C\left(\int_{B_{\sigma}\left(y\right)}\left|g\right|^{\frac{2n}{n+2}}\right)^{\frac{n+2}{2n}}\left(\int_{B_{\sigma}\left(y\right)}\left|\nabla\alpha\right|^{2}\right)^{\frac{1}{2}}\leq C_{\varepsilon}\sigma^{2}\left[\left\|\mathrm{P}\right\|_{L^{\infty}}^{2}\int_{B_{\sigma}\left(y\right)}\left|\omega\right|^{2}+\left\|\mathrm{R}\right\|_{L^{\infty}}^{2}\int_{B_{\sigma}\left(y\right)}\left|\nabla\omega\right|^{2}\right]+\varepsilon\int_{B_{\sigma}\left(y\right)}\left|\nabla\alpha\right|^{2}. \tag{9}\] Similarly, we deduce \[\left|\int_{B_{\sigma}\left(y\right)}\left\langle G_{1};d\alpha\right\rangle\right|\leq C\varepsilon\int_{B_{\sigma}\left(y\right)}\left|\nabla\alpha\right|^{2}+C_{\varepsilon}\left\|\mathrm{Q}\right\|_{L^{\infty}}^{2}\int_{B_{\sigma}\left(y\right)}\left|\omega\right|^{2}.
\tag{10}\] Using Young's inequality with \(\varepsilon>0\), we have \[\left|\int_{B_{\sigma}\left(y\right)}\left\langle\tilde{F};d \alpha\right\rangle\right| =\left|\int_{B_{\sigma}\left(y\right)}\left\langle\tilde{F}-\left( \tilde{F}\right)_{B_{\sigma}\left(y\right)};d\alpha\right\rangle\right|\] \[\leq C\varepsilon\int_{B_{\sigma}\left(y\right)}\left|\nabla \alpha\right|^{2}+C_{\varepsilon}\int_{B_{\sigma}\left(y\right)}\left|\tilde{F} -\left(\tilde{F}\right)_{B_{\sigma}\left(y\right)}\right|^{2}. \tag{11}\] Now we estimate the tricky terms. By Young's inequality with \(\varepsilon>0\), we deduce \[\left|\int_{B_{\sigma}\left(y\right)}\left\langle G_{2};d\alpha \right\rangle\right| =\left|\int_{B_{\sigma}\left(y\right)}\left\langle G_{2}-\left(G_{ 2}\right)_{B_{\sigma}\left(y\right)};d\alpha\right\rangle\right|\] \[\leq C\varepsilon\int_{B_{\sigma}\left(y\right)}\left|\nabla \alpha\right|^{2}+C_{\varepsilon}\int_{B_{\sigma}\left(y\right)}\left|G_{2}- \left(G_{2}\right)_{B_{\sigma}\left(y\right)}\right|^{2}.\] Note that for scalar functions \(f,g\), where \(f\in C^{1}\) and \(g\in\text{BMO}\), we have, \[\int_{B_{\sigma}(y)}\left|fg-\left(fg\right)_{B_{\sigma}(y)}\right| ^{2}\] \[\leq\int_{B_{\sigma}(y)}\left|fg-\left(f\right)_{B_{\sigma}(y)} \left(g\right)_{B_{\sigma}(y)}\right|^{2}\] \[\leq\left\|f\right\|_{L^{\infty}}^{2}\int_{B_{\sigma}(y)}\left|g- \left(g\right)_{B_{\sigma}(y)}\right|^{2}+\left\|\nabla f\right\|_{L^{\infty} }^{2}\sigma^{n+2}\left|\left(g\right)_{B_{\sigma}(y)}\right|^{2}.\] Using this for each component, and using properties of S, we easily estimate, \[\int_{B_{\sigma}(y)}\left|\text{S}\nabla\omega-\left(\text{S} \nabla\omega\right)_{B_{\sigma}(y)}\right|^{2}\leq CR^{2}\int_{B_{\sigma}(y)} \left|\nabla\omega-\left(\nabla\omega\right)_{B_{\sigma}(y)}\right|^{2}\\ +C\sigma^{n+2}\left|\left(\nabla\omega\right)_{B_{\sigma}(y)} \right|^{2}.\] Applying Proposition 10, we have \[\int_{B_{\sigma}(y)}\left|G_{2}-\left(G_{2}\right)_{B_{\sigma}(y) }\right|^{2}\leq C\sigma^{n+2}\left(1+\left|\log\sigma\right|\right)^{2}\left[ \nabla\omega\right]_{\text{BMO}\left(B_{R}^{+}\right)}^{2}\\ +CR^{2}\int_{B_{\sigma}(y)}\left|\nabla\omega-\left(\nabla\omega \right)_{B_{\sigma}(y)}\right|^{2}+\sigma^{2}\left\|\nabla\omega\right\|_{L^{2 }\left(B_{R}^{+}\right)}^{2}.\] Hence, we arrive at \[\left|\int_{B_{\sigma}(y)}\left\langle G_{2};d\alpha\right\rangle \right|\leq C_{\varepsilon}\sigma^{n}\left(\theta\left(\sigma \right)\right)^{2}\left[\nabla\omega\right]_{\text{BMO}\left(B_{R}^{+}\right) }^{2}+C_{\varepsilon}\sigma^{2}\left\|\nabla\omega\right\|_{L^{2}\left(B_{R}^{ +}\right)}^{2}\\ +CR^{2}\int_{B_{\sigma}(y)}\left|\nabla\omega-\left(\nabla\omega \right)_{B_{\sigma}(y)}\right|^{2}+C\varepsilon\int_{B_{\sigma}(y)}\left| \nabla\alpha\right|^{2}. \tag{12}\] Finally, we have \[\left|\int_{B_{\sigma}(y)}\left\langle G_{3};d\alpha\right\rangle\right|\leq C \varepsilon\int_{B_{\sigma}(y)}\left|\nabla\alpha\right|^{2}+C_{\varepsilon} \int_{B_{\sigma}(y)}\left|G_{3}\right|^{2}. \tag{13}\] Now we estimate the last integral on the right in the last inequality. 
We have \[\int_{B_{\sigma}(y)}\left|G_{3}\right|^{2} =\int_{B_{\sigma}(y)}\left|\left[\tilde{A}\left(x\right)-\left( \tilde{A}\right)_{B_{\sigma}(y)}\right]d\omega\right|^{2}\] \[\leq\left(\int_{B_{\sigma}(y)}\left|\tilde{A}\left(x\right)-\left( \tilde{A}\right)_{B_{\sigma}(y)}\right|^{4}\right)^{\frac{1}{2}}\left(\int_{B_ {\sigma}(y)}\left|d\omega\right|^{4}\right)^{\frac{1}{2}}.\] By Proposition 14, we have \[\left(\int_{B_{\sigma}(y)}\left|\tilde{A}\left(x\right)-\left(\tilde{A} \right)_{B_{\sigma}(y)}\right|^{4}\right)^{\frac{1}{2}}\leq\frac{c\sigma^{ \frac{n}{2}}}{\left(1+\left|\log\sigma\right|\right)^{2}}\left[\Theta^{\tilde{A }}\left(\sigma\right)\right]^{2}.\] Moreover, using Proposition 14 and Proposition 10, we also have \[\left(\int_{B_{\sigma}(y)}\left|d\omega\right|^{4}\right)^{\frac{1}{4}} \leq c\sigma^{\frac{n}{4}}\left[\left(\fint_{B_{\sigma}(y)}\left| d\omega-\left(d\omega\right)_{B_{\sigma}(y)}\right|^{4}\right)^{\frac{1}{4}}+ \left|\left(d\omega\right)_{B_{\sigma}(y)}\right|\right]\] \[\leq c\sigma^{\frac{n}{4}}\left[\left(1+\left|\log\sigma\right| \right)\left[d\omega\right]_{\text{BMO}\left(B_{R}^{+}\right)}+\sigma^{-\frac{n }{2}}\left\|d\omega\right\|_{L^{2}\left(B_{R}^{+}\right)}\right].\] Combining the estimates, we finally obtain \[\int_{B_{\sigma}(y)}\left|G_{3}\right|^{2}\leq\frac{c}{\left(1+ \left|\log\sigma\right|\right)^{2}}\left[\tilde{A}\right]_{\mathscr{L}^{2, \frac{1}{1+\left|\log\tau\right|}}\left(B_{R}^{+}\right)}^{2}\left\|\nabla \omega\right\|_{L^{2}\left(B_{R}^{+}\right)}^{2}\\ +c\sigma^{n}\left[\Theta^{\tilde{A}}\left(\sigma\right)\right]^{ 2}\left[\nabla\omega\right]_{\text{BMO}\left(B_{R}^{+}\right)}^{2}. \tag{14}\] Thus, combining the estimates (8), (9), (11), (10), (12), (13), (14) and choosing \(\varepsilon>0\) small enough, we deduce \[\int_{B_{\sigma}(y)}\left|\nabla\alpha\right|^{2}\leq c\sigma^{n }\left(\sigma^{2}\left(1+\left|\log\sigma\right|\right)^{2}+c\left[\Theta^{ \tilde{A}}\left(\sigma\right)\right]^{2}\right)\left[\nabla\omega\right]_{ \text{BMO}\left(B_{R}^{+}\right)}^{2}\\ +CR^{2}\int_{B_{\sigma}(y)}\left|\nabla\omega-\left(\nabla\omega \right)_{B_{\sigma}(y)}\right|^{2}+\sigma^{n}\kappa_{1}, \tag{15}\] where \[\kappa_{1}:=\fint_{B_{\sigma}(y)}\left|\tilde{F}-\left(\tilde{F} \right)_{B_{\sigma}(y)}\right|^{2}+\sigma^{-n}\left(\left\|\text{Q}\right\|_ {L^{\infty}}^{2}+\left\|\text{P}\right\|_{L^{\infty}}^{2}\right)\left\|\omega \right\|_{L^{2}\left(B_{R}^{+}\right)}^{2}\\ +\sigma^{-n}\left(1+\left\|\text{R}\right\|_{L^{\infty}}^{2}+ \left[\tilde{A}\right]_{\mathscr{L}^{2,n}\frac{1}{1+\left|\log\tau\right|} \left(B_{R}^{+}\right)}^{2}\right)\left\|\nabla\omega\right\|_{L^{2}\left(B_{ R}^{+}\right)}^{2}.\] Now, \(\beta=\omega-\alpha\), satisfies, for all \(\phi\in W_{0}^{1,2}\left(B_{\sigma}\left(y\right),\Lambda^{k}\right),\) \[\int_{B_{\sigma}(y)}\left\langle\left(\tilde{A}\right)_{B(y,\sigma)}d\beta;d \phi\right\rangle+\int_{B_{\sigma}(y)}\left\langle d^{*}\beta;d^{*}\phi \right\rangle=0.\] Standard arguments, using decay estimates for \(\beta\) implies, for any \(0<\rho<\sigma\), \[\int_{B_{\rho}(y)}\left|\nabla\omega-\left(\nabla\omega\right)_{ B_{\rho}(y)}\right|^{2}\\ \leq c\left(\frac{\rho}{\sigma}\right)^{n+2}\int_{B_{\sigma}(y)} \left|\nabla\omega-\left(\nabla\omega\right)_{B_{\sigma}(y)}\right|^{2}+c\int _{B_{\sigma}(y)}\left|\nabla\alpha\right|^{2}.\] Defining \(\psi\left(r\right):=\int_{B_{r}(y)}\left|\nabla\omega-\left(\nabla\omega \right)_{B_{r}(y)}\right|^{2}\), and using (15), we have \[\psi\left(\rho\right)\leq 
c\left[\left(\frac{\rho}{\sigma}\right)^ {n+2}+CR^{2}\right]\psi\left(\sigma\right)\\ +C\sigma^{n}\left(\sigma^{2}\left(1+\left|\log\sigma\right|\right) ^{2}+\left[\Theta^{\tilde{A}}\left(\sigma\right)\right]^{2}\right)\left[ \nabla\omega\right]_{\text{BMO}\left(B_{R}^{+}\right)}^{2}+C\sigma^{n}\kappa_{1}. \tag{16}\] Now we choose \(0<R<1\) in (16) such that \[CR^{2}<\varepsilon_{0}, \tag{17}\] where \(\varepsilon_{0}>0\) is the smallness parameter given by the standard iteration lemma ( Lemma 5.13 in [5] ). Thus, the iteration lemma implies \[\frac{1}{\rho^{n}}\psi\left(\rho\right)\leq C\left[\frac{1}{\sigma^{n}}\psi \left(\sigma\right)+\left(\theta\left(\sigma\right)^{2}+\left[\Theta^{\tilde{A }}\left(\sigma\right)\right]^{2}\right)\left[\nabla\omega\right]_{\mathrm{BMO} \left(B_{R}^{+}\right)}^{2}+\kappa_{1}\right] \tag{18}\] for every \(0<\rho<\sigma.\) **Case II:** Again by Proposition 9, we can find \(\alpha\in W_{0}^{1,2}\left(B_{\sigma}\left(y\right),\varLambda^{k}\right)\) which is the unique weak solution to the following Dirichlet BVP \[\begin{cases}d^{*}\left(\left(\tilde{A}\right)_{B_{2\sigma}^{+} \left(y^{\prime}\right)}d\alpha\right)+dd^{*}\alpha&=d^{*}\tilde{F}-\mathrm{ div}\,G-g&\text{ in }B_{2\sigma}^{+}\left(y^{\prime}\right),\\ \alpha&=0&\text{on }\partial B_{2\sigma}^{+}\left(y^{\prime}\right), \end{cases} \tag{19}\] where \(g,G\) are the same as in Case I. Arguing exactly as before, but using Proposition 15 and Proposition 11 in place of Proposition 14 and Proposition 10, respectively, we deduce the estimate \[\int_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left|\nabla\alpha \right|^{2}\leq c\sigma^{n}\left(\sigma^{2}\left(1+\left|\log\sigma\right| \right)^{2}+c\left[\Theta^{\tilde{A}}\left(\sigma\right)\right]^{2}\right) \left[\nabla\omega\right]_{\mathrm{BMO}\left(B_{R}^{+}\right)}^{2}\\ +CR^{2}\int_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left|\nabla \omega-\left(\nabla\omega\right)_{B_{2\sigma}^{+}\left(y^{\prime}\right)} \right|^{2}+c\sigma^{n}\kappa_{2}, \tag{20}\] where \[\kappa_{2}:=\fint_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left| \tilde{F}-\left(\tilde{F}\right)_{B_{2\sigma}^{+}\left(y^{\prime}\right)} \right|^{2}+\sigma^{-n}\left(\left\|\mathrm{Q}\right\|_{L^{\infty}}^{2}+\left\| \mathrm{P}\right\|_{L^{\infty}}^{2}\right)\left\|\omega\right\|_{L^{2}\left(B_ {R}^{+}\right)}^{2}\\ +\sigma^{-n}\left(1+\left\|\mathrm{R}\right\|_{L^{\infty}}^{2}+ \left[\tilde{A}\right]_{\frac{1}{1+\left|\log r\right|}\left(B_{R}^{+}\right) }^{2}\right)\left\|\nabla\omega\right\|_{L^{2}\left(B_{R}^{+}\right)}^{2}.\] Now \(\beta=\omega-\alpha\in W^{1,2}\left(B_{2\sigma}^{+}\left(y^{\prime}\right); \varLambda^{k}\right)\) satisfies \(e_{n}\wedge\beta=0\) on \(\Gamma_{2\sigma}\left(y^{\prime}\right)\) and \[\int_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left\langle\left(\tilde{A} \right)_{B_{2\sigma}^{+}\left(y^{\prime}\right)}d\beta;d\phi\right\rangle+ \int_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left\langle d^{*}\beta;d^{*} \phi\right\rangle=0\] for all \(\phi\in W_{0}^{1,2}\left(B_{2\sigma}^{+}\left(y^{\prime}\right);\varLambda^{k }\right).\) Now since for any \(0<\rho<\sigma,\) we have \(\eta^{2}\beta\in W_{0}^{1,2}\left(B_{2\sigma}^{+}\left(y^{\prime}\right); \varLambda^{k}\right)\) for any \(\eta\in C_{c}^{\infty}\left(B_{2\sigma}^{+}\left(y^{\prime}\right)\right),\) we can plug this as the test function above. 
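Plugging \(\psi=\eta^{2}\beta\) and absorbing terms in the usual way yields a Caccioppoli-type inequality; schematically (a sketch of the standard computation, stated only in the form needed here), for \(0<\rho<2\sigma\) and a cut-off \(\eta\) supported in \(B_{2\sigma}^{+}\left(y^{\prime}\right)\) with \(\eta\equiv 1\) on \(B_{\rho}^{+}\left(y^{\prime}\right),\)
\[\int_{B_{\rho}^{+}\left(y^{\prime}\right)}\left(\left|d\beta\right|^{2}+\left|d^{*}\beta\right|^{2}\right)\leq\frac{C}{\left(2\sigma-\rho\right)^{2}}\int_{B_{2\sigma}^{+}\left(y^{\prime}\right)}\left|\beta\right|^{2}.\]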
Then, arguing exactly as in Theorem 1, Theorem 2 and Theorem 3 of [10], for any \(0<\rho<\sigma,\) we have the following decay estimate \[\int_{B_{2\rho}^{+}\left(y^{\prime}\right)}\left|\nabla\beta-\left(\nabla\beta \right)_{B_{2\rho}^{+}\left(y^{\prime}\right)}\right|^{2}\leq c\left(\frac{ \rho}{\sigma}\right)^{n+2}\int_{B_{2\sigma}^{+}\left(y^{\prime}\right))}\left| \nabla\beta-\left(\nabla\beta\right)_{B_{2\sigma}^{+}\left(y^{\prime}\right)} \right|^{2}.\] Note that since \(\beta\) does not vanish on \(\Gamma_{2\sigma}\left(y^{\prime}\right),\) this decay estimate can not be derived from the usual one for Dirichlet BVP. Now by standard arguments and (20), setting \(\psi\left(r\right):=\int_{B_{2r}^{+}\left(y^{\prime}\right)}\left|\nabla\omega- \left(\nabla\omega\right)_{B_{2r}^{+}\left(y^{\prime}\right)}\right|^{2},\) we deduce \[\psi\left(\rho\right)\leq c\left[\left(\frac{\rho}{\sigma}\right) ^{n+2}+CR^{2}\right]\psi\left(\sigma\right)\\ +C\sigma^{n}\left(\theta\left(\sigma\right)^{2}+\left[\Theta^{ \tilde{A}}\left(\sigma\right)\right]^{2}\right)\left[\nabla\omega\right]_{ \mathrm{BMO}\left(B_{R}^{+}\right)}^{2}+C\sigma^{n}\kappa_{2}. \tag{21}\] Now we choose \(0<R<1\) in (21) such that \[CR^{2}<\varepsilon_{0}, \tag{22}\] where \(\varepsilon_{0}>0\) is the smallness parameter given by the standard iteration lemma ( Lemma 5.13 in [5] ). Thus, using the iteration lemma and the fact that \(B_{\rho}\left(y\right)\cap B_{R}^{+}\subset B_{2\rho}^{+}\left(y^{\prime} \right),\) we obtain \[\frac{1}{\rho^{n}}\int_{B_{\rho}\left(y\right)\cap B_{R}^{+}}\left| \nabla\omega-\left(\nabla\omega\right)_{B_{\rho}\left(y\right)\cap B_{R}^{+}} \right|^{2}\\ \leq C\left[\frac{1}{\sigma^{n}}\psi\left(\sigma\right)+\left( \theta\left(\sigma\right)^{2}+\left[\Theta^{\tilde{A}}\left(\sigma\right) \right]^{2}\right)\left[\nabla\omega\right]_{\mathrm{BMO}\left(B_{R}^{+} \right)}^{2}+\kappa_{3}\right] \tag{23}\] On the other hand, clearly we have \[\frac{1}{\rho^{n}}\int_{B_{R}^{+}\cap B\left(y,\rho\right)}\left|\nabla\omega -\left(\nabla\omega\right)_{B_{R}^{+}\cap B\left(y,\rho\right)}\right|^{2} \leq C\sigma^{-n}\left\|\nabla\omega\right\|_{L^{2}\left(B_{R}^{+}\right)}^{2},\] for any \(\rho\geq\sigma\) and any \(y\in B_{R/2}^{+}.\) Combining this with (18) and (23) and taking supremum over all \(\rho>0\) and \(y\in B_{R/2}^{+},\) we arrive at \[\left[\nabla\omega\right]_{\mathrm{BMO}\left(B_{R/2}^{+}\right)}^{2}\leq C \left(\theta\left(\sigma\right)^{2}+\left[\Theta^{\tilde{A}}_{\frac{1}{1+ \log r!}}\left(\sigma\right)\right]^{2}\right)\left[\nabla\omega\right]_{ \mathrm{BMO}\left(B_{R}^{+}\right)}^{2}+\sigma^{-n}\tilde{\kappa},\] where \[\tilde{\kappa}:=\left[\tilde{F}\right]_{\mathrm{BMO}\left(B_{R}^ {+}\right)}^{2} +\sigma^{-n}\left(\left\|\mathrm{Q}\right\|_{L^{\infty}}^{2}+ \left\|\mathrm{P}\right\|_{L^{\infty}}^{2}\right)\left\|\omega\right\|_{L^{2} \left(B_{R}^{+}\right)}^{2}\] \[+\sigma^{-n}\left(1+\left\|\mathrm{R}\right\|_{L^{\infty}}^{2}+ \left[\tilde{A}\right]_{\mathscr{L}^{\frac{1}{1+\left|\log r\right|}}\left(B_{ R}^{+}\right)}^{2}\right)\left\|\nabla\omega\right\|_{L^{2}\left(B_{R}^{+} \right)}^{2}.\] Since \(u=\left(\Phi^{-1}\right)^{*}\omega,\) (6) follows and this finishes the proof. 
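For the reader's convenience, we record the iteration lemma in the form invoked above (a standard statement along the lines of Lemma 5.13 in [5]; in our applications \(\alpha=n+2\) and \(\beta=n\)): if \(\psi\) is nonnegative and nondecreasing and
\[\psi\left(\rho\right)\leq c\left[\left(\frac{\rho}{\sigma}\right)^{\alpha}+\varepsilon\right]\psi\left(\sigma\right)+C\sigma^{\beta}\qquad\text{ for all }0<\rho<\sigma\leq R,\]
with \(\beta<\alpha,\) then there exists \(\varepsilon_{0}=\varepsilon_{0}\left(c,\alpha,\beta\right)>0\) such that \(\varepsilon\leq\varepsilon_{0}\) implies
\[\psi\left(\rho\right)\leq C^{\prime}\left[\left(\frac{\rho}{\sigma}\right)^{\beta}\psi\left(\sigma\right)+C\rho^{\beta}\right]\qquad\text{ for all }0<\rho<\sigma\leq R.\]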
### Approximation

**Lemma 19** (Approximation lemma).: _If Theorem 21 holds with the additional assumption that \(A\) is smooth, then Theorem 21 holds._

Proof.: Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\frac{1}{1+|\log r|}}^{2,n}\left(\Omega;\mathcal{L}\left(\varLambda^{k+1};\varLambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constant \(\gamma>0\) and let \(F\in\mathrm{BMO}\left(\Omega;\varLambda^{k+1}\right)\). By Remark 13 (i), there exists a sequence \(\left\{A_{s}\right\}_{s\in\mathbb{N}}\subset C^{\infty}(\overline{\Omega};\mathcal{L}\left(\varLambda^{k+1};\varLambda^{k+1}\right))\) such that \(A_{s}\) is uniformly Legendre-elliptic with constant \(\gamma/2\), and we have the strong convergences \[A_{s}\to A\qquad\text{ in }\mathscr{L}_{\frac{1}{1+|\log r|}}^{2,n}\quad\text{ and }\quad\text{ in }L^{p}\text{ for every }1\leq p<\infty,\] along with the uniform estimates \(\left\|A_{s}\right\|_{L^{\infty}}\leq\left\|A\right\|_{L^{\infty}}\) and \[\Theta^{A_{s}}\left(\rho\right)\leq c\left(\Theta^{A}\left(\rho\right)+\rho^{n}\left\|A\right\|_{L^{\infty}}\right)\qquad\text{ for all }s\in\mathbb{N}.\] For every \(s\in\mathbb{N}\), using Theorem 9 and Remark 12 in [10], we can find \(u_{s}\in W_{d^{*},T}^{1,2}\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}\), the unique weak solution of \[\left\{\begin{aligned} d^{*}\left(A_{s}\left(x\right)du_{s}\right)&=d^{*}F&\text{ in }\Omega,\\ d^{*}u_{s}&=0&\text{ in }\Omega,\\ \nu\wedge u_{s}&=0&\text{ on }\partial\Omega.\end{aligned}\right. \tag{24}\] Putting \(\phi=u_{s}\) in the weak formulation, using Young's inequality with \(\varepsilon>0\) small enough and Proposition 6, we deduce \[\left\|u_{s}\right\|_{W^{1,2}\left(\Omega\right)}^{2}\leq C\left\|F\right\|_{L^{2}\left(\Omega\right)}^{2}\qquad\text{ for every }s\in\mathbb{N}.\] Thus, up to the extraction of a subsequence, we have \[u_{s}\rightharpoonup u\qquad\text{ in }W_{d^{*},T}^{1,2}\left(\Omega;\varLambda^{k}\right)\qquad\text{ for some }u\in W_{d^{*},T}^{1,2}\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}.\] Now, for any \(1\leq q<2\), we have \[\left\|\left[A_{s}\left(x\right)-A\left(x\right)\right]du_{s}\right\|_{L^{q}\left(\Omega\right)}\leq\left\|A_{s}-A\right\|_{L^{\frac{2q}{2-q}}\left(\Omega\right)}\left\|du_{s}\right\|_{L^{2}\left(\Omega\right)}\to 0. \tag{25}\] Now, by Lemma 7, (24) implies in particular that we also have \[\int_{\Omega}\left\langle A_{s}\left(x\right)du_{s};d\eta\right\rangle=\int_{\Omega}\left\langle F;d\eta\right\rangle\qquad\text{ for every }\eta\in C_{c}^{\infty}\left(\Omega;\varLambda^{k}\right). \tag{26}\] Thus, since \(du_{s}\rightharpoonup du\) in \(L^{2}\), in view of (25), for any \(\eta\in C_{c}^{\infty}\left(\Omega;\varLambda^{k}\right),\) we deduce \[\int_{\Omega}\left\langle A\left(x\right)du;d\eta\right\rangle-\int_{\Omega}\left\langle F;d\eta\right\rangle\qquad=\int_{\Omega}\left\langle A\left(x\right)\left[du-du_{s}\right];d\eta\right\rangle-\int_{\Omega}\left\langle\left[A_{s}\left(x\right)-A\left(x\right)\right]du_{s};d\eta\right\rangle\to 0.\] By density and Lemma 7, this implies that \(u\in W_{d^{*},T}^{1,2}\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}\) is the unique weak solution of \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du\right)&=d^{*}F&\text{ in }\Omega,\\ d^{*}u&=0&\text{ in }\Omega,\\ \nu\wedge u&=0&\text{ on }\partial\Omega,\end{aligned}\right.
\tag{27}\] From (24) and (27), for every \(\phi\in W_{d^{*},T}^{1,2},\) we have \[\int_{\Omega}\left\langle A_{s}\left(x\right)\left[du_{s}-du\right];d\phi\right\rangle=\int_{\Omega}\left\langle\left[A\left(x\right)-A_{s}\left(x\right)\right]du;d\phi\right\rangle.\] Putting \(\phi=u_{s}-u,\) using Young's inequality with \(\varepsilon>0\) small enough and by Proposition 6, we deduce \[\left\|u_{s}-u\right\|_{W^{1,2}\left(\Omega\right)}^{2}\leq C\left\|\left[A\left(x\right)-A_{s}\left(x\right)\right]du\right\|_{L^{2}\left(\Omega\right)}^{2}\qquad\text{ for every }s\in\mathbb{N}. \tag{28}\] But since we have \[\left\|\left[A_{s}\left(x\right)-A\left(x\right)\right]du\right\|_{L^{q}\left(\Omega\right)}\leq\left\|A_{s}-A\right\|_{L^{\frac{2q}{2-q}}\left(\Omega\right)}\left\|du\right\|_{L^{2}\left(\Omega\right)}\to 0\] for any \(1\leq q<2,\) the RHS of (28) converges to \(0\) by dominated convergence. Now, if Theorem 21 holds for smooth coefficients, we have the estimates \[\left[\nabla u_{s}\right]_{\text{BMO}\left(\Omega\right)}\leq C\left(\left[F\right]_{\text{BMO}\left(\Omega\right)}+\left\|F\right\|_{L^{2}\left(\Omega\right)}+\left\|u_{s}\right\|_{L^{2}\left(\Omega\right)}\right)\qquad\text{ for all }s\in\mathbb{N}.\] Now fix \(x\in\Omega\) and \(r>0.\) We have \[\fint_{\Omega_{r}\left(x\right)}\left|\nabla u-\left(\nabla u\right)_{\Omega_{r}\left(x\right)}\right|^{2}\leq 2\left[\nabla u_{s}\right]_{\text{BMO}\left(\Omega\right)}^{2}+\frac{2}{\left|\Omega_{r}\left(x\right)\right|}\int_{\Omega_{r}\left(x\right)}\left|\nabla u-\nabla u_{s}\right|^{2}\leq C\left(\left[F\right]_{\text{BMO}\left(\Omega\right)}^{2}+\left\|F\right\|_{L^{2}\left(\Omega\right)}^{2}+\left\|u_{s}\right\|_{L^{2}\left(\Omega\right)}^{2}\right)+\frac{2}{\left|\Omega_{r}\left(x\right)\right|}\int_{\Omega_{r}\left(x\right)}\left|\nabla u-\nabla u_{s}\right|^{2}\] for every \(s\in\mathbb{N}.\) Since \(u_{s}\to u\) in \(W^{1,2},\) letting \(s\to\infty\) and taking the supremum over \(x\) and \(r\), we obtain \[\left[\nabla u\right]_{\text{BMO}\left(\Omega\right)}\leq C\left(\left[F\right]_{\text{BMO}\left(\Omega\right)}+\left\|F\right\|_{L^{2}\left(\Omega\right)}+\left\|u\right\|_{L^{2}\left(\Omega\right)}\right).\] The estimate for \(\left[du\right]_{\text{BMO}\left(\Omega\right)}\) follows similarly.

### Flattening the boundary

**Lemma 20** (Flattening lemma).: _Let \(\partial\Omega\) be of class \(C^{2}\) and let \(F\in L^{2}\left(\Omega;\varLambda^{k+1}\right).\) Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\frac{1}{1+\left|\log r\right|}}^{2,n}\left(\Omega;\mathcal{L}\left(\varLambda^{k+1};\varLambda^{k+1}\right)\right)\) be uniformly Legendre elliptic with ellipticity constant \(\gamma\). Let \(\omega\in W_{d^{*},T}^{1,2}\left(\Omega;\varLambda^{k}\right)\) satisfy_ \[\int_{\Omega}\langle A(x)d\omega;d\phi\rangle=\int_{\Omega}\langle F;d\phi\rangle\quad\text{ for all }\phi\in W_{T}^{1,2}\left(\Omega;\varLambda^{k}\right).
\tag{29}\] _Let \(x_{0}\in\partial\Omega.\) Then there exists a neighborhood \(U\) of \(x_{0}\) in \(\mathbb{R}^{n}\) and a positive number \(0<R_{0}<1,\) such that there exists an admissible boundary coordinate system \(\Phi\in\text{\rm{Diff}}^{2}(\overline{B_{R_{0}}};\overline{U})\), such that_ \[\Phi(0)=x_{0},\quad D\Phi(0)\in\mathbb{SO}\left(n\right),\quad\Phi(B_{R_{0}}^ {+})=\Omega\cap U,\quad\Phi(\Gamma_{R_{0}})=\partial\Omega\cap U,\] _and \(u=\Phi^{*}\left(\omega\right)\in W^{1,2}(B_{R_{0}}^{+};\Lambda^{k})\) satisfies \(e_{n}\wedge u=0\) on \(\Gamma_{R_{0}},\) and_ \[\int_{B_{R_{0}}^{+}}\left\langle\tilde{A}\left(x\right)du;d\psi \right\rangle+\int_{B_{R_{0}}^{+}}\left\langle d^{*}u;d^{*}\psi\right\rangle- \int_{B_{R_{0}}^{+}}\left\langle\tilde{F};d\psi\right\rangle\\ +\int_{B_{R_{0}}^{+}}\left\langle\mathrm{P}u+\mathrm{R}\nabla u; \psi\right\rangle+\int_{B_{R_{0}}^{+}}\left\langle\mathrm{Q}u;\nabla\psi\right \rangle+\int_{B_{R_{0}}^{+}}\left\langle\mathrm{S}\nabla u;\nabla\psi\right\rangle =0 \tag{30}\] _for all \(\psi\in W_{T,\text{flat}}^{1,2}(B_{R_{0}}^{+};\Lambda^{k}),\) where \(\tilde{A}\in L^{\infty}\cap\mathbb{\mathscr{L}}^{2,n}_{\frac{1}{1+\left|\log r \right|}}\left(B_{R_{0}}^{+};\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1} \right)\right)\) is uniformly Legendre-elliptic, \(\tilde{F}\in L^{2}\left(B_{R_{0}}^{+};\Lambda^{k+1}\right),\)_ \[\mathrm{P}\in C\left(\overline{B_{R_{0}}^{+}};\mathcal{L}\left( \Lambda^{k};\Lambda^{k}\right)\right),\mathrm{Q}\in C\left(\overline{B_{R_{0} }^{+}};\mathcal{L}\left(\Lambda^{k};\Lambda^{k}\otimes\mathbb{R}^{n}\right) \right),\] \[\mathrm{R}\in C\left(\overline{B_{R_{0}}^{+}};\mathcal{L}\left( \Lambda^{k}\otimes\mathbb{R}^{n};\Lambda^{k}\right)\right)\text{ and }\mathrm{S}\in C^{1}\left(\overline{B_{R_{0}}^{+}};\mathcal{L}\left( \Lambda^{k}\otimes\mathbb{R}^{n};\Lambda^{k}\otimes\mathbb{R}^{n}\right) \right).\] _Furthermore, there exist constants \(0<c_{0}<1,\)\(c_{1},c_{2}>0\) and \(C>0,\) all depending only on \(x_{0},n,k,N,\Omega,R_{0}\), such that_ \[\left[\tilde{A}\right]_{\mathscr{L}^{2,n}} \leq C\left[A\right]_{\mathscr{L}^{\frac{1}{1+\left|\log r\right|} }},\] \[\left\|\tilde{A}\right\|_{L^{\infty}} \leq C\left\|A\right\|_{L^{\infty}},\] \[\left\|\tilde{F}\right\|_{L^{2}} \leq C\left\|F\right\|_{L^{2}},\] \[\left\|\mathrm{P}\right\|_{L^{\infty}},\left\|\mathrm{Q}\right\|_{ L^{\infty}},\left\|\mathrm{R}\right\|_{L^{\infty}} \leq C,\] \[\left\|\mathrm{S}\right\|_{L^{\infty}\left(B_{r}^{+}\right)} \leq Cr\qquad\text{ for all }0<r\leq R_{0},\] _and_ \[\Theta^{\tilde{A}}\left(\rho\right)\leq c_{1}\left(\Theta^{A}\left(c_{2}\rho \right)+\theta\left(\rho\right)\right)\] _for all \(0<\rho<\min\left\{R_{0},R_{0}/c_{2}\right\}.\) Moreover, if in addition, \(F\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right),\) then \(\tilde{F}\in\mathrm{BMO}\left(B_{R_{0}}^{+};\Lambda^{k+1}\right)\) as well and we have the estimate_ \[\left[\tilde{F}\right]_{\mathrm{BMO}\left(B_{R_{0}}^{+};\Lambda^{k+1}\right)} \leq C\left[F\right]_{\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)}.\] Proof.: Since \(\partial\Omega\) is \(C^{2},\) for any \(x_{0}\in\partial\Omega,\) there exists a neighborhood \(U_{1}\) of \(x_{0}\) in \(\mathbb{R}^{n}\) and a positive number \(0<R_{1}<1\) such that there exists an admissible boundary coordinate system \(\Phi\in\mathrm{Diff}^{2}(\overline{B_{R_{1}}};\overline{U_{1}}),\) such that \[\Phi(0)=x_{0},\quad D\Phi(0)\in\mathbb{SO}\left(n\right),\quad\Phi(B_{R_{1}}^ {+})=\Omega\cap U_{1},\quad\Phi(\Gamma_{R_{1}})=\partial\Omega\cap 
U_{1}.\] This is well-known. See Lemma B.7 in [3] for a detailed proof. Now pick \(0<R_{0}<R_{1}\) and define \(U:=\Phi\left(B_{R_{0}}\right).\) Now choose an open set such that \(\partial\Omega\cap U\subset\partial\Omega\cap U\) and \(\tilde{\Omega}:=\Omega\cap U\) has \(C^{2}\) boundary. Now, by Proposition 7, (29) is equivalent to \[\int_{\Omega}\langle A(x)d\omega;d\phi\rangle=\int_{\Omega}\langle F;d\phi \rangle\quad\text{ for all }\phi\in W_{0}^{1,2}\left(\Omega;\Lambda^{k}\right).\] This in particular implies \[\int_{\tilde{\Omega}}\langle A(x)d\omega;d\phi\rangle=\int_{\tilde{\Omega}} \langle F;d\phi\rangle\quad\text{ for all }\phi\in C_{c}^{\infty}\left(\tilde{\Omega}; \Lambda^{k}\right).\] As \(\tilde{\Omega}\) is open, bounded and has \(C^{2}\) boundary, applying Proposition 7 again, we deduce \[\int_{\tilde{\Omega}}\langle A(x)d\omega;d\phi\rangle=\int_{\tilde{\Omega}} \langle F;d\phi\rangle\quad\text{ for all }\phi\in W_{T}^{1,2}\left(\tilde{\Omega};\Lambda^{k}\right).\] But since \(d^{*}\omega=0\) in \(\Omega\), this can also be written as \[\int_{\tilde{\Omega}}\langle A(x)d\omega;d\phi\rangle+\int_{\tilde{\Omega}} \langle d^{*}\omega;d^{*}\phi\rangle=\int_{\tilde{\Omega}}\langle F;d\phi \rangle\quad\text{ for all }\phi\in W_{T}^{1,2}\left(\tilde{\Omega};\Lambda^{k}\right).\] Note that \(\nu\wedge\omega=0\) on \(\partial\Omega\) and thus, in particular, on \(\partial\Omega\cap U.\) Hence, we have \[e_{n}\wedge u=e_{n}\wedge\Phi^{*}\left(\omega\right)=\Phi^{*}\left(\nu\right) \wedge\Phi^{*}\left(\omega\right)=\Phi^{*}\left(\nu\wedge\omega\right)=0\qquad \text{ on }\Gamma_{R_{0}}.\] Now, \(\phi:=\left(\Phi^{-1}\right)^{*}\left(\psi\right)\in W_{T}^{1,2}\left(\tilde{ \Omega};\Lambda^{k}\right)\) for any \(\psi\in W_{T,\text{flat}}^{1,2}(B_{R_{0}}^{+};\Lambda^{k}).\) Thus, \[\int_{\Phi\left(B_{R_{0}}^{+}\right)}\langle A(x)d\left((\Phi^{-1 })^{*}u\right);d\left((\Phi^{-1})^{*}\psi\right)\rangle\] \[\qquad+\int_{\Phi\left(B_{R_{0}}^{+}\right)}\langle d^{*}\left(( \Phi^{-1})^{*}u\right);d^{*}\left((\Phi^{-1})^{*}\psi\right)\rangle=\int_{ \Phi\left(B_{R_{0}}^{+}\right)}\langle F;d\left((\Phi^{-1})^{*}\psi\right)\rangle,\] for all \(\psi\in W_{T,\text{flat}}^{1,2}(B_{R_{0}}^{+};\Lambda^{k}).\) Set \[\begin{cases}\tilde{A}\left(y\right)=\left|\det D\Phi(y)\right|A\left(\Phi(y) \right),\\ \tilde{F}\left(y\right)=\left|\det D\Phi(y)\right|F\left(\Phi(y)\right),\end{cases} \text{ for a.e. }y\in B_{R_{0}}^{+}. 
\tag{31}\] Since \(d\) commutes with pullback, by change of variables formula, we deduce \[\int_{\Phi\left(B_{R_{0}}^{+}\right)}\left\langle A(x)d\left((\Phi ^{-1})^{*}u\right);d\left((\Phi^{-1})^{*}\psi\right)\right\rangle\] \[\qquad=\int_{\Phi\left(B_{R_{0}}^{+}\right)}\left\langle A(x)( \Phi^{-1})^{*}du;(\Phi^{-1})^{*}d\psi\right\rangle\] \[\qquad=\int_{B_{R_{0}}^{+}}\langle A\left(\Phi(y)\right)du(y);d \psi(y)\rangle\left|\det D\Phi(y)\right|\quad=\int_{B_{R_{0}}^{+}}\left\langle \tilde{A}\left(y\right)du(y);d\psi(y)\right\rangle.\] Similarly, we have \[\int_{\Phi\left(B_{R_{0}}^{\pm}\right)}\left\langle F;d\left((\Phi^{-1})^{*}\psi \right)\right\rangle=\int_{B_{R_{0}}^{+}}\left\langle\tilde{F};d\psi\right\rangle.\] Now, by computing out the derivatives, using change of variables formula to transfer all integrals to \(B_{R_{0}}^{+}\) and grouping the terms, we can write \[\int_{\Phi\left(B_{R_{0}}^{+}\right)}\left\langle d^{*}\left(( \Phi^{-1})^{*}u\right);d^{*}\left((\Phi^{-1})^{*}\psi\right)\right\rangle-\int_ {B_{R_{0}}^{+}}\left\langle d^{*}u;d^{*}\psi\right\rangle\\ =\int_{B_{R_{0}}^{+}}\langle\mathrm{P}u+\mathrm{R}\nabla u;\psi \rangle+\int_{B_{R_{0}}^{+}}\langle\mathrm{Q}u;\nabla\psi\rangle+\int_{B_{R_{0 }}^{+}}\langle\mathrm{S}\nabla u;\nabla\psi\rangle.\] Since these coefficients contains up to second order derivatives of \(\Phi\), with \(\mathrm{S}\) containing only up to first order derivatives of \(\Phi\), the claimed regularity properties follow. Note that since \(\mathrm{S}\) is \(C^{1}\), to show the estimate \[\left\|\mathrm{S}\right\|_{L^{\infty}\left(B_{r}^{+}\right)}\leq Cr\qquad \text{for any }0<r<R_{0},\] it is enough to show that \(\mathrm{S}\left(0\right)=0.\) See Lemma 4.17 in [3] for a proof. This follows from the fact for any \(M\in\mathbb{SO}\left(n\right),\) if we set \(\Psi(y)=My\) in \(B_{R}\), then \[\left\langle d^{*}\left((\Psi^{-1})^{*}u\right);d^{*}\left((\Psi^{-1})^{*}\psi \right)\right\rangle-\left\langle d^{*}u\circ\Psi^{-1};d^{*}\psi\circ\Psi^{-1} \right\rangle=0.\] The estimates for \(\tilde{A}\) and \(\tilde{F}\) follow from (31). ### Global estimates **Theorem 21**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^ {-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constant \(\gamma>0\) and let \(F\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)\). If \(u\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\) is a weak solution for the following_ \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du \right)&=d^{*}F&&\text{in }\Omega,\\ d^{*}u&=0&&\text{in }\Omega,\\ \nu\wedge u&=0&&\text{on }\partial\Omega, \end{aligned}\right. 
\tag{32}\] _then \(\nabla u\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\otimes\mathbb{R}^{n}\right)\) and there exists a constant_ \[C=C\left(n,k,N,\gamma,\Omega,\Theta_{\frac{1}{1+\left|\log r\right|}}^{A},\left\|A\right\|_{L^{\infty}\left(\Omega\right)},\left[A\right]_{\mathscr{L}^{2,n}_{\frac{1}{1+\left|\log r\right|}}\left(\Omega\right)}\right)>0\] _such that we have the estimates_ \[\left[du\right]_{\mathrm{BMO}\left(\Omega\right)}\leq C\left(\left[F\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\|F\right\|_{L^{2}\left(\Omega\right)}\right) \tag{33}\] _and_ \[\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}\leq C\left(\left[F\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\|F\right\|_{L^{2}\left(\Omega\right)}+\left\|u\right\|_{L^{2}\left(\Omega\right)}\right). \tag{34}\] **Remark 22**.: _By standard contradiction-compactness arguments, the \(L^{2}\) norm of \(u\) on the RHS of (34) can be dropped if uniqueness holds for (32). Proposition 6 and the ellipticity of \(A\) imply that uniqueness holds if and only if \(\mathcal{H}_{T}^{k}=\left\{0\right\}.\)_ Proof.: We can assume \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\}.\) Indeed, if not, then for any \(u\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\) we have the orthogonal decomposition \[u=v+h,\qquad\text{ and }\left\|u\right\|_{L^{2}}^{2}=\left\|v\right\|_{L^{2}}^{2}+\left\|h\right\|_{L^{2}}^{2},\] where \(v\in W_{d^{*},T}^{1,2}\cap\left(\mathcal{H}_{T}^{k}\right)^{\bot}\) and \(h\in\mathcal{H}_{T}^{k}.\) Since \(h\) is a harmonic field, \(dh=0\) and we have the estimate \(\left\|\nabla h\right\|_{L^{\infty}\left(\Omega;\Lambda^{k}\right)}\leq C\left\|h\right\|_{L^{2}\left(\Omega;\Lambda^{k}\right)}.\) Thus, if the estimates (33) and (34) hold for \(v,\) then they also hold for \(u.\) By Lemma 19, we can also assume that \(A\) is smooth. The regularity results for smooth coefficients (see e.g. [10]) imply that \(\nabla u\) is \(\mathrm{BMO}\) in \(\Omega.\) So it remains to show the estimates (33) and (34). By Lemma 18, for every \(x\in\partial\Omega,\) there exist \(0<R_{x}<1,\) a neighborhood \(W_{x}\) of \(x\) in \(\mathbb{R}^{n}\) and a diffeomorphism \(\Phi_{x}:B_{R_{x}}\to W_{x}\) for which the conclusions of that lemma hold. By compactness of \(\partial\Omega,\) we can find finitely many points \(x_{1},\ldots,x_{l}\in\partial\Omega\) such that \(\partial\Omega\subset\bigcup\limits_{i=1}^{l}\Phi_{x_{i}}\left(B_{R_{x_{i}}/2}^{+}\right).\) Choose \(\Omega_{0}\) such that \(\Omega_{0}\subset\subset\Omega,\) \(d_{0}:=\text{dist}\left(\Omega_{0},\partial\Omega\right)<1\) and \[\Omega\subset\bigcup\limits_{i=0}^{l}\Omega_{i},\quad\text{ where }\Omega_{i}=\Phi_{x_{i}}\left(B_{R_{x_{i}}/2}^{+}\right)\text{ for }1\leq i\leq l.\] Clearly, we have \[\left[\nabla u\right]_{\text{BMO}\left(\Omega\right)}^{2}\leq\left[\nabla u\right]_{\text{BMO}\left(\Omega_{0}\right)}^{2}+\sum\limits_{i=1}^{l}\left[\nabla u\right]_{\text{BMO}\left(\Phi_{x_{i}}\left(B_{R_{x_{i}}/2}^{+}\right)\right)}^{2}.
\tag{35}\] Fix \(0<\sigma<d_{0}/2.\) Then for any \(y\in\Omega_{0},\) we have \(B_{\sigma}\left(y\right)\subset\subset\Omega.\) By Theorem 9 in [10], we can find \(\alpha\in W_{d^{*},T}^{1,2}\left(B_{\sigma}\left(y\right);\Lambda^{k}\right)\) which is a weak solution of \[\int_{B_{\sigma}\left(y\right)}\left\langle\left(A\right)_{B_{\sigma}\left(y \right)}d\alpha;d\psi\right\rangle=\int_{B_{\sigma}\left(y\right)}\left\langle F ;d\psi\right\rangle-\int_{B_{\sigma}\left(y\right)}\left\langle G_{3};d\psi\right\rangle \tag{36}\] for all \(\psi\in W_{d^{*},T}^{1,2}\left(B_{\sigma}\left(y\right);\Lambda^{k}\right),\) where \(G_{3}:=\left[A\left(x\right)-\left(A\right)_{B_{\sigma}\left(y\right)}\right]du.\) Plugging \(\psi=\alpha\) in (36) and Young's inequality with \(\varepsilon>0,\) we have \[\int_{B_{\sigma}(y)}\left|d\alpha\right|^{2} \leq\frac{C}{\gamma}\int_{B_{\sigma}(y)}\left\langle\left(A\right)_ {B_{\sigma}(y)}d\alpha;d\alpha\right\rangle\] \[=\frac{C}{\gamma}\left(\int_{B_{\sigma}(y)}\left\langle F;d\alpha \right\rangle-\int_{B_{\sigma}(y)}\left\langle G_{3};d\alpha\right\rangle\right)\] \[=\frac{C}{\gamma}\left(\int_{B_{\sigma}(y)}\left\langle F-\left( F\right)_{B_{\sigma}(y)};d\alpha\right\rangle-\int_{B_{\sigma}(y)}\left\langle G_{3};d \alpha\right\rangle\right)\] \[\leq\frac{C}{\gamma}\left(\left|\int_{B_{\sigma}(y)}\left\langle F -\left(F\right)_{B_{\sigma}(y)};d\alpha\right\rangle\right|+\left|\int_{B_{ \sigma}(y)}\left\langle G_{3};d\alpha\right\rangle\right|\right)\] \[\leq\varepsilon\int_{B_{\sigma}(y)}\left|d\alpha\right|^{2}+C_{ \varepsilon}\left(\int_{B_{\sigma}(y)}\left|F-\left(F\right)_{B_{\sigma}(y)} \right|^{2}+\int_{B_{\sigma}(y)}\left|G_{3}\right|^{2}\right).\] Thus, choosing \(\varepsilon>0\) small enough and Proposition 5, we deduce \[\int_{B_{\sigma}(y)}\left|\nabla\alpha\right|^{2}\leq C\int_{B_{\sigma}(y)} \left|d\alpha\right|^{2}\leq C\left(\int_{B_{\sigma}(y)}\left|F-\left(F\right) _{B_{\sigma}(x)}\right|^{2}+\int_{B_{\sigma}(y)}\left|G_{3}\right|^{2}\right).\] Now the last term is estimated exactly as was done in Lemma 18 and we deduce \[\int_{B_{\sigma}(y)}\left|\nabla\alpha\right|^{2}\leq C\int_{B_{ \sigma}(y)}\left|F-\left(F\right)_{B_{\sigma}(y)}\right|^{2} +c\sigma^{n}\left[\Theta^{A}\left(\sigma\right)\right]^{2}\left[ \nabla u\right]^{2}_{\text{BMO}(\Omega)}\] \[+c\left[A\right]^{2}_{\frac{\left|\Omega^{2,n}}{1+\left|\log r \right|}}\left(\Omega\right)\left\|\nabla u\right\|^{2}_{L^{2}(\Omega)}. \tag{37}\] Note \(\beta:=u-\alpha\) satisfies the homogeneous constant coefficient system \[\int_{B_{\sigma}(y)}\left\langle\left(A\right)_{B_{\sigma}(y)}d\beta;d\psi \right\rangle+\int_{B_{\sigma}(y)}\left\langle d^{*}\beta;d^{*}\psi\right\rangle=0 \tag{38}\] for all \(\psi\in W^{1,2}_{d^{*},T}\left(B_{\sigma}\left(y\right);\Lambda^{k}\right),\) as \(d^{*}\beta=0.\) Hence \(\beta\) satisfies the decay estimates ( see Theorem 3 in [10] ). 
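Concretely, for the constant-coefficient system (38) these decay estimates take the familiar form: for all \(0<\rho<\sigma,\)
\[\int_{B_{\rho}\left(y\right)}\left|\nabla\beta\right|^{2}\leq c\left(\frac{\rho}{\sigma}\right)^{n}\int_{B_{\sigma}\left(y\right)}\left|\nabla\beta\right|^{2}\quad\text{ and }\quad\int_{B_{\rho}\left(y\right)}\left|\nabla\beta-\left(\nabla\beta\right)_{B_{\rho}\left(y\right)}\right|^{2}\leq c\left(\frac{\rho}{\sigma}\right)^{n+2}\int_{B_{\sigma}\left(y\right)}\left|\nabla\beta-\left(\nabla\beta\right)_{B_{\sigma}\left(y\right)}\right|^{2}.\]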
Thus, by standard arguments, for any \(0<\rho<\sigma,\) we have \[\int_{B_{\rho}(y)}\left|\nabla u-\left(\nabla u\right)_{B_{\rho} (y)}\right|^{2}\] \[\leq c\left(\frac{\rho}{\sigma}\right)^{n+2}\int_{B_{\sigma}(y)} \left|\nabla u-\left(\nabla u\right)_{B_{\sigma}(y)}\right|^{2}+c\int_{B_{ \sigma}(y)}\left|\nabla\alpha\right|^{2}.\] By (37) and the iteration lemma, for any \(y\in\Omega_{0}\), this implies \[\fint_{B_{\rho}(y)} \left|\nabla u-\left(\nabla u\right)_{B_{\rho}(y)}\right|^{2}\] \[\leq c\fint_{B_{\sigma}(y)}\left|F-\left(F\right)_{B_{\sigma}(y)} \right|^{2}+c\left[\Theta^{A}\left(\sigma\right)\right]^{2}\left[\nabla u \right]_{\mathrm{BMO}(\Omega)}^{2}\] \[\quad+c\sigma^{-n}\left(1+\left[A\right]_{\frac{\sigma^{2},n}{1+ \left\lfloor\log r\right\rfloor}}^{2}\left(\Omega\right)\right)\left\|\nabla u \right\|_{L^{2}(\Omega)}^{2},\quad\text{ for any }0<\rho<\sigma.\] In view of the obvious estimate for \(\rho\geq\sigma\), taking supremum, we arrive at \[\left[\nabla u\right]_{\mathrm{BMO}(\Omega_{0})}^{2}\leq C_{0}\left[\Theta^{A }\left(\sigma\right)\right]^{2}\left[\nabla u\right]_{\mathrm{BMO}(\Omega)}^ {2}+\sigma^{-n}\kappa_{0}, \tag{39}\] for every \(0<\sigma<d_{0}/2\), where \[\kappa_{0}:=\left(1+\left[A\right]_{\frac{1}{1+\left\lfloor\log r \right\rfloor}}^{2}\left(\Omega\right)\right)\left\|\nabla u\right\|_{L^{2}( \Omega)}^{2}+\left[F\right]_{\mathrm{BMO}(\Omega)}^{2}.\] By Lemma 18, for each \(1\leq i\leq l\), there exists constants \(C_{i},c_{1}^{i}>0\) such that \[\left[\nabla u\right]_{\mathrm{BMO}\left(\Phi_{x_{i}}\left(B_{R_{ x_{i}}/2}^{+}\right)\right)}^{2}\\ \leq C_{i}\left\{\left(\left[\Theta^{A}\left(2c_{1}^{i}\sigma \right)\right]^{2}+\left[\theta\left(\sigma\right)\right]^{2}\left(\left\|A \right\|_{L^{\infty}}^{2}+1\right)\right)\left[\nabla u\right]_{\mathrm{BMO}( \Omega)}^{2}+\sigma^{-n}\kappa\right\}, \tag{40}\] for every \(0<\sigma<R_{x_{i}}/8\), where \[\kappa:=\left(\left\|A\right\|_{L^{\infty}(\Omega)}^{2}+\left[A \right]_{\frac{1}{1+\left\lfloor\log r\right\rfloor}}^{2}\left(\Omega\right) +1\right)\left\|\nabla u\right\|_{L^{2}(\Omega)}^{2}+\left\|u\right\|_{L^{2}( \Omega)}^{2}\\ +\left\|F\right\|_{L^{2}(\Omega)}^{2}+\left[F\right]_{\mathrm{BMO }(\Omega)}^{2}. \tag{41}\] Take \(c_{1}^{0}=1\) and set \[\sigma_{1}:=\min\left\{\frac{d_{0}}{2},\min_{1\leq i\leq l}\left\{\frac{R_{x_{ i}}}{8}\right\}\right\},\tilde{C}:=\max_{0\leq i\leq l}\left\{C_{i}\right\} \text{ and }\tilde{c}_{1}:=\max_{0\leq i\leq l}\left\{c_{1}^{i}\right\}.\] Now choose \(0<\sigma_{0}<\sigma_{1}\) small enough such that \[\tilde{C}\left(\left[\Theta^{A}\left(2\tilde{c}_{1}\sigma_{0}\right)\right]^{ 2}+\left[\theta\left(\sigma_{0}\right)\right]^{2}\left(\left\|A\right\|_{L^{ \infty}}^{2}+1\right)\right)\leq\frac{1}{2\left(l+1\right)}. \tag{42}\] In view of (35), (39) and (40), this implies the estimate \[\left[\nabla u\right]_{\mathrm{BMO}(\Omega)}^{2} \leq\left(\sum_{j=0}^{l}\frac{1}{2\left(l+1\right)}\right)\left[ \nabla u\right]_{\mathrm{BMO}(\Omega)}^{2}+\tilde{C}\sigma_{0}^{-n}\kappa\] \[\leq\frac{1}{2}\left[\nabla u\right]_{\mathrm{BMO}(\Omega)}^{2}+ \tilde{C}\sigma_{0}^{-n}\kappa,\] where \(\kappa\) is as in (41) (note \(\kappa_{0}\leq\kappa\)). 
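Spelling out the absorption step: since \(A\) is smooth at this stage, \(\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}\) is finite, so the term \(\frac{1}{2}\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}^{2}\) can be moved to the left-hand side:
\[\frac{1}{2}\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}^{2}\leq\tilde{C}\sigma_{0}^{-n}\kappa\qquad\Longrightarrow\qquad\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}^{2}\leq 2\tilde{C}\sigma_{0}^{-n}\kappa.\]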
But this implies the estimate \[\left[\nabla u\right]_{\mathrm{BMO}(\Omega)}^{2}\leq C\left(\left\|u\right\|_{W^{ 1,2}(\Omega)}^{2}+\left\|F\right\|_{L^{2}(\Omega)}^{2}+\left[F\right]_{ \mathrm{BMO}(\Omega)}^{2}\right), \tag{43}\] where the constant \(C>0\) depends only on \[n,k,N,\gamma,\Omega,\Theta^{A},\left\|A\right\|_{L^{\infty}(\Omega)}\text{ and }\left[A\right]_{\mathscr{L}^{2,n}\frac{1}{1+\left\|\log r\right\|}}(\Omega)\,.\] But since \(u\in W^{1,2}_{d^{*},T}\) solves (32), using Young's inequality with \(\varepsilon>0\) small enough and Proposition 6, we have \[\left\|u\right\|_{W^{1,2}(\Omega)}^{2}\leq C\left\|du\right\|_{L^{2}(\Omega)} ^{2}\leq C\left\|F\right\|_{L^{2}(\Omega)}^{2}.\] Combining this with (43), we have (34) and consequently, (33). ### Uniqueness Now we discuss the uniqueness of solutions for our systems. **Lemma 23**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^ {-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right) \right),\)\(B\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^ {-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k};\Lambda^{k}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constants \(\gamma_{A},\gamma_{B}>0\), respectively. Let \(u\in W^{d,2}_{T}\left(\Omega;\Lambda^{k}\right)\) be a weak solution to the following_ \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du \right)&=0&&\text{in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)&=0&& \text{in }\Omega,\\ \nu\wedge u&=0&&\text{on }\partial\Omega, \end{aligned}\right. \tag{44}\] _Then \(u=0\) if and only if \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\}.\)_ Proof.: Suppose there exists a non-zero \(h\in\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right).\) Since \(h\) is smooth, we can use Theorem 21 to find \(\alpha\in W^{1,2}_{d^{*},T}\left(\Omega;\Lambda^{k-1}\right),\) a weak solution to \[\left\{\begin{aligned} d^{*}\left(B\left(x\right)d\alpha \right)&=d^{*}\left(B\left(x\right)h\right)&&\text{ in }\Omega,\\ d^{*}\alpha&=0&&\text{in }\Omega,\\ \nu\wedge\alpha&=0&&\text{on }\partial \Omega,\end{aligned}\right.\] such that \(\nabla\alpha\in\mathrm{BMO}.\) Then setting \(u=d\alpha-h,\) it is easy to see that \(u\in W^{d,2}_{T}\left(\Omega;\Lambda^{k}\right)\) is a solution to (44). But as \(u=0\) would imply \(h=d\alpha,\) which is impossible as no nontrivial harmonic field can be exact, \(u\) is a nontrivial solution to (44). For the reverse implication, ellipticity of \(A\) implies we must have \(du=0.\) Since \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\},\) by Hodge decomposition, there exists \(\beta\in W^{1,2}_{d^{*},T}\left(\Omega;\Lambda^{k-1}\right)\) such that \(u=d\beta.\) But now ellipticity of \(B\) and Proposition 6 implies \(\beta=0.\) This completes the proof. ## 5 Main results **Theorem 24**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(A\in L^{\infty}\cap\mathbb{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1 }}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constant \(\gamma>0\) and let \(F\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)\). 
Let \(u_{0}\in W^{1,2}\left(\Omega;\Lambda^{k}\right)\) be such that \(\nabla u_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\otimes\mathbb{R}^{n}\right).\) Then there exists \(u\in W^{1,2}\left(\Omega;\Lambda^{k}\right),\) a weak solution of_ \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du\right)&=d^{*}F&\text{in }\Omega,\\ d^{*}u&=0&\text{in }\Omega,\\ \nu\wedge u&=\nu\wedge u_{0}&\text{on }\partial\Omega,\end{aligned}\right. \tag{45}\] _such that \(\nabla u\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\otimes\mathbb{R}^{n}\right),\) and there exists a constant_ \[C=C\left(n,k,N,\gamma,\Omega,\Theta^{A},\|A\|_{L^{\infty}},[A]_{\mathscr{L}^{2,n}_{\frac{1}{1+\left|\log r\right|}}}\right)>0\] _such that we have the estimates_ \[\left[du\right]_{\mathrm{BMO}\left(\Omega\right)}\leq C\left(\left[F\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\|F\right\|_{L^{2}\left(\Omega\right)}+\left[du_{0}\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\|du_{0}\right\|_{L^{2}\left(\Omega\right)}\right) \tag{46}\] _and_ \[\left[\nabla u\right]_{\mathrm{BMO}\left(\Omega\right)}\leq C\left(\left\|u\right\|_{L^{2}\left(\Omega\right)}+\left[\left(F,\nabla u_{0}\right)\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\|\left(F,\nabla u_{0}\right)\right\|_{L^{2}\left(\Omega\right)}\right). \tag{47}\] _Moreover, the solution is unique if and only if \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\}.\)_ **Remark 25**.: _If \(\mathcal{H}_{T}^{k}\neq\left\{0\right\},\) then any weak solution \(v\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\) of (45) differs from \(u\) by a harmonic field \(h\in\mathcal{H}_{T}^{k}\) and thus all weak solutions satisfy the estimates (46) and (47). Also, if \(\mathcal{H}_{T}^{k}=\left\{0\right\},\) then the \(L^{2}\) norm of \(u\) can be dropped from the RHS of the estimate (47) in view of uniqueness._ Proof.: First we find \(\alpha\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}\), a weak solution to \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)d\alpha\right)&=d^{*}F-d^{*}\left(A\left(x\right)du_{0}\right)&\text{in }\Omega,\\ d^{*}\alpha&=0&\text{in }\Omega,\\ \nu\wedge\alpha&=0&\text{on }\partial\Omega.\end{aligned}\right.\] Existence follows from applying Lax-Milgram in the space \(W_{d^{*},T}^{1,2}\) (see [10]). Now, since \(\nabla u_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\otimes\mathbb{R}^{n}\right),\) we have \(du_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)\) and, using Remark 13 (ii) componentwise, we deduce \(A\left(x\right)du_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right).\) Thus, Theorem 21 implies \(\nabla\alpha\in\mathrm{BMO}\) with the corresponding estimates. Now we find \(\beta\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp},\) a weak solution of \[\left\{\begin{aligned} d\beta&=0&\text{in }\Omega,\\ d^{*}\beta&=-d^{*}u_{0}&\text{in }\Omega,\\ \nu\wedge\beta&=0&\text{on }\partial\Omega.\end{aligned}\right.\] By standard estimates (see e.g. [10]), we deduce \(\nabla\beta\in\mathrm{BMO}\) with the corresponding estimates. Finally, we set \(u=\alpha+\beta+u_{0};\) since \(d\beta=0\) and \(d^{*}\beta=-d^{*}u_{0},\) this \(u\) solves (45) and satisfies the claimed estimates. As a consequence, using Stampacchia interpolation and duality, we also have the following.
**Theorem 26**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(1<p<\infty.\) Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constant \(\gamma>0\) and let \(F\in L^{p}\left(\Omega;\Lambda^{k+1}\right)\). Let \(u_{0}\in W^{1,p}\left(\Omega;\Lambda^{k}\right).\) Then there exists \(u\in W^{1,p}\left(\Omega;\Lambda^{k}\right),\) which satisfies_ \[\begin{cases}d^{*}\left(A\left(x\right)du\right)=d^{*}F&\text{ in }\Omega,\\ d^{*}u=0&\text{ in }\Omega,\\ \nu\wedge u=\nu\wedge u_{0}&\text{ on }\partial\Omega,\end{cases}\] _in the weak sense and there exists a constant_ \[C=C\left(n,k,N,\gamma,\Omega,p,\Theta^{A},\|A\|_{L^{\infty}},[A]_{\mathscr{L}_{\frac{1}{1+\left|\log r\right|}}^{2,n}}\right)>0\] _such that we have the estimates_ \[\left\|du\right\|_{L^{p}\left(\Omega\right)}\leq C\left(\left\|F\right\|_{L^{p}\left(\Omega\right)}+\left\|du_{0}\right\|_{L^{p}\left(\Omega\right)}\right)\] _and_ \[\left\|\nabla u\right\|_{L^{p}\left(\Omega\right)}\leq C\left(\left\|u\right\|_{L^{p}\left(\Omega\right)}+\left\|F\right\|_{L^{p}\left(\Omega\right)}+\left\|u_{0}\right\|_{W^{1,p}\left(\Omega\right)}\right).\] _Moreover, the solution is unique if and only if \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\}.\)_ Proof.: We can obviously assume \(u_{0}=0.\) Now, if \(p\geq 2,\) consider the 'solution operator' that maps \(F\mapsto du,\) where \(u\) is the unique weak solution in \(W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\cap\left(\mathcal{H}_{T}^{k}\right)^{\perp}.\) By standard \(L^{2}\) estimates, this operator is bounded from \(L^{2}\) to \(L^{2}.\) Theorem 21 proves the operator is also bounded from BMO to BMO, where the operator norm in both cases can be bounded by a constant \[C=C\left(n,k,N,\gamma,\Omega,\Theta_{\frac{1}{1+\left|\log r\right|}}^{A},\|A\|_{L^{\infty}\left(\Omega\right)},\left[A\right]_{\mathscr{L}^{2,n}_{\frac{1}{1+\left|\log r\right|}}\left(\Omega\right)}\right)>0.\] Thus, Stampacchia's interpolation theorem (see [13]) can be used in the standard way (see Chapters 6 and 7 of [5], also Section 3 in [1]) to prove this operator is bounded from \(L^{p}\) to \(L^{p}\) for any \(2\leq p<\infty.\) Now the a priori \(L^{p}\) estimates in the case \(1<p<2\) follow by standard duality arguments when \(F\in L^{2}\cap L^{p}.\) The existence for the case \(1<p<2\) now follows from these estimates by approximating \(F\in L^{p}\) by a sequence \(\left\{F_{s}\right\}_{s\in\mathbb{N}}\subset L^{2}\cap L^{p}\) and passing to the limit. **Theorem 27**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(A\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right)\right),\) \(B\in L^{\infty}\cap\mathrm{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k};\Lambda^{k}\right)\right)\) be uniformly Legendre-elliptic with ellipticity constants \(\gamma_{A},\gamma_{B}>0,\) respectively.
Let \(F\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)\) and \(G\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right).\) Let \(u_{0}\in L^{2}\left(\Omega;\Lambda^{k}\right)\) be such that \(du_{0}\in L^{2}\left(\Omega;\Lambda^{k+1}\right)\), \(u_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right)\) and \(du_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right).\) Then there exists \(u\in W^{1,2}\left(\Omega;\Lambda^{k}\right),\) which is a weak solution for the following_ \[\begin{cases}d^{*}\left(A\left(x\right)du\right)=d^{*}F&\text{in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)=d^{*}G&\text{in }\Omega,\\ \nu\wedge u=\nu\wedge u_{0}&\text{on }\partial\Omega,\end{cases} \tag{48}\] _such that \(u\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right)\) and \(du\in\mathrm{BMO}\left(\Omega,;\Lambda^{k+1}\right)\) and there exists a constant \(C>0,\) depending only on_ \[n,k,N,\gamma_{A},\gamma_{B},\Omega,\Theta^{A},\Theta^{B},\left\|A\right\|_{L^ {\infty}},\left\|B\right\|_{L^{\infty}},\left[A\right]_{\mathscr{L}_{\frac{1} {1+\left|\log r\right|}}^{2,n}},\left[B\right]_{\mathscr{L}_{\frac{1}{1+ \left|\log r\right|}}^{2,n}},\] _such that we have the estimates_ \[\left[u\right]_{\mathrm{BMO}\left(\Omega\right)}+\left[du\right]_ {\mathrm{BMO}\left(\Omega\right)}\\ \leq C\left(\left\|u\right\|_{L^{2}\left(\Omega\right)}+\left[ \left(F,G,u_{0},du_{0}\right)\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\| \left(F,G,u_{0},du_{0}\right)\right\|_{L^{2}\left(\Omega\right)}\right). \tag{49}\] _Moreover, the solution is unique if and only if \(\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)=\left\{0\right\}.\)_ **Remark 28**.: _If \(\mathcal{H}_{T}^{k}\neq\left\{0\right\},\) then the set of weak solution \(v\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\) of (48) with zero data is in one to one correspondence with the set of harmonic fields \(h\in\mathcal{H}_{T}^{k}\) ( see Lemma 23 ) and all weak solutions satisfy the estimate (49). Also, if \(\mathcal{H}_{T}^{k}=\left\{0\right\},\) then the \(L^{2}\) norm of \(u\) can be dropped from the RHS of the estimate (49) in view of uniqueness._ Proof.: First we find \(\alpha\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\cap\left(\mathcal{ H}_{T}^{k}\right)^{\perp}\), a weak solution to \[\begin{cases}d^{*}\left(A\left(x\right)d\alpha\right)=d^{*}F-d^{*}\left(A \left(x\right)du_{0}\right)&\text{in }\Omega,\\ d^{*}\alpha=0&\text{in }\Omega,\\ \nu\wedge\alpha=0&\text{on }\partial\Omega.\end{cases}\] Exactly as before, Theorem 21 implies \(\nabla\alpha\in\mathrm{BMO}\) with corresponding estimates. Now we find \(\beta\in W_{d^{*},T}^{1,2}\left(\Omega;\Lambda^{k}\right)\cap\left(\mathcal{ H}_{T}^{k}\right)^{\perp},\) a weak solution to \[\begin{cases}d^{*}\left(B\left(x\right)d\beta\right)=d^{*}G-d^{*}\left(B\left( x\right)\alpha\right)-d^{*}\left(B\left(x\right)u_{0}\right)&\text{in }\Omega,\\ d^{*}\beta=0&\text{in }\Omega,\\ \nu\wedge\beta=0&\text{on }\partial\Omega.\end{cases}\] Again, Theorem 21 implies \(\nabla\beta\in\mathrm{BMO}\) with corresponding estimates. Since \(\nu\wedge\beta=0\) on \(\partial\Omega\) implies \(\nu\wedge d\beta=0\) on \(\partial\Omega,\) it is easy to verify \(u=\alpha+d\beta+u_{0}\) solves (48), \(u\in\mathrm{BMO}\) and \(du\in\mathrm{BMO}\) along with the estimates. As a consequence, we have the following regularity result for the general Hodge-maxwell system. 
**Theorem 29**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be open, bounded with \(\partial\Omega\in C^{2}.\) Let \(A\in L^{\infty}\cap\mathbb{V}\mathscr{L}_{\left(1+\left|\log r\right|\right)^{- 1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k+1};\Lambda^{k+1}\right) \right),\ B\in L^{\infty}\cap\mathbb{V}\mathscr{L}_{\left(1+\left|\log r \right|\right)^{-1}}^{2,n}\left(\Omega;\mathcal{L}\left(\Lambda^{k};\Lambda^{k }\right)\right)\) be uniformly Legendre-elliptic with ellipticity constants \(\gamma_{A},\gamma_{B}>0\), respectively. Let \(\lambda\geq 0,\)\(F\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right)\) and \(G\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right).\) Let \(u_{0}\in L^{2}\left(\Omega;\Lambda^{k}\right)\) be such that \(du_{0}\in L^{2}\left(\Omega;\Lambda^{k+1}\right)\), \(u_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right)\) and \(du_{0}\in\mathrm{BMO}\left(\Omega;\Lambda^{k+1}\right).\) If there exists \(u\in L^{2}\left(\Omega;\Lambda^{k}\right)\) with \(du\in L^{2}\left(\Omega;\Lambda^{k+1}\right)\) such that \(u\) is a weak solution for the following_ \[\left\{\begin{aligned} d^{*}\left(A\left(x\right)du \right)&=\lambda B\left(x\right)u-\lambda G+d^{*}F& \text{ in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)&=d^{*}G& \text{ in }\Omega,\\ \nu\wedge u&=\nu\wedge u_{0}&\text{ on } \partial\Omega,\end{aligned}\right. \tag{50}\] _then \(u\in\mathrm{BMO}\left(\Omega;\Lambda^{k}\right)\) and \(du\in\mathrm{BMO}\left(\Omega,;\Lambda^{k+1}\right)\) and there exists a constant \(C>0,\) depending only on_ \[n,k,N,\lambda,\gamma_{A},\gamma_{B},\Omega,\Theta^{A},\Theta^{B},\left\|A \right\|_{L^{\infty}},\left\|B\right\|_{L^{\infty}},\left[A\right]_{\mathscr{L }_{\frac{1}{1+\left|\log r\right|}}^{2,n}},\left[B\right]_{\mathscr{L}_{\frac {1}{1+\left|\log r\right|}}^{2,n}},\] _such that we have the estimates_ \[\left[u\right]_{\mathrm{BMO}\left(\Omega\right)}+\left[du\right] _{\mathrm{BMO}\left(\Omega\right)}\\ \leq C\left(\left\|u\right\|_{L^{2}\left(\Omega\right)}+\left[ \left(F,G,u_{0},du_{0}\right)\right]_{\mathrm{BMO}\left(\Omega\right)}+\left\| \left(F,G,u_{0},du_{0}\right)\right\|_{L^{2}\left(\Omega\right)}\right). \tag{51}\] Proof.: Since \(u,u_{0}\in L^{2}\left(\Omega;\Lambda^{k}\right),\) by standard Hodge decomposition theorem, we can write \[u-u_{0}=d\alpha+d^{*}\beta+h,\] where \(\alpha\in W_{T}^{1,2}\left(\Omega;\Lambda^{k-1}\right),\)\(\beta\in W_{T}^{1,2}\left(\Omega;\Lambda^{k+1}\right),\)\(h\in\mathcal{H}_{T}^{k}\left(\Omega;\Lambda^{k}\right)\) and \[d^{*}\alpha=0\quad\text{ and }\quad d\beta=0\qquad\text{ in }\Omega.\] Now since \(\nu\wedge u-u_{0}=0\) on \(\partial\Omega,\) we have \(\nu\wedge d^{*}\beta=0\) on \(\partial\Omega.\) As \(du,du_{0}\in L^{2}\) and \(\beta\in W_{T}^{1,2}\left(\Omega;\Lambda^{k+1}\right)\) is a weak solution to \[\left\{\begin{aligned} dd^{*}\beta&=du-du_{0}& \text{ in }\Omega,\\ d\beta&=0&\text{ in }\Omega,\\ \nu\wedge\beta&=0&\text{ on }\partial\Omega,\\ \nu\wedge d^{*}\beta&=0&\text{ on }\partial\Omega, \end{aligned}\right.\] we deduce \(\beta\in W^{2,2}\left(\Omega;\Lambda^{k+1}\right)\) ( see Theorem 10 in [10], our system is the Hodge dual ). Now, we see that \(\alpha\in W_{T}^{1,2}\left(\Omega;\Lambda^{k-1}\right)\) satisfies \[\left\{\begin{aligned} d^{*}\left(B\left(x\right)d\alpha \right)&=d^{*}G-d^{*}\left(B\left(x\right)\left[u_{0}+h+d^{*} \beta\right]\right)&\text{ in }\Omega,\\ d^{*}\alpha&=0&\text{ in }\Omega,\\ \nu\wedge\alpha&=0&\text{ on }\partial\Omega. 
\end{aligned}\right.\] As \(d^{*}\beta\in W^{1,2}\hookrightarrow L^{\frac{2n}{n-2}},\) Theorem 26 implies \(\alpha\in W^{1,\frac{2n}{n-2}}.\) This implies \(u\in L^{\frac{2n}{n-2}}.\) But \(\psi=d^{*}\beta\) is a weak solution to \[\begin{cases}d^{*}\left(A\left(x\right)d\psi\right)=\lambda B\left(x\right)u- \lambda G+d^{*}F&\text{ in }\Omega,\\ d^{*}\psi=d^{*}G&\text{ in }\Omega,\\ \nu\wedge\psi=0&\text{ on }\partial\Omega.\end{cases}\] Thus, \(\psi\in W^{1,\frac{2n}{n-2}}.\) This implies \(du\in L^{\frac{2n}{n-2}}.\) Repeating the arguments finitely many times, we deduce that \(u\in L^{q},\) where \(q<n\) is such that \(\frac{nq}{n-q}>n.\) Hence, by Theorem 14 in [10], we can find \(\phi\in W^{1,q}\) such that \[\begin{cases}\begin{aligned} d^{*}\phi&=\lambda B \left(x\right)u-\lambda G&\text{ in }\Omega,\\ d\phi&=0&\text{ in }\Omega,\\ \nu\wedge\phi&=0&\text{ on }\partial\Omega.\end{aligned}\end{cases}\] Hence, \(\phi\in\) BMO and from (50), we have \[\begin{cases}\begin{aligned} d^{*}\left(A\left(x\right)du \right)&=d^{*}\phi+d^{*}F&\text{ in }\Omega,\\ d^{*}\left(B\left(x\right)u\right)&=d^{*}G&\text{ in }\Omega,\\ \nu\wedge u&=\nu\wedge u_{0}&\text{ on }\partial\Omega.\end{aligned}\end{cases}\] Now applying Theorem 27 completes the proof. Proof of Theorem 1.: By eliminating \(H=\frac{i}{\omega}\left[\mu^{-1}\operatorname{curl}E-\mu^{-1}J_{m}\right],\) the Maxwell system can be written as the second order system \[\begin{cases}\operatorname{curl}(\mu^{-1}\operatorname{curl}E)=\omega^{2} \varepsilon E-i\omega J_{e}+\operatorname{curl}\left(\mu^{-1}J_{m}\right)& \text{ in }\Omega,\\ \operatorname{div}(\varepsilon E)=\frac{i}{\omega}\operatorname{ div}J_{e}&\text{ in }\Omega,\\ \nu\times E=\nu\times E_{0}&\text{ on }\partial\Omega,\end{cases}\] By Remark 13(iii), \(\mu^{-1}J_{m}\in\) BMO. Using Lemma 16, the result follows from Theorem 29 by taking \(A=\mu^{-1}\) and \(B=\varepsilon.\)
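For completeness, here is the elimination computation written out, under the time-harmonic sign convention \(\operatorname{curl}E=-i\omega\mu H+J_{m},\) \(\operatorname{curl}H=i\omega\varepsilon E+J_{e}\) (the convention consistent with the formula for \(H\) above; other conventions differ only in signs). From \(H=\frac{i}{\omega}\mu^{-1}\left(\operatorname{curl}E-J_{m}\right)\) we get \(\mu^{-1}\operatorname{curl}E-\mu^{-1}J_{m}=-i\omega H,\) hence
\[\operatorname{curl}\left(\mu^{-1}\operatorname{curl}E-\mu^{-1}J_{m}\right)=-i\omega\operatorname{curl}H=-i\omega\left(i\omega\varepsilon E+J_{e}\right)=\omega^{2}\varepsilon E-i\omega J_{e},\]
which is the first equation of the second order system, while taking the divergence of \(\operatorname{curl}H=i\omega\varepsilon E+J_{e}\) gives \(0=i\omega\operatorname{div}\left(\varepsilon E\right)+\operatorname{div}J_{e},\) i.e. \(\operatorname{div}\left(\varepsilon E\right)=\frac{i}{\omega}\operatorname{div}J_{e}.\)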
# Fully Dynamic \(k\)-Center in Low Dimensions via Approximate Furthest Neighbors ###### Abstract Let \(P\) be a set of points in some metric space. The approximate furthest neighbor problem is, given a second point set \(C\), to find a point \(p\in P\) that is a \((1+\epsilon)\) approximate furthest neighbor from \(C\). The dynamic version is to maintain \(P\), over insertions and deletions of points, in a way that permits efficiently solving the approximate furthest neighbor problem for the current \(P\). We provide the first algorithm for solving this problem in metric spaces with finite doubling dimension. Our algorithm is built on top of the navigating net data-structure. An immediate application is two new algorithms for solving the dynamic \(k\)-center problem. The first dynamically maintains \((2+\epsilon)\) approximate \(k\)-centers in general metric spaces with bounded doubling dimension and the second maintains \((1+\epsilon)\) approximate Euclidean \(k\)-centers. Both these dynamic algorithms work by starting with a known corresponding static algorithm for solving approximate \(k\)-center, and replacing the static exact furthest neighbor subroutine used by that algorithm with our new dynamic approximate furthest neighbor one. Unlike previous algorithms for dynamic \(k\)-center with those same approximation ratios, our new ones do not require knowing \(k\) or \(\epsilon\) in advance. In the Euclidean case, our algorithm also seems to be the first deterministic solution. Keywords: PTAS, Dynamic Algorithms, \(k\)-center, Furthest Neighbor.

## 1 Introduction

The main technical result of this paper is an efficient procedure for calculating approximate furthest neighbors from a dynamically changing point set \(P\). This procedure, in turn, will lead to the development of two new simple algorithms for maintaining approximate \(k\)-centers in dynamically changing point sets. Let \(B(c,r)\) denote the ball centered at \(c\) with radius \(r\). The \(k\)-center problem is to find a minimum radius \(r^{*}\) and an associated set \(C\) of at most \(k\) centers such that the union of balls \(\bigcup_{c\in C}B(c,r^{*})\) contains all of the points in \(P\). In the arbitrary metric space version of the problem, the centers are restricted to be points in \(P\). In the _Euclidean_ \(k\)-center problem, \((\mathcal{X},d)=\left(\mathbb{R}^{D},\ell_{2}\right)\) with \(D\geq 1\), and \(C\) may be any set of \(k\) points in \(\mathbb{R}^{D}\). The Euclidean 1-center problem is also known as the minimum enclosing ball (MEB) problem. A \(\rho\)-approximation algorithm finds a set of centers \(C^{\prime}\), \(|C^{\prime}|\leq k\), and radius \(r^{\prime}\) in polynomial time such that \(\bigcup_{c\in C^{\prime}}B(c,r^{\prime})\) contains all of the points in \(P\) and \(r^{\prime}\leq\rho r^{*}\). The \(k\)-center problem is known to be NP-hard to approximate with a factor smaller than \(2\) for arbitrary metric spaces [10], and with a factor smaller than \(\sqrt{3}\) for Euclidean spaces [11].

Static algorithms. There do exist two \(2\)-approximation algorithms in [1, 2] for the \(k\)-center problem on an arbitrary metric space; the best-known approximation factor for Euclidean \(k\)-center remains \(2\) even for two-dimensional space when \(k\) is part of the input (see [11]). There are better results for the special case of the Euclidean \(k\)-center for fixed \(k\), \(k=1\) or \(2\) (e.g., see [1, 2, 3, 1, 12]). There are also PTASs [1, 2, 3] for the Euclidean \(k\)-center when \(k\) and \(D\) are constants.
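One of these classic static \(2\)-approximation algorithms, Gonzalez's greedy furthest-point procedure (revisited in Section 3.1 below), fits in a few lines of Python. The following is an illustrative sketch only (hypothetical helper, Euclidean points for concreteness); its exact furthest-neighbor step, the `argmax`, is precisely the step this paper's dynamic algorithms replace with approximate queries:

```python
import numpy as np

def gonzalez(P, k):
    # Classic greedy 2-approximation: repeatedly add the point furthest
    # from the centers chosen so far, then cover P with the last radius.
    P = np.asarray(P, dtype=float)
    centers = [P[0]]
    d = np.linalg.norm(P - centers[0], axis=1)   # distance to nearest chosen center
    for _ in range(k - 1):
        i = int(np.argmax(d))                    # exact furthest-neighbor query
        centers.append(P[i])
        d = np.minimum(d, np.linalg.norm(P - P[i], axis=1))
    return np.array(centers), float(d.max())     # centers and covering radius
```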
Dynamic algorithms. In many practical applications, the data set \(P\) is not static but changes _dynamically_ over time, e.g., a new point may be inserted into or deleted from \(P\) at each step. \(C\) and \(r\) then need to be recomputed at selected query times. If only insertions are permitted, the problem is _incremental_; if both insertions and deletions are permitted, the problem is _fully dynamic_. The running time of such dynamic algorithms is often split into the time required for an _update_ (to register a change in the storing data structure) and the time required for a _query_ (to solve the problem on the current dataset). In dynamic algorithms, we require both update and query time to be _nearly logarithmic_ or constant; the static versions take linear time. Some known results on these problems are listed in Table 1. As is standard, many of them are stated in terms of the _aspect ratio_ of point set \(P\). Let \(d_{max}=\sup\{d(x,y):x,y\in P\text{ and }x\neq y\}\) and \(d_{min}=\inf\{d(x,y):x,y\in P\text{ and }x\neq y\}\). The _aspect ratio_ \(\Delta\) of \(P\) is \(\Delta=\frac{d_{max}}{d_{min}}\). The algorithms listed in the table work under slightly different models. More explicitly:

1. For arbitrary metric spaces, both [1] and the current paper assume that the metric space has a bounded doubling dimension \(dim(\mathcal{X})\) (see Definition 2).
2. In "Low dimension", update time may be exponential in \(D\); in "High dimension" it may not.
3. The "fixed" column denotes parameter(s) that must be fixed in advance when initializing the corresponding data structure, e.g., \(k\) and/or \(\epsilon\). In addition, in both [13, 2] for high dimensional space, \(f\geqslant 1\) is a constant selected in advance that appears in both the approximation factor and running time. The data structure used in the current paper is the navigating nets from [1]. It does not require knowing \(k\) or \(\epsilon\) in advance but instead supports them as parameters to the query.
4. In [1], (avg.) denotes that the update time is in expectation (it is a randomized algorithm).
5. Schmidt and Sohler [19] answer the slightly different _membership query_: given \(p\), it returns the cluster containing \(p\). In low dimension, the running time of their algorithm is expected and amortized.

Our contributions and techniques. Our main results are two algorithms for solving the dynamic approximate \(k\)-center problem in, respectively, arbitrary metric spaces with a finite doubling dimension and in Euclidean space.

1. Our first new algorithm is for _any metric space with finite doubling dimension_: **Theorem 1**.: _Let \((\mathcal{X},d)\) be a metric space with a finite doubling dimension \(D\). Let \(P\subset X\) be a dynamically changing set of points. We can maintain \(P\) in \(O(2^{O(D)}\log\Delta\log\log\Delta)\) time per point insertion and deletion so as to support \((2+\epsilon)\) approximate \(k\)-center queries in \(O(k^{2}(\log\Delta+(1/\epsilon)^{O(D)}))\) time._ Compared with previous results (see Table 1), our data structure does not require knowing \(\epsilon\) or \(k\) in advance, whereas the previous data structures require \(k\) or \(\epsilon\) to be known at construction time.
2. Our second new algorithm is for the Euclidean \(k\)-center problem: **Theorem 2**.: _Let \(P\subset\mathbb{R}^{D}\) be a dynamically changing set of points. 
We can maintain \(P\) in \(O(2^{O(D)}\log\Delta\log\log\Delta)\) time per point insertion and deletion so as to support \((1+\epsilon)\) approximate \(k\)-center queries in \(O(D\cdot k(\log\Delta+(1/\epsilon)^{O(D)})2^{k\log k/\epsilon})\) time._

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline \multicolumn{6}{|c|}{Arbitrary Metric Space \((\mathcal{X},d)\)} \\
\hline Author & Approx. & Dimensions & Update Time & Query Time & Fixed \\
\hline Chan et al. [1] & \(2+\epsilon\) & High & \(O(k^{2}\frac{\log\Delta}{\epsilon})\) (avg.) & \(O(k)\) & \(k,\epsilon\) \\
\hline Goranci et al. [1] & \(2+\epsilon\) & Low & \(O((2/\epsilon)^{O(dim(\mathcal{X}))}\log\Delta\log\log\Delta\cdot\ln\epsilon^{-1})\) & \(O(\log\Delta+k)\) & \(\epsilon\) \\
\hline Bateni et al. [1] & \(2+\epsilon\) & High & \(O(\frac{\log\Delta\log n}{\epsilon}(k+\log n))\) (avg.) & \(O(k)\) & \(k,\epsilon\) \\
\hline This paper & \(2+\epsilon\) & Low & \(O\left(2^{O(dim(\mathcal{X}))}\log\Delta\log\log\Delta\right)\) & \(O(k^{2}(\log\Delta+(1/\epsilon)^{O(dim(\mathcal{X}))}))\) & \\
\hline \hline \multicolumn{6}{|c|}{Euclidean Space \((\mathbb{R}^{D},\ell_{2})\)} \\
\hline Author & Approx. & Dimensions & Update Time & Query Time & Fixed \\
\hline Chan [1] & \(1+\epsilon\) & Low & \(O((\frac{1}{\epsilon})^{D}k^{O(1)}\log n)\) (avg.) & \(O(\epsilon^{-D}k\log k\log n+(\frac{k}{\epsilon})^{O(k^{1-1/D})})\) & \(k,\epsilon\) \\
\hline Schmidt and Sohler [19] & \(16\) & Low & \(O((2\sqrt{d}+1)^{d}\log^{2}\Delta\log n)\) (avg.) & \(O((2\sqrt{d}+1)^{d}(\log\Delta+\log n))\) & \\
\hline Schmidt and Sohler [19] & \(O(f\cdot D)\) & High & \(O(D^{2}\log^{2}n\log\Delta n^{1/f})\) (avg.) & \(O(f\cdot D\cdot\log n\log\Delta)\) & \(f\) \\
\hline (*) Bateni et al. [1] & \(f(\sqrt{8}+\epsilon)\) & High & \multicolumn{2}{c|}{\(O\big(\frac{\log\delta^{-1}\log\Delta}{\epsilon}Dn^{1/f^{2}+o(1)}\big)\)} & \(\epsilon,f\) \\
\hline This paper & \(1+\epsilon\) & Low & \(O\left(2^{O(D)}\log\Delta\log\log\Delta\right)\) & \(O(D\cdot k(\log\Delta+(1/\epsilon)^{O(D)})2^{k\log k/\epsilon})\) & \\
\hline
\end{tabular}
\end{table}
Table 1: Previous results on approximate dynamic \(k\)-centers. More information on the model used by each is in the text. Note that all algorithms listed provide correct results except for Schmidt and Sohler [19], which maintains an \(O(f\cdot D)\)-approximate solution with probability \(1-1/n\), and Bateni et al. [1], which maintains an \(f(\sqrt{8}+\epsilon)\)-approximate solution with probability \(1-\delta\). [1] also combines the updates and queries, so its single bound spans both columns.

Our second algorithm seems to be the first deterministic dynamic solution for the Euclidean \(k\)-center problem; Chan [1] presents a randomized dynamic algorithm but does not find a way to derandomize it. The motivation for our new approach was the observation that many previous results on static \(k\)-center, e.g., [1, 1, 2, 1, 10], work by iteratively searching for the furthest neighbor in \(P\) from a changing set of points \(C\). The main technical result of this paper is an efficient procedure for calculating _approximate_ furthest neighbors from a dynamically changing point set \(P\). This procedure, in turn, will lead to the development of two new simple algorithms for maintaining approximate \(k\)-centers in dynamically changing point sets. Consider a set of \(n\) points \(P\) in some metric space \((\mathcal{X},d)\). A nearest neighbor in \(P\) to a query point \(q\) is a point \(p\in P\) satisfying \(d(p,q)=\min_{p^{\prime}\in P}d(p^{\prime},q)=d(P,q)\). 
A \((1+\epsilon)\) _approximate nearest neighbor to \(q\)_ is a point \(p\in P\) satisfying \(d(p,q)\leq(1+\epsilon)d(P,q)\). Similarly, a furthest neighbor to a query point \(q\) is a \(p\) satisfying \(d(p,q)=\max_{p^{\prime}\in P}d(p^{\prime},q)\). A \((1+\epsilon)\) _approximate furthest neighbor to \(q\)_ is a point \(p\in P\) satisfying \(\max_{p^{\prime}\in P}d(p^{\prime},q)\leq(1+\epsilon)d(p,q)\). There exist efficient algorithms for maintaining a _dynamic_ point set \(P\) (under insertions and deletions) that, given query _point_ \(q\), quickly permit calculating approximate nearest [13] and furthest [1, 14, 15] neighbors to \(q\). A \((1+\epsilon)\) _approximate nearest neighbor_ to a query _set_ \(C\) is a point \(p\in P\) satisfying \(d(p,C)\leq(1+\epsilon)d(P,C)\). Because "nearest neighbor" is decomposable, i.e., \(d(P,C)=\min_{q\in C}d(P,q)\), [13] also permits efficiently calculating an approximate nearest neighbor to set \(C\) from a dynamically changing \(P\). An approximate furthest neighbor to a query _set_ \(C\) is similarly defined as a point \(p\in P\) satisfying \(\max_{p^{\prime}\in P}d(p^{\prime},C)\leq(1+\epsilon)d(p,C)\). Our main new technical result is Theorem 2.1 below, which permits efficiently calculating an approximate furthest neighbor to query set \(C\) from a dynamically changing \(P\). We note that, unlike nearest neighbor, furthest neighbor is not a decomposable problem, and such a procedure does not seem to have been previously known. This technical result permits the creation of new algorithms for solving the dynamic _\(k\)-center problem_ in low dimensions.

## 2 Searching for a \((1+\epsilon)\)-Approximate Furthest Point in a Dynamically Changing Point Set

Let \((\mathcal{X},d)\) denote a fixed metric space.

Definition 1: Let \(C,P\subset\mathcal{X}\) be finite sets of points and \(q\in\mathcal{X}\). Set \[d(C,q)=d(q,C)=\min_{q^{\prime}\in C}d(q^{\prime},q)\quad\text{and}\quad d(C,P) =\min_{p\in P}d(C,p).\] \(p\in P\) is a _furthest neighbor in \(P\) to \(q\)_ if \(d(q,p)=\max_{p^{\prime}\in P}d(q,p^{\prime})\). \(p\in P\) is a _furthest neighbor in \(P\) to set \(C\)_ if \(d(C,p)=\max_{p^{\prime}\in P}d(C,p^{\prime})\). \(p\in P\) is a \((1+\epsilon)\)-approximate furthest neighbor in \(P\) to \(q\) if \[\max_{p^{\prime}\in P}d(q,p^{\prime})\leq(1+\epsilon)d(q,p).\] \(p\in P\) is a \((1+\epsilon)\)-approximate furthest neighbor in \(P\) to \(C\) if \[\max_{p^{\prime}\in P}d(C,p^{\prime})\leq(1+\epsilon)d(C,p).\] FN\((P,q)\) and AFN\((P,q,\epsilon)\) will, respectively, denote procedures returning a furthest neighbor and a \((1+\epsilon)\)-approximate furthest neighbor to \(q\) in \(P\). FN\((P,C)\) and AFN\((P,C,\epsilon)\) will, respectively, denote procedures returning a furthest neighbor and a \((1+\epsilon)\)-approximate furthest neighbor to \(C\) in \(P\). Our algorithm assumes that \(\mathcal{X}\) has finite doubling dimension.

Definition 2 (Doubling Dimension): The doubling dimension of a metric space \((\mathcal{X},d)\) is the minimum value \(\dim(\mathcal{X})\) such that any ball \(B(x,r)\) in \((\mathcal{X},d)\) can be covered by \(2^{\dim(\mathcal{X})}\) balls of radius \(r/2\).

It is known that the doubling dimension of the Euclidean space \((\mathbb{R}^{D},\ell_{2})\) is \(\Theta(D)\) [H\({}^{+}\)01]. Now let \((\mathcal{X},d)\) be a metric space with a finite doubling dimension and \(P\subset\mathcal{X}\) be a finite set of points. Recall that \(d_{max}=\sup\{d(x,y):x,y\in P\}\) and \(d_{min}=\inf\{d(x,y):x,y\in P,\ x\neq y\}\). 
The _aspect ratio_ \(\Delta\) of \(P\) is \(\Delta=\frac{d_{max}}{d_{min}}\). Our main technical theorem (proven below in Section 2.2) is:

Theorem 2.1: _Let \((\mathcal{X},d)\) be a metric space with finite doubling dimension and \(P\subset\mathcal{X}\) be a point set stored by a navigating net data structure [10]. Let \(C\subset\mathcal{X}\) be another point set. Then, we can find a \((1+\epsilon)\)-approximate furthest point among \(P\) to \(C\) in \(O\left(|C|(\log\Delta+(1/\epsilon)^{O(\dim(\mathcal{X}))})\right)\) time, where \(\Delta\) is the aspect ratio of set \(P\)._

The _navigating net_ data structure [10] is described in more detail below.

### Navigating Nets [10]

Navigating nets are very well-known structures for dynamically maintaining points in a metric space with finite doubling dimension, in a way that permits approximate nearness queries. To the best of our knowledge, they have not been previously used for approximate "furthest point from set" queries. To describe the algorithm, we first need to quickly review some basic known facts about navigating nets. The following lemma is critical to our analysis.

Lemma 1: _[10] Let \((\mathcal{X},d)\) be a metric space and \(Y\subseteq\mathcal{X}\). If the aspect ratio of the metric induced on \(Y\) is at most \(\Delta\) and \(\Delta\geqslant 2\), then \(|Y|\leqslant\Delta^{O(\dim(\mathcal{X}))}\)._

We next introduce some notation from [10]:

Definition 3 (\(r\)-net): [12] Let \((\mathcal{X},d)\) be a metric space. For a given parameter \(r>0\), a subset \(Y\subseteq\mathcal{X}\) is an \(r\)-net of \(P\) if it satisfies: 1. For every \(x,y\in Y\), \(d(x,y)\geqslant r\); 2. \(\forall x\in P\), there exists at least one \(y\in Y\) such that \(x\in B(y,r)\).

We now start the description of the navigating net data structure. Set \(\Gamma=\{2^{i}:i\in\mathbb{Z}\}\). Each \(r\in\Gamma\) is called a _scale_. For every \(r\in\Gamma\), \(Y_{r}\) will denote an \(r\)-net of \(Y_{r/2}\). The base case is that for every scale \(r\leqslant d_{min}\), \(Y_{r}=P\). Let \(\gamma\geqslant 4\) be some fixed constant. For each scale \(r\) and each \(y\in Y_{r}\), the data structure stores the set of points \[L_{y,r}=\{z\in Y_{r/2}:d(z,y)\leqslant\gamma\cdot r\}. \tag{1}\] \(L_{y,r}\) is called the _scale \(r\) navigation list of \(y\)_. Let \(r_{max}\in\Gamma\) denote the smallest \(r\) satisfying \(|Y_{r}|=1\) and \(r_{min}\in\Gamma\) denote the largest \(r\) satisfying \(L_{y,r}=\{y\}\) for every \(y\in Y_{r}\). Scales \(r\in[r_{min},r_{max}]\) are called _non-trivial_ scales; all other scales are called _trivial_. Since \(r_{max}=\Theta(d_{max})\) and \(r_{min}=\Theta(d_{min})\), the number of non-trivial scales is \(O\left(\log_{2}\frac{r_{max}}{r_{min}}\right)=O(\log_{2}\Delta)\). Finally, we need a few more basic properties of navigating nets:

Lemma 2: _[12] (Lemmas 2.1 and 2.2) For each scale \(r\), we have:_ 1. \(\forall y\in Y_{r}\), \(|L_{y,r}|=O(2^{O(\dim(\mathcal{X}))})\); 2. \(\forall z\in P\), \(d(z,Y_{r})<2r\); 3. \(\forall x,y\in Y_{r}\), \(d(x,y)\geqslant r\).

We provide an example (Figure 3) of navigating nets in the Appendix. Navigating nets were originally designed to solve dynamic approximate nearest neighbor queries and are useful because they can be quickly updated.

Theorem 2.2: _([12]) Navigating nets use \(O(2^{O(\dim(\mathcal{X}))}\cdot n)\) words. 
The data structure can be updated with an insertion of a point to \(P\) or a deletion of a point in \(P\) in \(O(2^{O(\dim(\mathcal{X}))}\log\Delta\log\log\Delta)\) time. This includes \(O(2^{O(\dim(\mathcal{X}))}\log\Delta)\) distance computations._

### The Approximate Furthest Neighbor Algorithm \(\text{AFN}(P,C,\epsilon)\)

\(\text{AFN}(P,C,\epsilon)\) is given in Algorithm 1. Figure 1 provides some geometric intuition. (A brute-force Python sketch of this search is also given at the start of Section 3.) \(\text{AFN}(P,C,\epsilon)\) requires that \(P\) be stored in a navigating net and the following definitions:

Definition 4 (The sets \(Z_{r}\)): * \(Z_{r_{max}}=Y_{r_{max}}\), where \(|Y_{r_{max}}|=1\); * If \(Z_{r}\) is defined, \(Z_{r/2}=\bigcup_{z\in Z_{r}}\{y\in L_{z,r}:d(y,C)\geqslant\max_{z\in Z_{r}}d( z,C)-r\}\).

Note that, by induction, \(Z_{r}\subseteq Y_{r}\). We now prove that \(\operatorname{AFN}(P,C,\epsilon)\) returns a \((1+\epsilon)\)-approximate furthest point among \(P\) to \(C\). We start by showing that, for every scale \(r\), the furthest point to \(C\) is close to \(Z_{r}\).

Lemma 3: _Let \(a^{*}\) be the furthest point to \(C\) in \(P\). Then, every set \(Z_{r}\) as defined in Definition 4 contains a point \(z_{r}\) satisfying \(d(z_{r},a^{*})\leqslant 2r\)._

Proof: The proof is illustrated in Figure 2. It works by downward induction on \(r\). In the base case \(r=r_{max}\) and \(Z_{r_{max}}=Y_{r_{max}}\), thus, by Lemma 2(2), \(d(a^{*},Z_{r_{max}})\leqslant 2r\). For the inductive step, we assume that \(Z_{r}\) satisfies the induction hypothesis, i.e., \(Z_{r}\) contains a point \(z^{\prime}\) satisfying \(d(z^{\prime},a^{*})\leqslant 2r\). We will show that \(Z_{r/2}\) contains a point \(y\) satisfying \(d(y,a^{*})\leqslant r\). Since \(Y_{r/2}\) is an \(\frac{r}{2}\)-net of \(P\), there exists a point \(y\in Y_{r/2}\) satisfying \(d(y,a^{*})\leqslant r\) (Lemma 2(2)). Then, \[d(z^{\prime},y)\leqslant d(z^{\prime},a^{*})+d(a^{*},y)\leqslant 2r+r=3r\] and thus, because \(\gamma\geqslant 4\), \(y\in L_{z^{\prime},r}\). Finally, let \(c^{\prime}=\arg\min_{c_{i}\in C}d(y,c_{i})\). Then \[d(y,C)=d(y,c^{\prime})\geqslant d(a^{*},c^{\prime})-d(a^{*},y)\geqslant d(a^{*}, C)-d(a^{*},y)\geqslant\max_{z\in Z_{r}}d(z,C)-r.\] Thus \(y\in Z_{r/2}\).

Lemma 3 permits bounding the approximation ratio of algorithm \(\operatorname{AFN}(P,C,\epsilon)\).

Lemma 4: _Algorithm \(\operatorname{AFN}(P,C,\epsilon)\) returns a point \(q\) whose distance to \(C\) satisfies \(\max_{p\in P}d(p,C)\leqslant(1+\epsilon)d(q,C)\)._

Proof: Let \(r^{\prime}\) denote the value of \(r\) at the end of the algorithm. Let \(a^{*}\) be the furthest point to \(C\) among \(P\). Consider the two following conditions on \(r^{\prime}\): 1. \(r^{\prime}\leqslant\frac{1}{2}(\epsilon\cdot\max_{z\in Z_{r^{\prime}}}d(z,C))\). In this case, by Lemma 3, there exists a point \(z_{r^{\prime}}\in Z_{r^{\prime}}\) satisfying \(d(z_{r^{\prime}},a^{*})\leqslant 2r^{\prime}\). Let \(c^{\prime}=\arg\min_{c_{i}\in C}d(z_{r^{\prime}},c_{i})\). \[\max_{z\in Z_{r^{\prime}}}d(z,C) \geqslant d(z_{r^{\prime}},C)=d(z_{r^{\prime}},c^{\prime}) \geqslant d(a^{*},c^{\prime})-d(z_{r^{\prime}},a^{*})\] \[\geqslant d(a^{*},C)-2r^{\prime}\geqslant d(a^{*},C)-\epsilon\cdot \max_{z\in Z_{r^{\prime}}}d(z,C)\] Thus, \[(1+\epsilon)\cdot\max_{z\in Z_{r^{\prime}}}d(z,C)\geqslant d(a^{*},C)=\max_{ x\in P}d(x,C). \tag{2}\] 2. \(r^{\prime}\leqslant r_{min}\). In this case, recall that \(Z_{r}\subseteq Y_{r}\) and that for every scale \(r\leqslant r_{min}\) and \(\forall y\in Y_{r}\), \(L_{y,r}=\{y\}\). 
Then \[Z_{r^{\prime}/2}=\bigcup_{z\in Z_{r^{\prime}}}\{y\in L_{z,r^{\prime}}:d(y,C) \geqslant\max_{z\in Z_{r^{\prime}}}d(z,C)-r^{\prime}\}\subseteq\bigcup_{z\in Z _{r^{\prime}}}\{z\}=Z_{r^{\prime}}.\] Now let \(r_{1}\) be the largest scale for which \(r_{1}\leqslant\frac{1}{2}(\epsilon\cdot\max_{z\in Z_{r_{1}}}d(z,C))\) and \(r_{2}\) the scale at which \(\operatorname{AFN}(P,C,\epsilon)\) terminates. From point 1, Equation (2) holds with \(r^{\prime}=r_{1}\). If \(r_{1}\geq r_{min}\), then \(r_{1}=r_{2}\) and the lemma is correct. If \(r_{1}<r_{min}\) then \(r_{1}\leq r_{2}\leq r_{min}\), so from point 2, \(Z_{r_{1}}\subseteq Z_{r_{2}}\) and \[(1+\epsilon)\cdot\max_{z\in Z_{r_{2}}}d(z,C)\geqslant(1+\epsilon)\cdot\max_{ z\in Z_{r_{1}}}d(z,C)\geqslant d(a^{*},C)=\max_{x\in P}d(x,C)\] Since \(r_{1}\) satisfies condition 1, the second inequality holds. Hence, the lemma is again correct.

We now analyze the running time of \(\text{AFN}(P,C,\epsilon)\).

Lemma 5: _In each iteration of \(\text{AFN}(P,C,\epsilon)\), \(|Z_{r}|\leqslant 4|C|(\gamma+2/\epsilon)^{O(\dim(\mathcal{X}))}\)._

Proof: We actually prove the equivalent statement that \(|Z_{r/2}|\leqslant 4|C|(\gamma+2/\epsilon)^{O(\dim(\mathcal{X}))}\). For all \(y\in Z_{r/2}\), there exists a point \(z^{\prime}\in Z_{r}\) satisfying \(y\in L_{z^{\prime},r}\), i.e., \(d(z^{\prime},y)\leqslant\gamma\cdot r\). Let \(c^{\prime}=\arg\min_{c\in C}d(z^{\prime},c)\). Thus, \[d(y,c^{\prime})\leqslant d(c^{\prime},z^{\prime})+d(z^{\prime},y)=d(z^{\prime },C)+d(z^{\prime},y)\leqslant\max_{z\in Z_{r}}d(z,C)+\gamma\cdot r.\] An iteration of \(\text{AFN}(P,C,\epsilon)\) will construct \(Z_{r/2}\) only when \(\max_{z\in Z_{r}}d(z,C)\leqslant\frac{2r}{\epsilon}\). Therefore, \(d(y,c^{\prime})\leqslant(\gamma+2/\epsilon)r\). This implies \(Z_{r/2}\subseteq\bigcup_{c\in C}B(c,(\gamma+2/\epsilon)r)\). Next notice that, since \(Z_{r/2}\subseteq Y_{r/2}\) is an \(r/2\)-net, \(\forall z_{1},z_{2}\in Z_{r/2}\), \(d(z_{1},z_{2})\geqslant\frac{r}{2}\). Finally, for fixed \(c\in C\), \(\forall x,y\in Z_{r/2}\cap B(c,(\gamma+2/\epsilon)r)\), we have \(\frac{r}{2}\leqslant d(x,y)\leqslant 2(\gamma+2/\epsilon)r\). Thus, the aspect ratio \(\Delta_{B(c,(\gamma+2/\epsilon)r)}\) of the set \(Z_{r/2}\cap B(c,(\gamma+2/\epsilon)r)\) is at most \(\Delta_{B(c,(\gamma+2/\epsilon)r)}\leqslant\frac{2(\gamma+2/\epsilon)r}{ \frac{r}{2}}=4(\gamma+2/\epsilon)\). Therefore, by Lemma 1, \(\forall c\in C\), \(|Z_{r/2}\cap B(c,(\gamma+2/\epsilon)r)|\leqslant(4(\gamma+2/\epsilon))^{O( \dim(\mathcal{X}))}\). Thus, \(|Z_{r/2}|\leqslant|C|(4(\gamma+2/\epsilon))^{O(\dim(\mathcal{X}))}\).

Lemma 6: _\(\text{AFN}(P,C,\epsilon)\) runs for at most \(\log_{2}\Delta+O(1)\) iterations._

Proof: The algorithm starts with \(r=r_{max}\) and concludes no later than when \(r\) reaches \(r_{min}/2\). Thus, the total number of iterations is at most \[\log_{2}\frac{r_{max}}{r_{min}/2}=1+\log_{2}\frac{r_{max}}{r_{min}}=1+\log_{2 }\Theta\left(\frac{d_{max}}{d_{min}}\right)=O(1)+\log_{2}\Delta.\]

Lemmas 5 and 6 immediately imply that the running time of \(\text{AFN}(P,C,\epsilon)\) is at most \(O\left(|C|(4(\gamma+2/\epsilon))^{O(\dim(\mathcal{X}))}\log\Delta\right)\). A more careful analysis leads to the proof of Theorem 2.1. Due to space limitations the full proof is deferred to Appendix B.

## 3 Modified \(k\)-Center Algorithms

\(\text{AFN}(P,C,\epsilon)\) will now be used to design two new dynamic \(k\)-center algorithms. Lemma 2 hints that elements in \(Y_{r}\) can be approximate _centers_.
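Before describing the two algorithms, here is the brute-force Python sketch of the \(\text{AFN}\) search promised in Section 2.2. It is illustrative only: it rebuilds the nets \(Y_{r}\) greedily from scratch (and uses Euclidean points and \(d_{min}\) as a stand-in for \(r_{min}\)), so it reproduces the output of the Definition 4 refinement and the Lemma 4 termination rule, but not the dynamic update time that a navigating net provides:

```python
import math
import numpy as np

def afn(P, C, eps, gamma=4.0):
    # Sketch of AFN(P, C, eps) following Definition 4; assumes at least
    # two distinct points in P and a non-empty query set C.
    P = [np.asarray(p, float) for p in P]
    C = [np.asarray(q, float) for q in C]
    dC = lambda p: min(np.linalg.norm(p - q) for q in C)

    pair = [np.linalg.norm(p - q) for i, p in enumerate(P) for q in P[i + 1:]]
    d_min, d_max = min(pair), max(pair)

    def thin(points, r):  # greedy r-net: pairwise >= r, covers within r
        net = []
        for p in points:
            if all(np.linalg.norm(p - y) >= r for y in net):
                net.append(p)
        return net

    r = 2.0 ** math.floor(math.log2(d_min))   # base scale: Y_r = P
    nets = {r: list(P)}
    while len(nets[r]) > 1:                   # Y_{2r} is a 2r-net of Y_r
        nets[2 * r] = thin(nets[r], 2 * r)
        r *= 2

    Z = nets[r]                               # Z_{r_max} = Y_{r_max}
    while r > d_min and r > 0.5 * eps * max(dC(z) for z in Z):
        keep = max(dC(z) for z in Z) - r      # Definition 4 threshold
        Z = [y for y in nets[r / 2]           # L_{z,r}: within gamma * r of Z
             if dC(y) >= keep
             and any(np.linalg.norm(y - z) <= gamma * r for z in Z)]
        r /= 2
    return max(Z, key=dC)
```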
The observation that elements of \(Y_{r}\) can serve as approximate centers motivated Goranci et al. [1] to search for the smallest \(r\) such that \(|Y_{r}|\leqslant k\) and return the elements in \(Y_{r}\) as centers. Unfortunately, used this way, the original navigating nets data structure only returns an 8-approximate solution. Goranci et al. [1] improve this by simultaneously maintaining multiple nets. Although we also apply navigating nets to construct approximate \(k\)-centers, our approach is very different from that of [1]. We do not use the elements in \(Y_{r}\) as centers themselves. We only use the navigating net to support \(\text{AFN}(P,C,\epsilon)\). Our algorithms result from substituting \(\text{AFN}(P,C,\epsilon)\) for exact furthest neighbor procedures in static algorithms. The next two subsections introduce the two modified algorithms.

### A Modified Version of Gonzalez's [10] Greedy Algorithm

Gonzalez [10] described a simple and now well-known \(O(kn)\) time \(2\)-approximation algorithm that works for any metric space. It operates by performing \(k\) exact furthest-neighbor-from-a-set queries. We just directly replace those exact queries with our new approximate furthest neighbor query procedure. It is then straightforward to modify Gonzalez's proof from [10] that his original algorithm is a \(2\)-approximation to show that our new algorithm is a \((2+\epsilon)\)-approximation. The details of the algorithm (Algorithm 3) and the modified proof are provided in Appendix 0.C. This yields:

Theorem 3.1: _Let \(P\subset\mathcal{X}\) be a finite set of points in a metric space \((\mathcal{X},d)\). Suppose \(\text{AFN}(P,C,\epsilon)\) can be implemented in \(T(|C|,\epsilon)\) time. Algorithm 3 constructs a \((2+\epsilon)\)-approximate solution for the \(k\)-center problem in \(O\left(k\cdot T\left(k,\frac{\epsilon}{5}\right)\right)\) time._

Plugging Theorem 2.1 into this proves Theorem 1.

### A Modified Version of the Kim and Schwarzwald [11] Algorithm

In what follows, \(D\geq 1\) is some arbitrary dimension. In 2020, [11] gave an \(O(nD/\epsilon)\) time \((1+\epsilon)\)-approximation algorithm for the Euclidean \(1\)-center (MEB) problem. They further showed how to extend this to obtain a \((1+\epsilon)\)-approximation to the Euclidean \(k\)-center in \(O(nD2^{O(k\log k/\epsilon)})\) time. Their algorithms use, as a subroutine, a \(\Theta(n)\) (or \(\Theta(n|C|)\)) time brute-force procedure for finding \(\text{FN}(P,q)\) (or \(\text{FN}(P,C)\)). This subsection shows how replacing \(\text{FN}(P,q)\) (or \(\text{FN}(P,C)\)) by \(\text{AFN}(P,q,\epsilon/3)\) (or \(\text{AFN}(P,C,\epsilon/3)\)), along with some other minor changes, maintains the correctness of the algorithm. Our modified version of Kim and Schwarzwald's [11] MEB algorithm is presented as Algorithm 2. Let \(\epsilon>0\) be a constant. Their algorithm runs in \(O(1/\epsilon)\) iterations. The \(i\)'th iteration starts from some point \(m_{i}\) and uses \(O(n)\) time to search for the point \(p_{i+1}=\text{FN}(P,m_{i})\) furthest from \(m_{i}\). The iteration then selects a "good" point \(m_{i+1}\) on the line segment \(p_{i+1}m_{i}\) as the starting point for the next iteration, where "good" means that the distance from \(m_{i+1}\) to the optimal center is suitably bounded. The time to select such a "good" point is \(O(D)\). The total running time of their algorithm is \(O(nD/\epsilon)\). They also prove that the performance ratio of their algorithm is at most \((1+\epsilon)\). 
The running time of their algorithm is dominated by the \(O(n)\) time required to find the point \(\text{FN}(P,m_{i})\). As we will see in Theorem 3.2 below, finding the exact furthest point \(\text{FN}(P,m_{i})\) is not necessary: it can be replaced by \(\text{AFN}(P,m_{i},\epsilon/3)\). The first result is that this minor modification of Kim and Schwarzwald's [11] algorithm still produces a \((1+\epsilon)\) approximation.

Theorem 3.2: _Let \(P\subset\mathbb{R}^{D}\) be a set of points whose minimum enclosing ball has (unknown) radius \(r^{*}\). Suppose \(\text{AFN}(P,q,\epsilon)\) can be implemented in \(T(\epsilon)\) time. Let \(c,r\) be the values returned by Algorithm 2. Then \(P\subset B(c,r)\) and \(r\leq(1+\epsilon)r^{*}\). Thus Algorithm 2 constructs a \((1+\epsilon)\)-approximate solution and it runs in \(O\left(DT\left(\frac{\epsilon}{3}\right)\frac{1}{\epsilon}\right)\) time._

Plugging Theorem 2.1 into Theorem 3.2 proves Theorem 2 for \(k=1\).

```
Input: A set of points \(P\) and a constant \(\epsilon>0\).
Output: A \((1+\epsilon)\)-approximate minimum enclosing ball \(B(c,r)\) containing all points in \(P\).
1: Arbitrarily select a point \(p_{1}\) from \(P\);
2: Set \(m_{1}=p_{1}\), \(r=\infty\), and \(\delta_{1}=1\);
3: for \(i=1\) to \(\left\lfloor 6/\epsilon\right\rfloor\) do
4:   \(p_{i+1}=\text{AFN}(P,m_{i},\epsilon/3)\);
5:   \(r_{i}=\left(1+\frac{\epsilon}{3}\right)d(m_{i},p_{i+1})\);
6:   if \(r_{i}<r\) then
7:     \(c=m_{i}\); \(r=r_{i}\);
8:   \(m_{i+1}=m_{i}+(p_{i+1}-m_{i})\cdot\frac{\delta_{i}^{2}+(1+\epsilon/3)^{2}-1}{2(1+\epsilon/3)^{2}}\);
9:   \(\delta_{i+1}=\sqrt{1-\left(\frac{1+(1+\epsilon/3)^{2}-\delta_{i}^{2}}{2(1+\epsilon/3)}\right)^{2}}\);
```
**Algorithm 2** Modified MEB\((P,\epsilon)\). The algorithm is a slight modification of that of [11]: in [11], line 4 was originally \(p_{i+1}=\text{FN}(P,m_{i})\), and the four \((1+\epsilon/3)\) terms on lines 8 and 9 were all originally \((1+\epsilon)\).

Proof: Every ball \(B(m_{i},r_{i})\) generated by Algorithm 2 encloses all of the points in \(P\), i.e., \[\forall i,\quad\max_{p\in P}d(m_{i},p)\leqslant r_{i}. \tag{3}\] To prove the correctness of the algorithm it suffices to show that \(r\leq(1+\epsilon)r^{*}\). Without loss of generality, we assume that \(\epsilon\leqslant 1\). Each iteration of lines 4-9 of MEB\((P,\epsilon)\) must end in one of the two following cases: 1. \(d(m_{i},p_{i+1})\leqslant(1+\epsilon/3)r^{*}\); 2. \(d(m_{i},p_{i+1})>(1+\epsilon/3)r^{*}\). Note that if Case (1) holds for some \(i\), then, directly from Equation (3) (using \(\epsilon\leqslant 1\)), \[\max_{p\in P}d(m_{i},p)\leqslant r_{i}=(1+\epsilon/3)d(m_{i},p_{i+1})\leqslant (1+\epsilon/3)^{2}r^{*}<(1+\epsilon)r^{*}.\] This implies that if Case (1) ever holds, Algorithm 2 is correct. The main lemma is

Lemma 7: _If, \(\forall 1\leqslant i\leqslant j\), Case (2) holds, i.e., \(d(m_{i},p_{i+1})>(1+\epsilon/3)r^{*}\), then \(j\leq\frac{6}{\epsilon}-1\)._

The proof of Lemma 7 is just a straightforward modification of the proof given by Kim and Schwarzwald [11] for their original algorithm and is therefore omitted. For completeness we provide the full modified proof in Appendix 0.D.1. Lemma 7 implies that, by the end of the algorithm, Case (1) must have occurred at least once, so \(r\leq(1+\epsilon)r^{*}\) and the algorithm outputs a correct solution. Derivation of the running time of the algorithm is straightforward, completing the proof of Theorem 3.2. 
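For concreteness, Algorithm 2 translates line-for-line into Python. The following is a sketch: `afn_brute` is an exact stand-in for \(\text{AFN}(P,m_{i},\epsilon/3)\) (any \((1+\epsilon/3)\)-approximate answer would also do), so the snippet exercises the correctness of the iteration, not the dynamic running time:

```python
import math
import numpy as np

def afn_brute(P, q, eps):
    # Exact furthest neighbor of q in P; a valid (1+eps)-approximate AFN.
    return P[int(np.argmax(np.linalg.norm(P - q, axis=1)))]

def modified_meb(P, eps):
    P = np.asarray(P, dtype=float)
    m, c, r = P[0].copy(), P[0].copy(), math.inf      # lines 1-2
    delta, a = 1.0, 1 + eps / 3
    for _ in range(int(6 / eps)):                     # line 3
        p = afn_brute(P, m, eps / 3)                  # line 4
        r_i = a * np.linalg.norm(m - p)               # line 5
        if r_i < r:                                   # lines 6-7
            c, r = m.copy(), r_i
        m = m + (p - m) * (delta**2 + a**2 - 1) / (2 * a**2)   # line 8
        # max(0, .) guards against tiny negative values from rounding
        delta = math.sqrt(max(0.0, 1 - ((1 + a**2 - delta**2) / (2 * a))**2))  # line 9
    return c, r

# Example: c, r = modified_meb(np.random.rand(100, 3), 0.5)
```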
[11] discuss (without providing details) how to use the "guessing" technique of [1, 2] to extend their MEB algorithm to yield a \((1+\epsilon)\)-approximate solution to the \(k\)-center problem for \(k\geqslant 2\). For MEB, the Euclidean \(1\)-center, in each iteration they maintained the location of a candidate center \(c\) and computed a furthest point to \(c\) among \(P\). For the Euclidean \(k\)-center, in each step they maintain the locations of a set \(C\) of candidate centers, \(|C|\leqslant k\), and compute a furthest point to \(C\) among \(P\) using a \(\operatorname{FN}(P,C)\) procedure. Again we can modify their algorithm by replacing the \(\operatorname{FN}(P,C)\) procedure by an \(\operatorname{AFN}(P,C,\epsilon)\) one, computing an _approximate_ furthest point to \(C\) among \(P\). This will prove Theorem 2. The full details of the modified version of their algorithm, which uses \(\operatorname{AFN}(P,C,\epsilon)\) in place of \(\operatorname{FN}(P,C)\), are provided in Appendix 0.D.2, together with an analysis of correctness and running time.

## 4 Conclusion

Our main new technical contribution is an algorithm, \(\operatorname{AFN}(P,C,\epsilon)\), that finds a \((1+\epsilon)\)-approximate furthest point in \(P\) to \(C\). This works on top of a navigating net data structure [11] storing \(P\). The proofs of Theorems 1 and 2 follow immediately by maintaining a navigating net and plugging \(\operatorname{AFN}(P,C,\epsilon)\) into Theorems 3.1 and 3.2, respectively. These provide a fully dynamic and deterministic \((2+\epsilon)\)-approximation algorithm for the \(k\)-center problem in a metric space with finite doubling dimension and a \((1+\epsilon)\)-approximation algorithm for the Euclidean \(k\)-center problem, where \(\epsilon,k\) are parameters given at query time. One limitation of our algorithm is that, because \(\operatorname{AFN}(P,C,\epsilon)\) is built on top of navigating nets, it depends upon the aspect ratio \(\Delta\). This is the only dependence of the \(k\)-center algorithm on \(\Delta\). An interesting future direction would be to develop algorithms for \(\operatorname{AFN}(P,C,\epsilon)\) in special metric spaces built on top of other structures that are independent of \(\Delta\). This would automatically lead to algorithms for approximate \(k\)-center that, in those spaces, would also be independent of \(\Delta\).
2305.13910
Experimental Assessment of Misalignment Effects in Terahertz Communications
Terahertz (THz) frequencies are important for next generation wireless systems due to the advantages in terms of large available bandwidths. On the other hand, the limited range due to high attenuation at these frequencies can be overcome via densely installed heterogeneous networks also utilizing UAVs in a three-dimensional hyperspace. Yet, THz communications rely on precise beam alignment, which, if not handled properly, results in low signal strength at the receiver and impacts THz signals more than conventional ones. This work focuses on the importance of precise alignment in THz communication systems, and the significant effect of proper alignment is validated through comprehensive measurements conducted with a state-of-the-art measurement setup, which enables accurate data collection between 240 GHz and 300 GHz at varying angles and distances in an anechoic chamber that eliminates reflections. By analyzing the channel frequency and impulse responses of these extensive and targeted measurements, this study provides the first quantifiable results measuring the effects of beam misalignment at THz frequencies.
Hasan Nayir, Erhan Karakoca, Güneş Karabulut Kurt, Ali Görçin
2023-05-23T10:32:09Z
http://arxiv.org/abs/2305.13910v2
# Experimental Assessment of Misalignment Effects in Terahertz Communications ###### Abstract Terahertz (THz) frequencies are important for next generation wireless systems due to the advantages in terms of large available bandwidths. On the other hand, the limited range due to high attenuation at these frequencies can be overcome via densely installed heterogeneous networks also utilizing UAVs in a three-dimensional hyperspace. Yet, THz communications rely on precise beam alignment, which, if not handled properly, results in low signal strength at the receiver and impacts THz signals more than conventional ones. This work focuses on the importance of precise alignment in THz communication systems, and the significant effect of proper alignment is validated through comprehensive measurements conducted with a state-of-the-art measurement setup, which enables accurate data collection between 240 GHz and 300 GHz at varying angles and distances in an anechoic chamber that eliminates reflections. By analyzing the channel frequency and impulse responses of these extensive and targeted measurements, this study provides the first quantifiable results measuring the effects of beam misalignment at THz frequencies.

_Index Terms_: Terahertz communications, unmanned aerial vehicles (UAVs), channel frequency response, channel impulse response.

## I Introduction

While the need for high data rates is still on the agenda, the 6G vision has highlighted various key value indicators (KVIs) such as global coverage, service availability, sustainability, and reliability. When we set out with the "connection anywhere, anytime, on any device" motto, there is no doubt that aerial systems will be the most prominent candidate to bring access to urban, semi-urban, and remote rural areas. Adding aerial base stations for improving the quality of service (QoS) and boosting the coverage, reliability and capacity of wireless networks has been suggested in academia for a while [1, 2, 3]. In addition to bringing fast deployment features to non-terrestrial networks (NTNs), unmanned aerial vehicles (UAVs) also act as a bridge between terrestrial networks (TNs) and other non-terrestrial network elements such as satellites and high altitude platforms (HAPs). Terahertz (THz) wireless systems are expected to be a vital enabler for 6G in tandem with TNs and NTNs because of their large contiguous bandwidth [4], which allows them to keep pace with the surge in wireless data volume and the increasing amount of traffic as new nodes are added to the network. Likewise, considering the fact that THz bands are not yet allocated for specific active services around the globe, there is enormous potential to meet the need for the desired communication traffic. Hence, the orchestration of NTNs and TNs with THz communication is a clear trend towards 6G [5]. Along with their benefits, THz frequencies also come with high attenuation due to molecular absorption and spreading loss [6], which limits the communication range significantly. Thus, innovative densely deployable THz communication systems are required to cope with this issue. This is where UAVs come into play as a solution, especially to provide instant high-capacity communication links in crowded environments or to support high-capacity data traffic between different TN and NTN nodes [7]. 
Moreover, along with their cost-effectiveness and instant 3D deployment capabilities, which allow maintaining line-of-sight (LoS) conditions for the communication links, UAVs also provide flexibility to the network nodes. Thus, UAVs are expected to radically pave the way for assisting THz communications. UAVs and THz are strong collaborators by nature, and this mutually constructive relationship, in turn, can unlock new opportunities and innovative services [8]. While THz-integrated UAVs present promising prospects, they also bring new challenges to the field. Although the utilization of directional beamforming unified with directional antennas can provide higher antenna gains to reduce the high transmission loss in the THz frequency range, these systems are prone to pointing errors due to small beamwidths. Moreover, wind or sudden complex movements can cause uncontrollable tilts or rotations in UAV operations, leading to beam misalignment and an inevitable decrease in signal-to-noise ratio (SNR). Accordingly, UAV-assisted THz communication requires accurate beam alignment mechanisms and algorithms. As we move towards the development of 6G networks and the realization of THz communication systems, it is crucial to conduct a comprehensive investigation of misalignment scenarios. This involves analyzing and modeling the potential effects of the misalignment on specific applications.

### _Related Works_

The antenna misalignment effects on THz communication systems have been investigated across various environments and frequency ranges, but mostly by simulations [9, 10, 11, 12]. In particular, [9] and [10] examined the effects of antenna misalignment at 300 GHz in a simulated office setup by considering practical propagation conditions. In [11], the performance of a multicarrier THz wireless system is evaluated under the fading condition caused by misalignment. The effects of pointing error impairments under random fog conditions are examined in [12]. On the other hand, the measurement-based impact of misalignment has been analysed in [13, 14]. The authors in [13] carried out measurements to analyze the impact of distance and single-degree misalignment on the path loss in a THz communication system, and several important statistical parameters for line-of-sight (LoS) channels are measured. The performance of experimental THz communication systems has been examined in case of antenna misalignment at 100, 300, 400 and 500 GHz by utilizing proper horn antennas in [14]. A significant decreasing trend in the received power with misalignment was observed, due to the divergence of the beams, particularly with an increase in separation distance. Most importantly, the authors in [15] have designed a drone-based measurement setup to investigate the effects of mobility uncertainties on mmWave/THz-band communications between flying drones. The authors showed that UAV mobility during flight causes significant performance degradation and link outages, while propeller rotation and engine operations of the UAVs cause far less performance degradation.

### _Contributions_

In order to fully maximize the potential of THz communication systems, a deep understanding of their performance in practical conditions is required. As the realization of UAV-assisted THz communication becomes increasingly prevalent, the effect of misalignment becomes a more prominent concern. 
While prior studies have explored the impact of distance and antenna misalignment on THz communication to some extent, more comprehensive approaches are needed to address this issue. To achieve this, it is essential to gather application-specific measurements and conduct an in-depth analysis of their impact on channel frequency and impulse response. In the light of these motivations, our contributions are listed as follows:

* Undertaking precise, controlled THz misalignment experiments is a formidable task that requires extensive expertise and experience. In order to unravel the intricacies involved, a groundbreaking measurement system has been devised to capture precise measurements under varying misalignment scenarios and at different distances. Moreover, this study strives to illuminate future research endeavours by providing a comprehensive elucidation of the measurement campaign and sharing invaluable insights derived from a multidisciplinary approach.
* Also, recognizing the significance of high bandwidths in practical THz communication systems, this study stands apart from the majority of existing literature on THz measurements. Rather than focusing solely on a specific frequency range, measurements were conducted using a single-scan method covering a 60 GHz-wide band spanning from 240 GHz to 300 GHz.
* The measurements were analyzed in terms of the channel frequency and impulse responses to gain insights into the joint effect of distance and misalignment.

### _Organization_

The remainder of this paper is organized as follows: in Section II, the signal model is introduced, providing a foundational understanding of the system. Subsequently, the measurement campaign is explained, outlining the approach and methodology employed. The obtained measurement results are presented in Section III, showcasing valuable findings and insights. Finally, the conclusion and future directions are discussed in Section IV, summarizing the key outcomes and presenting potential avenues for further research.

## II Signal Model and System Overview

In this study, the effect of misalignment for THz channels is investigated in an anechoic chamber. The traditional linear, time-invariant channel model approach is used to reduce complexity in the analysis.

### _Signal Model_

The received signal at the passband can be expressed as \[r(t)=\mathrm{Re}\left\{\left(x_{I}(t)+jx_{Q}(t)\right)e^{j2\pi f_{c}t}\right\}, \tag{1}\] where \(f_{c}\) denotes the carrier frequency. Also, \(x_{I}(t)\) and \(x_{Q}(t)\) are the in-phase and quadrature components of the received signal, respectively. The received signal can be modeled as a superposition of multipath signals with different delays and complex gains. So, the channel at the baseband can be represented as \[h(t)=\sum_{l=0}^{L-1}\alpha_{l}e^{-j2\pi f_{c}t_{l}}\delta\left(t-t_{l}\right), \tag{2}\] where \(L\), \(\alpha_{l}\), and \(t_{l}\) represent the number of multipath components, channel complex gain, and delay for the \(l\)-th path, respectively. As we have mentioned before, measurements have been carried out in a fully isolated anechoic chamber, so we can assume there is only LoS signal transmission. Thus, the LoS channel is obtained by setting \(L=1\) in (2): \[h(t)=a_{f}e^{j\theta}\delta\left(t-t_{0}\right), \tag{3}\] where \(a_{f}\), \(\theta\), and \(t_{0}=d/c\) denote the LoS path complex gain, phase of the signal, and propagation delay, respectively. Also, \(d\) is the distance between the transmitter and receiver, and \(c\) is the speed of light.
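The Fourier transform of (3) is \(H(f)=a_{f}e^{j\theta}e^{-j2\pi ft_{0}}\): a flat magnitude with a linear phase whose slope encodes the delay. A minimal sketch (the gain \(a_{f}\) and phase \(\theta\) are placeholders; band and point count match the sweep described later in this section):

```python
import numpy as np

c = 3e8                                 # speed of light (m/s)
f = np.linspace(240e9, 300e9, 4096)     # measured band, 4096 points (Hz)
a_f, theta = 1.0, 0.0                   # placeholder LoS gain and phase

for d in (0.20, 0.80):                  # the two separations studied below (m)
    t0 = d / c                          # LoS propagation delay
    H = a_f * np.exp(1j * theta) * np.exp(-2j * np.pi * f * t0)
    wraps = (f[-1] - f[0]) * t0         # phase rotations across the 60 GHz span
    print(f"d = {d:.2f} m: t0 = {t0 * 1e9:.2f} ns, {wraps:.0f} phase rotations")
```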
It is crucial to acknowledge that when using directional antennas, which is a common practice in THz communication, the effects of antenna misalignment, frequency-dependent loss, and frequency dispersion index can all be accounted for by the term \(a_{f}\). In the existing literature, the stochastic characterization of multipath components in a static environment is commonly regarded as a superposition of specular and diffused components [16]: \[m_{l}=a_{l}e^{-j2\pi f_{c}t_{l}}=s_{l}+d_{l},\tag{4a}\] \[s_{l}=\sigma_{s_{l}}e^{j\left(2\pi f_{0}\cos(\theta_{l})+\phi_{l}\right)},\tag{4b}\] \[d_{l}=\sigma_{d_{l}}\frac{1}{\sqrt{M_{l}}}\sum_{m=1}^{M_{l}}b_{m}e^{j\left(2\pi f_{c}\cos(\theta_{m})+\phi_{m}\right)},\tag{4c}\] where the term \(\sigma_{s_{l}}\) represents the magnitude of the specular component, while \(\theta_{l}\) denotes the angle of arrival (AoA) and \(\phi_{l}\) represents the phase of the specular component. Similarly, \(\sigma_{d_{l}}\) corresponds to the magnitude of the diffused component, \(M_{l}\) signifies the number of diffused waves, \(b_{m}\) represents the amplitude of the incoming waves, \(\theta_{m}\) denotes the AoA, and \(\phi_{m}\) is the phase of the incoming waves forming the diffused component, respectively. It is commonly assumed, without loss of generality, that both \(\sigma_{s_{l}}\) and \(\sigma_{d_{l}}\) can be considered as unity under ideal conditions. The LoS scenario stands out as a special case in wireless propagation, exhibiting inherent characteristics in both large-scale and small-scale fading mechanisms. To guarantee LoS transmission, it is crucial to implement a fully isolated measurement setup within an anechoic chamber, incorporating absorbers. This setup effectively limits the losses introduced by the propagation channel to factors such as distance-dependent path loss, potential antenna misalignments, equipment imperfections, and other non-ideal equipment behaviors. In this model, the losses are contingent upon both the distance and the misalignment between the transmitter and receiver. When taking into account the distance and angular losses associated with a directional antenna having a maximum gain direction angle \(\varphi\), the loss can be expressed in decibels (dB) as follows: \[PL=PL_{0}+10n\log_{10}(d)+\mathbb{G}\left(\phi-\varphi\right), \tag{5}\] where \(PL_{0}\) is the reference path loss and \(n\) is the path loss exponent; \(d\) is the distance between the extender modules, \(\mathbb{G}\) represents the normalized angular gain pattern of the antenna, and \(\phi\) is the angle between the extender modules. The angular gain function in linear scale can be approximated as [17] \[G(\theta)=\left|\frac{\sin(\omega\theta)}{\omega\theta}\right|,\quad|\theta| \leq\pi, \tag{6}\] where \(\omega\) is a parameter linked to both the maximum gain direction angle and the beamwidth of the directional antenna. The antenna beamwidth can be defined as twice the angular value at which the measured power in the maximum gain direction decreases by half.

### _Measurement Setup_

One of the most critical factors to be considered in misalignment experiments is ensuring the repeatability of the measurements. Especially when the focus is on THz frequencies, various factors must be considered, since even the slightest change can significantly impact the results. Foremost, it is necessary to ensure the accuracy of the \(0^{\circ}\) position, which is the reference in examining the misalignment effect.
In addition to the \(0^{\circ}\) position, the alignment of the other angles should also be verified periodically. Secondly, the antenna must be at the center of rotation to keep the distance constant during the experiments. In order to ensure sustainability in measurements, the movable rotation platform should be arranged so that the calibration can be renewed at regular intervals without removing the extenders. Such a process, which starts from designing a measurement setup that serves a particular purpose and ends with results verified by repeated measurements, requires a multi-disciplinary effort. Misalignment is controlled by rotating the receiver according to the angle scale. Thanks to the mini slide rail, when calibration is needed the extenders can be brought closer to each other without being removed from the setup, while the indexing plunger prevents unwanted angle changes. With this measurement setup, besides misalignment, distance-dependent measurements can also be taken since the rotation platforms are movable. Thirdly, it is crucial to ensure effective management of cables to avoid signal deterioration and interference. It is recommended to utilize high-quality cables that have suitable shielding to minimize signal losses and maintain stable transmission. For this study, Minicircuit brand cables with low-loss characteristics are preferred, particularly when operating at high frequencies. Fourthly, it is of utmost importance to perform frequent calibration of the measurement equipment, which includes antennas and receivers, in order to uphold precision. This process entails comparing the measured signals to established reference standards or calibrated sources. Regular recalibration is necessary to compensate for any alterations or fluctuations in the equipment's performance. Fifthly, it is vital to exercise control over the experimental environment to reduce the impact of external factors on the measurements. It is important to carefully manage factors like temperature, humidity, and electromagnetic interference. Implementing shielding measures to safeguard the measurements from external electromagnetic radiation sources is crucial in ensuring precise and accurate measurements. By adhering to these design guidelines, a robust measurement setup can be created that actively minimizes the impact of external factors, resulting in accurate and reliable measurements.

#### II-B1 Description of Measurement Setup

Our THz experimental setup, shown in Fig. 2, is constructed in the MILTAL at TUBITAK. The measurement setup consists of four main hardware parts and mechanical parts. The hardware parts of the system consist of an Agilent vector network analyzer (VNA) E8361A, Oleson Microwave Labs (OML) V03VNA2-T and V03VNA2-T/R-A millimeter wave extender modules, and an N5260A extender controller. To enable the analysis of misalignment effects in the THz frequency range, the extender modules are coupled with the VNA, which on its own is limited to an operational frequency of 67 GHz. The V03VNA2-T/R-A multiplies the RF signal within the 12.2 GHz to 18.1 GHz range by a factor of 18, expanding the frequency of the transmitted signals and allowing the VNA to analyze signals between 220 GHz and 325 GHz. Prior to transmission, the VNA acquires test intermediate frequency (IF) and reference IF signals through downconversion mixers. After passing through the channel, the received signal undergoes downconversion at the V03VNA2-T, which results in a test IF signal that is fed back to the VNA for further analysis.
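The frequency plan can be sanity-checked with one line of arithmetic (a trivial sketch using only the figures quoted above):

```python
# An 18x multiplier chain maps the 12.2-18.1 GHz RF drive to roughly
# the 220-325 GHz analysis band (up to rounding in the stated figures).
rf_ghz = (12.2, 18.1)
factor = 18
print([round(x * factor, 1) for x in rf_ghz])   # [219.6, 325.8]
```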
The extender modules are driven by an extended-band WR-10 multiplier chain with a WR-03 waveguide output interface. The WR-03 waveguide output power of the V03VNA2-T/R-A is around -23 dBm. Also, the magnitude and phase stability of the extenders are \(\pm 0.4\) dB and \(\pm 8^{\circ}\), respectively. Because of the narrow beamwidth at high frequencies, the alignment between the transmitter and receiver needs to be very precise. So, the extender modules have been installed in a mechanical system, as mentioned in Section II-B, where we can precisely change the distance and angles between the modules.

#### II-B2 Measurement Methodology

In this study, the operating frequency range is set to 240 GHz to 300 GHz because of the magnitude and phase stability of the extender modules in this range.

Fig. 1: A three-dimensional (3D) model depicting the mechanical components of the measurement setup, including a rotating mechanism with 1-degree precision in the horizontal direction and an adjustable horizontal distance between the transmitter and receiver.

Fig. 2: Measurement setup.

Fig. 3: Block diagram of transmitter and receiver extenders.

To ensure accuracy, the spectral resolution of each measurement is set to 14.648 MHz, which corresponds to 4096 frequency points with an IF bandwidth of 100 Hz. To ensure accurate measurements, it is necessary to calibrate the electronic devices and cables together. Prior to the measurements, calibration has been done by connecting the waveguide ports of the transmitter and receiver modules end-to-end to remove any unwanted effects caused by the electronics. In order to comprehensively investigate the impact of antenna misalignment on received power in the THz wireless channel, a series of measurements was conducted using a sliding rail and rotation platform. These platforms enabled horizontal angle adjustments with a precision of \(1^{\circ}\) at each distance setting, allowing for a comprehensive investigation of the influence of antenna misalignment with distance. In order to facilitate analysis and improve the reliability of the data by minimizing the number of unknown variables, this study specifically evaluated changes in only the horizontal angle. By limiting the focus to horizontal angle changes, the study was able to produce more accurate and dependable findings. The measurements were obtained as \(S_{21}\) parameters and stored as complex numbers in the Agilent VNA E8361A. The parameters of the measurements are presented in detail in Table I.

## III Measurement Results

In this section, the joint impact of the antenna misalignment and the distance-dependent path loss is presented by illustrating the channel frequency response and impulse response of the measurements. The channel frequency responses for 0, 3, 5, 10 and 15-degree antenna misalignment at 20 cm and 80 cm are shown in Fig. 5. If only the distance between the transmitter and receiver is taken into account, the received power experiences a change of approximately 12 dB when the separation is increased from 20 cm to 80 cm. A 15-degree horizontal angle change from the reference point at 20 cm causes an 8 dB reduction in the received power. In other words, without antenna misalignment the received power difference between 20 cm and 80 cm is around 12 dB, while in the case of antenna misalignment the loss of received power can reach up to 8 dB. In addition, Fig. 6 shows the time-domain analysis obtained by taking the inverse Fourier transform of the measurement data.
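This time-domain step can be reproduced in a few lines (a sketch: windowing and calibration are omitted, and a synthetic LoS sweep stands in for the stored \(S_{21}\) data):

```python
import numpy as np

f = np.linspace(240e9, 300e9, 4096)           # frequency grid of the sweep (Hz)
# s21 would normally be the calibrated complex S21 sweep from the VNA;
# here a synthetic LoS response for d = 20 cm stands in for demonstration.
s21 = np.exp(-2j * np.pi * f * (0.20 / 3e8))

h = np.fft.ifft(s21)                          # channel impulse response
tau = np.arange(len(s21)) / (f[-1] - f[0])    # delay axis; 1/60 GHz ~ 16.7 ps bins
peak = int(np.argmax(np.abs(h)))
print(f"LoS peak at {tau[peak] * 1e9:.2f} ns")  # ~0.67 ns for a 20 cm path
```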
Fig. 4: Maximum values of the channel impulse response for all distances and antenna misalignment degrees.

Fig. 5: Channel frequency responses with \(0^{\circ},5^{\circ},10^{\circ}\) and \(15^{\circ}\) antenna misalignment at 20 cm and 80 cm distances.

Fig. 6: Channel impulse responses with \(0^{\circ},5^{\circ},10^{\circ}\) and \(15^{\circ}\) antenna misalignment at 20 cm and 80 cm distances.

In Fig. 6, it is seen that there is a decrease in received power due to antenna misalignment. So, antenna misalignment considerably impacts received signal power because THz antennas have a narrow beamwidth. Furthermore, Fig. 4 illustrates the channel impulse response for the combined distance and antenna misalignment measurements. It is plotted using the maximum value of the channel impulse response for every distance-angle measurement pair. In addition, the direct numerical value equivalents of these measurement pairs are given in Table II. In order to better understand the importance of the effect of antenna misalignment on received power, we can analyze the numerical data in Table II. For example, by increasing the distance between the transmitter and the receiver from 20 cm to 80 cm, the loss in received power is almost equal to the loss due to a 10-degree antenna misalignment at a fixed 80 cm distance; this loss is approximately 9 dB. Antenna misalignment is a critical concern for THz communication systems on UAVs, as it has a direct impact on the received power. The distance between the transmitter and the receiver plays a crucial role in path loss, which inevitably results in a decline in the received power. In addition, when the transmitter and receiver antennas are misaligned, the issue of power loss is exacerbated, causing a further decrease in the received power. In this context, the development of fast and robust beamforming and beam tracking algorithms is imperative for UAVs equipped with THz communication systems. Compensating for misalignment, whatever its cause, is essential to maintain the desired level of received power, thus ensuring reliable communication for UAVs operating at THz frequencies.

## IV Conclusion and Future Directions

THz communication systems hold promise as a solution for addressing the high data rate demands and the increasing number of wireless devices, and UAVs have been put forward to enable ubiquitous access to the potential of these frequencies. This study initiates a discussion about antenna misalignment in UAV-assisted THz communication, which will be one of the most critical challenges in practical implementations. Experiments performed with a fine-tuned measurement setup examine the effect of misalignment and distance on the received power between 240 GHz and 300 GHz. Results have shown that even minor deviations in the alignment have a significant impact on SNR. In order to provide direction for future research, we have outlined the essential factors that need to be taken into account when conducting a controlled misalignment experiment. When transceiver technology advances and UAV-mounted THz campaigns become feasible, experiments should be conducted in a real-world setting.

## Acknowledgment

We thank the StorAIge project, which has received funding from the KDT Joint Undertaking (JU) under Grant Agreement No. \(101007321\). 
The JU receives support from the European Union's Horizon 2020 research and innovation programme in France, Belgium, Czech Republic, Germany, Italy, Sweden, Switzerland, and Türkiye, and from the National Authority TÜBİTAK under project ID 121N350.
2301.10736
Generating large-scale network analyses of scientific landscapes in seconds using Dimensions on Google BigQuery
The growth of large, programmatically accessible bibliometrics databases presents new opportunities for complex analyses of publication metadata. In addition to providing a wealth of information about authors and institutions, databases such as those provided by Dimensions also provide conceptual information and links to entities such as grants, funders and patents. However, data is not the only challenge in evaluating patterns in scholarly work: These large datasets can be challenging to integrate, particularly for those unfamiliar with the complex schemas necessary for accommodating such heterogeneous information, and those most comfortable with data mining may not be as experienced in data visualisation. Here, we present an open-source Python library that streamlines the process of accessing and diagramming subsets of the Dimensions on Google BigQuery database and demonstrate its use on the freely available Dimensions COVID-19 dataset. We are optimistic that this tool will expand access to this valuable information by streamlining what would otherwise be multiple complex technical tasks, enabling more researchers to examine patterns in research focus and collaboration over time.
Michele Pasin, Richard Abdill
2023-01-25T17:55:15Z
http://arxiv.org/abs/2301.10736v1
Generating large-scale network analyses of scientific landscapes in seconds using Dimensions on Google BigQuery.

## Abstract

The growth of large, programmatically accessible bibliometrics databases presents new opportunities for complex analyses of publication metadata. In addition to providing a wealth of information about authors and institutions, databases such as those provided by Dimensions also provide conceptual information and links to entities such as grants, funders and patents. However, data is not the only challenge in evaluating patterns in scholarly work: These large datasets can be challenging to integrate, particularly for those unfamiliar with the complex schemas necessary for accommodating such heterogeneous information, and those most comfortable with data mining may not be as experienced in data visualisation. Here, we present an open-source Python library that streamlines the process of accessing and diagramming subsets of the Dimensions on Google BigQuery database and demonstrate its use on the freely available Dimensions COVID-19 dataset. We are optimistic that this tool will expand access to this valuable information by streamlining what would otherwise be multiple complex technical tasks, enabling more researchers to examine patterns in research focus and collaboration over time.

## Introduction

Across even the most disparate fields, meta-research is a key requirement for evaluating and improving the scientific enterprise. Understanding trends and gaps in how research is funded, performed and published allows data-driven introspection from funders, institutions, companies and individual researchers--what is being researched, and by whom? Which collaborations are most productive, and in what ways? Endless answers are available from comprehensive databases such as those provided by Dimensions, Crossref, Scopus and Web of Science [2], for those who have the technical capabilities of asking the questions. Though most such databases provide user-friendly web portals to access information, powerful analyses are enabled by programmatic access--computer-friendly mechanisms for accessing large quantities of data that can be parsed by third-party tools that perform tasks not built into existing web applications. One such mechanism is the "Dimensions on Google BigQuery" dataset, which organises the Dimensions corpus (more than 127 million publications, 6 million grants, 695,000 clinical trials, and so on) ("The data in Dimensions" 2022) into a relational database model hosted on Google's powerful BigQuery platform, enabling quick execution of queries that would be too large or too slow on more conventional database platforms such as MySQL. However, information about these entities--and the billions of links between them--is necessarily spread out over 10 tables with dozens of fields each ("Data Source Tables" 2022), presenting a learning curve much steeper than that of the Dimensions web application. We present an open-source Python library that streamlines the process of generating large-scale network visualisations of scientific research related to COVID-19. The library is available on GitHub1 and relies on the public-domain COVID-19 Dimensions on BigQuery2 dataset in order to extract and calculate proximity relationships between scientific entities of interest, although it can be easily configured to run on the full Dimensions data.
The library includes components for both data extraction and processing, so that its output can be consumed by the VOSviewer3 online visualisation tool. More output visualisations are being developed and will be added over the coming months.

Footnote 1: [https://github.com/digital-science/dimensions-network-gen](https://github.com/digital-science/dimensions-network-gen)

Footnote 2: [https://console.cloud.google.com/marketplace/product/digitalscience-public/covid-19-dataset-dimensions](https://console.cloud.google.com/marketplace/product/digitalscience-public/covid-19-dataset-dimensions)

Footnote 3: [https://www.vosviewer.com/](https://www.vosviewer.com/)

As previously argued (Hook, Porter, 2021), by leveraging cloud-computing infrastructures such as the one provided by Dimensions on Google BigQuery, it is possible to radically transform the way research analytics is done. The main contribution of this work is to provide a reusable open-source tool that practically demonstrates how to leverage such technologies in order to generate insightful network representations of selected aspects of the scientific research landscape, e.g. organisation collaboration networks and topic co-occurrence networks.

#### Dataset

The library by default uses the freely available COVID-19 Dimensions dataset (Hook et al., 2020). The dataset contains all COVID-19 related published articles, preprints, datasets, grants and clinical trials from Dimensions, free for anyone to access. The data was initially released in early 2021 in CSV format and subsequently as a public-domain BigQuery dataset to help the research community stay up to date, greatly reducing the time that would otherwise be required to collate this information from many disparate sources. The dataset is updated daily and, at the time of writing, contains more than 1.1 million documents (note: we used this dataset to make it easier to reproduce the work presented in this paper; however, by modifying the default library settings it is also possible to point to the full Dimensions.ai dataset).

| **Entity** | **Records** |
| --- | --- |
| Publications | 1,031,972 |
| Clinical Trials | 14,723 |
| Grants | 16,703 |
| Patents | 41,473 |
| Datasets | 32,784 |
| Organizations | 36,670 |

Table 1: Summary of data included in the COVID-19 Dimensions Dataset

### Implementation

The library uses Python for data processing and displays network data using VOSviewer, a software tool for visualising network data that is available across multiple platforms, including a browser-based version [21]. The data extraction component is generic and outputs a data format that can also be ingested by other visualisation libraries, which we are planning to include in the library in the coming months. The library can be broken down into three main components: 1) a user-created SQL input query, 2) the BigQuery data extraction & network generation module, 3) the data transformation & visualisation component (Figure 1).

### 1 User Input

It is possible to generate network analyses on the whole COVID-19 database, or using a selected subset of data. This is achieved by letting users input any SQL query defining a COVID-19 document subset of interest (e.g. a group of journals, or a group of countries). Users can collect a library of queries of interest by storing them in a folder and then running the extraction script on all of them; a sketch of how such a query might be submitted programmatically is shown below.
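As an illustration only, the snippet below sketches how a stored user query might be combined with a co-occurrence template and submitted through the official BigQuery Python client. The layout of the `concepts` field, the exact template, and the parameter values are assumptions made for the sketch, not the library's verified schema or implementation.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes Google Cloud credentials are configured

# A user input query: it only needs to return record IDs.
user_subquery = """
SELECT id FROM `covid-19-dimensions-ai.data.publications`
WHERE EXTRACT(DATE FROM date_inserted) >= DATE_ADD(CURRENT_DATE(), INTERVAL -30 DAY)
"""

# Sketch of a concept co-occurrence template in the spirit of the library's
# approach; `concepts` as an array of (concept, relevance) structs is assumed.
template = f"""
WITH subset AS ({user_subquery})
SELECT c1.concept AS source, c2.concept AS target, COUNT(DISTINCT p.id) AS weight
FROM `covid-19-dimensions-ai.data.publications` p
CROSS JOIN UNNEST(p.concepts) c1
CROSS JOIN UNNEST(p.concepts) c2
WHERE p.id IN (SELECT id FROM subset) AND c1.concept < c2.concept
GROUP BY source, target
HAVING weight >= @min_edge_weight
ORDER BY weight DESC
LIMIT @max_nodes
"""

job_config = bigquery.QueryJobConfig(query_parameters=[
    bigquery.ScalarQueryParameter("min_edge_weight", "INT64", 2),
    bigquery.ScalarQueryParameter("max_nodes", "INT64", 500),
])
edges = list(client.query(template, job_config=job_config).result())
```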
As an example of a user input query, the following selects only the documents added to the database in the last 30 days:

```sql
SELECT id
FROM `covid-19-dimensions-ai.data.publications`
WHERE EXTRACT(DATE FROM date_inserted) >= DATE_ADD(CURRENT_DATE(), INTERVAL -30 DAY)
```

Note how the input query simply returns a set of record IDs--users need only define those fields relevant for their filters; the more complicated task of joining and arranging information about those publications is handled by the library.

### 2 Data Extraction

The data extraction step deals with the extraction of data from BigQuery and the calculation of the network representation. Currently we have included two possible network calculations:

1. Concept co-occurrence network. This query generates two-concept pairs and counts how many publications are shared between these concepts (note: concepts in Dimensions are publication-level keywords, normalised and weighted based on a relevancy score).
2. Organisation network. This query generates two-organisation pairs (from the authors' affiliations) and counts how many publications are shared between these organisations.

Both the extraction and network calculation steps are achieved using a single SQL query. The query includes the user input query (user-provided-subquery) and parameter values for the maximum number of nodes and the minimum weight of edges to be included in the result (@max_nodes, @min_edge_weight). The gist of the query lies in the double CROSS JOIN UNNEST. This mechanism allows us to traverse a potentially very large number of relationships in seconds and to expose all relevant combinations of co-authoring organisations within the same data structure.

Figure 1: Architecture of the _dimensions-network-gen_ Python library

Figure 2: SQL template query for the collaboration network generation

### 3 Data Transformation & Visualisation

In this step the data extracted from BigQuery is converted into a VOSviewer JSON file and packaged into an HTML application that can be viewed in a browser. The Python library also includes a local server component that can be used to view the files locally on a computer. Example images of the generated VOSviewer networks are shown below.

Figure 3: VOSviewer rendering of the organisation network for the _last 30 days_ query

Figure 4: VOSviewer rendering of the concepts network for the _last 30 days_ query

### Conflict of interest

The co-authors are current (MP) and former (RJA) employees of Digital Science, the creator and provider of Dimensions.
2302.11178
IRS: An Incentive-compatible Reward Scheme for Algorand
Founded in 2017, Algorand is one of the world's first carbon-negative, public blockchains inspired by proof of stake. Algorand uses a Byzantine agreement protocol to add new blocks to the blockchain. The protocol can tolerate malicious users as long as a supermajority of the stake is controlled by non-malicious users. The protocol achieves about 100x more throughput compared to Bitcoin and can be easily scaled to millions of nodes. Despite its impressive features, Algorand lacks a reward-distribution scheme that can effectively incentivize nodes to participate in the protocol. In this work, we study the incentive issue in Algorand through the lens of game theory. We model the Algorand protocol as a Bayesian game and propose a novel reward scheme to address the incentive issue in Algorand. We derive necessary conditions to ensure that participation in the protocol is a Bayesian Nash equilibrium under our proposed reward scheme even in the presence of a malicious adversary. We also present quantitative analysis of our proposed reward scheme by applying it to two real-world deployment scenarios. We estimate the costs of running an Algorand node and simulate the protocol to measure the overheads in terms of computation, storage, and networking.
Maizi Liao, Wojciech Golab, Seyed Majid Zahedi
2023-02-22T07:19:18Z
http://arxiv.org/abs/2302.11178v1
# IRS: An Incentive-compatible Reward Scheme for Algorand

###### Abstract

Founded in 2017, Algorand is one of the world's first carbon-negative, public blockchains inspired by proof of stake. Algorand uses a Byzantine agreement protocol to add new blocks to the blockchain. The protocol can tolerate malicious users as long as a supermajority of the stake is controlled by non-malicious users. The protocol achieves about 100x more throughput compared to Bitcoin and can be easily scaled to millions of nodes. Despite its impressive features, Algorand lacks a reward-distribution scheme that can effectively incentivize nodes to participate in the protocol. In this work, we study the incentive issue in Algorand through the lens of game theory. We model the Algorand protocol as a Bayesian game and propose a novel reward scheme to address the incentive issue in Algorand. We derive necessary conditions to ensure that participation in the protocol is a Bayesian Nash equilibrium under our proposed reward scheme even in the presence of a malicious adversary. We also present quantitative analysis of our proposed reward scheme by applying it to two real-world deployment scenarios. We estimate the costs of running an Algorand node and simulate the protocol to measure the overheads in terms of computation, storage, and networking.

## 1. Introduction

The concept of blockchain was first popularized by Bitcoin (Bitcoin, 1997) as a tamper-resistant distributed transaction ledger. Blockchains can be classified into two categories: permissioned and permissionless. Permissioned blockchains, also known as private blockchains, implement an access-control mechanism to restrict unauthorized users from accessing the ledger (Bilton et al., 2015; Bjorand and Kwiepka, 2016; Kwiepka, 2016). Examples include HyperLedger Fabric (Bilton et al., 2015) and Libra (now called Diem) (Libra and Diem, 2016). In contrast, permissionless blockchains do not impose any access restrictions (Bjorand and Kwiepka, 2016; Kwiepka, 2016). Examples include Bitcoin (Bitcoin, 1997) and Ethereum (Ethereum, 2000). When obtaining permission is not required, the system can become prone to Sybil attacks1. To mitigate the Sybil attack threat, permissionless consensus protocols often use additional mechanisms. For example, Bitcoin uses proof of work (PoW), which requires nodes to solve a computationally intensive puzzle. Winners earn the right to add blocks to the blockchain and collect rewards for their computational effort. PoW suffers from high energy and computational costs (Kwiepka, 2016). Proof of stake (PoS) has been proposed to mitigate these costs (Kwiepka, 2016; Kwiepka, 2016). In most PoS consensus protocols, nodes stake their cryptocurrency assets to gain rights to add blocks and earn rewards.

Footnote 1: In a Sybil attack, the attacker creates a large number of pseudonymous identities to gain disproportionate control and/or influence over the system.

Inspired by proof of stake, Algorand is one of the world's first carbon-negative blockchain protocols (Kwiepka, 2016). The Algorand blockchain runs a randomized, committee-based consensus protocol (Kwiepka, 2016; Kwiepka, 2016). The core of the protocol is a Byzantine agreement protocol that allows nodes to reach consensus on a new block in the presence of Byzantine faults2. Nodes are selected randomly to participate in the Byzantine agreement protocol as committee members. The original reward scheme
2308.04248
Gloss Alignment Using Word Embeddings
Capturing and annotating sign language datasets is a time-consuming and costly process. Current datasets are orders of magnitude too small to successfully train unconstrained Sign Language Translation (SLT) models. As a result, research has turned to TV broadcast content as a source of large-scale training data, consisting of both the sign language interpreter and the associated audio subtitle. However, lack of sign language annotation limits the usability of this data and has led to the development of automatic annotation techniques such as sign spotting. These spottings are aligned to the video rather than the subtitle, which often results in a misalignment between the subtitle and spotted signs. In this paper we propose a method for aligning spottings with their corresponding subtitles using large spoken language models. Using a single modality means our method is computationally inexpensive and can be utilized in conjunction with existing alignment techniques. We quantitatively demonstrate the effectiveness of our method on the Meine DGS-Annotated (MeineDGS) and BBC-Oxford British Sign Language (BOBSL) datasets, recovering up to a 33.22 BLEU-1 score in word alignment.
Harry Walsh, Ozge Mercanoglu Sincan, Ben Saunders, Richard Bowden
2023-08-08T13:26:53Z
http://arxiv.org/abs/2308.04248v1
# Gloss Alignment Using Word Embeddings

###### Abstract

Capturing and annotating sign language datasets is a time-consuming and costly process. Current datasets are orders of magnitude too small to successfully train unconstrained Sign Language Translation (SLT) models. As a result, research has turned to TV broadcast content as a source of large-scale training data, consisting of both the sign language interpreter and the associated audio subtitle. However, lack of sign language annotation limits the usability of this data and has led to the development of automatic annotation techniques such as sign spotting. These spottings are aligned to the video rather than the subtitle, which often results in a misalignment between the subtitle and spotted signs. In this paper we propose a method for aligning spottings with their corresponding subtitles using large spoken language models. Using a single modality means our method is computationally inexpensive and can be utilized in conjunction with existing alignment techniques. We quantitatively demonstrate the effectiveness of our method on the Meine DGS-Annotated (MeineDGS) and BBC-Oxford British Sign Language (BOBSL) datasets, recovering up to a 33.22 BLEU-1 score in word alignment.

Harry Walsh, Ozge Mercanoglu Sincan, Ben Saunders, Richard Bowden
CVSSP, University of Surrey, Guildford, United Kingdom
{harry.walsh, o.mercanoglusincan, b.saunders, r.bowden}@surrey.ac.uk

Keywords: Sign Language, Gloss Alignment, Natural Language Processing (NLP), Automatic Dataset Construction

## 1 Introduction

Sign languages are the primary form of communication for the Deaf. Signs are expressed through the articulation of manual and non-manual features including body language, facial expressions, mouthing, hand shape, and motion [1]. Despite the recent successes of large language models, Sign Language Translation (SLT) between continuous sign language videos and spoken language remains a challenging task [2]. Even though results have been achieved within a constrained setting and a limited vocabulary [3, 4], progress towards unconstrained translation still requires larger-scale datasets. The visual nature of sign language has restricted the availability of high-quality datasets, due to the difficulty of capturing and labeling a visual medium. The publicly available MeineDGS dataset [5] attempts to fully capture the details of the language using gloss1 annotations, non-manual mouthing and the Hamburg Notation System (HamNoSys). However, the curation of such a high-quality dataset is both time-consuming and costly, which has restricted its size to only 50k parallel text gloss sequences [5]. This scarcity of data has motivated the research community to automate the collection and annotation of large-scale public datasets.

Footnote 1: Gloss is the written word associated with a sign

Broadcast content has repeatedly been used as the source of sign language datasets to assist with tasks such as sign language recognition, alignment, and translation [6, 7, 8]. Under the European Accessibility Act, all EU countries are obligated to make content accessible [9]. Specifically, UK broadcasters must supply 5% of their content with British Sign Language (BSL) translations, which leads to the generation of a steady stream of sign language translation data. However, the raw data only contains the spoken language subtitles and the video of the sign interpreter, whose signing, although translated from the subtitles, is often misaligned with them.
In order to make use of this data for tasks such as SLT, the data needs to be curated and subsequently aligned. As shown in Fig. 1, we have identified two types of alignment error: 1) Glosses that correspond to the preceding sentence are aligned to the current one, shown by the glosses POPULAR and PRAISE, which are misaligned to sentence \(t_{1}\). 2) Glosses are aligned to the following sentence, shown by the gloss INSECT, which is misaligned to \(t_{5}\). There are several factors that lead to the misalignment of the sign to the spoken language subtitle. Firstly, there is a weak correlation between the number of words in a sentence and the number of signs contained in the translation. Additionally, the time taken to speak a word is not related to the time taken to perform a sign. Finally, the ordering of spoken language words is different from the gloss order [1]. All these factors result in the sign language lagging or preceding the corresponding subtitle. Previous work has attempted to align the subtitles with the sign language video by finding a correspondence between the glosses of the spotted isolated signs and words in the subtitle with a similar lexical form [10, 11, 12]. However, these works all require multi-modal inputs and are expensive to compute. In this paper, we propose an approach to align glosses to the corresponding spoken language sentence by leveraging the power of large spoken language models, such as BERT [13] and Word2Vec [14]. We make the following 2 contributions: 1) A novel alignment approach that can be used in conjunction with previous methods. 2) Quantitative evaluation of our approach on two different datasets from both BSL and German Sign Language - Deutsche Gebärdensprache (DGS), demonstrating our approach is language agnostic.

## 2 Related Work

### Spoken Language Alignment

Word alignment techniques have been researched since the late \(20^{\text{th}}\) century, when Brown et al. developed the IBM models to assist in statistical machine translation [15]. Since then a number of statistical word alignment techniques have been proposed, such as GIZA++ [16, 17] or alignment via a hidden Markov model [18]. More recently, deep learning based methods have demonstrated superior performance [19]. A variety of supervised methods have been created, some using statistical supervision [20], while others make use of the attention mechanism from a transformer [21]. Most similar to our approach, Stengel-Eskin et al. used the dot product distance between learnt embeddings to make an alignment prediction [19].

### Sign Language Spotting

Sign spotting is the task of locating isolated instances of signs in a continuous video. Several methods have been suggested to tackle this task, from early techniques that use hand-crafted features [22, 23] to methods that employ subtitles as a form of weak supervision [7, 8]. More recent methods have employed multiple modalities to improve performance, e.g. visual dictionaries [11] and mouthings [10]. However, all these methods still result in the misalignment of spotted signs and the subtitles, as shown in Fig. 1.

### Subtitle Alignment

Subtitle alignment attempts to align a continuous sequence of signing to the corresponding subtitles. Early attempts to solve the alignment issues used 3D pose in a multi-step approach, but assumed a similar ordering between the spoken and signed languages [24]. To overcome this assumption, Bull et al.
trained a Bidirectional Long Short-Term Memory (BLSTM) with 2D keypoints, using manually aligned subtitles as ground truth, to segment a continuous video into candidate signs [25]. However, without a strong language model, such approaches tend to over-segment the video. In subsequent works, the subtitles were incorporated into the input of the model along with the video and shifted temporal boundaries, to align broadcast footage [26]. In contrast, in this work we attempt to align spotted glosses to the spoken language subtitles using only word embeddings. Note our approach can be used in conjunction with these existing methods.

## 3 Methodology

In this section we explain our methodology for aligning glosses with their corresponding subtitles. In Section 3.1 we explain how we use the embeddings from large spoken language models such as BERT and Word2Vec to create a mapping between a sequence of glosses and spoken language words. Then in Section 3.2 we show how to use the mapping to re-align glosses to the correct spoken language sentence.

### Text Gloss Mapping

Our alignment approach relies on the lexical overlap that exists between the spoken language words and the signed glosses. Therefore, the gloss notation needs to be semantically motivated. Given the example text "where do you live?" and the gloss sequence "YOU LIVE WHERE. ME LONDON", it is clear that the first three glosses correspond to the given text and the last two glosses potentially correspond to the next sentence. Following this intuition, we use two different word embedding techniques to find which glosses best correspond to a given spoken language sentence. Firstly, we use Word2Vec [14] to find connections between words and glosses that have a similar lexical form. Secondly, we use BERT [13] to find connections based on meaning. We find BERT embeddings capture the meaning of words, allowing us to find connections between words and glosses that have a different lexical form, e.g. "supermarket" and "SHOP". Note that when we apply our approach to DGS we first apply a compound splitting algorithm to improve the performance [27]. To find a mapping between a spoken language sequence \(X=(x_{1},x_{2},...,x_{W})\) with W words and a sequence of glosses \(Y=(y_{1},y_{2},...,y_{G})\) with G glosses, we first apply Word2Vec;

\[X_{Vec}=Word2Vec(X) \tag{1}\]

\[Y_{Vec}=Word2Vec(Y) \tag{2}\]

where \(X_{Vec}\in\mathbb{R}^{W\times E_{Vec}}\) and \(Y_{Vec}\in\mathbb{R}^{G\times E_{Vec}}\). Calculating the outer product between the two embeddings produces the Word2Vec alignment;

\[A_{Vec}=Y_{Vec}\otimes X_{Vec} \tag{3}\]

where \(A_{Vec}\in\mathbb{R}^{G\times W}\). We repeat the above using BERT, to find connections based on meaning;

\[X_{BERT}=BERT(X) \tag{4}\]

\[Y_{BERT}=BERT(Y) \tag{5}\]

\[A_{BERT}=Y_{BERT}\otimes X_{BERT} \tag{6}\]

where \(X_{BERT}\in\mathbb{R}^{W\times E_{BERT}}\), \(Y_{BERT}\in\mathbb{R}^{G\times E_{BERT}}\) and \(A_{BERT}\in\mathbb{R}^{G\times W}\). The BERT model we apply uses a word-piece tokenizer; therefore, to find an alignment on the word level we average the embeddings of the sub-units. The final alignment is found by joining the alignments from BERT and Word2Vec. We filter the Word2Vec alignment scores by \(\alpha\), only keeping strong connections. Thus, the final alignment is defined as;

\[Align(X,Y)=A_{BERT}+(\alpha*A_{Vec}) \tag{7}\]

where \(A\in\mathbb{R}^{G\times W}\). A visualization of two alignments, \(A\), is shown in Fig. 2.
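A minimal NumPy sketch of Equations (1)-(7) is given below, assuming the per-word embedding matrices have already been computed; the exact form of the \(\alpha\)-filter (zeroing scores below the threshold) is our reading of "only keeping strong connections" rather than a stated detail.

```python
import numpy as np

def align(x_vec, y_vec, x_bert, y_bert, alpha=0.9):
    """Eq. (7): A = A_BERT + alpha * (filtered A_Vec).

    x_vec (W, E_vec), y_vec (G, E_vec): Word2Vec embeddings of words/glosses.
    x_bert (W, E_bert), y_bert (G, E_bert): BERT embeddings, with word-piece
    sub-unit vectors already averaged per word. Returns a (G, W) score matrix.
    """
    a_vec = y_vec @ x_vec.T      # Eq. (3): all gloss-word dot products
    a_bert = y_bert @ x_bert.T   # Eq. (6)
    a_vec = np.where(a_vec >= alpha, a_vec, 0.0)  # keep only strong lexical links
    return a_bert + alpha * a_vec                 # Eq. (7)

# Toy shape check with random stand-ins for the real embeddings
# (300 matches the fasttext dimension used later; 768 is BERT-base).
rng = np.random.default_rng(0)
W, G = 7, 5
A = align(rng.normal(size=(W, 300)), rng.normal(size=(G, 300)),
          rng.normal(size=(W, 768)), rng.normal(size=(G, 768)))
assert A.shape == (G, W)
```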
Here we find the alignment between two sequential text Figure 1: A visualisation of the alignment from the BBC-Oxford British Sign Language (BOBSL) dataset (vertical red lines indicate the correct boundaries). sentences from the BOBSL dataset, and the four glosses that correspond to the two sentences. FOOD and FACTORY belong to the first sentence "I've set up my own food factory inside this barn", and as shown by Fig. 2 (left) a strong alignment is found between the spoken language and it's corresponding glosses, FOOD, and FACTORY. The same applies to Fig. 2 (right) with the gloss BREAD. ### Gloss Alignment We use the alignment found above for two sequential sentences to re-align glosses to their corresponding subtitles. Given a dataset that consists of N sequences of spoken language sentences \(T=(X_{1},X_{2},...,X_{N})\) and N sequences of glosses \(S=(Y_{1},Y_{2},...,Y_{N})\), we take two sequential sentences, \(X_{i},X_{i+1}\), and we concatenate their corresponding glosses \(Y_{i:i+1}=Y_{i}+Y_{i+1}\). We want to find the index to split \(Y_{i:i+1}\) back into two sequences that result in the best alignment. The number of possible splits is \(N_{split}=length(Y_{i:i+1})+1\). Therefore, for each possible split we sum the max alignment score for each gloss in \(Align(X_{i},Y_{i:i+1})\) and \(Align(X_{i+1},Y_{i:i+1})\) as in Equation 7. Fig. 3 shows the alignment score for each possible split of \(Y_{i:i+1}\). We take the argmax of the alignment score to determine the optimal index to split \(Y_{i:i+1}\) back to two sequences. Fig. 3 shows the alignment score for the two sentences in Fig. 2, showing our approach is able to find the optimal alignment. The proposed algorithm is auto-regressive, meaning the output of the first split affects the next iteration. This introduces a bias that favours earlier sentences in the dataset. Therefore, to counter this effect we iterate through the data from \(i=[0,1,2,...N]\) and then for each subsequent iteration we reverse the order, such that iteration two is \(i=[N,N-1,N-2,...,0]\). In the next section we show that multiple iterations (forwards then backwards) of the algorithm increase the alignment score, but quickly converges. ## 4 Experimental Setup In this section we outline the experimental setup, detailing the pre-trained models that we use to create the word embeddings for both English and German. In Section 4.1 we describe how we corrupt the MeineDGS dataset to simulate a spotting misalignment. Finally, in Section 4.2 we explain how we gather the spottings from [26] and process them to create parallel text gloss sequences for our alignment algorithm. We use the Fasttext implementation of Word2Vec that supports 157 languages [28]. The models are trained on the Common Crawl and Wikipedia datasets and have an output dimension of 300. For the following experiments we use the English implementation when testing our approach on the BOBSL dataset and the German version when testing on MeineDGS. Note, we filter the Word2Vec embeddings by setting \(\alpha\) to 0.9. When creating embeddings with BERT we use Huggingface's python library transformers to load the models. When testing on MeineDGS we use Degets implementation of German BERT [29], which is trained on approximately 12GB of data from the Wiki, OpenLegalData, and News datasets. Finally, when testing on the English BOBSL dataset we use GoogleAI's implementation of BERT [30], which is trained on the Bookcorpus and Wikipedia datasets. 
To evaluate the performance of our algorithm on all datasets we use BLEU-1 score. We do not present results using higher n-gram BLEU scores as these metrics are used to measure the order accuracy, that is unnecessary for this task. ### MeineDGS Dataset All results on the MeineDGS dataset are computed against the original ground truth. The MeineDGS dataset contains 50k parallel sequences [5] and we follow the translation protocol set in [31]. The dataset has a source vocabulary of 18,457 with 330 deaf participants performing free form signing. Note we reorder the sequences sequentially as in the original videos. To evaluate our approach we corrupt the MeineDGS dataset. This allows us to simulate an alignment error created when using previously mentioned sign spotting techniques to automatically spot glosses in a sequence of continuous signing. We create two versions of the dataset to simulate; #### 4.1.1 Sequence misalignment A worst-case scenario, a total misalignment of all sequence pairs. For this, we offset all the gloss sequences by one. We add an empty sequence to the start \(Y_{empty}\) and remove the last sequence \(Y_{N}\) to maintain an equal number, N, of text gloss pairs. Therefore, we apply our alignment approach to \(T=(X_{1},X_{2},...,X_{N})\) and \(S=(Y_{empty},Y_{1},...,Y_{N-1})\). #### 4.1.2 Gloss misalignment To simulate the errors shown in Fig. 1 (glosses are misaligned to the preceding or succeeding sentence) we randomly shift up to 3 glosses to the previous or following sequence. We set probabilities of 15%, 20% and 10% of moving 1, 2 or 3 glosses, respectively. 10% of the time we do not alter the sequence. Note if the sequence has fewer Figure 3: The alignment scores found for the two sentences in Fig. 2 Figure 2: An example of two alignments, \(A\), found between the TRG: “FOOD FACTORY BREAD MAKE” and sentence one (left): “I’ve set up my own food factory inside this barn”, and sentence two (right): “So, it’s the battle of the breads.”. glosses than we wish to shift, then we do not alter it. In total we move 21,273 glosses to the preceding sequence and 21,359 to the next sequence. ### BOBSL Dataset The BOBSL dataset contains 1,193K sentences extracted from 1,962 videos from 426 different TV shows [6]. The dataset itself only contains the subtitle from the original TV show and the signer. The test set comes with three variants of the subtitles audio-aligned, audio-aligned shifted, and manually aligned. By calculating the average difference between the audio-aligned and signing-aligned sentences, Albanie et al. [6] found the signer lags the subtitle by approximately +2.7 seconds. Thus, the audio-shifted variant applies this 2.7 second delay to all time stamps. The manually aligned subtitles contain a subset of the original audio-aligned subtitles. Therefore, when comparing our alignment results against the manually aligned subtitles we are restricted to this subset of the data. In order to perform alignment we use the automatically extracted spottings from [32]. We then process the data, aligning the dense spots with the three variants of the subtitle provided in the original BOBSL test set. ## 5 Experiments - Quantitative Evaluation Fig. 4 shows the results of applying the alignment algorithm to two versions of the MeineDGS and BOBSL datasets. As can be seen, the algorithm has a positive effect on all variants, increasing the BLEU-1 scores by up to 33.22. 
## 5 Experiments - Quantitative Evaluation

Fig. 4 shows the results of applying the alignment algorithm to two versions each of the MeineDGS and BOBSL datasets. As can be seen, the algorithm has a positive effect on all variants, increasing the BLEU-1 scores by up to 33.22. Next we discuss the results in detail, starting with MeineDGS (sequence-level misalignment, then gloss-level misalignment), followed by the BOBSL dataset (audio aligned, then manually aligned).

### MeineDGS Alignment

#### 5.1.1 Sequence misalignment

In this experiment we offset the MeineDGS dataset by 1 sequence, to simulate the worst case where all glosses are misaligned. Fig. 4 - MeineDGS Sequence (orange line) shows there is a shared gloss vocabulary between sequential sentences, as the baseline score is not zero. Impressively, the approach is able to recover a large proportion of the glosses, increasing the BLEU-1 score from 7.69 to 40.91, an improvement of 432%.

#### 5.1.2 Gloss misalignment

Fig. 4 - MeineDGS Gloss (yellow line) shows the results of applying our alignment approach to the corrupted dataset. By corrupting the data we decrease the BLEU-1 score from perfect alignment (100 BLEU-1) to 72.84. From this baseline we are able to recover 2.41 BLEU-1 using a single forward and backward pass through the data, an improvement of 3.3%. However, further iterations are detrimental, as can be expected from a greedy algorithm. This shows that the approach is able to recover a portion of the corruption. It should be noted, however, that the effectiveness of the approach is dataset dependent, as the similarity of sequential sentences will affect the reliability of the mapping found between words and glosses.

### BOBSL Alignment

#### 5.2.1 Audio aligned

Here we show that our approach is able to move the audio-aligned subtitles toward the improved audio-shifted subtitles. As shown in Fig. 4, the approach improves the audio alignment by 6.2 BLEU-1. It should be noted that the audio-shifted subtitles are not perfect ground truth. Additionally, the spottings are not perfect, which introduces an error into any alignment approach, as we may be attempting to map glosses that do not align with any words in the spoken sentence. Thus, we could expect the performance to increase if the quality of the underlying spottings improves.

#### 5.2.2 Manually aligned

The manually aligned subtitles and their timings affect how we collect the spottings, which leads to a variation in the number of glosses. Hence the baseline score at \(N=0\) is lower compared to the previous audio-aligned experiment. Despite this limitation, the approach is able to improve the alignment by 10.19 BLEU-1.

## 6 Conclusion

Sign language alignment is an essential step in creating large-scale datasets from raw broadcast data. Improving the alignment between the subtitles and the associated signed translation would have positive effects on tasks such as translation, recognition, and production. In this paper we have demonstrated that embeddings from large spoken language models can be used to align glosses with their corresponding subtitles. Our approach can be run in addition to existing multi-modal methods and is computationally inexpensive in comparison. We have shown the approach is capable of recovering up to a 33.22 BLEU-1 score in word alignment.

## 7 Acknowledgment

We thank Adam Munder, Mariam Rahmani, and Marina Lovell from OmniBridge, an Intel Venture, for supporting this project. We also thank Thomas Hanke and the University of Hamburg for use of the MeineDGS data.

Figure 4: Results of applying the alignment algorithm to the MeineDGS (Gloss and Sequence level misalignment) and BOBSL (Audio aligned and Manually aligned) datasets.
2308.13734
Computational Discovery of Fast Interstitial Oxygen Conductors
New highly oxygen-active materials may enhance many energy-related technologies by enabling efficient oxygen-ion transport at lower temperatures, e.g., below 400 Celsius. Interstitial oxygen conductors have the potential to realize such performance but have received far less attention than vacancy-mediated conductors. Here, we combine physically-motivated structure and property descriptors, ab initio simulations, and experiments to demonstrate an approach to discover new fast interstitial oxygen conductors. Multiple new families were found which adopt completely different structures from known oxygen conductors. From these families, we synthesized and studied oxygen kinetics in La4Mn5Si4O22+d (LMS), a representative member of perrierite/chevkinite family. We found LMS has higher oxygen ionic conductivity than the widely used yttria-stabilized ZrO2, and among the highest surface oxygen exchange rates at intermediate temperature of known materials. The fast oxygen kinetics is the result of simultaneously active interstitial and interstitialcy diffusion pathways. This work developed and demonstrated a powerful approach for discovering new families of interstitial oxygen conductors and suggests many more such materials remain to be discovered.
Jun Meng, Md Sariful Sheikh, Ryan Jacobs, Jian Liu, William O. Nachlas, Xiangguo Li, Dane Morgan
2023-08-26T02:37:46Z
http://arxiv.org/abs/2308.13734v3
# Computational Discovery of Fast Interstitial Oxygen Conductors

###### Abstract

New highly oxygen-active materials may enhance many energy-related technologies by enabling efficient oxygen-ion transport at lower temperatures, e.g., below \(\approx\)400 \({}^{\circ}\)C. Interstitial oxygen conductors have the potential to realize such performance but have received far less attention than vacancy-mediated conductors. Here, we combine physically-motivated structure and property descriptors, _ab initio_ simulations, and experiments to demonstrate an approach to discover new fast interstitial oxygen conductors. Multiple new families were found which adopt completely different structures from known oxygen conductors. From these families, we synthesized and studied oxygen kinetics in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) (LMS), a representative member of the perrierite/chevkinite family. We found LMS has higher oxygen ionic conductivity than the widely used yttria-stabilized ZrO\({}_{2}\), and among the highest surface oxygen exchange rates at intermediate temperature of known materials. The fast oxygen kinetics is the result of simultaneously active interstitial and interstitialcy diffusion pathways. This work developed and demonstrated a powerful approach for discovering new families of interstitial oxygen conductors and suggests many more such materials remain to be discovered.

Materials which rapidly conduct oxygen are critical for a variety of energy devices such as fuel cells (solid oxide,[1, 2] proton ceramic,[3, 4] and reversible[5] cells), electrolyzers,[6] solid-oxide metal-air redox batteries,[7] gas sensors,[8] chemical looping devices,[9] memristors,[10] and oxygen separation membranes.[11] Almost all of the state-of-the-art oxygen-active materials transport oxygen via a vacancy-mediated mechanism, which requires mobile oxygen to move off the oxygen sublattice and pass through an unstable activated state. The bond breaking associated with this process is generally energetically unfavorable and, consequently, even the best vacancy oxygen conductors have inadequate oxygen kinetics for practical applications at temperatures below \(\approx\) 600 \({}^{\circ}\)C. Overcoming this limitation would allow for more varied, durable, and cost-effective devices. The vacancy-mediated mechanism has dominated the science of oxygen-active materials since the discovery of the first oxygen ion conductor (Y-doped ZrO\({}_{2}\)) around 1900 by Nernst[12] and continues to the present. In contrast, interstitial oxygen diffusion is relatively uncommon and there are no systematic approaches to discover or optimize interstitial oxygen conductors. Interstitial oxygen conductors have many potential advantages over vacancy-mediated conductors. Interstitial oxygen diffuses within the interstitial lattice; therefore, interstitial oxygen conductors typically have lower migration barriers than vacancy oxygen conductors, with entries in the Citrine Informatics database showing average values of \(\approx\) 0.6 eV versus \(\approx\) 1.1 eV, respectively (**Fig. 1**).[13] This \(\approx\)0.5 eV reduction in migration barrier would afford an approximately 1000-fold increase in ionic conductivity at 600 \({}^{\circ}\)C. Another benefit is that, at fixed P(O\({}_{2}\)), interstitial oxygen becomes thermodynamically more favorable as the temperature is decreased (i.e., more oxidizing conditions).
This higher defect concentration will increase the oxygen conductivity at low temperatures, opposite to the trend in vacancy conductors. Other potential benefits may include increasing rather than decreasing diffusivity at higher defect concentration, since at least one study[14] found that interstitial oxygen diffusivity in CeO\({}_{2}\) increased beyond the dilute limit while vacancy oxygen diffusivity decreased. Finally, oxygen surface exchange and catalytic reactions involving pulling oxygen to the surface are potentially faster when transport is mediated by interstitials as compared to vacancies, as the entire surface has accessible interstitial sites to absorb oxygen as interstitials. The synergistic combination of lower migration barriers, increasing concentration at low temperatures, and many active surface sites for exchange suggests that interstitial oxygen conductors may lead to large performance improvements in oxygen-active materials. Therefore, we propose that a promising path to expand the palette of highly oxygen-active materials at lower temperatures is to develop methods to discover and engineer interstitial oxygen conductors. The imbalance in the number of known vacancy versus interstitial oxygen conductors is likely the result of the difficulty of forming interstitial oxygen in many materials, due to the large size of the oxygen anion. The oxides presently known to have predominantly interstitial-mediated oxygen conductivity remain constrained to five families, which include Ruddlesden-Popper (e.g., La\({}_{2}\)NiO\({}_{4+\delta}\)[15]), apatite (e.g., La\({}_{10-x}\)Sr\({}_{x}\)Si\({}_{6}\)O\({}_{27-0.5x}\)[16]), melilite (e.g., La\({}_{2-x}\)Sr\({}_{x}\)Ga\({}_{3}\)O\({}_{7+0.5x}\)[17]), hexagonal manganites (e.g., YMnO\({}_{3+\delta}\)[18]), and hexagonal perovskite (e.g., Ba\({}_{7}\)Nb\({}_{3.9}\)Mo\({}_{1.1}\)O\({}_{20.05}\)[19]). In addition to these purely interstitial-conducting materials, the fluorite-type (e.g., UO\({}_{2+\delta}\)[20]) and scheelite (e.g., CeNbO\({}_{4+\delta}\)[21, 22]) compounds can conduct oxygen ions through both interstitial- and vacancy-mediated mechanisms, depending on the impurity type. A few of these systems (Ruddlesden-Popper Nd\({}_{2}\)NiO\({}_{4+\delta}\)[23], apatite La\({}_{9.75}\)Sr\({}_{0.25}\)Si\({}_{6}\)O\({}_{26.895}\)[16], melilite La\({}_{1.54}\)Sr\({}_{0.46}\)Ga\({}_{3}\)O\({}_{7.27}\)[24], and hexagonal perovskite Ba\({}_{7}\)Nb\({}_{3.9}\)Mo\({}_{1.1}\)O\({}_{20.05}\)[25]) were reported to show high ionic conductivity, comparable to the commercial ionic conductor yttria-stabilized zirconia (YSZ). Despite this promising list, far less attention has been devoted to interstitial-dominated systems than vacancy-dominated ones, and we lack methods to discover new interstitial oxygen conductors. In this work we overcame that limitation and laid a foundation to dramatically increase the palette of interstitial oxygen systems, enabling researchers to explore the advantages of interstitial conductors over their more conventional vacancy-mediated counterparts for the advancement of oxygen-active material applications. Specifically, we proposed a practical approach based on structural and chemical features as well as _ab initio_ calculations for finding high-performing interstitial oxygen conductors, identified multiple new promising classes, and demonstrated the exceptional performance of one example material, La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) (LMS).
To the best of our knowledge, LMS has no similarity to any known oxygen ion conductors and cannot be related to known oxygen-active materials by simple substitutions, structural similarity arguments, or other approaches that might render it "obvious" in some way. The discovery of LMS demonstrated the effectiveness of the approach to discover new material classes that would likely not have been considered for oxygen-active applications without such computational guidance.

## A descriptor approach for discovering new interstitial oxygen conductors

We proposed a set of simple descriptors (features) for discovering interstitial oxygen conductors based on material structure, composition, and readily available property data. The structure features are based on the hypotheses that (1) the facile formation of interstitial oxygen requires sufficient free volume and electrons from oxidizable transition metal cations, and (2) the fast migration of interstitial oxygen should be enhanced by the presence of short diffusion pathways, a result consistent with intuition and the correlation between hop length and migration barrier found in perovskites[26]. The property features are focused on thermodynamic stability and synthesizability. This led to screening on the following five structure and property features: (1) free space for interstitial oxygen, (2) short hop distance, (3) thermodynamic stability, (4) oxidizability, and (5) synthesizability. One could quantify these features in different ways, and here we use structures and properties from the Materials Project[27], with the details given in **Methods 1**, steps (1)-(5). We screened nearly 34k oxide materials with the five descriptors and retained 519 compounds, which were classified into 345 unique structural groups based on structural similarity analysis[28] (**Methods 1**, step (6)). One candidate for each group was selected for further validation with _ab initio_ studies, discussed below. It is striking to note the power of these simple descriptors based on physical-intuition-guided hypotheses, as we quickly winnowed the field of 34k oxides down to 345, a 99% reduction in the search space enabled by basic analysis of material structure and composition.

## _Ab initio_ computational screening of promising compounds for discovering new interstitial oxygen conductors

The above six screening steps based on structure, composition, and property data took only a couple of days on a fast processor. We then screened the resulting 345 compounds for promising cases by the formation energy (\(E_{f}\)) and migration barriers (\(E_{m}\)) of interstitial oxygen, calculated with the slower but more quantitative density functional theory (DFT) methods (the "_ab initio_ simulation" stage in **Fig. 2**). Specifically, compounds with \(E_{f}\leq 0.3\) eV (at P(O\({}_{2}\)) = 0.2 atm and T = 300 K) were considered promising (**Methods 2**, step (7)), and 80 out of the 345 compounds (23%) were retained (**Table S1**), suggesting that the use of simple descriptors in steps (1)-(6) was highly effective in winnowing the compound space to a tractable number of DFT calculations, while also retaining a considerable percentage of promising candidates. Next, _ab initio_ molecular dynamics (AIMD) simulation was used to estimate \(E_{m}\) (**Methods 2**, step (8)). The AIMD calculations are quite slow, and as of this writing 26 out of the 80 compounds have been studied and ranked by their estimated \(E_{m}\) (**Table S2**).
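The screening funnel can be summarized schematically as below; the field names, threshold values, and record layout are illustrative assumptions standing in for the criteria detailed in Methods 1-2, not the actual implementation.

```python
# Schematic of the screening funnel: ~34k oxides -> 519 (descriptors) ->
# 345 structural groups -> 80 (DFT E_f) -> AIMD ranking of E_m.
candidates = [
    {"formula": "La4Mn5Si4O22", "free_volume_frac": 0.21, "min_hop_A": 2.8,
     "e_above_hull_eV": 0.00, "oxidizable_cation": True, "in_icsd": True,
     "E_f_eV": 0.1, "E_m_est_eV": 0.5},
    # ... one record per oxide drawn from the Materials Project
]

def passes_descriptors(m, vol_min=0.15, hop_max=3.5, hull_max=0.05):
    return (m["free_volume_frac"] >= vol_min      # (1) room to host O_i
            and m["min_hop_A"] <= hop_max         # (2) short diffusion hop
            and m["e_above_hull_eV"] <= hull_max  # (3) thermodynamic stability
            and m["oxidizable_cation"]            # (4) electrons for O_i formation
            and m["in_icsd"])                     # (5) synthesizability proxy

shortlist = [m for m in candidates if passes_descriptors(m)]  # ~519 of 34k
dft_keep = [m for m in shortlist if m["E_f_eV"] <= 0.3]       # ~80 of 345 groups
ranked = sorted(dft_keep, key=lambda m: m["E_m_est_eV"])      # AIMD ranking
```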
9 compounds with estimated \(E_{m}\leq 0.86\) eV were selected for more accurate determination of \(E_{m}\) with long-time AIMD simulation. From these extended runs, 3 compounds with \(E_{m}\leq 0.5\) eV were identified as members of promising new families of interstitial oxygen conductors: K\({}_{2}\)Mn\({}_{2}\)(MoO\({}_{4}\))\({}_{3}\), La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\), and CeMn\({}_{2}\)Ge\({}_{4}\)O\({}_{12}\). These three compounds are members of the families of double molybdates A\({}_{2}\)TM\({}_{2}\)(MoO\({}_{4}\))\({}_{3}\) (A = alkali metal, TM = transition metal)[29], perrierite/chevkinite RE\({}_{4}\)TM\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) (RE = rare earth, TM = transition metal)[30], and germanates RE\({}_{1}\)TM\({}_{2}\)Ge\({}_{4}\)O\({}_{12}\) (RE = rare earth, TM = transition metal)[31], respectively. These new families are a conservative estimate of the true number which could be uncovered with our present approach, as many compounds have not yet been fully studied by AIMD and no effort was made to explore oxide structures not available in the Materials Project. La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) (LMS) was selected as a representative member of the perrierite/chevkinite family for experimental investigation due to its predicted fast interstitial oxygen diffusion, its simple established synthesis method, and its inclusion of inexpensive, earth-abundant, and non-toxic elements, traits all desirable for new oxygen-active materials potentially useful in a variety of applications.

## Structure of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)

LMS was first reported by Gueho _et al._ with structure and magnetic data in 1995,[32] with no further studies of which we are aware. LMS is a layered sorosilicate material with multivalent manganese, isostructural with perrierite and chevkinite, and crystallizes in the space group _C2/m_. In **Fig. 3a**, LMS displays eclipsed sorosilicate Si\({}_{2}\)O\({}_{7}\) groups separated by rutile-like sheets of edge-shared Mn\({}_{1}\)\({}^{4+}\)/Mn\({}_{2}\)\({}^{3+}\) octahedra and single isolated Mn\({}_{3}\)\({}^{2+}\) octahedra. The stoichiometric primitive cell has two Mn\({}_{1}\)\({}^{4+}\), two Mn\({}_{2}\)\({}^{3+}\), and one Mn\({}_{3}\)\({}^{2+}\), with valence-state assignments fully consistent with the magnetic moments observed in the DFT calculations (**SI Discussion 3**). The sorosilicate Si\({}_{2}\)O\({}_{7}\) groups show a zigzag arrangement along the a-axis and connect with Mn\({}_{3}\)\({}^{2+}\) octahedra along the b-axis and Mn\({}_{2}\)\({}^{3+}\) octahedra along the c-axis by sharing corners, leaving free space between these unconnected Si\({}_{2}\)O\({}_{7}\) chains. The La atoms are between the rutile-like layer and the sorosilicate layer, surrounded by 10 oxygen atoms. The structure has ample free space, a highly flexible network, and multiple Mn ions potentially capable of oxidation, making it ideally suited to form and transport interstitial oxygen.

## A dual diffusion mechanism enabled by undercoordinated sorosilicate groups and a flexible corner-sharing framework

_Ab initio_ studies and simple thermodynamic considerations suggest that excess oxygen in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) (\(\delta\approx 0.5\)) is thermodynamically favorable under air conditions, while oxygen vacancies are very unfavorable due to their high formation energy (**SI Discussion 2**). In **Fig.
3a**, the most stable interstitial site (\(O^{I}\)) lies between two adjacent sorosilicate Si\({}_{2}\)O\({}_{7}\) groups, connecting two Si tetrahedra, and the second most stable interstitial site (\(O^{II}\)) lies at the junction of the Si\({}_{2}\)O\({}_{7}\) group and the Mn\({}_{3}\)\({}^{2+}\) octahedra. These two prevailing interstitial sites give rise to two distinct and competitive diffusion pathways observed in AIMD simulations. In the interstitial diffusion mechanism (yellow arrow in **Fig. 3a**), the \(O_{i}\) hops between the \(O^{I}\) sites through the channel between sorosilicate chains along the a-axis, with a barrier of 0.45 eV calculated by the Climbing Image Nudged Elastic Band (CI-NEB) method (**Fig. S2a**). A parallel active interstitialcy (cooperative "knock-on") mechanism is indicated by cyan arrows, in which the \(O_{i}\) moves along the corner-sharing Si\({}_{2}\)O\({}_{7}\)-MnO\({}_{2}\)-Si\({}_{2}\)O\({}_{7}\) framework along the b-axis. In the interstitialcy mechanism, the \(O_{i}\) first hops from the \(O^{I}\) site to a lattice site by kicking a lattice oxygen to the \(O^{II}\) site, which then moves to a lattice site by kicking another lattice oxygen to the next \(O^{I}\) site. By passing through the metastable interstitial \(O^{II}\) site, the \(O_{i}\) diffuses through an interstitialcy mechanism with a CI-NEB calculated barrier of 0.53 eV (**Fig. S2b**). The oxygen diffusion coefficient was calculated by AIMD simulations and machine-learning-trained interatomic potential molecular dynamics (ML-IPMD) simulations (**Methods 3-4**), where the ML-IPMD was used to obtain a more accurate diffusion coefficient through better sampling at lower temperatures compared to AIMD. **Fig. 3b** shows a calculated migration barrier of 0.44 eV over a wide temperature range, consistent with the CI-NEB barriers. The DFT-predicted stable interstitials with low migration barriers indicate that La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) is a fast oxygen conductor. In the O 1s X-ray photoelectron spectroscopy (XPS) spectrum, the two higher-binding-energy peaks are assigned to lattice oxygen[33, 34] and surface chemisorbed oxygen,[35] respectively. The low-binding-energy peak at 528.9 eV represents the interstitial oxygen within LMS.[33] The XPS survey scan on the LMS pellet surface revealed the presence of La, Mn, Si, and O without any detectable impurity elements (**Fig. S5**). Thermogravimetric analysis (TGA) was performed to study the temperature dependence of the oxygen content (**Fig. 4c**). Detectable but modest reversible changes in oxygen content were observed when heating/cooling the LMS powder. The interstitial oxygen content \(\delta\) changed from 0.42 (based on the EPMA result) to 0.52, corresponding to a 0.5% change in the total oxygen content. The small change of oxygen content with respect to temperature is consistent with the _ab initio_ results of \(\delta\approx 0.5\), suggesting that the interstitials are stabilized by oxidizing the Mn\({}^{2+}\) ions and that further oxidation of the system is difficult (**SI Discussion 2**). The TGA and _ab initio_ results suggest LMS has a small thermodynamic factor, discussed more below. LMS is a semiconductor with a narrow indirect band gap of 0.79 eV measured by UV-vis spectroscopy (**Fig. S6**), indicating that LMS could have electron conduction at high temperatures due to thermal excitation. The band gap was calculated by _ab initio_ methods with various functionals, and the predicted band gap of 0.72 eV using the strongly constrained and appropriately normed (SCAN) functional is the most consistent with experiment (**Table S3, SI Discussion 1**).
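The migration barrier in **Fig. 3b** is the slope of an Arrhenius fit to the simulated diffusivities; a minimal sketch of such a fit is given below, with illustrative (T, D) values chosen only to reproduce a \(\approx\)0.44 eV barrier, not taken from the actual simulations.

```python
import numpy as np

# Illustrative MD diffusivities at several temperatures (not the actual data).
T = np.array([700.0, 900.0, 1100.0, 1300.0])    # K
D = np.array([6.8e-8, 3.4e-7, 9.6e-7, 2.0e-6])  # cm^2/s

k_B = 8.617333262e-5                            # Boltzmann constant, eV/K
# ln D = ln D0 - E_m/(k_B T): a linear fit of ln D against 1/T yields E_m.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_m = -slope * k_B
print(f"E_m = {E_m:.2f} eV, D0 = {np.exp(intercept):.1e} cm^2/s")
```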
The 4-probe conductivity measurement shows that LMS is a mixed ionic-electronic conductor (**Fig. S7**). The ionic conductivity (\(\sigma_{ion}\)) of LMS was measured using electron-blocking 8YSZ,[36] and our setup was verified by obtaining robust results on the well-studied mixed conductor La\({}_{0.6}\)Sr\({}_{0.4}\)Co\({}_{0.2}\)Fe\({}_{0.8}\)O\({}_{3}\) (LSCF) (**Methods 9** and **Fig. S8-S9**). In **Fig. 4d**, LMS has comparable ionic conductivity to many of the best fast oxygen conductors and considerably higher ionic conductivity than widely used commercial materials such as LSCF and YSZ. In addition, LMS has comparable or improved ionic conductivity relative to other interstitial oxygen conductors, such as the hexagonal perovskite Ba\({}_{7}\)Nb\({}_{3.9}\)Mo\({}_{1.1}\)O\({}_{20.05}\), the apatite La\({}_{9.75}\)Sr\({}_{0.25}\)Si\({}_{6}\)O\({}_{26.895}\), the Ruddlesden-Popper Nd\({}_{2}\)NiO\({}_{4+\delta}\), and the melilite La\({}_{1.54}\)Sr\({}_{0.46}\)Ga\({}_{3}\)O\({}_{7.27}\). The experimental activation barrier of oxygen ion conduction in LMS is 0.72\(\pm\)0.03 eV, determined by the Arrhenius relation \(\sigma T=\sigma_{0}\,\mathrm{e}^{-E_{A}/k_{B}T}\) (**Methods 11**). Given the small temperature dependence of the oxygen stoichiometry from TGA, this experimental \(E_{A}\) is expected to be similar to the DFT SCAN CI-NEB calculated migration barriers, which were 0.69 eV and 0.74 eV for interstitial and interstitialcy diffusion, respectively (**Fig. S2c, d**). Note that the AIMD barrier above is compared to the GGA CI-NEB barriers because both were simulated with GGA, whereas the experimental barrier is compared to the SCAN CI-NEB barriers, as the latter are expected to be the most accurate (**SI Discussion 1**). The good agreement between the experimental activation energy and the DFT migration barriers further supports the dominance of both the interstitial and interstitialcy mechanisms for oxygen diffusion in LMS. To further probe the oxygen kinetics in LMS, the oxygen surface exchange coefficient (\(k_{chem}\)) and the chemical oxygen diffusivity (\(D_{chem}\)) were studied using the electrical conductivity relaxation (ECR) method. The Arrhenius plots of \(D_{chem}\) and \(k_{chem}\) of LMS in **Fig. 4e-f** show that LMS has \(D_{chem}\) and \(k_{chem}\) comparable to numerous state-of-the-art solid oxide electrode materials over a wide range of temperatures. LMS has among the highest \(k_{chem}\) values at low temperatures of any known material, in part due to the relatively small activation energy of 0.82 eV for oxygen surface exchange (**Methods 11**). The enhanced \(k_{chem}\) of LMS at low temperatures compared to vacancy-mediated diffusers might be due to there being more surface interstitial sites than vacancy sites available for oxygen exchange, but further study is needed to understand the interstitial surface exchange mechanism.

## Discussion

The oxygen tracer diffusion coefficient (\(D^{*}\)) was determined using the Nernst-Einstein equation (**Methods 11**). The thermodynamic factor \(\gamma\) and the tracer surface exchange coefficient (\(k^{*}\)) were evaluated by \(D_{chem}=\gamma D^{*}\) and \(k_{chem}=\gamma k^{*}\).[37] The thermodynamic factor of LMS varies from 21 to 28 in the temperature range of 600 to 750 \({}^{\circ}\)C (**Fig.
S11f**), which is about 10 to 20 times smaller than that of the commonly studied mixed ionic-electronic conducting perovskites LSCF,[38] BCFZr,[39] and BSCF.[40] The small \(\gamma\) and high \(k_{chem}\) imply that LMS will have a high \(k^{*}\) compared with other state-of-the-art materials. **Fig. S13** shows that LMS has a higher \(k^{*}\) at intermediate and low temperatures than state-of-the-art materials, perhaps even exceeding that of the leading BSCF material. The high surface exchange rate suggests LMS has the potential to assist in achieving fast oxygen reduction kinetics, e.g., reducing the area specific resistance in the air electrode of solid oxide cells for electricity or hydrogen production, although its low electronic conductivity means it would have to be used as a composite with a good electrical conductor in many applications.[41]

This work demonstrates the largely untapped potential of interstitial oxygen ion conductors for reduced-temperature oxygen-active material applications. We have designed a simple but effective method using physically motivated descriptors and _ab initio_ calculations to search for new families of interstitial oxygen diffusers. The effectiveness of our approach was confirmed by the prediction and experimental confirmation of an entirely new class of predominantly interstitial oxygen conductor, represented by La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) (LMS). LMS has very fast oxygen surface exchange and transport through both interstitial and interstitialcy mechanisms, which are enabled by the free space between the sorosilicate chains, the corner-sharing Si\({}_{2}\)O\({}_{7}\)-MnO\({}_{2}\)-Si\({}_{2}\)O\({}_{7}\) framework, and the high redox activity of the nearby Mn\({}_{3}^{2+}\) ion. We note that LMS is just one example of the broader perrierite/chevkinite RE\({}_{4}\)TM\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) structural family; other compositions within this family, or composition and/or microstructure refinement of LMS itself, may yield additional materials with fast oxygen interstitial transport. We also proposed that the families of double molybdates A\({}_{2}\)TM\({}_{2}\)(MoO\({}_{4}\))\({}_{3}\) and germanates RETM\({}_{2}\)Ge\({}_{4}\)O\({}_{12}\) are promising for further study. The success of the approach suggests that we have identified relatively simple structural and chemical features that strongly correlate with stable interstitial oxygen formation and fast migration. The discovery of a new high-performing interstitial oxygen conductor from experimental study of just one material suggests that the method is highly effective, and that many more materials exhibiting fast oxygen interstitial kinetics can be found through its application.

## References

* [1] Wachsman, E. D. & Lee, K. T. Lowering the temperature of solid oxide fuel cells. _Science_ **334**, 935-939 (2011).
* [2] Jacobson, A. J. Materials for Solid Oxide Fuel Cells. _Chem. Mater._ **22**, 660-674 (2010).
* [3] Vollestad, E. _et al._ Mixed proton and electron conducting double perovskite anodes for stable and efficient tubular proton ceramic electrolytes. _Nat. Mater._ **18**, 752-759 (2019).
* [4] Choi, S. _et al._ Exceptional power density and stability at intermediate temperatures in protonic ceramic fuel cells. _Nat. Energy_ **3**, 202-210 (2018).
* [5] Duan, C. _et al._ Highly efficient reversible protonic ceramic electrochemical cells for power generation and fuel production. _Nat. Energy_ **4**, 230-240 (2019).
* [6] Hong, W. T., Risch, M., Stoerzinger, K. A. & Grimaud, A.
Toward the rational design of non-precious transition metal oxides for oxygen electrocatalysis. _Energy Environ. Sci._ **8**, 1404-1427 (2015).
* [7] Zhang, C. & Huang, K. Chapter 7. Solid-oxide metal-air redox batteries. In _Solid Oxide-Based Electrochemical Devices_ 217-250 (Elsevier, 2020). DOI: 10.1016/B978-0-12-818285-7.00007-1.
* [8] Eranna, G., Joshi, B. C., Runthala, D. P. & Gupta, R. P. Oxide Materials for Development of Integrated Gas Sensors--A Comprehensive Review. _Crit. Rev. Solid State Mater. Sci._ **29**, 111-188 (2004).
* [9] Hossain, M. M. & de Lasa, H. I. Chemical-looping combustion (CLC) for inherent CO2 separations--a review. _Chem. Eng. Sci._ **63**, 4433-4451 (2008).
* [10] Yang, J. J., Strukov, D. B. & Stewart, D. R. Memristive devices for computing. _Nat. Nanotechnol._ **8**, 13-24 (2013).
* [11] ...-O perovskites for solid oxide fuel cells and gas separation membranes. _Solid State Ion._ **135**, 719-725 (2000).
* [12] Funke, K. Solid State Ionics: from Michael Faraday to green energy--the European dimension. _Sci. Technol. Adv. Mater._ **14**, 43502 (2013).
* [13] Antono, E., Meredig, B. & Mulholland, G. J. Citrine Informatics ARPA-E Ionics Database. https://citrination.com/datasets/151085 (accessed August 18).
* [14] Waldow, S. & De Souza, R. Is excess faster than deficient? A molecular-dynamics study of oxygen-interstitial and oxygen-vacancy diffusion in CeO2. _J. Phys. Energy_ **2**, 024001 (2020).
* [15] Xu, S., Jacobs, R. & Morgan, D. Factors Controlling Oxygen Interstitial Diffusion in the Ruddlesden-Popper Oxide La2-xSrxNiO4+δ. _Chem. Mater._ **30**, 7166-7177 (2018).
* [16] Arikawa, H., Nishiguchi, H., Ishihara, T. & Takita, Y. Oxide ion conductivity in Sr-doped La10Ge6O27 apatite oxide. _Solid State Ion._ **136-137**, 31-37 (2000).
* [17] Schuett, J., Schultze, T. K. & Grieshammer, S. Oxygen Ion Migration and Conductivity in LaSrGa3O7 Melilites from First Principles. _Chem. Mater._ **32**, 4442-4450 (2020).
* [18] Skjærvø, S. H. _et al._ Interstitial oxygen as a source of p-type conductivity in hexagonal manganites. _Nat. Commun._ **7**, 7491 (2016).
* [19] Yashima, M. _et al._ High oxide-ion conductivity through the interstitial oxygen site in Ba7Nb4MoO20-based hexagonal perovskite related oxides. _Nat. Commun._ **12**, 1-7 (2021).
* [20] Strickler, D. W. & Carlson, W. G. Electrical Conductivity in the ZrO2-Rich Region of Several M2O3-ZrO2 Systems. _J. Am. Ceram. Soc._ **48**, 286-289 (1965).
* [21] Li, J. _et al._ Modulated structure determination and ion transport mechanism of oxide-ion conductor CeNbO4+δ. _Nat. Commun._ **11**, 1-9 (2020).
* [22] Pramana, S. S. _et al._ Correlation of Local Structure and Diffusion Pathways in the Modulated Anisotropic Oxide Ion Conductor CeNbO4.25. _J. Am. Chem. Soc._ **138**, 1273-1279 (2016).
* [23] Song, J., Ning, D., Boukamp, B., Bassat, J. M. & Bouwmeester, H. J. M. Structure, electrical conductivity and oxygen transport properties of Ruddlesden-Popper phases Lnn+1NinO3n+1 (Ln = La, Pr and Nd; n = 1, 2 and 3). _J. Mater. Chem. A_ **8**, 22206-22221 (2020).
* [24] Thomas, C. I. _et al._ Phase stability control of interstitial oxide ion conductivity in the La1+xSr1-xGa3O7+x/2 melilite family. _Chem. Mater._ **22**, 2510-2516 (2010).
* [25] Fop, S. _et al._ High oxide ion and proton conductivity in a disordered hexagonal perovskite. _Nat. Mater._ **19**, 752-757 (2020).
* [26] Mayeshiba, T. T. & Morgan, D. D. Factors controlling oxygen migration barriers in perovskites.
_Solid State Ion._ **296**, 71-77 (2016).
* [27] Jain, A. _et al._ Commentary: The Materials Project: A materials genome approach to accelerating materials innovation. _APL Mater._ **1**, 011002 (2013).
* [28] Pan, H. _et al._ Benchmarking Coordination Number Prediction Algorithms on Inorganic Crystal Structures. _Inorg. Chem._ **60**, 1590-1603 (2021).
* [29] Solodovnikov, S. F., Klevtsova, R. F., Kim, V. G. & Klevtsov, P. V. Double molybdates of composition Cs2R2(MoO4)3 (R = Ni, Co, Mg, Mn, Cd) and the crystal structure of Cs2Co2(MoO4)3. _J. Struct. Chem._ **27**, 928-933 (1986).
* [30] Ito, J. & Arem, J. E. Chevkinite and perrierite: synthesis, crystal growth and polymorphism. _American Mineralogist_ **56**, 307-319 (1971).
* [31] Tawid-Gueho, C., Leone, P., Palvadeau, P. & Rouxel, J. Synthesis and Structural Characterization of Two New Rare-Earth Manganese Germanates: CeMn2Ge4O12 and GdMnGe2O7. _J. Solid State Chem._ **143**, 145-150 (1999).
* [32] Gueho, C., Giaquinta, D., Mansot, J. L., Ebel, T. & Palvadeau, P. Structure and Magnetism of La4Mn5Si4O22 and La4V5Si4O22: Two New Rare-Earth Transition Metal Sorosilicates. _Chem. Mater._ **7**, 486-492 (1995).
* [33] Kumar, U., Yadav, D. & Upadhyay, S. Investigation of structural, optical, and magnetic properties of Nd-doped Sr2SnO4 Ruddlesden-Popper oxide. _J. Am. Ceram. Soc._ **103**, 5743-5757 (2020).
* [34] Kumar, U. & Upadhyay, S. Investigation of structural, optical and electrical properties of Sr2SnO4, Sr1.99Eu0.01SnO4 and Sr2Sn0.99Eu0.01O4 Ruddlesden-Popper oxide. _Mater. Res. Express_ **6**, 55805 (2019).
* [35] Islam, M. N., Ghosh, T. B., Chopra, K. L. & Acharya, H. N. XPS and X-ray diffraction studies of aluminum-doped zinc oxide transparent conducting films. _Thin Solid Films_ **280**, 20-25 (1996).
* [36] Lei, C., Simpson, M. F. & Virkar, A. V. Investigation of Ion and Electron Conduction in the Mixed Ionic-Electronic Conductor La-Sr-Co-Fe-Oxide (LSCF) Using Alternating Current (AC) and Direct Current (DC) Techniques. _J. Electrochem. Soc._ **169**, 014506 (2022).
* [37] Fundamentals of Thermophysical Properties. In _Computational Design of Engineering Materials: Fundamentals and Case Studies_ (eds. Wang, J. et al.) 198-263 (Cambridge University Press, 2023). DOI: 10.1017/9781108643764.008.
* [38] Endler-Schuck, C., Joos, J., Niedrig, C., Weber, A. & Ivers-Tiffee, E. The chemical oxygen surface exchange and bulk diffusion coefficient determined by impedance spectroscopy of porous La0.58Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes. _Solid State Ion._ **269**, 67-79 (2015).
* [39] Zohourian, R., Merkle, R. & Maier, J. Proton uptake into the protonic cathode material BaCo0.4Fe0.4Zr0.2O3-δ and comparison to protonic electrolyte materials. _Solid State Ion._ **299**, 64-69 (2017).
* [40] Bucher, E., Egger, A., Ried, P., Sitte, W. & Holtappels, P. Oxygen nonstoichiometry and exchange kinetics of Ba0.5Sr0.5Co0.8Fe0.2O3-δ. _Solid State Ion._ **179**, 1032-1035 (2008).
* [41] Jacobs, R. _et al._ Unconventional Highly Active and Stable Oxygen Reduction Catalysts Informed by Computational Design Strategies. _Adv. Energy Mater._ **12**, 2201203 (2022).
* [42] Huang, K., Tichy, R. S. & Goodenough, J. B. Superior Perovskite Oxide-Ion Conductor; Strontium- and Magnesium-Doped LaGaO3: I, Phase Relationships and Electrical Properties. _J. Am. Ceram. Soc._ **81**, 2565-2575 (1998).
* [43] Steele, B. C. H. Appraisal of Ce1-yGdyO2-y/2 electrolytes for IT-SOFC operation at 500 °C. _Solid State Ion._ **129**, 95-110 (2000).
* [44] _Solid State Ion._ **179**, 1032-1035 (2008).
* [45] Li, M. _et al._ A family of oxide ion conductors based on the ferroelectric perovskite Na0.5Bi0.5TiO3. _Nat. Mater._ **13**, 31-35 (2014).
* [46] Yang, X. _et al._ Cooperative mechanisms of oxygen vacancy stabilization and migration in the isolated tetrahedral anion Scheelite structure. _Nat. Commun._ **9**, (2018).

## Figures

Figure 3: (a) Bulk structure of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\). The La, Mn, Si, and O sites are shown as green, purple, blue, and red spheres, respectively. The black dashed line denotes the single unit cell. The yellow ball represents the \(O^{I}\) site and the cyan ball represents the \(O^{II}\) site, respectively. Interstitial oxygen (\(O_{i}\)) in LMS diffuses through both the interstitial mechanism (yellow arrow) and the interstitialcy (cooperative "knock-on") mechanism (cyan arrows). (b) Arrhenius plot of the tracer diffusivity \(D_{tracer}\) of oxygen predicted by _ab initio_ molecular dynamics (AIMD) and machine-learning-trained interatomic potential molecular dynamics (ML-IPMD) simulations. The error bars represent the standard deviation of \(D_{tracer}\) determined by multi-origin analysis (**Methods 3**).

Figure 4: (a) Room temperature X-ray diffraction pattern of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) (LMS) and corresponding Rietveld refinement. (b) The peak-fitting results of the O 1s XPS spectra. (c) Mass percentage and corresponding interstitial oxygen content (\(\delta\)) change during the thermogravimetric analysis of LMS between 50 \({}^{\circ}\)C and 750 \({}^{\circ}\)C under a 1 atm O\({}_{2}\) environment; the inset arrow marks the interstitial oxygen content of \(\delta\)=0.42 measured by the electron probe micro-analyzer (EPMA) analysis. (d) Arrhenius plots of the measured ionic conductivity of LMS compared with the leading oxygen ion conductors Zr\({}_{0.92}\)Y\({}_{0.08}\)O\({}_{2-\delta}\) [YSZ],\({}^{20}\) La\({}_{0.8}\)Sr\({}_{0.2}\)Ga\({}_{0.83}\)Mg\({}_{0.17}\)O\({}_{3-\delta}\) [LSGM],\({}^{42}\) Ce\({}_{0.9}\)Gd\({}_{0.1}\)O\({}_{1.95}\) [CGO],\({}^{43}\) (La\({}_{0.6}\)Sr\({}_{0.4}\))\({}_{0.95}\)Co\({}_{0.2}\)Fe\({}_{0.8}\)O\({}_{3-\delta}\) [LSCF - this work], Ba\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{0.8}\)Fe\({}_{0.2}\)O\({}_{3-\delta}\) [BSCF],\({}^{44}\) Na\({}_{0.5}\)Bi\({}_{0.49}\)Ti\({}_{0.98}\)Mg\({}_{0.02}\)O\({}_{2.965}\) [Mg-doped NBT perovskite],\({}^{45}\) Nd\({}_{2}\)NiO\({}_{4+\delta}\) [Ruddlesden-Popper],\({}^{23}\) La\({}_{9.75}\)Sr\({}_{0.25}\)Si\({}_{6}\)O\({}_{26.895}\) [Apatite],\({}^{16}\) La\({}_{1.54}\)Sr\({}_{0.46}\)Ga\({}_{3}\)O\({}_{7.27}\) [Melilite],\({}^{24}\) Ba\({}_{7}\)Nb\({}_{3.9}\)Mo\({}_{1.1}\)O\({}_{20.05}\) [hexagonal perovskite],\({}^{25}\) and Bi\({}_{0.975}\)Sr\({}_{0.025}\)VO\({}_{3.9875}\) [Scheelite].\({}^{46}\) (e) \(D_{chem}\) and (f) \(k_{chem}\) of LMS compared with La\({}_{2}\)NiO\({}_{4+\delta}\) (LaNiO), La\({}_{0.5}\)Sr\({}_{0.5}\)FeO\({}_{3}\) (LSF), La\({}_{0.5}\)Sr\({}_{0.5}\)CoO\({}_{3}\) (LSC), La\({}_{0.6}\)Sr\({}_{0.4}\)Co\({}_{0.2}\)Fe\({}_{0.8}\)O\({}_{3}\) (LSCF), Ba\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{0.8}\)Fe\({}_{0.2}\)O\({}_{3-\delta}\) (BSCF), and BaCo\({}_{0.7}\)Fe\({}_{0.22}\)Y\({}_{0.08}\)O\({}_{3-\delta}\) (BCF).

## Methods

### 1. Descriptor screening approach

Step (1) Free space. Crystallographic information files (CIF files) from the Materials Project database were used to calculate the free volume available for forming interstitials.
DFT-computed structures in the Materials Project are first analyzed using the Voronoi analysis method implemented in pymatgen[1] to find the potential interstitial sites. The distances from the interstitial site to its nearest neighboring cations, \(d_{c}\), and anions, \(d_{a}\), were used as the descriptors of the free volume needed for an interstitial oxygen anion O\({}^{2-}\). Specifically, the screening criteria were established as \(d_{c}\geq 0.99\) Å and \(d_{a}\geq 0.88\) Å. The distance criteria applied above were established by taking the minimum distances observed for the interstitial oxygens in Ruddlesden-Popper La\({}_{2}\)NiO\({}_{4}\) and apatite La\({}_{10}\)Si\({}_{6}\)O\({}_{27}\), which were taken as a guide to determine if there is sufficient room for an interstitial oxygen anion O\({}^{2-}\) in the material. Interstitial sites that met the screening criteria were identified as reasonable interstitial oxygen sites \(O_{i}\) for the next step, and only structures with such \(O_{i}\) sites were retained. After step (1), 16,455 compounds were retained, and the search space was reduced by 52%. Step (2) Short hop distance. The minimum distance between the two nearest neighboring \(O_{i}\) sites from step (1) was used as the criterion representing the hop distance. Only materials with this distance \(\leq 3\) Å were retained. This distance was chosen based on the shortest hop distance of interstitial oxygen in Ruddlesden-Popper La\({}_{2}\)NiO\({}_{4}\) and apatite La\({}_{10}\)Si\({}_{6}\)O\({}_{27}\), for which the hop distance is ~2.8 Å and ~2.7 Å, respectively. After step (2), 9,477 compounds were retained, and the search space was reduced by 72%. Step (3) Thermodynamic stability. Materials stability was assessed based on the energy relative to the convex hull (\(E_{hull}\)) as computed by the Materials Project. Only compounds with \(E_{hull}<100\) meV/atom (closed system) and \(E_{hull}<200\) meV/atom (system open to P(O\({}_{2}\)) = 0.2 atm and T = 300 K) were retained. These thresholds were chosen to be simple round numbers and within a few tens of meV/atom above the cutoffs that identify the Ruddlesden-Popper La\({}_{2}\)NiO\({}_{4}\) and apatite La\({}_{10}\)Si\({}_{6}\)O\({}_{27}\) as acceptably stable, since these are well-studied interstitial oxygen conductors known to be reasonably stable under many device conditions. Ruddlesden-Popper La\({}_{2}\)NiO\({}_{4}\) has \(E_{hull}\) = 76 meV/atom (closed system) and \(E_{hull}\) = 94 meV/atom (open to P(O\({}_{2}\)) = 0.2 atm and T = 300 K), and apatite La\({}_{10}\)Si\({}_{6}\)O\({}_{27}\) has \(E_{hull}\) = 54 meV/atom (closed system) and \(E_{hull}\) = 169 meV/atom (open to P(O\({}_{2}\)) = 0.2 atm and T = 300 K), respectively. Step (4) Oxidizability. The valence state of each site was predicted by the Bond Valence analysis[2] module incorporated in pymatgen. To ensure that the redox process of incorporating additional oxygen into the material has a low barrier, we proposed that the redox properties of the cations are critical. If the valence state of a cation is smaller than its maximum oxidation state, for example, a Mn atom at 2+ instead of 7+, there are free valence electrons associated with this cation, and the compound has a high redox potential and thus can be easily oxidized. Oxidizability is assessed by calculating the sum of the differences between the highest valence state and the actual valence state of each cation. Only compounds predicted to have oxidizable cations were retained; a minimal sketch of how the descriptors in steps (1), (2), and (4) can be evaluated is given below.
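To make the descriptor definitions concrete, the snippet below is a minimal sketch, assuming pymatgen, of how steps (1), (2), and (4) could be evaluated for a single structure; the file `LMS.cif` and the `candidate_fracs` list are illustrative placeholders for the Materials Project structures and the Voronoi-derived interstitial sites used in the actual screening pipeline.

```python
# Sketch of the geometric (steps 1-2) and oxidizability (step 4) descriptors.
import numpy as np
from pymatgen.core import Structure, Element
from pymatgen.analysis.bond_valence import BVAnalyzer

D_CAT_MIN, D_AN_MIN, HOP_MAX = 0.99, 0.88, 3.0      # thresholds in angstrom

structure = Structure.from_file("LMS.cif")          # placeholder structure
candidate_fracs = np.array([[0.50, 0.25, 0.50],
                            [0.50, 0.75, 0.60]])    # placeholder O_i sites

# Step (1): free space -- periodic distance from each candidate site to the
# nearest cation (d_c) and anion (d_a); in these oxides the anion is oxygen.
is_anion = np.array([s.specie.symbol == "O" for s in structure])
dists = structure.lattice.get_all_distances(candidate_fracs, structure.frac_coords)
keep = (dists[:, ~is_anion].min(axis=1) >= D_CAT_MIN) & \
       (dists[:, is_anion].min(axis=1) >= D_AN_MIN)
kept_fracs = candidate_fracs[keep]

# Step (2): short hop distance -- minimum periodic distance between any two
# retained candidate sites must not exceed ~3 angstrom.
passes_hop = False
if len(kept_fracs) > 1:
    hop = structure.lattice.get_all_distances(kept_fracs, kept_fracs)
    np.fill_diagonal(hop, np.inf)
    passes_hop = hop.min() <= HOP_MAX

# Step (4): oxidizability -- bond-valence-assigned charges compared with each
# cation's maximum oxidation state (BVAnalyzer may fail for some structures).
valences = BVAnalyzer().get_valences(structure)
headroom = sum(Element(s.specie.symbol).max_oxidation_state - v
               for s, v in zip(structure, valences) if s.specie.symbol != "O")
print(f"free-space sites: {keep.sum()}, hop ok: {passes_hop}, "
      f"oxidation headroom: {headroom}")
```

In the real workflow these descriptors were evaluated in bulk over the Materials Project database rather than one CIF at a time.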
Step (5) Synthesizability. Each material was checked for whether it has been reported as synthesized at least once in the Inorganic Crystal Structure Database (ICSD).[3] Compounds were retained only if the structure has an entry of experimental data in the ICSD. After steps (3)-(5), only 519 compounds were retained, and the search space was reduced by 98.5%. Step (6) Structure similarity analysis. For the remaining 519 compounds, a structure similarity analysis[4] based on local coordination information from all sites in two structures was performed to group these compounds into 345 structural families. One candidate from each structure group was selected for further validation with _ab initio_ studies. The selection of the candidate material followed a two-step process. First, the candidate was selected from the top of the stability ranking. If multiple materials shared the same level of stability, the candidate was subsequently selected by its oxidizability ranking.

### 2. Ab initio simulation screening approach

Step (7) Formation energy (\(E_{f}\)). Density functional theory (DFT) calculations were used to calculate the formation energy (\(E_{f}\)); it was evaluated for the interstitial site with the largest free space. Structure optimization was performed for the bulk structures from the Materials Project. Supercells were built from the relaxed bulk structure with an approximately minimum length of 8 Å along each direction to minimize defect self-interaction. The formation energy was calculated by

\(E_{f}=E_{(supercell\ with\ O_{i})}-E_{(supercell)}-\mu_{O}\) (1),

where \(\mu_{O}\) is the oxygen chemical potential under air conditions. The O chemical potential was calculated using a combination of DFT-calculated total energies and experimental thermodynamic data for O\({}_{2}\) gas at air conditions[5] with the following form:[6]

\(\mu_{O}=\frac{1}{2}\left[E_{O_{2}}^{VASP}+\Delta h_{O_{2}}^{0}+H(T,P^{0})-H(T^{0},P^{0})-TS(T,P^{0})+kT\ln\left(\frac{P}{P^{0}}\right)-\left(G_{O_{2}}^{s,vib}(T)-H_{O_{2}}^{s,vib}(T^{0})\right)\right]\) (2)

where \(E_{O_{2}}^{VASP}\) is the _ab initio_ calculated energy of an O\({}_{2}\) gas molecule, and \(\Delta h_{O_{2}}^{0}\) is a numerical correction that takes into account the temperature increase of O\({}_{2}\) gas from 0 K to \(T^{0}\), the contribution to the enthalpy at \(T^{0}\) when oxygen is in the solid phase, and the numerical error from the overbinding of the O\({}_{2}\) molecule in DFT. \(\Delta h_{O_{2}}^{0}\) is obtained from comparing calculated formation energies and experimental formation enthalpies of numerous oxides; we used \(\Delta h_{O_{2}}^{0}\) = 0.70 eV/O from Ref. [7]. \(H(T,P^{0})\) and \(H(T^{0},P^{0})\) are the gas enthalpy values at the general and standard temperatures \(T\) and \(T^{0}\), respectively. In this case, \(T^{0}\) is 298 K and \(T\) refers to room temperature, 300 K. \(TS(T,P^{0})\) is the gas entropy term, and the logarithmic term is the adjustment of the chemical potential for arbitrary pressure, where \(P\) and \(P^{0}\) are the referenced pressure and the standard pressure, respectively. In this case, the referenced pressure is 0.2 atm. The \(\left(G_{O_{2}}^{s,vib}(T)-H_{O_{2}}^{s,vib}(T^{0})\right)\) term accounts for the solid-phase vibrations, which are approximated with an Einstein model with an Einstein temperature of 500 K.[8] A minimal numerical sketch of eqs. (1)-(2) is given below.
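The following is a numerical sketch of eqs. (1)-(2); every input below is an illustrative placeholder (the DFT total energies and the tabulated enthalpy/entropy terms), except for the 0.70 eV/O correction quoted from Ref. [7].

```python
# Sketch of the oxygen chemical potential (eq. 2) and formation energy (eq. 1).
import numpy as np

k_B = 8.617333e-5                  # Boltzmann constant (eV/K)
T, P, P0 = 300.0, 0.2, 1.0         # room temperature (K), air P(O2), standard P (atm)

E_O2_vasp = -9.86                  # placeholder: DFT total energy of an O2 molecule (eV)
dh_O2 = 2 * 0.70                   # Delta h0_O2 correction per O2 (0.70 eV/O, Ref. [7])
dH = 0.0                           # H(T,P0) - H(T0,P0): ~0 for T close to T0 (placeholder)
TS = T * 2.13e-3                   # T*S(T,P0); S(O2) ~ 205 J/(mol K) ~ 2.13 meV/K
G_vib = 0.0                        # Einstein-model solid vibration term (placeholder)

mu_O = 0.5 * (E_O2_vasp + dh_O2 + dH - TS + k_B * T * np.log(P / P0) - G_vib)

E_defect = -609.48                 # placeholder: supercell energy with one O_i (eV)
E_bulk = -604.80                   # placeholder: pristine supercell energy (eV)
E_f = E_defect - E_bulk - mu_O     # eq. (1)
print(f"mu_O = {mu_O:.2f} eV, E_f = {E_f:.2f} eV")
```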
Step (8) Migration barrier \(E_{m}\). _Ab initio_ molecular dynamics (AIMD) was performed at 2000 K for 30 ps for the remaining 80 compounds to obtain an initial evaluation of the migration barrier \(E_{m}\). We used this initial run to observe the AIMD trajectories for hopping events and estimate the migration barrier based on the hop rate \(r\),

\(r=\nu\,\mathrm{e}^{-E_{m}/k_{b}T}\) (3),

where \(\nu\) is the attempt frequency, estimated as 5\(\times\)10\({}^{12}\) s\({}^{-1}\), \(k_{b}\) is the Boltzmann constant, and \(T\) is the temperature, respectively. The hop rate \(r\) was observed from the AIMD simulation, and an initial estimate for \(E_{m}\) could be determined by inverting eq. (3), \(E_{m}=k_{b}T\ln(\nu/r)\). A hop rate of 0.033/ps, i.e., 1 hop observed within a 30 ps AIMD simulation, corresponds to a migration barrier of 0.86 eV. Materials that had at least 1 hop observed within 30 ps at 2000 K were selected for further DFT studies with long AIMD runs at different temperatures to calculate oxygen tracer diffusion coefficients, which were fit to an Arrhenius form \(A\mathrm{e}^{-E_{m}/k_{b}T}\). All the calculations in this screening approach were performed with DFT using the Vienna ab initio Simulation Package (VASP) code.[9] The generalized gradient approximation exchange-correlation functional of Perdew, Burke, and Ernzerhof (GGA-PBE)[10] and the projector augmented wave method (PAW)[11] were used for the effective potential for all atoms. The valence electron configurations of the La, Mn, Si, and O atoms utilized in all calculations were 5s\({}^{2}\)5p\({}^{6}\)6s\({}^{2}\)5d\({}^{1}\), 3p\({}^{6}\)4s\({}^{2}\)3d\({}^{5}\), 3s\({}^{2}\)3p\({}^{2}\), and 2s\({}^{2}\)2p\({}^{4}\), respectively. The plane wave cutoff energy was 520 eV, and spin-polarized calculations were performed. For the defect formation energy calculations, the stopping criteria for total energy calculations were 0.01 meV/cell for the electronic relaxation and 0.05 eV/Å for the ionic relaxation, respectively. K-point meshes were automatically generated based on the structural volume with a k-point density of 0.04/Å\({}^{3}\) to ensure calculation accuracy. The AIMD simulations were performed using gamma-point-only sampling of k-space. The structure was first heated up to 2000 K within 0.3 ps in the NVT ensemble using the Andersen thermostat, and then simulated in the NVT ensemble using a Nose-Hoover thermostat[12, 13] for 30 ps.

### 3. AIMD simulation of \(O_{i}\) diffusion in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)

With approximately 2% \(O_{i}\) concentration, the oxygen ion diffusivity in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) was studied by _ab initio_ molecular dynamics simulation at temperatures from 1000 K to 2200 K, with a step of 200 K. The structure was first heated up to 2000 K within 0.3 ps in the NVT ensemble using the Andersen thermostat, and then simulated in the NVT ensemble using a Nose-Hoover thermostat[12, 13] for 300 ps at each temperature. No signs of melting at high temperatures were observed. No effort was made to correct for thermal expansion, as it was assumed the effect would be small. We used a multi-time-origin method[14] to calculate the average diffusion coefficient \(D\) along with its standard deviation to ensure good statistical averaging. The diffusion coefficient within a simulation time \(t\) was evaluated from the mean squared displacement by

\(D_{t}=\left[\frac{1}{6t}\,\mathrm{MSD}\right]_{t}\) (4),

where \(t\) is the simulation time. We evaluate \(D_{t_{i}}\) for all times from \(t_{i}\) to \(t_{i}\)+120 ps, where \(t_{i}\) runs from 0 to 180 ps with a step of 1.2 ps (a minimal sketch of this estimator, together with the Arrhenius fit described next, is given below).
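A minimal sketch of the windowed, multi-time-origin estimator and the subsequent Arrhenius fit; the MSD trace and the diffusivity-temperature series are synthetic stand-ins for the AIMD outputs.

```python
# Sketch of the multi-time-origin diffusivity estimate (eq. 4) and the
# Arrhenius fit (eq. 5); all arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 300.0, 0.1)                          # simulation time (ps)
msd = 6.0 * 5.0e-2 * t + rng.normal(0.0, 2.0, t.size)   # synthetic MSD (A^2)

window, step = 120.0, 1.2                               # ps, as in the text
D_samples = []
for t0 in np.arange(0.0, 180.0 + 1e-9, step):           # time origins t_i
    m = (t >= t0) & (t <= t0 + window)
    slope = np.polyfit(t[m], msd[m], 1)[0]              # windowed slope of MSD vs t
    D_samples.append(slope / 6.0)                       # D_{t_i} = slope / 6
D_samples = np.asarray(D_samples)
D, D_std = D_samples.mean(), D_samples.std(ddof=1)      # average and std over origins
print(f"D = {D:.3e} +/- {D_std:.1e} A^2/ps")

# Arrhenius fit (eq. 5): ln D = ln D0 - E_m / (k_b * T)
k_b = 8.617333e-5                                       # eV/K
temps = np.array([1400.0, 1600.0, 1800.0, 2000.0, 2200.0])   # K (placeholder set)
D_T = 1.0e-1 * np.exp(-0.44 / (k_b * temps))            # placeholder diffusivities
slope, lnD0 = np.polyfit(1.0 / (k_b * temps), np.log(D_T), 1)
print(f"E_m = {-slope:.2f} eV")                         # recovers 0.44 eV here
```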
Then, the diffusion coefficient \(D\) and its standard deviation \(D_{std}\) were calculated from all the sampled \(D_{t_{i}}\) by \(D=\frac{\sum D_{t_{i}}}{n}\) and \(D_{std}=\sqrt{\frac{\sum(D-D_{t_{i}})^{2}}{n-1}}\), where \(n\) is the total number of \(t_{i}\). Then, the migration barrier was calculated by fitting the Arrhenius relationship using the diffusion coefficients at the five temperatures,

\(D=D_{0}\,\mathrm{e}^{-E_{m}/k_{b}T}\) (5),

where \(D_{0}\) is the pre-exponential factor, \(k_{b}\) is the Boltzmann constant, and \(T\) is the temperature. Finally, the ionic conductivity was calculated from the diffusion coefficient based on the Nernst-Einstein equation,

\(\sigma=\frac{cz^{2}F^{2}}{RT}D\) (6),

where \(c\) is the volume concentration of oxygen species, \(z\) is the charge of each oxygen ion, \(F\) is the Faraday constant, \(R\) is the gas constant, and \(T\) is the temperature.

### 4. ML-IPMD simulations

The machine learning interatomic potential (ML-IP) was trained by the moment tensor potential (MTP) method.[15, 16] The training data were obtained from the AIMD trajectories from 1000 K to 2600 K with an interval of 200 K. For each temperature, we collected 200 structures with a time interval of 0.12 ps to avoid highly similar structures. An optimized MTP is then obtained by minimizing the errors in the predicted energies, forces, and stresses with respect to the DFT data. We set weights of 100:1:0 for the energy, force, and stress data points, following previous works.[17, 18] The radius cutoff was set to 5.0 Å, a typical value used in previously reported MTPs, and the maximum level of the basis functions was set to 20. All the training and evaluations were performed using the Machine-Learning Interatomic Potentials (MLIP) package.[19] Classical MD simulations were performed using the trained MTP. The time step was set to 1 fs, and the total simulation time was 1 ns for temperatures above 1200 K, 5 ns at 1000 K, 20 ns at 800 K, and 100 ns at 600 K, respectively.

### 5. Synthesis

Stoichiometric quantities of highly pure La\({}_{2}\)O\({}_{3}\) (Alfa Aesar, 99.99%), MnO\({}_{2}\) (Acros Organics, > 99.99%), and SiO\({}_{2}\) (Alfa Aesar, 99.9%) were mixed with KCl (Alfa Aesar, 99 - 100.5%) flux. The flux-to-reactants molar ratio was 28.5:1 (the mass ratio was 1.6:1). The reaction mixture was heated in a covered alumina crucible (30 ml) at 900 \({}^{\circ}\)C for 6 days. The sample was slowly cooled down to 500 \({}^{\circ}\)C at a cooling rate of 20 \({}^{\circ}\)C/h and then further cooled down to room temperature at a cooling rate of 95 \({}^{\circ}\)C/h. The resultant mixture was washed with deionized water several times to remove the KCl flux, and the obtained powder was then dried on a hot plate at 120 \({}^{\circ}\)C in air. The LMS powder was ground to a fine powder in an acetone medium using a porcelain mortar and pestle, pelletized into a rectangular bar using polyvinyl alcohol as a binder, and sintered at 1050 \({}^{\circ}\)C for 24 h.

### 6. Characterization

The structural characterization of the LMS powder and pellet samples was performed using the room temperature X-ray diffraction (XRD) technique with a Cu-K\(\alpha\) source (Bruker D8 Discovery), followed by Rietveld refinement using the Fullprof code.[20] The field emission scanning electron microscopy (FESEM) image of the pellet was collected using a high-resolution microscope (Zeiss 1530).
The UV-vis diffuse reflectance of the LMS pellet was measured using a spectrophotometer (Perkin Elmer Lambda 19 UV/Vis/NIR), and the optical band gap was determined using the Kubelka-Munk equation[21] and a Tauc plot.[22] X-ray photoemission spectroscopy analysis was performed on the sintered LMS pellet surface using a Thermo K-Alpha X-ray photoelectron spectrometer (Al K\(\alpha\) source). The pellet surface was cleaned by Ar sputtering inside the XPS chamber at ultra-high vacuum (~10\({}^{-9}\) torr). The spot size of the X-ray beam was 400 micrometers.

### 7. EPMA analysis

The chemical composition of the polished LMS pellet was measured using a CAMECA SX-Five FE-EPMA operated at 12 kV accelerating voltage and 20 nA beam current. The lowest possible accelerating voltage was selected that would minimize the interaction volume while also providing adequate overvoltage for excitation of Mn K\(\alpha\). Samples were mounted in epoxy and polished with a colloidal alumina suspension. Samples and standards were coated with 1 nm Ir immediately prior to analysis. The O K\(\alpha\) X-rays were measured with a PCO crystal (2d = 47.12 Å), Si K\(\alpha\) with LTAP (thallium acid phthalate, large format), Mn K\(\alpha\) with LLIF (lithium fluoride, large format), and La L\(\alpha\) with LPET (pentaerythritol, large format). Pulse height analysis was operated in differential mode for O and Si to avoid high-order reflections from La. The spectral resolution of the PCO crystal on the 160 mm radius Rowland circle was adequate to resolve the O K\(\alpha\) peak from interference with the Mn L lines without requiring a direct interference correction. Measurements included a 20 s peak counting time and a 10 s counting time for each high and low background position. The electron beam was fully focused for spot measurements, with a practical spot diameter of approximately 250 nm. The accuracy of individual measurements was evaluated based on the quality of the analytical total. The average atomic percentages were determined from the analysis of 14 individual grains on the sample surface, with <0.5 wt% standard deviation for each element, and used to calculate the bulk atomic formula (**Table S6**).

### 8. Thermogravimetric analysis

Thermogravimetric analysis (TGA) was performed in oxygen over the temperature range from 50 to 750 \({}^{\circ}\)C at a heating and cooling rate of 3 \({}^{\circ}\)C/min using a TGA analyzer (TA Instruments Q500). The ground LMS powder was heated in two cycles to remove any adsorbed species; the third cycle is plotted in **Fig. 4c**. The room temperature oxygen content for LMS was taken from the EPMA results. A small mass correction (% of total mass) was performed to compensate for a sudden mass jump at the beginning of the heating cycle. We speculate that this abrupt fluctuation is associated with the instrument heater, as similar behavior was observed in other materials as well. The structural stability of the sample was also confirmed by performing XRD analysis after the TGA measurements.

### 9. Conductivity measurement

The total conductivity of LMS was measured in air by the conventional DC 4-probe method using a Keithley 6221 current source and a 2182A nanovoltmeter. Pt wire and silver paste were used for the electrical connections. Pt wires were used as the current and voltage leads. The current leads were connected to the two ends of the LMS pellet using Ag paste.
The voltage leads were wrapped around the LMS pellet at equal distances from the two ends, with Ag paste filling the gap between the Pt wire and the LMS pellet to ensure a good electrical connection. For all the conductivity measurements, we used currents in the range of 1 to 10 \(\upmu\)A. The electronic conductivity was measured using Au as the ion-blocking electrode at the two ends of the polished LMS pellet.[23] We first deposited Au (300 nm) on one cross-section of the LMS pellet using an Au sputtering unit (Leica ACE600) and heated it at 600 \({}^{\circ}\)C for 2 hours, and then repeated this for the other cross-section. Afterwards, we deposited a thick Au layer (approximately 200 microns) on both ends using high-purity Au paste and heated it at 900 \({}^{\circ}\)C for 2 hours. A pseudo-4-probe method was used to measure the electronic conductivity, where both the current and voltage leads were connected to the Au electrode terminals. The ionic conductivity measurement was performed using 8YSZ blocks as the electron-blocking electrodes.[23] A schematic of the assembly is displayed in **Fig. S8**, along with the details of the setup in the supplementary file. The ionic conductivity of LMS was measured in air from 600 to 750 \({}^{\circ}\)C. To measure the resistance at a particular temperature, we measured the voltage across the LMS pellet at different currents and determined the resistance from the linear region of the I-V curve, as shown in **Fig. S10**. The non-linearity in the higher-current region of the I-V curve indicates the presence of ionic conductivity in LMS.[23] Conductivity measurements were conducted on three LMS pellets, and the averaged conductivity along with its standard deviation is displayed in **Fig. S7**.

### 10. Electrical conductivity relaxation (ECR)

The ECR study was performed using an LMS pellet of length ~15 mm, width ~6 mm, and thickness ~0.6 mm. A vertical, sealed alumina tube was used as the ECR chamber. The sintered LMS pellet was kept floating inside the chamber using 4 Pt wires passing through a 4-bore alumina tube. The excess space of the ECR chamber was filled with alumina balls of average diameter ~3 mm to reduce the effective chamber volume for faster gas exchange. The same electrical connection method described for the total conductivity measurement was used. 21% and 5% oxygen balanced with N\({}_{2}\), at a total flow of 300 SCCM, was used to create the different P(O\({}_{2}\)) conditions inside the chamber. A 4-way gas switching valve was installed for fast gas switching at the sample chamber. The oxygen partial pressure was abruptly changed from 5% to 21% and vice versa at 750, 700, 650, and 600 \({}^{\circ}\)C, and the transient conductivity was measured. The normalized and fitted ECR data during oxidation and reduction at different temperatures are shown in **Fig. S11a-b**. The value of \(D_{chem}\) and \(k_{chem}\) of LMS at a fixed temperature was determined as the average of the values during oxidation and reduction at that temperature. The fitting details are presented in **SI Discussion 7**.

### 11. Fittings of experimental data

The experimental activation energy of oxygen ion conduction in LMS was fitted over three separate measurements conducted on three LMS pellets using the Arrhenius relation \(\sigma T=\sigma_{0}\,\mathrm{e}^{-E_{A}/k_{B}T}\), from which the average activation energy is 0.72 eV with a standard deviation of 0.03 eV across the three LMS pellets. A minimal sketch of these fits and the derived quantities used in the Discussion is given below.
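The sketch below illustrates, on synthetic placeholder data, the analysis chain of this section: the Arrhenius fit of \(\sigma T\), the Nernst-Einstein conversion to \(D^{*}\), and the thermodynamic-factor relations \(D_{chem}=\gamma D^{*}\) and \(k_{chem}=\gamma k^{*}\); none of the numbers are measured values.

```python
# Sketch of the Methods 11 fits with synthetic data.
import numpy as np

k_B = 8.617333e-5               # Boltzmann constant (eV/K)
R, F, z = 8.314, 96485.0, 2.0   # gas constant, Faraday constant, O ion charge

T = np.array([873.0, 923.0, 973.0, 1023.0])        # 600-750 C in K
sigma = (1.0e3 / T) * np.exp(-0.72 / (k_B * T))    # placeholder sigma_ion (S/cm)

# ln(sigma*T) = ln(sigma_0) - E_A / (k_B * T): the linear-fit slope gives -E_A
slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(sigma * T), 1)
print(f"E_A = {-slope:.2f} eV")                    # recovers 0.72 eV here

# Nernst-Einstein at 750 C: D* = sigma * R * T / (c * z^2 * F^2)
c = 4.5e-2                                         # placeholder O concentration (mol/cm^3)
D_star = sigma[-1] * R * T[-1] / (c * z**2 * F**2)

# Thermodynamic factor and tracer surface exchange from placeholder ECR values
D_chem, k_chem = 1.0e-5, 1.0e-4                    # placeholders (cm^2/s, cm/s)
gamma = D_chem / D_star
k_star = k_chem / gamma
print(f"D* = {D_star:.2e} cm^2/s, gamma = {gamma:.1f}, k* = {k_star:.2e} cm/s")
```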
The experimental activation energies of the chemical oxygen diffusivity \(D_{chem}\) and the surface exchange coefficient were obtained by fitting the Arrhenius relations \(D_{chem}=D_{chem}^{0}\,\mathrm{e}^{-E_{A}/k_{B}T}\) and \(k_{chem}=k_{chem}^{0}\,\mathrm{e}^{-E_{A}/k_{B}T}\). The \(D_{chem}\) and \(k_{chem}\) were obtained from the ECR analysis on one LMS pellet. The activation energy is 0.70\(\pm\)0.01 eV for \(D_{chem}\) and 0.82\(\pm\)0.01 eV for \(k_{chem}\), respectively, where the error bars are the standard deviations representing the goodness of fit. The tracer diffusion coefficient \(D^{*}\) was derived from the Nernst-Einstein equation \(D^{*}=\frac{\sigma RT}{cz^{2}F^{2}}\) using the experimental conductivity \(\sigma\) and the experimental oxygen volume concentration \(c\) from the TGA results.

## Data availability

Source data and data that support the plots within this paper are available on Figshare.

## Acknowledgements

This work was funded by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award # DE-SC0020419. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number ACI-1548562.

## Author contributions

J.M. and S.S. contributed equally to this work. R.J. and D.M. conceived and managed the project. J.M. performed the screening, ab initio calculations, and theoretical analyses with assistance from D.M. and R.J. S.S. performed the synthesis, characterization, conductivity, and kinetic measurements. J.L. helped with the ECR analysis and contributed to scientific discussions. W.O.N. performed the EPMA analysis. X.L. trained the machine learning interatomic potential. J.M. wrote the first version of the manuscript with input from S.S. R.J. and D.M. reviewed the manuscript. All authors have reviewed and commented on the manuscript.

## Competing interests

The authors declare no competing interests.

## References

* [1] Okabe, A., Boots, B., Sugihara, K. & Chiu, S. N. _Spatial Tessellations: Concepts and Applications of Voronoi Diagrams_ (John Wiley, 2000).
* [2] O'Keeffe, M. & Brese, N. E. Atom sizes and bond lengths in molecules and crystals. _J. Am. Chem. Soc._ **113**, 3226-3229 (1991).
* [3] Belsky, A., Hellenbrandt, M., Karen, V. L. & Luksch, P. New developments in the Inorganic Crystal Structure Database (ICSD): accessibility in support of materials research and design. _Acta Crystallogr. B: Struct. Sci. Cryst._ **58**, 364-369 (2002).
* [4] Pan, H. _et al._ Benchmarking Coordination Number Prediction Algorithms on Inorganic Crystal Structures. _Inorg. Chem._ **60**, 1590-1603 (2021).
* [5] Linstrom, P. J. & Mallard, W. G. (eds.) NIST Chemistry WebBook, NIST Standard Reference Database Number 69. _National Institute of Standards and Technology, Gaithersburg, MD, 20899_ (2023).
* [6] Jacobs, R. M., Booske, J. H. & Morgan, D. Intrinsic defects and conduction characteristics of Sc2O3 in thermionic cathode systems. _Phys. Rev. B_ **86**, 1-10 (2012).
* [7] Wang, L., Maxisch, T. & Ceder, G. Oxidation energies of transition metal oxides within the GGA+U framework. _Phys. Rev. B_ **73**, 195107 (2006).
* [8] Lee, Y.-L., Kleis, J., Rossmeisl, J. & Morgan, D. Ab initio energetics of LaBO3(001) (B = Mn, Fe, Co, and Ni) for solid oxide fuel cell cathodes. _Phys. Rev. B_ **80**, 224101 (2009).
* [9] Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. _Phys. Rev. B_ **54**, 11169-11186 (1996).
* [10] Perdew, J., Burke, K.
& Ernzerhof, M. Generalized Gradient Approximation Made Simple. _Phys. Rev. Lett._ **77**, 3865-3868 (1996).
* [11] Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. _Phys. Rev. B_ **59**, 1758-1775 (1999).
* [12] Nosé, S. Constant Temperature Molecular Dynamics Methods. _Prog. Theor. Phys. Suppl._ **103**, 1-46 (1991).
* [13] Hoover, W. G. Canonical dynamics: Equilibrium phase-space distributions. _Phys. Rev. A_ **31**, 1695-1697 (1985).
* [14] Barnard, L. & Morgan, D. Ab initio molecular dynamics simulation of interstitial diffusion in Ni-Cr alloys and implications for radiation induced segregation. _J. Nucl. Mater._ **449**, 225-233 (2014).
* [15] Shapeev, A. V. Moment Tensor Potentials: A Class of Systematically Improvable Interatomic Potentials. _Multiscale Model. Simul._ **14**, 1153-1173 (2016).
* [16] Gubaev, K., Podryabinkin, E. V., Hart, G. L. W. & Shapeev, A. V. Accelerating high-throughput searches for new alloys with active learning of interatomic potentials. _Comput. Mater. Sci._ **156**, 148-156 (2019).
* [17] Zuo, Y. _et al._ Performance and Cost Assessment of Machine Learning Interatomic Potentials. _J. Phys. Chem. A_ **124**, 731-745 (2020).
* [18] Li, X.-G., Chen, C., Zheng, H., Zuo, Y. & Ong, S. P. Complex strengthening mechanisms in the NbMoTaW multi-principal element alloy. _NPJ Comput. Mater._ **6**, 70 (2020).
* [19] Novikov, I. S., Gubaev, K., Podryabinkin, E. V. & Shapeev, A. V. The MLIP package: moment tensor potentials with MPI and active learning. _Mach. Learn. Sci. Technol._ **2**, 025002 (2021).
* [20] Rodriguez-Carvajal, J. Recent advances in magnetic structure determination by neutron powder diffraction. _Phys. B: Condens. Matter_ **192**, 55-69 (1993).
* [21] Sangiorgi, N., Aversa, L., Tatti, R., Verucchi, R. & Sanson, A. Spectrophotometric method for optical band gap and electronic transitions determination of semiconductor materials. _Opt. Mater. (Amst.)_ **64**, 18-25 (2017).
* [22] Tauc, J., Grigorovici, R. & Vancu, A. Optical Properties and Electronic Structure of Amorphous Germanium. _Phys. Status Solidi (b)_ **15**, 627-637 (1966).
* [23] Lei, C., Simpson, M. F. & Virkar, A. V. Investigation of Ion and Electron Conduction in the Mixed Ionic-Electronic Conductor La-Sr-Co-Fe-Oxide (LSCF) Using Alternating Current (AC) and Direct Current (DC) Techniques. _J. Electrochem. Soc._ **169**, 014506 (2022).

## Computational Discovery of Fast Interstitial Oxygen Conductors

Jun Meng\({}^{1,4}\), Md Sariful Sheikh\({}^{1,4}\), Ryan Jacobs\({}^{1}\), Jian Liu\({}^{2}\), William O. Nachlas\({}^{3}\), Xiangguo Li\({}^{1}\), Dane Morgan\({}^{1,*}\)

_1 Department of Materials Science and Engineering, University of Wisconsin-Madison, Madison, WI, USA._
_2 DOE National Energy Technology Laboratory, Morgantown, WV, USA._
_3 Department of Geoscience, University of Wisconsin-Madison, Madison, WI, USA._
_4 These authors contributed equally: Jun Meng, Md Sariful Sheikh._
_* E-mail: [email protected]._

### Table of Contents

* Figure S6. Diffuse reflectance spectroscopy of La\({}_{4}\)Mn\({}_{4.69}\)Si\({}_{4.03}\)O\({}_{22.42}\) (LMS) pellet and the determined optical band gap
* Figure S7. Measured total conductivity, electronic conductivity, and ionic conductivity of La\({}_{4}\)Mn\({}_{4.69}\)Si\({}_{4.03}\)O\({}_{22.42}\) (LMS)
* Figure S8. A schematic of the ionic conductivity measurement method using the conventional DC 4-probe method with 8YSZ (8 mol% yttria-stabilized zirconia) electron blocking on both ends
* Figure S9.
Temperature-dependent ionic conductivity of La\({}_{0.6}\)Sr\({}_{0.4}\)Co\({}_{0.2}\)Fe\({}_{0.8}\)O\({}_{3-\delta}\) (LSCF) measured using 8YSZ (8 mol% yttria-stabilized zirconia) electron blocking in this work compared with the literature-reported ionic conductivity of LSCF
* Figure S10. Current-voltage characteristics across LMS in the Au/LSCF/YSZ/LMS/LSCF/YSZ/Au system at temperatures from 600 to 750 \({}^{\circ}\)C
* Figure S11. Normalized and fitted electrical conductivity relaxation (ECR) data during oxidation and reduction at temperatures from 600 to 750 \({}^{\circ}\)C; the fitted D\({}_{chem}\), k\({}_{chem}\), and Bi-number during oxidation and reduction; and the derived thermodynamic factor of LMS at temperatures from 600 to 750 \({}^{\circ}\)C
* Figure S12. Room temperature X-ray diffraction (XRD) pattern of the LMS pellet before and after the electrical conductivity relaxation (ECR) study
* Figure S13. The tracer diffusion coefficient D\({}^{*}\) and the tracer surface exchange coefficient k\({}^{*}\) of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) compared to the state-of-the-art materials
* Discussion
  * 1. Electrical property and defect formation energy with different DFT functionals
  * 2. DFT studies of oxygen defects in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)
  * 3. Calculated spin state of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)
  * 4. Climbing Image Nudged Elastic Band (CI-NEB) calculation
  * 5. Iodometric titration
  * 6. Preparation of the ionic conductivity measurement using YSZ blocks
  * 7. Fitting of the ECR data
  * 8. Determination of average D\({}_{chem}\) and k\({}_{chem}\) of state-of-the-art materials from the literature
* References

## Supplementary Tables

**Table S1.** Materials list with interstitial oxygen formation energy \(E_{f}\leq 0.3\) eV under atmospheric conditions (T = 300 K, P(O\({}_{2}\)) = 0.2 atm) by _ab initio_ calculation.
\begin{tabular}{c c c c c c} \hline \hline **Materials-ID** & **Formula** & \(\mathbf{E_{f}}\) (eV) & **Materials-ID** & **Formula** & \(\mathbf{E_{f}}\) (eV) \\ \hline mp-23349 & BiB3O6 & -3.85 & mp-772957 & SrV4O10 & -0.42 \\ \hline mp-1196071 & Ba2Fe2O5 & -3.51 & mp-1196110 & SrCuTe2O7 & -0.34 \\ \hline mp-1204837 & NaFe2Si6O15 & -3.25 & mp-1200170 & Ba5Cr3O13 & -0.34 \\ \hline mp-23356 & Bi4B2O9 & -2.73 & mp-23446 & GeBi2O5 & -0.33 \\ \hline mp-555752 & NaFe2Mo3O12 & -2.68 & mp-1195799 & K2Fe2B2O7 & -0.32 \\ \hline mp-1199587 & Yb2VO5 & -2.28 & mp-556076 & Sr2Co2O5 & -0.30 \\ \hline mp-555924 & Ca5Nb5O17 & -2.10 & mp-29189 & VHg2O4 & -0.29 \\ \hline mp-542931 & Bi2B8O15 & -2.05 & mp-667343 & Re2Hg5O10 & -0.29 \\ \hline mp-559364 & SrBi2B4O10 & -2.00 & mp-1204772 & Co2As2O7 & -0.27 \\ \hline mp-29508 & LiMo3O9 & -1.91 & mp-558472 & SrCu2B2O6 & -0.23 \\ \hline mp-744682 & Cr8Bi4O29 & -1.69 & mp-705159 & K2Co2Mo3O12 & -0.23 \\ \hline mp-29058 & V3Bi6O16 & -1.61 & mp-563010 & RbFeMo2O8 & -0.20 \\ \hline mp-18907 & Ca2MnAlO5 & -1.55 & mp-6496 & Ba2NaCu3O6 & -0.17 \\ \hline mp-1194512 & V2Cu3O9 & -1.47 & mp-29112 & CrHg5O6 & -0.15 \\ \hline mp-630403 & Ca2MnGaO5 & -1.45 & mp-18924 & Sr3Fe2O6 & -0.15 \\ \hline mp-558429 & NaFe4Mo5O20 & -1.37 & mp-558751 & CaBi2O4 & -0.14 \\ \hline mp-19290 & Mn2As2O7 & -1.33 & mp-6027 & Ba2Ti2CuO6 & -0.08 \\ \hline mp-559180 & Ba2CuB2O6 & -1.28 & mp-704097 & Sr6Co4Bi2O15 & -0.05 \\ \hline mp-22113 & Ca2Fe2O5 & -1.25 & mp-1203275 & Co2AsO5 & -0.04 \\ \hline mp-1003437 & KMn2O4 & -1.24 & mp-1203433 & Hg3AsO5 & -0.04 \\ \hline mp-21926 & SrFe2O4 & -1.23 & mp-647862 & Cr2Mo3O12 & -0.04 \\ \hline mp-753258 & Li3CrO4 & -1.13 & mp-29048 & SrBi2O4 & 0.00 \\ \hline mp-19165 & BaFeSi4O10 & -1.06 & mp-1190373 & K2V2CoO7 & 0.01 \\ \hline mp-29259 & Bi2PdO4 & -1.06 & mp-554698 & K10MnMo7O27 & 0.07 \\ \hline mp-556203 & La8Ni4O17 & -1.06 & mp-17387 & LiVAsO5 & 0.09 \\ \hline mp-20161 & Na2CoGeO4 & -0.91 & mp-18893 & Ca2Mn3O8 & 0.12 \\ \hline mp-18096 & Na2CoSi4O10 & -0.91 & mp-559685 & V2Cd4Te3O15 & 0.15 \\ \hline mp-18926 & La3Ni2O7 & -0.90 & mp-1200219 & V4Cr2O13 & 0.16 \\ \hline mp-21635 & CeMn2Ge4O12 & -0.87 & mp-546111 & Cr3AgO8 & 0.18 \\ \hline mp-1194618 & Ba4Y2Fe2O11 & -0.87 & mp-560340 & La2Pd2O5 & 0.18 \\ \hline mp-19228 & K2MnV4O12 & -0.82 & mp-1200054 & V4Fe2O13 & 0.19 \\ \hline mp-505042 & CuBi2O4 & -0.78 & mp-541433 & CdBi2O4 & 0.21 \\ \hline mp-550998 & TiZnBi2O6 & -0.73 & mp-19142 & Mn2V2O7 & 0.23 \\ \hline mp-541464 & La4Mn5Si4O22 & -0.70 & mp-639811 & KIrO3 & 0.24 \\ \hline mp-558316 & La4Ni3O10 & -0.69 & mp-19395 & MnO2 & 0.25 \\ \hline \end{tabular}

\begin{table}
\begin{tabular}{c c c} \hline \hline **Functional** & \(E_{f}\) (eV) & **Optical Indirect Band Gap (eV)** \\ \hline GGA & -0.70 & 0.44 \\ \hline GGA+U (U = 3.9) & 1.13 & 1.02 \\ \hline SCAN & -0.11 & 0.72 \\ \hline HSE (\(\alpha\) = 0.10) & 0.73 & 0.73 \\ \hline HSE (\(\alpha\) = 0.15) & 1.02 & 1.11 \\ \hline HSE (\(\alpha\) = 0.20) & 1.31 & 1.54 \\ \hline HSE (\(\alpha\) = 0.25) & 1.57 & 1.93 \\ \hline Experiment & & 0.79 \\ \hline \hline \end{tabular}
\end{table} Table S3: Interstitial oxygen formation energy \(E_{f}\) in bulk La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) under an atmospheric environment (T = 300 K, P(O\({}_{2}\)) = 0.2 atm), and the optical band gap of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) calculated by different exchange and correlation functionals and from experimental measurement.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameters & Rietveld refined data & Reference data \\ \hline a (Å) & 14.0461 (9) & 14.024 \\ b (Å) & 5.5836 (4) & 5.571 \\ c (Å) & 11.7299 (8) & 11.703 \\ \(\alpha\) & 90\({}^{\rm o}\) & 90\({}^{\rm o}\) \\ \(\beta\) & 114.3579 (13)\({}^{\rm o}\) & 114.34\({}^{\rm o}\) \\ \(\gamma\) & 90\({}^{\rm o}\) & 90\({}^{\rm o}\) \\ \hline \hline \end{tabular}
\end{table} Table S5: Rietveld refined lattice parameters of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\).

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & **La** & **Mn** & **Si** & **O** & **Formula** \\ \hline Ideal & 11.43\% & 14.29\% & 11.43\% & 62.86\% & La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) \\ \hline EPMA & 11.39\(\pm\)0.14\% & 13.34\(\pm\)0.09\% & 11.46\(\pm\)0.29\% & 63.81\(\pm\)0.43\% & La\({}_{4}\)Mn\({}_{4.69}\)Si\({}_{4.03}\)O\({}_{22+0.42}\) \\ \hline Titration & 11.39\% (EPMA) & 13.34\% (EPMA) & 11.46\% & 63.85\(\pm\)0.07\% & La\({}_{4}\)Mn\({}_{4.69}\)Si\({}_{4.03}\)O\({}_{22+0.47}\) \\ \hline \hline \end{tabular}
\end{table} Table S6: Atomic percentage values along with the standard deviations measured by the electron probe micro-analyzer (EPMA) and the formulas derived from EPMA and iodometric titration analysis compared with the ideal results.

Figure S2: Energy landscape of the (a, c) interstitial diffusion and (b, d) interstitialcy diffusion calculated by the GGA-PBE and SCAN functionals, respectively.

Figure S5: XPS survey spectrum of LMS, showing the presence of La, Mn, Si, and O atoms without the presence of any detectable impurity elements.

Figure S6: (a) Diffuse reflectance spectroscopy of the LMS pellet. (b) The optical band gap of LMS determined from the diffuse reflectance spectroscopy.

Figure S8: A schematic of the ionic conductivity measurement method using the conventional DC 4-probe method with 8YSZ (8 mol% yttria-stabilized zirconia) electron blocking on both ends.

Figure S10: Current-voltage characteristics across LMS in one Au/LSCF/YSZ/LMS/LSCF/YSZ/Au system at (a) 750 \({}^{\circ}\)C, (b) 700 \({}^{\circ}\)C, (c) 650 \({}^{\circ}\)C, and (d) 600 \({}^{\circ}\)C. The error bars represent the standard deviation obtained from ~150 measurements at each fixed current from one sample. The error bars are difficult to see as they are smaller than the size of the symbols.

Figure S11: Normalized and fitted electrical conductivity relaxation (ECR) data of one LMS sample during the (a) oxidation and (b) reduction at different temperatures. The fitted (c) \(D_{chem}\), (d) \(k_{chem}\), and (e) calculated Bi-number during oxidation and reduction. (f) Calculated thermodynamic factor at the studied temperatures. The error bars of the fitted values (with 95% confidence, i.e., \(\pm 2\) standard deviations) are smaller than the size of the symbols. Please see the details of the fitting of the ECR data in Discussion 7.

Figure S12: Room temperature X-ray diffraction (XRD) pattern of the LMS pellet before and after the electrical conductivity relaxation (ECR) study confirms the structural stability. The XRD profile also demonstrates enhanced grain orientation on the pellet surface along the (001) direction.

## Discussion

### 1. Electrical property and defect formation energy with different DFT functionals
Since the GGA functional underestimates the oxygen ion migration barrier and the optical band gap compared to the experiments, the band gap and defect formation energy for the bulk structure of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) were studied with different DFT exchange and correlation functionals: GGA, GGA with Hubbard U correction (GGA+U) (U = 3.9 eV for Mn),[19] the hybrid functional of Heyd, Scuseria, and Ernzerhof (HSE06),[20] and the strongly constrained and appropriately normed (SCAN) functional.[21] The formation energy and the band gap vary with the different potentials, from which we conclude that the different approaches to representing the d-electron physics have a significant effect on the defect formation energies. Based on previous studies of the performance of different exchange-correlation potentials, SCAN predicts the optical band gap and defect formation energy most consistent with the experimental observations.

### 2. DFT studies of oxygen defects in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)

The formation energies of the oxygen interstitial and vacancy (at \(\approx\)2% concentration) in the bulk structure of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22}\) were calculated under atmospheric conditions (T = 300 K, P(O\({}_{2}\)) = 0.2 atm). The two most energetically favorable configurations of the interstitial oxygen site are displayed in **Fig. S1a-b**, and the most stable configuration of the oxygen vacancy site is shown in **Fig. S1c**, along with the defect formation energies. These calculations were performed using the SCAN functional,[22] which was shown to give the band gap most consistent with experiment (**Discussion 1**). A Monkhorst-Pack k-point mesh[23] of \(4\times 3\times 2\) was used for the \(2\times 1\times 1\) supercell with 70 atoms. The formation energies of \(O_{i}\) at different concentrations in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) were studied, which are -0.11 eV, -0.11 eV, 0.49 eV, and 0.52 eV under air conditions for \(\delta=0.25,0.5,0.75,1\), respectively. The results suggest that the interstitial is stabilized by oxidizing Mn\({}^{2+}\) ions. Upon including one interstitial oxygen in the \(2\times 1\times 1\) supercell (\(\delta=0.5\)), the only two Mn\({}^{2+}\) ions in the supercell were oxidized to Mn\({}^{3+}\). With more interstitial oxygen included (\(\delta=0.75\)), the defect formation energy is increased by 0.6 eV, indicating that further oxidation of Mn\({}^{3+}\) ions is difficult. It is worth noting that the defect formation energies are very close for \(\delta=0.25,0.5\) and for \(\delta=0.75,1\), respectively, suggesting that the interstitial concentration dependence of the formation energy is much more strongly impacted by the oxidation state of the Mn ions than by the direct interstitial-interstitial interaction. The results suggest that the equilibrium concentration of interstitial oxygen in La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) is \(\delta\approx 0.5\) under air conditions, which is consistent with the EPMA measurement of the interstitial content \(\delta=0.42\) and the TGA results of \(\delta=0.42\sim 0.52\).

### 3. Calculated spin state of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\)

The DFT calculated magnetic moments are consistent with Mn\({}_{1}^{4+}\), Mn\({}_{2}^{3+}\), and Mn\({}_{3}^{2+}\) being in their high spin magnetic states.
More specifically, the average integrated (within the Wigner-Seitz radius) z-components of the spin on Mn\({}_{1}\), Mn\({}_{2}\), and Mn\({}_{3}\) are 2.90, 3.41, and 4.41 \(\mu_{B}\) from the DFT GGA-PBE calculation, respectively. These values are consistent with the ideal formal-valence spin states obtained by counting the unpaired electrons, i.e. 3, 4, and 5 for Mn\({}_{1}\)\({}^{4+}\), Mn\({}_{2}\)\({}^{3+}\) and Mn\({}_{3}\)\({}^{2+}\), respectively. These ideal formal-valence spin counts can be used to calculate the total spin-only magnetic moment per formula unit from the formula

\[\mu_{cal}^{2}=2\,\mu_{cal}^{2}(\mathrm{Mn}^{4+})+2\,\mu_{cal}^{2}(\mathrm{Mn}^{3+})+\mu_{cal}^{2}(\mathrm{Mn}^{2+}),\]

where for each Mn ion the effective magnetic moment is calculated as \(\mu_{cal}=\sqrt{n(n+2)}\) and \(n\) is the number of unpaired electrons [1]. Using this formula, the total spin-only magnetic moment is predicted to be 10.63 \(\mu_{B}\)/formula unit, an excellent match to the experimentally measured value of 10.64 \(\mu_{B}\)/formula unit [1]. Upon the inclusion of \(O_{i}\), two adjacent Mn\({}^{2+}\) ions are oxidized to Mn\({}^{3+}\), with a change of the spin count from 5 to 4.
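As a quick arithmetic check of the spin-only estimate above, the following minimal Python sketch (our own illustration; the formula and the unpaired-electron counts are those given in the text) reproduces the 10.63 \(\mu_{B}\) value:

```python
import math

def mu_spin_only(n_unpaired):
    # Spin-only effective moment of one ion: mu = sqrt(n * (n + 2)) Bohr magnetons.
    return math.sqrt(n_unpaired * (n_unpaired + 2))

# Per formula unit: 2 x Mn(4+) with n = 3, 2 x Mn(3+) with n = 4, 1 x Mn(2+) with n = 5.
ions = [(3, 2), (4, 2), (5, 1)]  # (unpaired electrons, count)

# The per-ion moments add in quadrature: mu_total^2 = sum_i mu_i^2.
mu_total = math.sqrt(sum(count * mu_spin_only(n) ** 2 for n, count in ions))
print(f"{mu_total:.2f} mu_B per formula unit")  # -> 10.63, vs. 10.64 measured
```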
### 4. Climbing Image Nudged Elastic Band (CI-NEB) calculation.

The migration barriers of the \(O_{i}\) interstitial and interstitialcy diffusion pathways in LMS were studied by the Climbing Image Nudged Elastic Band (CI-NEB) method [24]. The calculations were performed separately with the GGA-PBE and SCAN functionals. The plane-wave cutoff energy was set to 520 eV. The stopping criteria for the total energy calculations were 0.001 meV/cell for the electronic relaxation and 0.05 eV/Å for the ionic relaxation, respectively. Seven images were used for the interstitial diffusion pathway and five images for the interstitialcy diffusion pathway.

### 5. Iodometric titration.

Iodometric titration was performed in a nitrogen atmosphere based on the following assumptions/criteria: first, charge neutrality; second, the cations are in the stoichiometric ratio obtained in the EPMA study; third, the valences of La and Si in LMS are 3+ and 4+, respectively; fourth, the average valence of Mn in LMS is X+ with X > 2. The reaction mechanism is

\[\mathrm{Mn^{X+}}+(\mathrm{X}-2)\,\mathrm{Cl^{-}}=\mathrm{Mn^{2+}}+\tfrac{\mathrm{X}-2}{2}\,\mathrm{Cl_{2}}\]
\[\mathrm{Cl_{2}}+2\,\mathrm{I^{-}}=2\,\mathrm{Cl^{-}}+\mathrm{I_{2}}\]

A weighed amount of ground LMS pellet was dissolved in an aqueous solution of KI and HCl (6 N). Cl\({}_{2}\) is generated during this dissolution, and the in-situ generated Cl\({}_{2}\) reacts with the I\({}^{-}\) to produce I\({}_{2}\). The liberated I\({}_{2}\) is measured by titration with a standard volumetric aqueous solution of Na\({}_{2}\)S\({}_{2}\)O\({}_{3}\) (~0.01 N). Finally, the oxygen stoichiometry was calculated from the measured I\({}_{2}\) amount. The measurement was repeated five times to confirm the reproducibility, and the average value, along with the standard deviation of the mean, is presented in Table S6.

### 6. Preparation of the ionic conductivity measurement using YSZ blocks.

The ionic conductivity measurement was performed using pre-synthesized commercial yttria-stabilized zirconia (8 mol% Y\({}_{2}\)O\({}_{3}\) in ZrO\({}_{2}\), 8YSZ; Sigma Aldrich) blocks as the electron-blocking electrodes [4]. The 8YSZ pellets were sintered at 1500 \({}^{\circ}\)C for 6 hours. As shown in Fig. S8, the cross-section of the 8YSZ pellets was 4.8 mm x 4.8 mm and their thickness was 1.5 mm. The thickness of the LMS pellets was ~0.8 mm. All pellets were polished on all sides to remove any surface contamination and to reduce the contact resistance. To ensure better connectivity between the LMS and 8YSZ pellets, we applied a thin LMS layer between the LMS and 8YSZ blocks using a homemade paste of LMS in ethanol. Two thin Au wires were inserted at the YSZ-LMS junctions for the voltage measurement. To measure the voltage across the LMS pellet accurately, we sputtered an Au line (width ~0.2 mm, thickness ~200 nm) on the LMS surface and connected it to the Au wires. For efficient oxygen exchange, we placed two porous LSCF pellets (thickness ~1 mm) at the two ends of the assembly. The porous LSCF pellets were sintered at 1050 \({}^{\circ}\)C for 6 hours using commercial LSCF electrode powder (Sigma Aldrich). These two sintered porous LSCF pellets were also connected to the YSZ pellets by a thin LSCF layer made with a homemade LSCF paste. The exposed surfaces of the LSCF pellets were coated with Au by sputtering. The whole sample assembly was pressed vertically between two Au-coated alumina plates, and two Pt wires were connected to these alumina plates as the current leads. The whole system was sintered at 900 \({}^{\circ}\)C for 1 hour before performing the measurement.

### 7. Fitting of the ECR data

The obtained ECR data were fitted with a previously reported 2-D model [25] to determine \(D_{chem}\) and \(k_{chem}\), along with the standard deviations of their average values. In Fig. S11(a, b), the nonlinear least-squares fitting was performed using the publicly available NETL Electrical Conductivity Relaxation (ECR) Analysis Tool [25]. A small conductivity drift was observed over the whole ECR process, which may be caused by microstructural changes occurring at high temperatures; we corrected this drift before fitting. The ECR data showed a very fast oxygen exchange at the beginning of the transient response (~30 s to 2 min, with the time inversely proportional to temperature), which may be due to the (001)-oriented grains on the pellet surface (confirmed by XRD, Fig. S12) or to a patch of an unknown secondary surface phase sufficiently thin as to not be detectable by XRD. In order to fit the ECR data with a single-phase model, we neglected this fast-response region, since fitting it with the single-phase model gives a very high value of \(k_{chem}\) with a large error bar. We chose the initial point such that the percentage error in both \(D_{chem}\) and \(k_{chem}\) is < 5%. The error bars reported for \(D_{chem}\) and \(k_{chem}\) represent the 95% confidence intervals provided by the NETL ECR Analysis Tool [25] based on numerical aspects of its fitting. However, it is important to acknowledge that the true uncertainty in the values of \(D_{chem}\) and \(k_{chem}\) can be affected by many issues, including sample-to-sample variability, the covariance of \(D_{chem}\) and \(k_{chem}\) in the fitting [25], and multiple measurement limitations, e.g., gas-flush effects [26]. Variations in \(D_{chem}\) and \(k_{chem}\) between different samples, research groups, and experimental setups and approaches are often close to an order of magnitude, even for well-studied materials. The values of \(D_{chem}\) and \(k_{chem}\) of LMS at a fixed temperature were determined as the averages over oxidation and reduction at that temperature.
The potential for a reliable determination of both \(D_{chem}\) and \(k_{chem}\) was confirmed by the Bi number, defined as \(Bi=\frac{L_{Z}}{D_{chem}/k_{chem}}\), where \(L_{Z}\) represents the half-thickness of the LMS sample. The obtained Bi values fall within the range of 0.03 to 30 [27] (Fig. S11e), suggesting that reliable values of both \(D_{chem}\) and \(k_{chem}\) can be extracted from the measurement. The stability of the samples after the ECR measurement was confirmed by XRD (Fig. S12).

### 8. Determination of the average \(D_{chem}\) and \(k_{chem}\) of state-of-the-art materials from the literature.

The diffusion coefficient \(D\) and the surface exchange coefficient \(k\) of La\({}_{4}\)Mn\({}_{5}\)Si\({}_{4}\)O\({}_{22+\delta}\) were compared with those of the state-of-the-art materials La\({}_{0.6}\)Sr\({}_{0.4}\)Co\({}_{0.2}\)Fe\({}_{0.8}\)O\({}_{3-\delta}\) (LSCF), Ba\({}_{0.5}\)Sr\({}_{0.5}\)Co\({}_{0.8}\)Fe\({}_{0.2}\)O\({}_{3-\delta}\) (BSCF), La\({}_{0.5}\)Sr\({}_{0.5}\)CoO\({}_{3-\delta}\) (LSC), La\({}_{0.5}\)Sr\({}_{0.5}\)FeO\({}_{3-\delta}\) (LSF), BaCoFeYO\({}_{3-\delta}\) (BCFY), and La\({}_{2}\)NiO\({}_{4+\delta}\) (Fig. 4(e, f) and Fig. S13). The data, along with the references, are available as digital SI. We found that there is a wide variation in the \(D\) and \(k\) values reported at different temperatures, which makes the comparison difficult. Hence, we calculated the average \(D\) and \(k\) at 600 \({}^{\circ}\)C and 800 \({}^{\circ}\)C from those literature reports in which at least two different temperatures were studied. From the average \(D\) and \(k\), we determined the activation energy using the Arrhenius relationship and derived the temperature dependence of \(D\) and \(k\).
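To make the Arrhenius step of Discussion 8 concrete, here is a minimal Python sketch (our own illustration; the 600 \({}^{\circ}\)C and 800 \({}^{\circ}\)C anchor temperatures follow the text, while the coefficient values are hypothetical placeholders, not data from this work): given the average value of a coefficient at two temperatures, the activation energy and the full temperature dependence follow directly.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_from_two_points(t1_c, x1, t2_c, x2):
    """Fit x(T) = x0 * exp(-Ea / (kB * T)) through two (temp in deg C, value) points."""
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    ea = KB * (math.log(x1) - math.log(x2)) / (1.0 / T2 - 1.0 / T1)
    x0 = x1 * math.exp(ea / (KB * T1))
    return x0, ea

def arrhenius(t_c, x0, ea):
    return x0 * math.exp(-ea / (KB * (t_c + 273.15)))

# Hypothetical average D_chem values (cm^2/s) at the two anchor temperatures:
x0, ea = arrhenius_from_two_points(600, 1e-7, 800, 1e-5)
print(f"Ea = {ea:.2f} eV, D_chem(700 C) = {arrhenius(700, x0, ea):.2e} cm^2/s")
```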
2301.10050
Development of a photothermal measurement model to determine layer thickness of multi-layered coating systems with unknown thermal properties
In this article, a general model for 1D thermal wave interference is derived for multi-layered coating systems on a thermally thick substrate, using the same principles as for the well-established one-layered and two-layered coating cases. Using the lock-in thermography principle, an illumination source periodically modulates the surface of these systems with a planar, sinusoidal wave form of fixed frequency. The coating system absorbs the optical energy at its surface and converts it into thermal energy, resulting in the propagation of a spatially and temporally periodic thermal wave of the same frequency. These thermal waves, originating at the surface, are reflected and transmitted at each interface, leading to infinitely many wave trains that need to be tracked in order to formulate the final surface temperature as a superposition of all these waves. The heat transfer inside the object depends on the layer thickness of each coating, but also on the thermal properties of each layer material. The goal is to have a mathematical and physical model which describes the phase angle data measured by an infrared camera. Given these data, the main objective of this paper is to determine the thickness of each coating layer. In practice, the thermal properties of the layers are usually unknown, which makes the process even more difficult. For that reason, this article presents a concept to determine the thermal properties in advance.
Dimitri Rothermel, Thomas Schuster
2023-01-24T14:47:19Z
http://arxiv.org/abs/2301.10050v1
Development of a photothermal measurement model to determine layer thickness of multi-layered coating systems with unknown thermal properties ###### Abstract In this article, a general model for 1D thermal wave interference is derived for multi-layered coating systems on a thermally thick substrate, using the same principles as for the well-established one-layered and two-layered coating cases. Using the lock-in thermography principle, an illumination source periodically modulates the surface of these systems with a planar, sinusoidal wave form of fixed frequency. The coating system absorbs the optical energy at its surface and converts it into thermal energy, resulting in the propagation of a spatially and temporally periodic thermal wave of the same frequency. These thermal waves, originating at the surface, are reflected and transmitted at each interface, leading to infinitely many wave trains that need to be tracked in order to formulate the final surface temperature as a superposition of all these waves. The heat transfer inside the object depends on the layer thickness of each coating, but also on the thermal properties of each layer material. The goal is to have a mathematical and physical model which describes the phase angle data measured by an infrared camera. Given these data, the main objective of this paper is to determine the thickness of each coating layer. In practice, the thermal properties of the layers are usually unknown, which makes the process even more difficult. For that reason, this article presents a concept to determine the thermal properties in advance.

Keywords: photothermal measurements, infrared thermography, thermal wave interference, parameter estimation, layer thickness determination, multi-layered coating systems, thermal properties

## 1 Introduction

In this article we investigate general multi-layered coating systems of \(n+1\) total layers, where \(n\in\mathbb{N}\) coating layers of different materials \(M_{1},\ldots,M_{n}\) are applied on top of each other on a substrate material \(M_{n+1}\), see Figure 1. For \(i=1,\ldots,n+1\), the variable \(L_{i}\in\mathbb{R}_{+}\) denotes the thickness of the associated layer. The thermal properties of each material are characterized by the thermal _diffusivity_ and the thermal _effusivity_, denoted by \(\alpha_{i}\in\mathbb{R}_{+}\) and \(e_{i}\in\mathbb{R}_{+}\), respectively. The determination of the vector

\[\mathbf{L}:=(L_{1},\ldots,L_{n})^{T}\in\mathbb{R}_{+}^{n},\]

having the coating layer thicknesses as components, by a nondestructive testing method is of great interest in the manufacturing process and quality control of such systems. In this paper, it is suggested that photothermal methods, such as infrared thermography, are especially suited for this purpose. A planar and periodically modulated light source is used to irradiate the top surface of a test object, which leads to a propagating thermal wave inside the object. Such thermal waves show the same behavior at interfaces as shear waves in elasticity theory or acoustic and optical waves in the visible spectral range, i.e. reflection and transmission coefficients can be calculated. The temperature response, which can be measured by an infrared camera, is a result of thermal wave interference, i.e. the superposition of thermal wave trains, which of course depends on the coating layer thicknesses and the thermal properties of the layer materials. In our setup, the phase angle of the temperature response is measured.
Unfortunately, the thermal properties

\[(\alpha_{i},e_{i}),\ \ \ i=1,\ldots,n+1\]

are often partially or even completely unknown, making the determination of the coating layer vector \(\mathbf{L}\in\mathbb{R}_{+}^{n}\) even more difficult. The goal of this paper is to develop a concept that utilizes the data generated from multiple samples in such a way that the unknown thermal properties can be determined as well, in order to calibrate a model for the layer thickness determination.

## 2 Motivation

The determination of thicknesses even in one-layered coating systems is of great interest for quality control, e.g. in the manufacturing process of electrode coatings of lithium-ion batteries, where a lithium cobalt oxide coating of \(50\) to \(100\)\(\mu m\) is applied to a thin (\(\pm 10\)\(\mu m\)) aluminium substrate. Undesired coating thicknesses lead to performance issues of the battery or higher production costs. In the worst case, dangerous situations can arise, such as the explosion of batteries, cf. [24]. Another application example where the control of layer thicknesses is very important is the application of non-electrolytic zinc-aluminium flakes to protect metallic surfaces by increasing their corrosion resistance, cf. [9, 12].

Figure 1: System of \(n\) coating layer materials \(M_{1}\), ..., \(M_{n}\) on a substrate material \(M_{n+1}\)

A further application example of higher complexity, i.e. with two coating layers, is the plastic housing of laptops or smartphones. For example, the substrate might be some rigid ABS plastic of \(1\) to \(2\)\(mm\), the first thin coating layer (the so-called basecoat) might be some polyurethane (\(\pm 20\ \mu m\)) that acts as a thermal insulator against the rising heat of the electronics, and the second coating layer (the so-called topcoat, \(\pm 20\ \mu m\)) might be some UV-hardened resin acting as a visually appealing surface finish. To give a last example of an application with more coating layers, it is certainly worth mentioning the automotive and aviation industries ([8], [13]), where metallic or CFRP substrates are coated with different paints, e.g. a primer, basecoat, topcoat and clearcoat. Here again, each layer fulfills its own function and it is therefore necessary to control the individual layer thicknesses. Of course, as for every other industry sector with an economical objective, the goal is also to reduce resources while maintaining the full functionality of the product. Obviously, depending on the manufacturing process, discrepancies between target and actual thickness values can arise. It is therefore necessary to build devices and develop algorithms in order to detect such unwanted deviations as early as possible. Instead of the conservative approach of performing quality control randomly after the completion of a bigger production line, by cutting up individual samples to evaluate cross-sectional slice images under the microscope, the modern Industry 4.0 (cf. [14]) approach requires inline procedures that are nondestructive\({}^{1}\) and monitor the process in real time.

Footnote 1: In general, nondestructive testing or evaluation summarizes numerous techniques that aim to test and evaluate the properties of materials without causing damage.

There are a few things to consider when choosing a measuring process (cf. [10]) in this special setting.
Eddy current or induction-based measurement methods only work on metallic substrates; X-ray or beta backscattering methods only work with certain metal groups and also require compliance with strict occupational health and safety, radiation protection and disposal measures. Furthermore, ultrasonic and capacitive methods need contact with the test specimen and are therefore not suitable for measuring wet coatings (e.g. in the automotive paint process line) or uncured powder coatings. A very promising method, which is contact-free and uses harmless and non-invasive electromagnetic radiation, is terahertz technology, see [13]. Unfortunately, it requires highly sensitive devices, such as a femtosecond laser, which can track signals in the picosecond range but is often prohibitively expensive for medium-sized companies.

In this work, it is suggested that _photothermal methods_, especially _infrared thermography_, can solve the aforementioned problems. Such methods are contact-free and have manageable costs. A typical setting is presented in Figure 2, where the components and the operating principle are as follows:

Figure 2: Measurement principle of infrared thermography

1. _Optical excitation sources_, such as lamps or lasers (cf. [1, Chapter 4]), irradiate an object under investigation with optical (i.e. visible) light. Their task is to produce the _photothermal effect_, i.e.
2. the _test object_ absorbs the optical energy (light) and converts it into thermal energy (heat). Heat transfer occurs inside the test object, which depends on the characteristics of the object geometry, its material composition and its thermal properties.
3. In the next step, the thermal response of the test object is recorded by a _measuring device_, such as an infrared camera.
4. Finally, a _computer_ processes and evaluates the data to provide certain properties (or even defects) of the considered test object.

In particular, it should be mentioned that the photothermal method requires only two conditions:

* The investigated coating must be susceptible to optical radiation of a certain wavelength range, i.e. near-infrared, visible or UV light.
* There must be a thermal contrast between two adjacent layers. Otherwise, a transparent interface would be created without significant reflection.

As a brief note, there are two classical optical excitation types in thermographic processes, namely lock-in thermography and pulsed thermography, see e.g. [25], [3], [6]. The latter method analyses the transient response and propagation of heat pulses in the test object. A prominent processing technique in pulsed thermography is the TSR (Thermographic Signal Reconstruction) method, where the logarithmic time derivatives of the signal are examined, cf. [23], [22], [2]. In lock-in thermography, the surface of the test object is periodically modulated by a planar sinusoidal wave form with a fixed frequency. The absorbed radiation leads to the propagation of a spatially and temporally periodic temperature field in the test object, which is referred to as a _thermal wave_. Since this thermal wave has the same frequency as the excitation, especially the phase angle (or phase difference) carries a lot of information. Thermal waves show the same behavior at interfaces as shear waves in elasticity theory or acoustic and optical waves in the visible spectral range, i.e. reflection and transmission occur.
Therefore, thermal reflection and transmission coefficients can be derived, see e.g. [11], [17]. In this paper, the frequency lock-in principle is used and the test object is a multi-layered coating system, see Figure 1. The estimation of the parameter vector \(\mathbf{L}\in\mathbb{R}_{+}^{n}\) is naturally understood in terms of inverse problems (cf. [16], [21], [20], [19]), since interior properties that are not directly accessible are to be determined by processing exterior data in the form of the phase angle of the measured surface temperature.

## 3 Mathematical Setting

In order to understand the physical process of thermal wave interference for the general case of multi-layered coating systems, it is necessary to take a look at the known models for the semi-infinite medium and for the cases of \(n=1\) and \(n=2\) coatings. The mathematical notation and the formulations in the next subsections can be found in [5], [18], [1], [15].

### 3.1 Basics of thermal wave generation

Thermal waves can be mathematically characterized as solutions of the heat diffusion equation. The type of heat source at the surface, which enters through the appropriate boundary conditions, influences the surface temperature distribution and determines the generation of the waves. The most common type of excitation is a periodic, planar energy input by high-performance laser beams at a single specific excitation frequency, i.e. lock-in excitation. For the sake of simplicity, we first consider an isotropic, homogeneous, semi-infinite medium \(M\) (that means an infinite extension of the medium in the \(x\)-direction), as depicted in Figure 3. Here, \(k\) denotes the thermal _conductivity_ and \(c\) the volumetric _heat capacity_ of the material \(M\). The thermal diffusivity can then be calculated as \(\alpha:=\frac{k}{c}\) and the thermal effusivity as \(e:=\sqrt{kc}\). We assume that the heated surface occupies the \(y\)-\(z\)-plane at \(x=0\). Consequently, to obtain the temperature distribution at the surface of the medium, we must solve the Fourier equation, which in this situation reduces to the one-dimensional case:

\[\frac{\partial^{2}T}{\partial x^{2}}=\frac{1}{\alpha}\frac{\partial T}{\partial t},\quad x,t>0. \tag{3.1}\]

We first need to specify the boundary conditions. We excite the medium's surface by a planar, time-harmonic heating with modulation frequency \(\omega:=2\pi f\) for some frequency \(f\) and source intensity \(Q_{0}\). This can be described by an excitation term of the form

\[\frac{Q_{0}}{2}\left[1+\cos(\omega t)\right],\]

yielding the generation of thermal waves in the interior of the medium. Since the periodic thermal energy is subject to conduction into the solid, using the appropriate rate equation, we obtain the following boundary condition at the surface of the medium:

\[-k\frac{\partial T}{\partial x}=\text{Re}\left\{\frac{Q_{0}}{2}\left[1+\exp\left(i\omega t\right)\right]\right\}=\frac{Q_{0}}{2}\left[1+\cos(\omega t)\right]=\underbrace{\frac{Q_{0}}{2}}_{\text{dc component}}+\underbrace{\frac{Q_{0}}{2}\cos(\omega t)}_{\text{ac component}},\quad x=0,\;t>0. \tag{3.2}\]

Here, dc stands for _direct current_ and ac for _alternating current_.
Neglecting the dc component for the moment, since this quantity will not be relevant in the later applications, we apply the time-harmonic ansatz

\[T(x,t)=\text{Re}\left[T(x)\exp(i\omega t)\right].\]

Plugging this into Equation (3.1), we end up with

\[\exp(i\omega t)\left(\frac{\partial^{2}T(x)}{\partial x^{2}}-\frac{i\omega}{\alpha}T(x)\right)=0.\]

Taking into account that \(T(x)\) must remain finite for \(x\rightarrow+\infty\), we obtain the solution of the boundary value problem (3.1), (3.2) as

\[T(x,t)=\frac{Q_{0}}{2k\sigma}\exp(-\sigma x+i\omega t),\quad\sigma:=(1+i)\sqrt{\frac{\omega}{2\alpha}}. \tag{3.3}\]

Figure 3: Thermal wave generation and propagation in a semi-infinite medium

By multiplying with \(1=\frac{i+1}{\sqrt{2}}\exp\left(-i\frac{\pi}{4}\right)\) and simplifying further, we obtain the more instructive expression

\[T(x,t)=\frac{Q_{0}}{2e\sqrt{\omega}}\exp\left(-\frac{x}{\mu}\right)\exp\bigg{[}i\left(\omega t-\frac{x}{\mu}-\frac{\pi}{4}\right)\bigg{]}, \tag{3.4}\]

where

\[\mu:=\sqrt{\frac{2\alpha}{\omega}} \tag{3.5}\]

is the so-called _thermal diffusion length_. Hence, thermal waves are significantly damped, and \(\mu\) controls the penetration depth into the material. For a small thermal diffusivity \(\alpha\), the thermal waves penetrate only slightly into the interior of the material. In contrast, by decreasing the modulation frequency \(\omega\), we obtain a deeper penetration of the thermal waves into the material. This phenomenon is very useful in the photothermal measurement of layer thicknesses. Furthermore, note that there occurs a progressive phase shift

\[\varphi=-\frac{x}{\mu}-\frac{\pi}{4} \tag{3.6}\]

between the temperature at the surface and at a point \(x\) along the propagating thermal wave in the material. Thus, at the surface \(x=0\) there is a phase difference of \(-45\) degrees between the excitation source and the resulting surface temperature.

### 3.2 Transmission and reflection

When the irradiated object is not a semi-infinite medium but a composition of at least two materials \(M\) and \(\tilde{M}\) (with thermal effusivities \(e\) and \(\tilde{e}\), respectively), the thermal wave first travels through \(M\) towards \(\tilde{M}\), and when the planar thermal wave propagation direction is perpendicular to the interface, the thermal reflection and transmission coefficients are

\[R=\frac{1-b}{1+b},\quad T=1+R=\frac{2}{1+b}, \tag{3.7}\]

where

\[b=\frac{\tilde{e}}{e} \tag{3.8}\]

is the so-called _thermal refraction index_, which characterizes the thermal contrast between the two media. If there is no thermal contrast, i.e. when \(e=\tilde{e}\), it follows that \(R=0\). In that case there would be no significant reflection from that interface and therefore no contribution to the thermal wave interference effects influencing the surface temperature. Thus, regarding the determination of coating layer thicknesses, it is crucial to guarantee that the materials are distinguishable.
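The quantities introduced so far are straightforward to evaluate numerically. The following short Python sketch (our own illustration; the material values are made up and not taken from this paper) computes the thermal diffusion length (3.5) and the reflection and transmission coefficients (3.7) at an interface:

```python
import math

def diffusion_length(alpha, f):
    """Thermal diffusion length mu = sqrt(2 * alpha / omega), omega = 2*pi*f (Eq. 3.5)."""
    return math.sqrt(2.0 * alpha / (2.0 * math.pi * f))

def reflection_transmission(e_from, e_to):
    """R and T for a wave travelling from effusivity e_from into e_to (Eqs. 3.7, 3.8)."""
    b = e_to / e_from                     # thermal refraction index
    return (1.0 - b) / (1.0 + b), 2.0 / (1.0 + b)

# Illustrative values: polymer coating (alpha ~ 1e-7 m^2/s, e ~ 600 W s^(1/2) m^-2 K^-1)
# on a metallic substrate (e ~ 14000):
mu = diffusion_length(alpha=1e-7, f=1.0)        # ~0.18 mm at 1 Hz; grows as f decreases
R, T = reflection_transmission(600.0, 14000.0)  # strong thermal contrast, R close to -1
print(f"mu = {mu * 1e3:.3f} mm, R = {R:.3f}, T = {T:.3f}")
```

Lowering the modulation frequency increases \(\mu\) and thus the probing depth, which is exactly the handle used below to make deeper interfaces visible in the phase data.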
In the following, whenever we add an index \(j=1,\dots,n\) to a reflection or transmission coefficient, we refer to the interface between the materials \(M_{j}\) and \(M_{j+1}\). The direction of these coefficients is to be understood as downwards, cf. Figure 1. For the upward direction, i.e. when the thermal wave approaches from below, we add a prime. For example,

\[R_{n}:=R_{M_{n}\to M_{n+1}},\qquad T_{1}^{\prime}:=T_{M_{2}\to M_{1}}.\]

The only exception regarding the direction concerns \(R_{0}\) and \(T_{0}\), where the top surface of the first layer material \(M_{1}\) is exposed to the air \(M_{0}\), i.e.

\[R_{0}:=R_{M_{1}\to M_{0}},\qquad T_{0}:=T_{M_{1}\to M_{0}}.\]

We refer the reader to [1] for a more detailed derivation of the above expressions.

### 3.3 Basics of thermal wave interference

In this subsection, we discuss the basics of thermal wave interference by investigating the cases of \(n=1\) and \(n=2\) coatings. Here, the mathematical formulas are well established, cf. [1], [18] and [15]. For the convenience of the reader, we summarize the findings before extending the insights to multi-layered coating systems in the next subsection.

#### 3.3.1 One-layered coatings on a substrate

Consider a system of two layers consisting of media \(M_{1}\) and \(M_{2}\) possessing different thermophysical properties, as sketched in Figure 4. Assume that both media \(M_{1}\) and \(M_{2}\) have homogeneous thermophysical properties and that \(M_{1}\) has thickness \(L_{1}\) and \(M_{2}\) has thickness \(L_{2}\). Furthermore, assume that \(M_{2}\) is _thermally thick_, meaning that \(L_{2}\) is significantly larger than the thermal diffusion length \(\mu_{2}\). This assumption is not unusual, as the substrate is often much thicker than the applied coating. It guarantees that the thermal waves transmitted into the substrate have no effect on the surface temperature, because they can be neglected due to the large attenuation. Moreover, assume that the whole system is exposed to air, which we denote by \(M_{0}\).

Suppose that the surface is illuminated by a planar, normal and periodic heating. The single wave trains generated near the surface \(x=0\) propagate towards the interface between the two media and back towards the surface of \(M_{1}\). Upon meeting an interface, the waves are partially reflected and transmitted. Thermal wave interference effects occur, meaning that the surface temperature at \(x=0\) is a sum of all thermal wave trains. In general, when a thermal wave has traveled a distance \(x>0\), its amplitude is damped by \(\exp(-\sigma x)\), where \(\sigma=(1+i)\frac{1}{\mu}\) is the complex wave number. Hence, for the first reflection order wave train in material \(M_{1}\), the thermal wave has a propagation path of length \(2L_{1}\), leading to an attenuation by \(\exp(-2\sigma_{1}L_{1})\).

We emphasize that the following figures include sketches of thermal wave trains that may give the impression of a lateral diffusion in the material itself. This is not the case, since we have one-dimensional propagation; the sketches serve only as graphical visualization. We also note that we do not consider any bulk absorption of the absorbed radiation in this paper. This means that the photothermal effect provides a surface heating only, i.e. thermal waves originate exclusively near \(x=0\).

Figure 4: Thermal wave interference in one coating layer on a thermally thick substrate material

The presented literature sources distinguish between the following two cases:

1. Waves that are first reflected from the interface between \(M_{0}\) and \(M_{1}\), as sketched in Figure 5. Denote by \(a_{n}\) the \(n^{\text{th}}\) reflection order wave and by \(R_{0}\) (resp. \(R_{1}\)) the reflection coefficient at the interface between \(M_{0}\) and \(M_{1}\) (resp. \(M_{1}\) and \(M_{2}\)).
Let furthermore \(T_{0}\) (\(T_{0}^{\prime}\)) denote the transmission coefficient at the interface between \(M_{0}\) and \(M_{1}\) in the upward (downward) direction. Since the single wave trains are reflected at the interfaces "infinitely" often, we obtain the following series for this contribution to the surface temperature:

\[\begin{split}\sum_{n=0}^{\infty}a_{n}&=T_{0}A_{0}\left(1+\sum_{n=1}^{\infty}R_{0}^{n}R_{1}^{n}\exp\left(-2n\sigma_{1}L_{1}\right)\right)\\ &=T_{0}A_{0}\sum_{n=0}^{\infty}\left[R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)\right]^{n}\\ &=\frac{T_{0}A_{0}}{1-R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)},\end{split} \tag{3.9}\]

where we use the geometric series formula, which is applicable since \(|R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)|<1\) holds true.

2. Waves that are first reflected from the interface between \(M_{1}\) and \(M_{2}\), as sketched in Figure 6.

Figure 6: Waves that are reflected first at the interface between \(M_{1}\) and \(M_{2}\)
Denote by \(b_{n}\) the \(n^{\text{th}}\) reflection order wave. Analogously to the first case, we obtain the following expression for this fraction of the surface temperature:

\[\begin{split}\sum_{n=0}^{\infty}b_{n}&=T_{0}A_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)\left(1+\sum_{n=1}^{\infty}R_{0}^{n}R_{1}^{n}\exp\left(-2n\sigma_{1}L_{1}\right)\right)\\ &=T_{0}A_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)\sum_{n=0}^{\infty}\left[R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)\right]^{n}\\ &=\frac{T_{0}A_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)}{1-R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)},\end{split} \tag{3.10}\]

where we again use the geometric series formula, since \(|R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)|<1\) holds true. By summing up both series, we obtain the thermal wave interference expression at the surface:

\[\sum_{n=0}^{\infty}a_{n}+\sum_{n=0}^{\infty}b_{n}=T_{0}A_{0}\left[\frac{1+R_{1}\exp\left(-2\sigma_{1}L_{1}\right)}{1-R_{0}R_{1}\exp\left(-2\sigma_{1}L_{1}\right)}\right]=:\tilde{T}(x=0). \tag{3.11}\]

This provides us with the following expression for the time-dependent temperature at the surface:

\[T(x=0,t)=\tilde{T}(x=0)\exp\Big{[}i\big{(}\omega t-\frac{\pi}{4}\big{)}\Big{]}. \tag{3.12}\]

Notice that the constant phase shift of \(-\frac{\pi}{4}\), cf. Equation (3.6), is factored out of \(\tilde{T}\) in this formula; this has the meaning of normalizing the temperature by that of an "infinitely" thick layer. Since the wave number \(\sigma_{1}=(1+i)\sqrt{\frac{\omega}{2\alpha_{1}}}=(1+i)\frac{1}{\mu_{1}}\) is complex, implying that the temperature amplitude is complex-valued, we can split it into its real and its imaginary part. That provides us with an expression in polar coordinates.
To this end we set

\[\sigma_{1}=\text{Re}(\sigma_{1})+i\,\text{Im}(\sigma_{1})=\frac{1}{\mu_{1}}+i\frac{1}{\mu_{1}}=:\sigma_{1}^{\prime}+i\sigma_{1}^{\prime\prime}\]

and abbreviate \(x:=-\frac{2L_{1}}{\mu_{1}}\). A direct calculation yields

\[\text{Re}\left[\tilde{T}(x=0)\right]=T_{0}A_{0}\,\frac{1+R_{1}(1-R_{0})\exp\left(x\right)\cos\left(x\right)-R_{0}R_{1}^{2}\exp(2x)}{\left[1-R_{0}R_{1}\exp\left(x\right)\cos\left(x\right)\right]^{2}+\left[R_{0}R_{1}\exp\left(x\right)\sin\left(x\right)\right]^{2}}\]

and

\[\text{Im}\left[\tilde{T}(x=0)\right]=T_{0}A_{0}\,\frac{R_{1}(1+R_{0})\exp\left(x\right)\sin\left(x\right)}{\left[1-R_{0}R_{1}\exp\left(x\right)\cos\left(x\right)\right]^{2}+\left[R_{0}R_{1}\exp\left(x\right)\sin\left(x\right)\right]^{2}}.\]

Then we obtain the amplitude \(A_{\tilde{T}}\) as

\[A_{\tilde{T}}=\sqrt{\text{Re}\left[\tilde{T}(x=0)\right]^{2}+\text{Im}\left[\tilde{T}(x=0)\right]^{2}}=T_{0}A_{0}\,\frac{\sqrt{\left[1+R_{1}(1-R_{0})\exp\left(x\right)\cos\left(x\right)-R_{1}^{2}R_{0}\exp(2x)\right]^{2}+\left[R_{1}(1+R_{0})\exp\left(x\right)\sin\left(x\right)\right]^{2}}}{\left[1-R_{0}R_{1}\exp\left(x\right)\cos\left(x\right)\right]^{2}+\left[R_{0}R_{1}\exp\left(x\right)\sin\left(x\right)\right]^{2}} \tag{3.13}\]

as well as the phase angle \(\varphi_{\tilde{T}}\),

\[\varphi_{\tilde{T}}=\tan^{-1}\left[\frac{\text{Im}\left[\tilde{T}(x=0)\right]}{\text{Re}\left[\tilde{T}(x=0)\right]}\right]=\tan^{-1}\left[\frac{\left[1+R_{0}\right]\left[R_{1}\exp{(x)}\sin{(x)}\right]}{1+(1-R_{0})R_{1}\exp{(x)}\cos{(x)}-R_{1}^{2}R_{0}\exp{(2x)}}\right], \tag{3.14}\]

which provides us with an expression for the complex temperature amplitude at the surface in polar coordinates, i.e.

\[\tilde{T}(x=0)=A_{\tilde{T}}\exp{(i\varphi_{\tilde{T}})}.\]

The advantage of this form is that the amplitude as well as the phase are measurable, real-valued quantities. This is the reason why these quantities (especially the phase angle) are used in the measuring process for layer thicknesses.

#### 3.3.2 Two-layered coatings on a substrate

Let us now consider a system of three layers \(M_{1}\)-\(M_{3}\) with thicknesses \(L_{1}\)-\(L_{3}\), where \(M_{3}\) is assumed to be thermally thick, as sketched in Figure 7.

Figure 7: System of two layers of coatings \(M_{1}\)-\(M_{2}\) and one layer of substrate \(M_{3}\)

Obviously, adding a second coating layer increases the complexity of the paths along which thermal wave trains can propagate in the system, see for example the blue arrows in Figure 7. The first interface in the case \(n=1\) was a coating-to-substrate interface, while for \(n=2\) the first interface is a coating-to-coating interface. Transmitted thermal waves can no longer be ignored, because the second layer is not thermally thick anymore, i.e. the corresponding reflections still contribute significantly to the surface temperature, although they have a longer propagation path and are therefore damped more strongly.
A very elegant way of summarizing all possible wave trains is to substitute a complex-valued _effective reflection coefficient_ \(\Gamma_{1}\) for the real-valued reflection coefficient \(R_{1}\) in Equation (3.11), cf. [15], [1]. By introducing a new layer, the reflection process extends beyond the interface between \(M_{1}\) and \(M_{2}\) and also takes place at the lower level. Thus, interference effects occur in the new layer with material \(M_{2}\) as well. This implies that the reflection coefficient \(R_{1}\) no longer suffices to describe the whole process, so we modify the expression for \(R_{1}\), analogously to the derivation in Equation (3.9), in the following way:

\[\begin{split}\Gamma_{1}&:=R_{1}+T_{1}T_{1}^{\prime}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)\sum_{n=0}^{\infty}\left(R_{1}^{\prime}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)\right)^{n}\\ &=R_{1}+(1-R_{1}^{2})R_{2}\exp\left(-2\sigma_{2}L_{2}\right)\sum_{n=0}^{\infty}\left(-R_{1}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)\right)^{n}\\ &=R_{1}+\left(R_{2}\exp\left(-2\sigma_{2}L_{2}\right)-R_{1}^{2}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)\right)\frac{1}{1+R_{1}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)}\\ &=\frac{R_{1}+R_{2}\exp\left(-2\sigma_{2}L_{2}\right)}{1+R_{1}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)}.\end{split} \tag{3.15}\]

Here, \(T_{1}\) (\(T_{1}^{\prime}\)) denotes the transmission coefficient in the downward (upward) direction at the \(M_{1}\)-\(M_{2}\) interface. By \(R_{1}\) (\(R_{1}^{\prime}\)) we denote the reflection coefficient at the same interface for thermal wave trains that stay in \(M_{1}\) (\(M_{2}\)). Analogously, \(R_{2}\) is the reflection coefficient for thermal waves in \(M_{2}\) that stay in \(M_{2}\). Because of \(R_{1}^{\prime}=-R_{1}\) and \(T_{1}T_{1}^{\prime}=1-R_{1}^{2}\), we can use the geometric series formula again, which is possible since \(|-R_{1}R_{2}\exp\left(-2\sigma_{2}L_{2}\right)|<1\). Note that

\[\lim_{L_{2}\to\infty}\Gamma_{1}=R_{1}, \tag{3.16}\]

i.e. we are effectively back in the case \(n=1\) if the second coating layer were infinitely thick (or at least thermally thick). We call the quantity \(\Gamma_{1}\) the _effective reflection coefficient_. Since the thermal interference processes in the first layer are the same as discussed before, we end up with the following surface temperature for the case \(n=2\):

\[\begin{split}T(x=0,t)&=T_{0}A_{0}\left[\frac{1+\Gamma_{1}\exp\left(-2\sigma_{1}L_{1}\right)}{1-R_{0}\Gamma_{1}\exp\left(-2\sigma_{1}L_{1}\right)}\right]\exp\left[i\left(\omega t-\frac{\pi}{4}\right)\right]\\ &=:\tilde{T}(x=0)\exp\left[i\left(\omega t-\frac{\pi}{4}\right)\right].\end{split} \tag{3.17}\]

Once again, this expression provides us with a term for the amplitude,

\[A_{\tilde{T}}=\sqrt{\text{Re}\left[\tilde{T}(x=0)\right]^{2}+\text{Im}\left[\tilde{T}(x=0)\right]^{2}}, \tag{3.18}\]

as well as a term for the phase angle,

\[\varphi_{\tilde{T}}=\tan^{-1}\left[\frac{\text{Im}\left[\tilde{T}(x=0)\right]}{\text{Re}\left[\tilde{T}(x=0)\right]}\right], \tag{3.19}\]

yielding

\[\tilde{T}(x=0)=A_{\tilde{T}}\exp\left(i\varphi_{\tilde{T}}\right).\]
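The limit (3.16) is easy to verify numerically. A small Python sketch (our own illustration; all material values are made up):

```python
import cmath, math

def sigma(alpha, omega):
    return (1 + 1j) * math.sqrt(omega / (2.0 * alpha))

def refl(e_from, e_to):
    b = e_to / e_from
    return (1.0 - b) / (1.0 + b)

def gamma1(L2, e1, e2, e3, alpha2, omega):
    # Effective reflection coefficient of Eq. (3.15).
    R1, R2 = refl(e1, e2), refl(e2, e3)
    damp = cmath.exp(-2.0 * sigma(alpha2, omega) * L2)
    return (R1 + R2 * damp) / (1.0 + R1 * R2 * damp)

omega = 2.0 * math.pi * 1.0                      # 1 Hz modulation
e1, e2, e3, alpha2 = 600.0, 1500.0, 14000.0, 2e-7
for L2 in (1e-5, 1e-4, 1e-3):                    # 10 um ... 1 mm
    print(f"L2 = {L2:.0e} m: Gamma_1 = {gamma1(L2, e1, e2, e3, alpha2, omega):.4f}")
print(f"R1 = {refl(e1, e2):.4f}")                # Gamma_1 -> R1 as L2 grows (Eq. 3.16)
```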
### 3.4 Generalization to multi-layered coating systems

The subject of our investigation is to make use of the insights outlined above and to generalize formula (3.17) to a multi-layered coating system, see Figure 8. Consider the following system consisting of \(n\in\mathbb{N}\) layers of coatings with materials \(M_{1}\)-\(M_{n}\), having thicknesses \(L_{1}\)-\(L_{n}\), on a thermally thick substrate material \(M_{n+1}\).

Figure 8: System of \(n\) layers of coating materials \(M_{1}\)-\(M_{n}\) and one layer of substrate material \(M_{n+1}\)

For \(n=2\), the idea was to take the last reflection coefficient from the case \(n=1\) and, since we add a new layer, replace it by an effective reflection coefficient \(\Gamma_{1}\) including all potential wave trains that are present in the new system. This principle can be extended further, which leads to nested effective reflection coefficients, i.e. we define \(\Gamma_{j}\) recursively by

\[\begin{split}\Gamma_{n}&:=R_{n},\\ \Gamma_{j}&:=\frac{R_{j}+\Gamma_{j+1}\exp\left(-2\sigma_{j+1}L_{j+1}\right)}{1+R_{j}\Gamma_{j+1}\exp\left(-2\sigma_{j+1}L_{j+1}\right)},\ \ j=n-1,\ldots,1.\end{split} \tag{3.20}\]

Although the generalized formula for the surface temperature of a multi-layered coating system looks the same as in the case \(n=2\), i.e.

\[T(x=0,t)=\underbrace{\left[A_{0}T_{0}\frac{1+\Gamma_{1}\exp\left(-2\sigma_{1}L_{1}\right)}{1-R_{0}\Gamma_{1}\exp\left(-2\sigma_{1}L_{1}\right)}\right]}_{=:A_{\tilde{T}}\exp\left(i\varphi_{\tilde{T}}\right),\,A_{\tilde{T}}\in\mathbb{R}_{+},\varphi_{\tilde{T}}\in\mathbb{R}}\exp\Big{[}i\left(\omega t-\frac{\pi}{4}\right)\Big{]}, \tag{3.21}\]

the effective reflection coefficient \(\Gamma_{1}\) now contains all significant information about the coating system, namely the layer thicknesses \((L_{1},\dots,L_{n})\), the thermal diffusivities \((\alpha_{1},\dots,\alpha_{n})\) and the thermal effusivities \((e_{1},\dots,e_{n+1})\). Note that \(\alpha_{n+1}\) and \(L_{n+1}\) are not of any interest, because the substrate is thermally thick, but \(e_{n+1}\) is needed for the calculation of \(\Gamma_{n}\).

## 4 Layer Thickness Determination

Now that we have a mathematical model for the physical process of thermal wave interference, we want to formulate the inverse problem of determining the layer thicknesses \((L_{1},\dots,L_{n})\) from phase angle measurements. We also want to address the issue of unknown thermal properties, for which we present a solution in a separate subsection.

### 4.1 Forward and inverse operator

As discussed in the preceding section, the thermal properties needed for the calculation of the surface temperature of a multi-layered coating system (with \(n\) coating layers) can be summarized by the vector

\[\mathbf{p}:=(\alpha_{1},\dots,\alpha_{n},e_{1},\dots,e_{n+1})^{T}\in\mathbb{R}_{+}^{2n+1}. \tag{4.1}\]

For \(m\in\mathbb{N}\) and frequencies \(\boldsymbol{\omega}:=(\omega_{1},\dots,\omega_{m})^{T}\), let us define the so-called _forward operator_

\[F_{n}\colon\mathbb{R}_{+}^{n}\to\mathbb{R}^{m},\quad(L_{1},\dots,L_{n})^{T}=:\boldsymbol{L}\mapsto\boldsymbol{\varphi}_{\tilde{T}}, \tag{4.2}\]

which maps the coating layer thicknesses \(\mathbf{L}\in\mathbb{R}_{+}^{n}\) to the phase angle vector \(\boldsymbol{\varphi}_{\tilde{T}}\in\mathbb{R}^{m}\), i.e. component-wise, for every frequency, to the phase angle \(\varphi_{\tilde{T}}\) from Equation (3.21). Note that in (4.2) we dropped the dependencies on \(\mathbf{p}\) and \(\boldsymbol{\omega}\), since these are kept fixed, but it is sometimes practical to use the extended notations

\[\boldsymbol{\varphi}_{\tilde{T}}=F_{n}(\mathbf{L})=F_{n}(\mathbf{L},\mathbf{p})=F_{n}(\mathbf{L},\mathbf{p},\boldsymbol{\omega}). \tag{4.3}\]
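A compact implementation of the recursion (3.20) and of the resulting phase angle from (3.21) might look as follows (a sketch in Python under our own naming; note that the positive constant \(A_{0}T_{0}\) drops out of the phase):

```python
import cmath, math
import numpy as np

def phase_angle(L, alpha, e, e_air, omega):
    """Phase of the normalized surface temperature, Eqs. (3.20)-(3.21).

    L, alpha: thicknesses and diffusivities of the n coating layers,
    e:        effusivities of the n coatings plus the substrate (length n + 1)."""
    refl = lambda ea, eb: (ea - eb) / (ea + eb)   # R = (1 - b)/(1 + b), b = eb/ea
    sigma = lambda a: (1 + 1j) * math.sqrt(omega / (2.0 * a))
    n = len(L)
    gamma = refl(e[n - 1], e[n])                  # Gamma_n = R_n
    for j in range(n - 2, -1, -1):                # Gamma_j for j = n-1, ..., 1 of Eq. (3.20)
        damp = cmath.exp(-2.0 * sigma(alpha[j + 1]) * L[j + 1])
        Rj = refl(e[j], e[j + 1])
        gamma = (Rj + gamma * damp) / (1.0 + Rj * gamma * damp)
    R0 = refl(e[0], e_air)                        # R_0 = R_{M1 -> M0}
    damp1 = cmath.exp(-2.0 * sigma(alpha[0]) * L[0])
    return cmath.phase((1.0 + gamma * damp1) / (1.0 - R0 * gamma * damp1))

def forward(L, alpha, e, e_air, freqs):
    """Forward operator F_n: thicknesses -> phase angles at m frequencies, Eq. (4.2)."""
    return np.array([phase_angle(L, alpha, e, e_air, 2.0 * math.pi * f) for f in freqs])

# Illustrative two-layer system (made-up values):
phis = forward(L=[20e-6, 30e-6], alpha=[1e-7, 2e-7],
               e=[600.0, 1500.0, 14000.0], e_air=6.0, freqs=[0.5, 1.0, 2.0, 5.0, 10.0])
print(phis)
```

For \(n=1\) the loop body is never executed and the expression reduces exactly to the one-layer formula (3.11), which is a convenient consistency check.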
For the determination of \((L_{1},\dots,L_{n})\), let us assume that an infrared camera measures the temperature response of such a system for every frequency. After some post-processing of the signals (cf. [4], [25], [7]), we are given the phase angle data

\[\boldsymbol{\varphi}_{\tilde{T},\text{meas}}\in\mathbb{R}^{m}.\]

We define the so-called _inverse operator_ by

\[G_{n}\colon\mathbb{R}^{m}\to\mathbb{R}_{+}^{n},\qquad G_{n}(\boldsymbol{\varphi}_{\tilde{T},\text{meas}}):=\operatorname*{arg\,min}_{\boldsymbol{L}\in\mathbb{R}_{+}^{n}}||F_{n}(\boldsymbol{L})-\boldsymbol{\varphi}_{\tilde{T},\text{meas}}||_{2}^{2}, \tag{4.4}\]

which maps the phase angle data to the layer thickness vector by minimizing a nonlinear least-squares functional. Here, \(\|\cdot\|_{2}\) denotes the Euclidean norm.

### 4.2 Unknown thermal properties and issues with all-at-once optimization

For real-life applications the approach (4.4) seems unusable, as the thermal properties of every coating layer must be known, which usually is not the case. For example, in the automotive industry paint mixtures are changed occasionally, and it does not make sense to perform a complex thermal analysis each time. What is generally available for the calibration of a nondestructive testing device are different samples of a fixed multi-layered coating setup with varying but known coating layer thicknesses. These are typically measured either destructively (e.g. by cross-sectional images under the microscope) or non-destructively (e.g. by laser triangulation or X-ray). Whether this coating layer thickness information for every layer in every sample is gathered before or after the frequency scans and the phase angle data collection with an infrared camera depends on the method in use. However, it is worth noting that this process only needs to be performed once for each coating system setup. In the following we present a concept to identify the needed thermal properties in order to determine \((L_{1},\ldots,L_{n})\) of such coating systems.

As a first idea, we replace the functional from Equation (4.4) by

\[\min_{(\mathbf{p},\mathbf{L})}||F_{n}(\mathbf{p},\mathbf{L})-\boldsymbol{\varphi}_{\tilde{T},\text{meas}}||_{2}^{2}, \tag{4.5}\]

which represents an all-at-once approach, i.e. all unknown parameters are determined simultaneously. Unfortunately, this approach suffers from the following ambiguity issue: for every \(j=1,\ldots,n\), terms of the form

\[\exp(-2\sigma_{j}L_{j})=\exp\left(-2(1+i)\sqrt{\frac{\omega}{2\alpha_{j}}}\,L_{j}\right)\]

depend on \(L_{j}\) and \(\alpha_{j}\) only through the ratio

\[\frac{L_{j}}{\sqrt{\alpha_{j}}}=\frac{\tilde{c}\cdot L_{j}}{\sqrt{\tilde{c}^{2}\cdot\alpha_{j}}}\quad\text{for }\tilde{c}>0,\]

i.e.

\[||F_{n}(\mathbf{p},\mathbf{L})-\boldsymbol{\varphi}_{\tilde{T},\text{meas}}||_{2}=||F_{n}(\tilde{c}^{2}\mathbf{p},\tilde{c}\mathbf{L})-\boldsymbol{\varphi}_{\tilde{T},\text{meas}}||_{2},\]

since the thermal contrast between the layers, and thus the reflection coefficients, are identical. Hence, it is necessary to decouple the determination of \(\mathbf{p}\in\mathbb{R}_{+}^{2n+1}\) and \(\mathbf{L}\in\mathbb{R}_{+}^{n}\). This can be done by utilizing the sample data in a clever way.
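The scaling invariance can be checked directly with the `forward` sketch given after Equation (4.3) above: rescaling every thickness by \(\tilde{c}\) and every diffusivity by \(\tilde{c}^{2}\) leaves each ratio \(L_{j}/\sqrt{\alpha_{j}}\), hence every factor \(\exp(-2\sigma_{j}L_{j})\) and every phase angle, unchanged. A minimal demonstration with made-up values:

```python
import numpy as np

c = 3.0
L, alpha = [20e-6, 30e-6], [1e-7, 2e-7]
e, freqs = [600.0, 1500.0, 14000.0], [0.5, 1.0, 2.0, 5.0, 10.0]

phi_a = forward(L, alpha, e, e_air=6.0, freqs=freqs)
phi_b = forward([c * x for x in L], [c**2 * a for a in alpha], e, e_air=6.0, freqs=freqs)
print(np.allclose(phi_a, phi_b))  # True: phase data alone cannot separate L from alpha
```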
Modulation of every sample surface by the frequencies \(\boldsymbol{\omega}\in\mathbb{R}_{+}^{m}\) leads to a total of \(m\times k\) phase angle values, i.e.

\[\left(\boldsymbol{\varphi}_{\tilde{T},\text{meas},j}\right)_{j=1,\ldots,k}.\]

For \(1<k_{1}<k\), we divide the entire batch of sample data into two sets

\[\underbrace{S_{1}=\left(\boldsymbol{\varphi}_{\tilde{T},\text{meas},j}\right)_{j=1,\ldots,k_{1}}}_{\text{Training or calibration set}}\text{ and }\underbrace{S_{2}=\left(\boldsymbol{\varphi}_{\tilde{T},\text{meas},j}\right)_{j=k_{1}+1,\ldots,k}}_{\text{Test or confirmation set}} \tag{4.6}\]

and proceed with the following steps.

STEP 1: Determine the thermal properties with set \(S_{1}\), i.e.

\[\bar{\mathbf{p}}:=\operatorname*{arg\,min}_{\mathbf{p}\in\mathbb{R}_{+}^{2n+1}}\sum_{j=1}^{k_{1}}||F_{n}(\mathbf{p},\mathbf{L}_{j})-\boldsymbol{\varphi}_{\tilde{T},\text{meas},j}||_{2}^{2}.\]

STEP 2: Feed the thermal properties from STEP 1 into the objective functional and determine the coating layer thicknesses of sample \(j\) by

\[\bar{\mathbf{L}}_{j}:=\operatorname*{arg\,min}_{\mathbf{L}\in\mathbb{R}_{+}^{n}}||F_{n}(\bar{\mathbf{p}},\mathbf{L})-\boldsymbol{\varphi}_{\tilde{T},\text{meas},j}||_{2}^{2}\]

for \(j=k_{1}+1,\ldots,k\). Evaluate the results for set \(S_{2}\) by calculating the error

\[\sum_{j=k_{1}+1}^{k}\|\bar{\mathbf{L}}_{j}-\mathbf{L}_{j}\|_{2}^{2}. \tag{4.7}\]

If the error (4.7) is sufficiently small, which of course depends on the accuracy requirements of the manufacturing process itself, the coating system setup is calibrated and tested successfully. If a higher accuracy is needed, further samples need to be processed or \(k_{1}\) needs to be adjusted. Of course, the usage of more frequencies also improves the quality of the data.

## 5 Conclusion

In this article, multi-layered coating systems have been investigated, consisting of \(n\in\mathbb{N}\) coating layers on a thermally thick substrate that are periodically illuminated by a planar, sinusoidal waveform with a fixed frequency. This illumination generates a thermal wave with the same frequency, which is reflected and transmitted at layer interfaces. The surface temperature, which can be measured by an infrared camera, is a result of the superposition of all thermal wave trains propagating through the system. We developed a new model that describes the physical process of 1D thermal wave interference in such setups. This model describes how the measured phase angle data depend on the coating layer thicknesses, the frequency used, and the thermal properties of the layers. Given measured phase angle data, we then defined the inverse operator for computing the coating layer thicknesses. We also discussed the problem of unknown thermal properties and proposed a concept to determine these in advance.

## 6 Acknowledgments

This research was funded by the European Fund for Regional Development from the Operational Program EFRE Saarland 2014-2020 with the objective "Investments in Growth and Employment".
2306.14291
Hyp-OW: Exploiting Hierarchical Structure Learning with Hyperbolic Distance Enhances Open World Object Detection
Open World Object Detection (OWOD) is a challenging and realistic task that extends beyond the scope of standard Object Detection task. It involves detecting both known and unknown objects while integrating learned knowledge for future tasks. However, the level of "unknownness" varies significantly depending on the context. For example, a tree is typically considered part of the background in a self-driving scene, but it may be significant in a household context. We argue that this contextual information should already be embedded within the known classes. In other words, there should be a semantic or latent structure relationship between the known and unknown items to be discovered. Motivated by this observation, we propose Hyp-OW, a method that learns and models hierarchical representation of known items through a SuperClass Regularizer. Leveraging this representation allows us to effectively detect unknown objects using a similarity distance-based relabeling module. Extensive experiments on benchmark datasets demonstrate the effectiveness of Hyp-OW, achieving improvement in both known and unknown detection (up to 6 percent). These findings are particularly pronounced in our newly designed benchmark, where a strong hierarchical structure exists between known and unknown objects. Our code can be found at https://github.com/boschresearch/Hyp-OW
Thang Doan, Xin Li, Sima Behpour, Wenbin He, Liang Gou, Liu Ren
2023-06-25T16:45:20Z
http://arxiv.org/abs/2306.14291v4
Hyp-OW: Exploiting Hierarchical Structure Learning with Hyperbolic Distance Enhances Open World Object Detection

###### Abstract

Open World Object Detection (OWOD) is a challenging and realistic task that extends beyond the scope of the standard Object Detection task. It involves detecting both known and unknown objects while integrating learned knowledge for future tasks. However, the level of 'unknownness' varies significantly depending on the context. For example, a tree is typically considered part of the background in a self-driving scene, but it may be significant in a household context. We argue that this external or contextual information should already be embedded within the known classes. In other words, there should be a semantic or latent structure relationship between the known and unknown items to be discovered. Motivated by this observation, we propose Hyp-OW, a method that learns and models a hierarchical representation of known items through a SuperClass Regularizer. Leveraging this learned representation allows us to effectively detect unknown objects using a Similarity Distance-based Relabeling module. Extensive experiments on benchmark datasets demonstrate the effectiveness of Hyp-OW, achieving improvements in both known and unknown detection (up to 6 points). These findings are particularly pronounced in our newly designed benchmark, where a strong hierarchical structure exists between known and unknown objects.

## 1 Introduction

Advances in Object Detection (OD) have unlocked a plethora of practical applications such as robotics Zhou et al. (2022), self-driving cars Balasubramaniam and Pasricha (2022), manufacturing Malburg et al. (2021), and medical analysis Yang and Yu (2021). Recent breakthroughs in attention-based neural network architectures, such as Deformable Transformers Zhu et al. (2021), have yielded impressive performance in these settings. However, most of these approaches assume a fixed number of classes (closed-world assumption), which is rare in reality. Continual Object Detection Menezes et al. (2023) takes a step further by incrementally adding new classes, resulting in a distribution shift in the input and the well-known phenomenon of _catastrophic forgetting_ (Kirkpatrick et al., 2017; Doan et al., 2021), where the network forgets previously learned knowledge. Open World (OW) Bendale and Boult (2015) takes these assumptions even further, introducing the detection and integration of newly discovered classes.

While the seminal work by Bendale and Boult (2015) introduced the Open World (OW) framework, further advancements by Joseph et al. (2021) extended it in two key aspects: the detection task and continual learning. However, a significant challenge within this framework lies in the absence of annotations for unknown objects, leading to biases towards known labels and potential confusion between unknown items and the background. This bias significantly impedes the accurate identification of unknown objects and presents a major hurdle in the detection process.

We can summarize previous attempts to solve this problem into three main categories. The first category includes works such as Joseph et al. (2021); Gupta et al. (2022); Zohar et al. (2022), which relied on a learned objectness score to relabel the background as potential unknowns. Another direction of research focused on clustering classes to better isolate unknowns (Wu et al., 2022b; Yu et al., 2022). Additionally, some works, like Kim et al. (2022); Wu et al.
(2022a), introduced a decoupled approach where the classification and localization heads are separated. The objective is to remove label information and instead capture the shared features that make these labelled items relevant as objects.

However, these works fail to address a crucial problem, which is defining what constitutes an "unknown" object. Currently, there is no clear definition or prior knowledge available to effectively distinguish unknowns from the background. Its interpretation varies greatly depending on the context. For example, in a driving scene, "debris on the road" could be considered an unknown object Balasubramaniam and Pasricha (2022), while in a camera surveillance context, it might be perceived as part of the background Ingle and Kim (2022). Without considering the context, these works can only learn to differentiate knowns and unknowns through low-level features such as texture or shape. As a consequence, they fail to model any hierarchical structures and similarities between known and unknown items, whether at the image level or the dataset level.

Acknowledging this context information, we argue that a hierarchical structural relationship must exist between the objects to be discovered and the known items Hosoya et al. (2022). This hierarchy is characterized by classes that share the same semantic context, belonging to the same category such as vehicles, animals, or electronics. Such hierarchical relationships enable the retrieval of common features and facilitate the discovery of unknown objects. For instance, a model trained on objects related to driving scenes can adequately detect stop signs or traffic lights but is not expected to recognize unrelated objects like a couch or any furniture. In light of this discrepancy, we propose to learn hierarchical relationships between items in order to effectively utilize the representation of known objects for the discovery of unknown items. Ideally, items belonging to the same family (or category) should be closer to each other while being further away from different families (e.g., animals versus vehicles). To capture these structures, Hyperbolic Distance (Nickel and Kiela, 2018; Park et al., 2021), which naturally maps hierarchical latent structures, such as graphs or trees, emerges as a natural distance metric. This mapping in the hyperbolic space exhibits the desirable property of capturing the affinity between unknown items and known items, thereby enhancing the detection of unknown objects.

Figure 1: **t-SNE plot of the learned class representations, with colors representing their respective categories. Our SuperClass Regularizer (right) enhances the hierarchical structure by grouping together classes from the same category while pushing apart classes from different categories.**

**Contribution** Motivated by the aforementioned literature gap, we propose a **Hyper**bolic Distance-based Adaptive Relabeling Scheme for **O**pen **W**orld Object Detection (dubbed Hyp-OW). Our contribution can be summarized in three parts:

* Hyp-OW is a simple yet effective method that learns the inherent hierarchical structure between objects, grouping items from the same category closer while pushing classes from different categories further apart through a SuperClass Regularizer (illustrated in Figure 1, right).
* We propose an Adaptive Relabeling Scheme that enhances the detection of unknown objects by leveraging the semantic similarity between known and unknown objects in the hyperbolic space.
* Our experiments demonstrate significant improvements in both unknown detection (up to \(6\%\)) and known accuracy performance (up to \(5\%\)) with Hyp-OW. These gains are particularly prominent when evaluating on our (designed) Hierarchical dataset, highlighting the advantages of our method in the presence of high inherent hierarchical structures.

## 2 Related Work

### Open World Object Detection

The OWOD framework, introduced by Joseph et al. (2021), has inspired many recent works due to its realistic and close-to-real-world setting that progressively integrates newly discovered items into the base knowledge. While the first stream of work was originally based on the Faster-RCNN model (Joseph et al., 2021; Yu et al., 2022; Wu et al., 2022a,b), more recent works have utilized Deformable Transformers due to their superior performance (Gupta et al., 2022; Zohar et al., 2022). Joseph et al. (2021) introduced ORE, a Faster-RCNN-based model that learns class prototypes using contrastive learning with Euclidean distance. However, their approach relied on a held-out validation set where unknown items are explicitly labeled to learn an energy-based model to discriminate unknown items. Yu et al. (2022) extended this setting by minimizing the overlap between the distributions of unknown and known classes. OW-DETR (Gupta et al., 2022) designed a novelty-branch head to relabel the top-k highest background scores as unknowns. These pseudo-labels relied on unmatched bounding box proposals with high backbone activation being selected as unknown objects. On the other hand, Wu et al. (2022a) decoupled the localization and classification tasks (introduced by Kim et al. (2022)) by learning a class-free head to localize objects. Recently, PROB (Zohar et al., 2022) learned a probabilistic objectness score by learning common statistics for all objects using the Mahalanobis distance (Lee et al., 2018) and considered all the remaining bounding box proposals as unknown items. During the evaluation phase, they filter out proposal bounding boxes using the latter probabilistic models.

### Class-Agnostic Object Detection

Another stream of work in the field of object detection is dubbed class-agnostic object detection, which focuses on localizing objects (Kim et al., 2022; Wu et al., 2022a; Jaiswal et al., 2021). The objective is to remove the class label information and learn a shared low-level feature representation that effectively captures the essence of an object. Kim et al. (2022) designed a pure localization head by introducing a second branch that is decoupled from the classification head. Jaiswal et al. (2021) introduced an adversarial objective loss function that penalizes label information in the encoded features. Pixel-wise class-free object detection Goncalves et al. (2022) used texture gray-level quantization to retrieve objects. Saito et al. (2022) designed a new data augmentation method that pastes an annotated object onto an object-free background. Maaz et al. (2022) leveraged language models to improve unknown detection with their Multi-Modal Vision Transformers.

### Learning Hierarchical Representations with Hyperbolic Distance

Poincaré embeddings have been widely used in the literature to learn hierarchical structures from complex symbolic or multi-relational data, which can be represented by graphs or trees, such as social networks or taxonomies (Nickel and Kiela, 2018; Law et al., 2019).
Due to its good performance, this approach has been applied to image classification as well (Khrulkov et al., 2020; Yan et al., 2021; Yue et al., 2023; Ermolov et al., 2022). For example, Yan et al. (2021) used hierarchical clustering to approximate a multi-layered tree structure representation that guides the hyperbolic distance learning process. Similarly, Liu et al. (2020) used taxonomy embeddings from GloVe (Pennington et al., 2014) to learn a finer-grained representation. Hyperbolic distance has also been used for object detection (Lang et al., 2022; Ge et al., 2022). Ge et al. (2022) were interested in learning context-object association rules by reasoning on different image scales. However, none of them leveraged the learned hyperbolic distance to retrieve unknown items for OWOD.

## 3 Background

### Problem Formulation

The OWOD framework describes the setting where a user receives over time a stream of \(T\) tasks indexed by \(t\in[1,T]\). Every task \(t\) contains \(C_{t}\in\mathbb{N}^{*}\) known classes (denoted by the set \(\mathcal{K}^{t}\)). The goal is to train an object detector module \(f\) to accurately recognize the known classes while also discovering unknown classes (denoted by the set \(\mathcal{U}^{t}\)). At the end of task \(t\), \(\Delta_{t}\) unknown classes are labelled (with an oracle) and included in the next task \(t+1\) (now containing \(\sum_{j=1}^{t}\Delta_{j}=C_{t}\) known classes). The process repeats until task \(T\), which does not contain any more unknowns. The dataset of task \(t\) is defined as \(\mathcal{D}^{t}=\{\mathcal{I}^{t},\mathcal{Y}^{t}\}\) where \(\mathcal{I}^{t}=\{\mathbf{I}_{t,1},\mathbf{I}_{t,2},...,\mathbf{I}_{t,N_{t}}\}\) are image inputs and \(\mathcal{Y}^{t}=\{\mathbf{y}_{t,1},\mathbf{y}_{t,2},...,\mathbf{y}_{t,N_{t}}\}\) are the corresponding labels (there are \(N_{t}\) images for task \(t\)). Each image \(I_{t,i}\) contains a set of annotations \(\mathbf{Y}_{t,i}=[\mathbf{l}_{t,i},\mathbf{x}_{t,i},\mathbf{y}_{t,i},\mathbf{w}_{t,i},\mathbf{h}_{t,i}]\), where \(\mathbf{l}_{t,i}\in\{0,1\}^{C}\) denotes the object classes and \([\mathbf{x}_{t,i},\mathbf{y}_{t,i},\mathbf{w}_{t,i},\mathbf{h}_{t,i}]\) are the bounding box coordinates. Throughout the training, we store items in a replay buffer \(\mathcal{M}\) with a capacity of \(m\) exemplars per class. We denote by \(\mathcal{B}\) the incoming batch. We follow the setting of OWOD Joseph et al. (2021), where a set of \(K\) exemplars of each class is stored in a replay buffer at the end of each task \(t\) (to mitigate forgetting). Those exemplars are then replayed, i.e. after task \(2\), we replay \((2\cdot K\cdot 20)\) exemplars coming from both tasks \(1\) and \(2\) (\(K\) exemplars from each task and \(20\) classes per task).

### Deformable Transformers for OWOD

We adopt Deformable Transformers (Zhu et al., 2021) as our base detector, as suggested by Gupta et al. (2022), due to their simplicity and high performance. This architecture processes the image input through a set of encoder-decoder modules to produce \(Q\) query output embeddings \(\{\mathbf{q_{i}}\}_{i=1}^{Q}\), where \(\mathbf{q_{i}}\in\mathbb{R}^{d}\) (\(d\) being the query embedding dimension). These queries are then passed to the bounding box and classification heads, which respectively localize the labeled items and predict their classes. A bipartite matching algorithm (specifically, the Hungarian algorithm (Kuhn, 1955)) is used to match the labeled ground-truth items with each query.
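As an illustration of this matching step, the following sketch (not the authors' implementation) assigns ground-truth items to queries via the Hungarian algorithm and collects the unmatched queries; the cost matrix, e.g. a weighted sum of classification and box-regression costs as in DETR-style detectors, is assumed to be precomputed:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost):
    """Match ground-truth items (rows) to queries (columns).

    cost : (num_gt, Q) matrix of matching costs.
    Returns matched (gt, query) pairs and unmatched query indices.
    """
    gt_idx, q_idx = linear_sum_assignment(cost)
    matched = set(q_idx.tolist())
    unmatched = [q for q in range(cost.shape[1]) if q not in matched]
    return list(zip(gt_idx.tolist(), q_idx.tolist())), unmatched
```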
The remaining challenge is then to determine whether unmatched queries contain potential unknowns.

### Hyperbolic Embeddings

A hyperbolic space is an \(n\)-dimensional Riemannian manifold defined as \((\mathbb{B}_{c}^{n},g^{\mathrm{M}})\) with the Poincaré ball \(\mathbb{B}_{c}^{n}=\{x\in\mathbb{R}^{n}:c\|x\|^{2}<1\}\), \(c\geq 0\) (\(c\) being the constant curvature), equipped with the Riemannian metric \(g^{\mathrm{M}}=(\lambda_{x}^{c})^{2}g^{E}\), where \(\lambda_{x}^{c}=\frac{2}{1-c\|x\|^{2}}\) and \(g^{E}=\mathbf{I}_{n}\) is the Euclidean metric tensor. The transformation from the Euclidean to the hyperbolic space is done via a bijection termed the _exponential_ mapping \(\exp_{\mathbf{b}}^{c}:\mathbb{R}^{n}\rightarrow\mathbb{B}_{c}^{n}\),

\[\exp_{\mathbf{b}}^{c}(\mathbf{x})=\mathbf{b}\oplus_{c}\left(\tanh\left(\sqrt{c}\frac{\lambda_{\mathbf{b}}^{c}\|\mathbf{x}\|}{2}\right)\frac{\mathbf{x}}{\sqrt{c}\|\mathbf{x}\|}\right) \tag{1}\]

where \(\mathbf{b}\) represents the base point. The latter is often empirically taken as \(\mathbf{b}=\mathbf{0}\) to simplify the formulas without impacting much the results (Ermolov et al., 2022). We will also adopt this value in our study. Inside this hyperbolic space, the distance between two points \(\mathbf{x},\mathbf{y}\in\mathbb{B}_{c}^{n}\) is computed as:

\[d_{hyp}(\mathbf{x},\mathbf{y})=\frac{2}{\sqrt{c}}\operatorname{arctanh}\left(\sqrt{c}\,\|-\mathbf{x}\oplus_{c}\mathbf{y}\|\right) \tag{2}\]

where the Möbius addition operation \(\oplus_{c}\) is defined as: \(\mathbf{x}\oplus_{c}\mathbf{y}=\frac{(1+2c\langle\mathbf{x},\mathbf{y}\rangle+c\|\mathbf{y}\|^{2})\mathbf{x}+(1-c\|\mathbf{x}\|^{2})\mathbf{y}}{1+2c\langle\mathbf{x},\mathbf{y}\rangle+c^{2}\|\mathbf{x}\|^{2}\|\mathbf{y}\|^{2}}\). From now on, we will denote by \(\mathbf{z_{i}}\) the projection of the query \(\mathbf{q_{i}}\) into the hyperbolic embedding space, i.e., \(\mathbf{z_{i}}=\exp_{\mathbf{b}}^{c}(\mathbf{q_{i}})\). Note that when \(c=0\), the distance boils down to the cosine similarity defined as: \(D_{cos}(\mathbf{x},\mathbf{y})=2-2\frac{\langle\mathbf{x},\mathbf{y}\rangle}{\|\mathbf{x}\|_{2}\cdot\|\mathbf{y}\|_{2}}\) (in this case no exponential mapping is needed, i.e. \(\mathbf{z_{i}}=\mathbf{q_{i}}\)).

Footnote 3: from now on, we will refer to \(\mathbf{z_{i}}\) interchangeably for both the query and its corresponding embedding \(\mathbf{q_{i}}\)

## 4 Hyp-OW

In this section, we provide a detailed explanation of each module of our proposed method. Hyp-OW can be summarized by three main components (Figure 2): Hyperbolic Metric Distance learning (Section 4.1), a SuperClass Regularizer (Section 4.2) and an Adaptive Relabeling Scheme to detect unknowns (Section 4.3).

### Metric Learning with Hyperbolic Distance

We learn feature representations in the hyperbolic embedding space using a contrastive loss. The idea is to move features belonging to the same class \(c\) closer while repelling them away from any features of a class \(j\neq c\). Let's denote by \(\mathbf{z_{i}^{c}}\) any query \(i\) matched with class \(c\in\mathcal{K}\). During the training, we maintain a replay buffer \(\mathcal{M}\) where we store \(m\) embedding features per class. For every query element \(\mathbf{z_{i}^{c}}\) of the incoming batch \(\mathcal{B}\), we sample \(k=1\) elements of the same class \(c\in\mathcal{K}\) from the replay buffer \(\mathcal{M}\) (which will be considered as the positive examples, the \(2|\mathcal{B}|-2\) remaining samples being considered as the negative examples).
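For concreteness, the Poincaré-ball operations of Eqs. (1)-(2), with the base point fixed to \(\mathbf{b}=\mathbf{0}\) as adopted above, can be transcribed directly into NumPy; the clipping constants below are numerical safeguards and not part of the formulas:

```python
import numpy as np

def mobius_add(x, y, c):
    """Mobius addition on the Poincare ball with curvature c > 0."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

def expmap0(q, c, eps=1e-12):
    """Exponential map at b = 0, Eq. (1); lambda_0^c = 2 simplifies it."""
    norm = max(np.sqrt(np.dot(q, q)), eps)
    return np.tanh(np.sqrt(c) * norm) * q / (np.sqrt(c) * norm)

def dist_hyp(x, y, c):
    """Hyperbolic distance of Eq. (2)."""
    arg = np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c))
    return (2 / np.sqrt(c)) * np.arctanh(np.clip(arg, 0.0, 1 - 1e-7))
```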
Figure 2: **Overview of each component of Hyp-OW. There are three main components: the _Hyperbolic Contrastive Loss_, which learns a hierarchical structural representation of each class; the _SuperClass Regularizer_, which models the semantic relationship among classes to ensure proximity within the same category and distance from different categories; and the _Adaptive Relabeling_ module, which utilizes the hierarchical structure to compute the hyperbolic distance between candidate proposals and known items. If this distance \(d\) is lower than a certain threshold (\(\delta\)), the proposal is relabelled as unknown.**

For simplicity, we denote by \(\mathcal{A}=\mathcal{B}\cup\mathcal{M}\) all the elements from the buffer and the batch. Every element \(\mathbf{z}_{i}^{\mathbf{c}}\in\mathcal{B}\) (respectively \(\in\mathcal{M}\)) has its positive counterpart examples \(\mathbf{z}_{i^{+}}\) of the same class \(c\) from \(\mathcal{M}\) (respectively from \(\mathcal{B}\)) and \(|\mathcal{A}|-2\) negative examples (both from \(\mathcal{B}\) and \(\mathcal{M}\)) denoted as \(\mathbf{z}_{i^{-}}\). Defining a temperature \(\tau_{1}\), the contrastive loss is then expressed as:

\[\mathscr{L}_{hyp}=-\sum_{i\in\mathcal{A},c\in\mathcal{K}}\log\frac{\exp(\frac{-d_{hyp}(\mathbf{z}_{i}^{\mathbf{c}},\mathbf{z}_{i^{+}})}{\tau_{1}})}{\sum_{i^{-}\in\mathcal{A}\setminus\{i,i^{+}\}}\exp(\frac{-d_{hyp}(\mathbf{z}_{i}^{\mathbf{c}},\mathbf{z}_{i^{-}})}{\tau_{1}})} \tag{3}\]

This loss attracts the representation of \(\mathbf{z}_{i}^{\mathbf{c}}\) closer to its positive counterpart \(\mathbf{z}_{i^{+}}\) while repelling it from the representations of other classes \(\mathbf{z}_{i^{-}}\), \(i\in\mathcal{A},c\in\mathcal{K}\).

### SuperClass Regularization

Many real-world datasets exhibit a natural hierarchical structure, where classes can be organized into categories. For instance, dogs and cats belong to the animal category, while cars and trucks belong to the vehicle category. To leverage this hierarchical information, we propose a SuperClass Regularizer (we will use SuperClass and category interchangeably throughout this study), which encourages classes within the same category to be closer in the embedding space while pushing them away from classes in different categories. Let's denote by \(\mathcal{S}_{p}\) the set of class indexes belonging to category \(p=1...P\). We approximate the category \(p\) embedding by computing the Hyperbolic Average (Khrulkov et al., 2020) (dubbed _HypAve_) of every embedding \(\{\mathbf{z}_{i}^{\mathbf{c}}\}_{i\in\mathcal{M}}\) of classes \(c\in\mathcal{S}_{p}\) from the buffer \(\mathcal{M}\), that is:

\[HypAve(\{\mathbf{z}_{i}^{\mathbf{c}}\}_{i\in\mathcal{M},c\in\mathcal{S}_{p}})=\frac{\sum_{i\in\mathcal{M},c\in\mathcal{S}_{p}}\gamma_{i}\mathbf{z}_{i}^{\mathbf{c}}}{\sum_{i\in\mathcal{M},c\in\mathcal{S}_{p}}\gamma_{i}} \tag{4}\]

where \(\gamma_{i}=\frac{1}{\sqrt{1-c\|\mathbf{z}_{i}\|^{2}}}\) are the Lorentz factors. To de-clutter the notation, we will denote by \(\overline{\mathbf{z}}_{\mathbf{p}}=HypAve(\{\mathbf{z}_{i}^{\mathbf{c}}\}_{i\in\mathcal{M},c\in\mathcal{S}_{p}})\) the Hyperbolic Average feature representation of category \(p\).
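A direct transcription of the Hyperbolic Average in Eq. (4) as a Lorentz-factor-weighted mean might look as follows:

```python
import numpy as np

def hyp_ave(Z, c):
    """Hyperbolic Average of Eq. (4) for embeddings Z of shape (N, d)."""
    gamma = 1.0 / np.sqrt(1.0 - c * np.sum(Z * Z, axis=1))  # Lorentz factors
    return (gamma[:, None] * Z).sum(axis=0) / gamma.sum()
```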
For every element \(\mathbf{z}_{i}^{\mathbf{c}}\) of a batch \(\mathcal{B}\), we sample its category embedding \(\overline{\mathbf{z}}_{\mathbf{p}}\) (\(c\in\mathcal{S}_{p}\)) from the buffer \(\mathcal{M}\) and use the same contrastive loss:

\[\mathscr{L}_{reg}=\sum_{i\in\mathcal{A},c\in\mathcal{S}_{p}}-\log\frac{\exp(\frac{-d_{hyp}(\mathbf{z}_{i}^{\mathbf{c}},\overline{\mathbf{z}}_{\mathbf{p}})}{\tau_{2}})}{\sum_{k\neq p}\exp(\frac{-d_{hyp}(\mathbf{z}_{i}^{\mathbf{c}},\overline{\mathbf{z}}_{\mathbf{k}})}{\tau_{2}})} \tag{5}\]

This loss encourages the features \(\mathbf{z}_{i}^{c}\) of each class \(c\) to be closer to their corresponding category embedding \(\overline{\mathbf{z}}_{p}\), while simultaneously pushing them away from the embeddings of other categories \(\overline{\mathbf{z}}_{k}\), \(k\neq p\).

### Adaptive Relabeling of Unknowns with Hyperbolic Distance

We introduce our Adaptive Relabeling module, which _dynamically adapts_ to the batch statistics to effectively retrieve unknown items (leveraging the learned hyperbolic embedding). Recall that for the classification task, we have two types of queries: matched queries with ground-truth labels (found with the Hungarian Algorithm), and unmatched queries that may contain unknown items. The matched queries will establish a threshold criterion to select unknown queries from the unmatched set. Let's denote by \(\mathbf{z}^{m}\) (respectively \(\mathbf{z}^{\bar{m}}\)) a query from batch \(\mathcal{B}\) that is matched to a ground truth label (respectively that is not matched with any ground truth label). Next, we denote by \(\underline{\mathbf{z}}_{\mathbf{c}}\) the Hyperbolic Average of class \(c\in\mathcal{K}\) computed from samples of the buffer \(\mathcal{M}\) as

\[\underline{\mathbf{z}}_{\mathbf{c}}=HypAve(\{\mathbf{z_{i}^{c}}\}_{i\in\mathcal{M}})=\dfrac{\sum_{i\in\mathcal{M}}\gamma_{i}\mathbf{z_{i}^{c}}}{\sum_{i\in\mathcal{M}}\gamma_{i}},\]

which can be seen as the centroid of each class \(c\) in the hyperbolic embedding space.

Footnote 5: we differentiate it from \(\overline{\mathbf{z}}_{\mathbf{p}}\) with an underline to distinguish the Hyperbolic Average of a class from that of a category

Next, we define an important quantity: \(\delta_{\mathcal{B}}=\max\limits_{m\in\mathcal{B},c\in\mathcal{K}}d_{hyp}(\mathbf{z}^{m},\underline{\mathbf{z}}_{\mathbf{c}})\). Intuitively, \(\delta_{\mathcal{B}}\) represents the highest distance from any matched query of the batch \(\mathcal{B}\) to all centroids \(\underline{\mathbf{z}}_{\mathbf{c}}\), \(c\in\mathcal{K}\), from the replay buffer \(\mathcal{M}\). The latter will serve as a threshold to relabel every unmatched query \(\mathbf{z}^{\bar{m}}\) as unknown if:

\[\min\limits_{c\in\mathcal{K}}d_{hyp}(\mathbf{z}^{\bar{m}},\underline{\mathbf{z}}_{\mathbf{c}})\leq\delta_{\mathcal{B}} \tag{6}\]

The underlying idea is that if any unmatched query \(\mathbf{z}^{\bar{m}}\) has a distance to its closest centroid smaller than \(\delta_{\mathcal{B}}\), it is likely to be an unknown. It will then be relabeled accordingly and forwarded to the classification head.

**Overall loss** All the aforementioned losses are finally optimized together as:

\[\mathcal{L}=\mathcal{L}_{cls}+\mathcal{L}_{bbox}+\alpha\mathcal{L}_{hyp}+\beta\mathcal{L}_{reg} \tag{7}\]

where \(\alpha,\beta\geq 0\) are coefficients controlling the Hyperbolic and Regularizer importance, respectively.
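Putting the pieces together, the threshold \(\delta_{\mathcal{B}}\) and the relabeling rule of Eq. (6) admit a compact sketch, reusing `dist_hyp` from the earlier snippet; embeddings and class centroids are assumed to be given as arrays:

```python
def adaptive_relabel(matched, unmatched, centroids, c):
    """Adaptive Relabeling, Eq. (6), as a sketch.

    matched   : (M, d) matched query embeddings of the current batch.
    unmatched : (U, d) unmatched query embeddings.
    centroids : (K, d) per-class Hyperbolic Averages from the buffer.
    Returns the indices of unmatched queries relabeled as unknown.
    """
    # delta_B: largest matched-query-to-centroid distance in the batch
    delta_B = max(dist_hyp(z, zc, c) for z in matched for zc in centroids)
    return [u for u, z in enumerate(unmatched)
            if min(dist_hyp(z, zc, c) for zc in centroids) <= delta_B]
```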
## 5 Experiments

In this section, we begin by describing our experimental setup. We then present comparative results against benchmark baselines, followed by an in-depth ablation analysis of each component of Hyp-OW. Due to space limitations, we defer detailed information to Appendix Sections C and D.

### Experimental Setup

**Datasets** We consider two benchmarks from the literature: the OWOD Split Joseph et al. (2021) and the OW-DETR Split Gupta et al. (2022). While the latter (OW-DETR Split) strictly separates superclasses across tasks, the first (OWOD) has mild semantic overlap between knowns and unknowns across tasks (see Appendix A). To validate our hypothesis regarding the semantic relationship between knowns and unknowns, we introduce a third dataset called the _Hierarchical Split_. This dataset ensures that each task includes at least one class from each category, promoting a higher level of semantic similarity. This will be discussed in paragraph 5.2. Each dataset is defined by four tasks \(t=1,2,3,4\), containing 20 labelled classes each, for a total of 80 classes. When task \(t\) starts, only the labels of classes belonging to that task are revealed. For instance, task \(1\) only contains labels of classes from 0 to 19, while task \(2\) only contains labels of classes from 20 to 39, and so on.

**Implementation Details** We use Deformable DETR (Zhu et al., 2021), pretrained in a self-supervised manner (DINO Caron et al. (2021)) on Resnet-50 (He et al., 2016), as our backbone. The number of deformable transformer encoder and decoder layers is set to \(6\). The number of queries is set to \(Q=100\) with a dimension \(d=256\). During inference time, the top-100 high-scoring queries per image are used for evaluation. Additional details are provided in Appendix Section B.

**Metrics and Baselines** Following current metrics used for OWOD, we use mean average precision (mAP) for known items, while U-Recall is the main metric used to quantify the unknown detection quality of each method Gupta et al. (2022); Wu et al. (2022); Zohar et al. (2022); Maaz et al. (2022); Yu et al. (2022). An additional metric is discussed in Table 9. We consider the following baselines from the literature: ORE-EBUI (Joseph et al., 2021), UC-OWOD (Wu et al., 2022), OCPL (Yu et al., 2022), 2B-OCD (Wu et al., 2022), OW-DETR (Gupta et al., 2022) and PROB (Zohar et al., 2022).

### Benchmark Results

**Dataset Similarity** To gain insights into the structure of each dataset, we introduce a semantic similarity measure based on GloVe's embedding Pennington et al. (2014) (defined in Appendix Section A.1). This measure quantifies the similarity overlap between known and unknown items, with higher values indicating larger overlap. The three splits, OW-DETR Split (Low regime), OWOD Split (Medium regime), and Hierarchical Split (High regime), exhibit a monotonic ranking in terms of similarity, with respective values of \(0.27\), \(0.33\), and \(0.41\). This serves as a starting point for evaluating baseline methods under various scenarios.

**Unknown Detection (U-Recall)** Table 1 shows the high performance gain of Hyp-OW over PROB on the Medium and High regimes of \(3\%\) on average. This highlights the utility of learning hierarchically structured representations and retrieving unknowns based on their similarity with known objects, as opposed to PROB, which learns a single mean representation for all objects. For the Low regime, our method performs on par with PROB except for task 1, which shows a surprising improvement of \(6\) points. A possible explanation may come from the nature of the Object Detection (OD) task, where there can be significant overlap between bounding boxes. This encourages the model to learn classes that frequently co-occur, such as "person" and "backpack" or "teddy bear" and "bicycle" (see Figure 3).
We provide qualitative and quantitative explanations in Appendix Section C.

**Known Accuracy (mAP)** In terms of known accuracy, Hyp-OW surpasses the baseline benchmarks on all tasks of the Hierarchical Split and shows notable performance for the last two tasks of the OW-DETR Split. This can be credited to the learned structural hierarchy that groups together classes of the same category (see t-SNE in Figure 1, middle and right).

\begin{table}
\begin{tabular}{c|l|c c||c c|c c|c}
 & & \multicolumn{2}{c||}{**Task 1**} & \multicolumn{2}{c|}{**Task 2**} & \multicolumn{2}{c|}{**Task 3**} & **Task 4** \\
\cline{3-9}
 & & U-Recall (\(\uparrow\)) & mAP (\(\uparrow\)) & U-Recall (\(\uparrow\)) & mAP (\(\uparrow\)) & U-Recall (\(\uparrow\)) & mAP (\(\uparrow\)) & mAP (\(\uparrow\)) \\
\hline
\multirow{4}{*}{OW-DETR Split} & ORE-EBUI & 1.5 & 61.4 & 3.9 & 40.6 & 3.6 & 33.7 & 31.8 \\
 & OW-DETR & 5.7 & 71.5 & 6.2 & 43.8 & 6.9 & 38.5 & 33.1 \\
 & PROB & 17.6 & **73.4(+0.7)** & 22.3 & **50.4** & 24.8 & 42.0 & 39.9 \\
 & Hyp-OW (Ours) & **23.9(+6.3)** & 72.7 & **23.3(+1.0)** & **50.6** & **25.4** & **46.2(+4.2)** & **44.8(+4.9)** \\
\hline \hline
\multirow{7}{*}{OWOD Split} & ORE-EBUI & 4.9 & 56.0 & 2.9 & 39.4 & 3.9 & 29.7 & 25.3 \\
 & UC-OWOD & 2.4 & 50.7 & 3.4 & 8.7 & 16.3 & 24.6 & 23.2 \\
 & OCPL & 8.26 & 56.6 & 7.65 & 39.1 & 11.9 & 30.7 & 26.7 \\
 & 2B-OCD & 12.1 & 56.4 & 9.4 & 38.5 & 11.6 & 29.2 & 25.8 \\
 & OW-DETR & 7.5 & 59.2 & 6.2 & 42.9 & 5.7 & 30.8 & 27.8 \\
 & PROB & 19.4 & 59.5 & 17.4 & **44.0** & 19.6 & 36.0 & 31.5 \\
 & Hyp-OW (Ours) & **23.5(+4.1)** & 59.4 & **20.6(+3.2)** & **44.4** & **26.3(+6.7)** & **36.8** & **33.6(+6.2)** \\
\hline \hline
\multirow{3}{*}{Hierarchical Split} & OW-DETR & 7.0 & 47.3 & 11.0 & 38.6 & 8.8 & 38.3 & 38.2 \\
 & PROB & 29.4 & **49.6** & 43.9 & 42.9 & 52.7 & 41.3 & 41.0 \\
\cline{1-1}
 & Hyp-OW (Ours) & **34.9(+5.5)** & **49.9** & **47.5(+3.6)** & **45.5(+2.6)** & **55.2(+2.5)** & **44.3(+3.1)** & **43.9(+2.9)** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **State-of-the-art comparison on the three splits for unknown detection (U-Recall) and known accuracy (mAP).** Hyp-OW improves significantly the unknown detection (U-Recall) for the medium and high regimes and known detection (mAP) for the low regime. Task 4 does not have U-Recall since all 80 classes are known at this stage.

Figure 3: **Bounding boxes overlap.** In the OD task, the high overlap between bounding boxes of frequently co-occurring objects can influence the model to learn their associations and correlations.

### Ablation Analysis

We now provide an in-depth understanding of Hyp-OW by removing each component one by one and observing its direct impact (illustrated quantitatively in Table 2 for the Hierarchical Split).

**Adaptive Relabeling:** This module uses Eq. (6) to relabel unmatched bounding boxes as unknowns. To assess the impact of this relabeling strategy, we compare it with an alternative approach used by PROB Zohar et al. (2022), where all unmatched queries are labeled as unknowns. Although the decrease in U-Recall is marginal, we observe a significant reduction in known accuracy (mAP). This performance degradation can be attributed to the over-prediction of patches as unknowns, which results in misclassification of known objects. Heatmaps in Figures 10 and 11 (Appendix) demonstrate the effectiveness of our module, where we observe that unknowns belonging to the same category as knowns (sharing the same color) exhibit a lower Hyperbolic Distance.
**SuperClass Regularizer:** By setting \(\beta=0\), we no longer enforce the grouping of items from the same category in the hyperbolic space (compare the t-SNE plots in Figure 1). As a result, we observe a reduction in U-Recall of \(2.9\), \(0.4\), and \(2.3\) points, respectively (Table 2, second line). The heatmap in Figure 4 illustrates the hyperbolic distance from each class to every category's embedding (computed using Eq. (4)), with lighter colors indicating smaller distances. With our Regularizer (right plot), we observe a wider range of values, spanning from \(0.7\) to \(2.30\), compared to a smaller range of \(0.78\) to \(1.2\) without the Regularizer. This highlights the effect of our Regularizer, which pushes classes from different categories apart (shown as dark colors in the right plot) while bringing classes of similar categories closer together. A more detailed plot can be found in Appendix Figure 12.

## 6 Conclusion

The Open World Object Detection framework presents a challenging and promising setting, encompassing crucial aspects such as lifelong learning and unknown detection. In our work, we have emphasized the lack of a clear definition of unknowns and the need for a hierarchical or semantic relationship between known and unknown classes. This led us to propose Hyp-OW, which focuses on learning and modeling the structural hierarchy within the dataset, which is then utilized for unknown retrieval. Extensive experiments demonstrate significant improvements of Hyp-OW for both known and unknown detection (up to 6 points), particularly in the presence of an inherent hierarchy between classes.

## Acknowledgement

We would like to thank (in no particular order): Jiajing Guo, Shabnam Ghaffarzadegan, Yulian Guo, Sharath Gopal, Sanbao Bu and Jorge Ono for useful discussions and feedback throughout the work.
2303.09378
A phase-field model for non-small cell lung cancer under the effects of immunotherapy
Formulating tumor models that predict growth under therapy is vital for improving patient-specific treatment plans. In this context, we present our recent work on simulating non-small cell lung cancer (NSCLC) in a simple, deterministic setting for two different patients receiving an immunotherapeutic treatment. At its core, our model consists of a Cahn-Hilliard-based phase-field model describing the evolution of proliferative and necrotic tumor cells. These are coupled to a simplified nutrient model that drives the growth of the proliferative cells and their decay into necrotic cells. The applied immunotherapy decreases the proliferative cell concentration. Here, we model the immunotherapeutic agent concentration in the entire lung over time by an ordinary differential equation (ODE). Finally, reaction terms provide a coupling between all these equations. By assuming spherically symmetric tumor growth and constant nutrient inflow, we simplify this full 3D cancer simulation model to a reduced 1D model. We can then resort to patient data gathered from computed tomography (CT) scans over several years to calibrate our model. For the reduced 1D model, we show that our model can qualitatively describe observations during immunotherapy by fitting our model parameters to existing patient data. Our model covers cases in which the immunotherapy is successful and limits the tumor size, as well as cases predicting a sudden relapse, leading to exponential tumor growth. Finally, we move from the reduced model back to the full 3D cancer simulation in the lung tissue. Thereby, we show the predictive benefits a more detailed patient-specific simulation including spatial information could yield in the future.
Andreas Wagner, Pirmin Schlicke, Marvin Fritz, Christina Kuttler, J. Tinsley Oden, Christian Schumann, Barbara Wohlmuth
2023-03-16T15:07:26Z
http://arxiv.org/abs/2303.09378v1
A phase-field model for non-small cell lung cancer under the effects of immunotherapy

## Abstract

Formulating tumor models that predict growth under therapy is vital for improving patient-specific treatment plans. In this context, we present our recent work on simulating non-small cell lung cancer (NSCLC) in a simple, deterministic setting for two different patients receiving an immunotherapeutic treatment. At its core, our model consists of a simple Cahn-Hilliard-based phase-field model describing the evolution of proliferative and necrotic tumor cells. These are coupled to a simple nutrient model that drives the growth of the proliferative cells and their decay into necrotic cells. A single scalar value represents the immunotherapeutic agents in the entire lung, which decreases the proliferative cell concentration during therapy. An ordinary differential equation (ODE) model describes their evolution. Finally, reaction terms provide a coupling between all these equations. By assuming spherically symmetric tumor growth and constant nutrient inflow, we simplify this full 3D cancer simulation model to a reduced 1D model. We can then resort to patient data gathered from computed tomography (CT) scans over several years to verify our model. For the reduced 1D model, we show that our model can qualitatively describe observations during immunotherapy by fitting our model parameters to existing patient data. Our model covers cases in which the immunotherapy is successful and limits the tumor size, as well as cases predicting a sudden relapse, leading to exponential tumor growth. We then move from the reduced model back to the full 3D cancer simulation in the lung tissue. Thereby, we show the predictive benefits a more detailed patient-specific simulation including spatial information could yield in the future.

## Author summary

Lung cancer is one of the deadliest diseases, with low long-term survival rates. Its treatment is still very heuristic, since patients respond differently to the same treatment plans. Therefore, patient-specific models for predicting tumor growth and the treatment response are necessary for clinicians to make informed decisions about the patient's therapy and to avoid a trial-and-error-based approach. We make a small step in that direction by introducing a model for simulating cancer growth and its treatment inside a 3D lung geometry. In this model, we represent tumor cells by a volume fraction field that varies over space and time. We describe their evolution by a system of partial differential equations, which include patient- and treatment-specific parameters capturing the different responses of patients to the therapies. Our simulation results are compared to clinical data and show that we can quantitatively describe the tumor's behavior with a suitable parameter set. This enables us to change therapies and analyze how these changes could have impacted the patient's health.

## Introduction

A major challenge of mathematical oncology is predicting the growth of tumors [1]. Cancer is a class of diseases characterized by numerous point mutations in the genome that result in the uncontrolled growth and spread of cells. Overviews of its biological details are, e.g., presented in [2, 3]. The body's immune system can suppress tumor growth by inhibiting cell growth or by destroying cancer cells.
On the other hand, it can also promote tumor progression by selecting tumor cells that are better able to survive in an immunocompetent host or by establishing conditions within the tumor microenvironment that facilitate tumor outgrowth [4, 5]. Immunotherapy attempts to boost the body's immune system and immune responses against cancerous cells to eliminate them, and understanding this ability has revolutionized the field of oncology [6]. For a variety of cancer types, immunotherapy has been proven to feature a significant clinical benefit in patients with advanced stages of cancer and is, as of today, well-established as a standard treatment method [7, 8, 9]. However, it is extremely difficult to accurately identify spatial tumor growth on a patient-specific level, especially under a treatment plan [10, 11]. A considerable variety of mathematical models has helped to improve the understanding of biological principles in cancer growth and treatment response [12, 13, 14, 15, 16] and their predictive power [17, 18]. Globally, lung cancer is the leading cancer-related mortality cause. Roughly 85% of all lung cancer cases are non-small cell lung cancer (NSCLC). Its five-year survival probability is approximately 22% [19].

For an overview of how to model spatial tumor growth, including continuum, discrete, and hybrid models, we refer to the reviews in [20, 21]. One class of continuum models relies on simple reaction-diffusion equations and elastic models for the mass effect [22, 23, 24]. We will rely on phase-field models commonly used for modeling cell dynamics, since they allow us to model the observed cell-to-cell adhesion [25] between tumor cells by energy terms. Simple two-species models consisting of a phase-field model for the tumor and a nutrient equation of reaction-diffusion type are introduced and analyzed in [26, 27]. Models including increasingly more complicated flow models are given in [28, 29, 30, 31, 32]. A more general theoretical approach to multispecies models can be found in [33, 34, 35, 36, 37]. The number of modeled species and fields strongly depends on the choice of the particular studied effects. More specialized large models can be found in [38, 39, 40, 41, 42]. Regarding spatial models including cancer therapy, [43] introduces a hybrid model to study the impact of different chemo- and radiation therapies on tumor cells. The chemotherapy of breast cancer with a drug-delivery model is discussed in [44], and with a reaction-diffusion model in [32, 45]. In [46], a continuum model specialized in radiation therapy for brain tumors is discussed. For prostate cancer, a combination of chemotherapy with antiangiogenic therapy and the optimization of the treatment are given in [47]. Chemotherapy is also included in the multispecies phase-field model approach of [48].

The present work aims to develop a mathematical model for the spatial growth behavior of solid tumors in NSCLC in a simple, deterministic setting. It addresses the effects of immunotherapy application and allows the description of its influences on the tumor's spatial structure. The model framework is applied to two data sets acquired from clinical patients that have shown qualitatively different therapy outcomes. Analysis of the model sheds light on the corresponding parameter relations that determine different clinical outcomes; these relations could potentially be estimated in a prognostic setting to improve the prediction of clinical outcomes and therapy choices.
Here, we consider passive immunotherapy, which uses monoclonal antibodies to improve the immune response by regulating T-cell activity in the effector phase of the immune response via the so-called PD-1/PD-L1 pathway. The downregulation usually caused by PD-1 activation prevents collateral damage during an immune response [49, 50]. However, tumor cells can impair this pathway by expressing the corresponding ligands PD-L1 and PD-L2, which bind to the T-cell's PD-1 receptor and inactivate the T-cell to decrease the immune response towards the tumor cells [51, 52, 49, 53]. Cells expressing the mentioned ligands are targets for, e.g., the drugs Nivolumab and Pembrolizumab, with which patients of our clinical data set were treated.

## Model

Our tumor model consists of the volume fraction of tumor cells \(\phi_{T}\), which we divide into the two cell species of proliferative tumor cells \(\phi_{P}\) and necrotic tumor cells \(\phi_{N}\), such that \(\phi_{T}=\phi_{P}+\phi_{N}\). To keep the model as simple as possible, we do not explicitly model hypoxic cell species. Furthermore, we introduce the nutrient concentrations \(\phi_{\sigma,v}\) and \(\phi_{\sigma,i}\) inside the vasculature and interstitial space. The concentration of immunotherapeutic agents is given by \(\phi_{\tau}\). We will first start with the full 3D model and then introduce its simplifications for a 1D model.

**The full 3D-Model.** We use a generalized Cahn-Hilliard model as in [38, 41] with reaction terms to describe the evolution of proliferative cells. By introducing the chemical potential \(\mu_{P}\), the model is characterized by the fourth-order equation given as

\[\begin{split}\partial_{t}\phi_{P}&=\nabla\cdot(c_{m}m_{P}(\phi_{P},\phi_{T})\nabla\mu_{P})+S_{P}(\phi_{P},\phi_{\sigma,i},\phi_{\tau})\quad\text{ in }\Omega,\\ \mu_{P}&=\partial_{\phi_{P}}\Psi(\phi_{P},\phi_{T})-\varepsilon_{P}^{2}\Delta\phi_{P}-\chi\phi_{\sigma,i}\quad\text{ in }\Omega,\end{split} \tag{1}\]

with boundary conditions

\[\frac{\partial\phi_{P}}{\partial n}=0,\quad\text{ and }\quad\frac{\partial\mu_{P}}{\partial n}=0\quad\text{ on }\partial\Omega,\]

where \(\Psi(\phi_{P},\phi_{T})=\tilde{\Psi}(\phi_{P})+\tilde{\Psi}(\phi_{T})\), with \(\tilde{\Psi}(\phi)=c_{\Psi}\phi^{2}(1-\phi)^{2}\), is a double-well potential with scaling factor \(c_{\Psi}\), and \(m_{P}(\phi_{P},\phi_{T})=\phi_{P}^{2}(1-\phi_{T})^{2}\) with the constant \(c_{m}>0\) describes the cell mobility of the tumor cells. The reaction term, \(S_{P}\), is given by

\[\begin{split} S_{P}(\phi_{P},\phi_{\sigma,i},\phi_{\tau})=&\lambda_{P}^{pro}\phi_{\sigma,i}\left(\phi_{P}\right)^{\lambda}\ln\left(\frac{1+\epsilon_{g}}{\phi_{P}+\epsilon_{g}}\right)-\left(\lambda_{\tau}^{\text{eff}}\frac{1}{\omega}\right)\phi_{P}\frac{\phi_{\tau}}{\phi_{\tau}+\phi_{\tau}^{50}}\\ &-\lambda_{PN}\mathcal{H}(\sigma_{PN}-\phi_{\sigma,i})\phi_{P}.\end{split} \tag{2}\]

The first term on the right-hand side of (2) is a Gompertzian growth term [54] acting on the tumor interface, with a small parameter \(\epsilon_{g}\) for regularization. We assume that the growth is proportional to the nutrient concentration and scale it by the parameter \(\lambda_{P}^{pro}\), which has to be calibrated beforehand. For increasing \(\lambda\geq 1\), the growth of smaller tumor cell concentrations is penalized, which inhibits the tumor from spreading over the entire lung. The second term on the right-hand side of (2) models the decay of tumor cells due to immunotherapy and acts on the tumor volume.
Here, \(\frac{1}{\omega}\) is the ratio of the lung weight to the body weight, \(\lambda_{\tau}^{\text{eff}}\) describes the patient-specific effect of the immunotherapy on the tumor, and \(\phi_{\tau}^{50}\) is the drug concentration for the half-maximal response. The last term of (2) models a decay of proliferative cells due to low nutrient concentrations with rate \(\lambda_{PN}\). It is activated by the Heaviside function, \(\mathcal{H}\), if the nutrient concentration becomes smaller than \(\sigma_{PN}\). Finally, we have a nutrient-dependent term in the equation for \(\mu_{P}\), which describes the chemotactic effect, reflecting that tumor cell growth follows nutrient gradients.

The necrotic cell field is described by a simple spatial ODE model:

\[\partial_{t}\phi_{N}=S_{N}(\phi_{\sigma,i},\phi_{P})\quad\text{with}\quad S_{N}(\phi_{\sigma,i},\phi_{P})=\lambda_{PN}\mathcal{H}(\sigma_{PN}-\phi_{\sigma,i})\phi_{P}, \tag{3}\]

where the growth term on the right-hand side mirrors the decay terms of the proliferative cells. For the nutrients, we follow the approach in [55] with a stationary diffusion model, and get

\[-\nabla\cdot\kappa_{v}\nabla\phi_{\sigma,v}=-\xi_{va}\phi_{\sigma,v}-\eta_{vi}\phi_{\sigma,v}+\eta_{iv}\phi_{\sigma,i}+\beta\delta_{\Gamma}, \tag{4}\]

and

\[-\nabla\cdot\kappa_{i}\nabla\phi_{\sigma,i}=\eta_{vi}\phi_{\sigma,v}-\eta_{iv}\phi_{\sigma,i}-\alpha_{H}(1-\phi_{P})\phi_{\sigma,i}-\alpha_{P}\phi_{P}\phi_{\sigma,i}\qquad\text{in }\Omega, \tag{5}\]

with zero Neumann boundary conditions: \(\frac{\partial\phi_{\sigma,v}}{\partial n}=0\) and \(\frac{\partial\phi_{\sigma,i}}{\partial n}=0\) on \(\partial\Omega\). Static nutrient models are used in [56, 57] and are based on the observation that the tumor evolves on a timescale of days and weeks, while the nutrient dynamics take place on a much faster time scale. Here, \(-\xi_{va}\phi_{\sigma,v}\) models the coupling with the arteries, while \(-\eta_{vi}\phi_{\sigma,v}\) and \(\eta_{iv}\phi_{\sigma,i}\) model the nutrient exchange between the vasculature and interstitial space. The source term couples the equation to a 2D manifold \(\Gamma\), approximating the surface of arteries and bronchial tubes, by a delta distribution. The amount of nutrients entering the system from outside is controlled by the parameter \(\beta>0\). The nutrient consumption by the healthy cells is modeled by the decay term \(-\alpha_{H}(1-\phi_{P})\phi_{\sigma,i}\), while \(-\alpha_{P}\phi_{P}\phi_{\sigma,i}\) models the enhanced consumption by proliferative cells. Finally, \(\kappa_{v}\) and \(\kappa_{i}\) are the vessel's and interstitial space's scalar diffusion constants.

A simple ODE model governs the immunotherapy concentration:

\[\partial_{t}\phi_{\tau}=-\frac{\ln(2)}{t_{1/2}}\phi_{\tau}+\frac{N_{A}}{M_{\tau}}d_{\tau}\mathbf{1}_{t\in I_{\tau}}, \tag{6}\]

where the first term describes the decay, and the second term is the drug influx due to injections. The treatment plan is described by the set \(I_{\tau}\subset[0,\infty)\), which contains the times at which the drug is administered. Hence, the indicator function acts as an activation function for the source term. Furthermore, \(N_{A}=6.022140857\cdot 10^{23}\,\mathrm{mol}^{-1}\) denotes the Avogadro constant, \(M_{\tau}\) is the molar mass of the drug, and \(d_{\tau}=0.24\,\mathrm{g}\) is the administered dosage of the antibody. As done in [16], we assume that the immunotherapeutic concentration follows the behavior of a Hill-Langmuir equation. All the model parameters are summarized on the left of Table 1.
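Since Eq. (6) is a scalar linear ODE with a piecewise-constant source, it can be integrated with a simple explicit scheme. The sketch below assumes administration windows of length \(1/24\) day, matching the treatment plans specified later, and uses the units of Table 1:

```python
import numpy as np

def drug_concentration(t_end, dt, t_half, d_tau, M_tau, dose_times,
                       window=1.0 / 24.0):
    """Explicit Euler integration of the drug ODE, Eq. (6).

    dose_times : administration times in days; the source is active on
                 [t_k, t_k + window], mirroring the indicator 1_{t in I_tau}.
    Units as in Table 1: d_tau in g, M_tau in g/mol, t_half in days.
    """
    N_A = 6.022140857e23               # Avogadro constant [1/mol]
    n_steps = int(round(t_end / dt))   # dt should resolve the dosing window
    phi = np.zeros(n_steps + 1)
    for k in range(n_steps):
        t = k * dt
        dosing = any(tk <= t <= tk + window for tk in dose_times)
        source = (N_A / M_tau) * d_tau if dosing else 0.0
        phi[k + 1] = phi[k] + dt * (-np.log(2.0) / t_half * phi[k] + source)
    return phi

# e.g. a biweekly schedule from therapy start t_S:
# dose_times = t_S + 14.0 * np.arange(num_doses)
```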
**The reduced 1D-Model.** Since the full 3D model has high computational costs, we assume a spherically symmetric tumor to calibrate our model parameters. Thus, we begin with a model in which we assume a radial dependency of \(\phi_{P}=\phi_{P}(r)\) and \(\phi_{\sigma,v}=\phi_{\sigma,v}(r)\), which immediately implies \(\mu_{P}=\mu_{P}(r)\), \(\phi_{\sigma,i}=\phi_{\sigma,i}(r)\) and \(\phi_{N}=\phi_{N}(r)\). As a first step, we assume spherical symmetry and, hence, obtain the following simplified model in radial coordinates:

\[\partial_{t}\phi_{P}=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left((c_{m}m_{P}(\phi_{P},\phi_{T}))r^{2}\frac{\partial\mu_{P}}{\partial r}\right)+S_{P}(\phi_{P},\phi_{\sigma,i},\phi_{\tau})\quad\text{in }\Omega, \tag{7}\]
\[\mu_{P}=\partial_{\phi_{P}}\Psi(\phi_{P},\phi_{T})-\epsilon_{P}^{2}\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial\phi_{P}}{\partial r}\right)-\chi\phi_{\sigma,i}\quad\text{in }\Omega, \tag{8}\]

with boundary conditions

\[\frac{\partial\phi_{P}}{\partial r}=0,\quad\text{and}\quad\frac{\partial\mu_{P}}{\partial r}=0\quad\text{on }\partial\Omega. \tag{9}\]

The equations for the necrotic cells and immunotherapeutic agents stay unchanged. For the nutrient model, we assume that the nutrient concentration \(\phi_{\sigma,v}\) can be approximated by a pre-given constant value and use

\[-\kappa_{i}\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial\phi_{\sigma,i}}{\partial r}\right)=\eta_{vi}\phi_{\sigma,v}-\eta_{iv}\phi_{\sigma,i}-\alpha_{H}(1-\phi_{P})\phi_{\sigma,i}-\alpha_{P}\phi_{P}\phi_{\sigma,i}\quad\text{in }\Omega \tag{10}\]

for the nutrients in the interstitial space. We remark that the \(r^{-2}\) term does not pose any numerical issues for finite elements since we have to apply the spherical volume measure \(4\pi r^{2}\mathrm{d}r\) when we bring the system into its weak form and, thus, the \(r\)-terms cancel.
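For illustration, the stationary radial equation (10) is linear in \(\phi_{\sigma,i}\) and can be solved directly. The following finite-difference sketch on a uniform grid is a simplified stand-in for the finite element discretization used in this work; the boundary closures (symmetry at \(r=0\), zero flux at the outer radius) are our own simplifying assumptions:

```python
import numpy as np

def nutrient_1d(phi_P, R, kappa_i, eta_vi, eta_iv, alpha_H, alpha_P, phi_v):
    """Finite-difference solve of the stationary radial Eq. (10).

    phi_P : nodal proliferative-cell values on a uniform grid over [0, R].
    """
    n = len(phi_P)
    h = R / (n - 1)
    r = np.linspace(0.0, R, n)
    decay = eta_iv + alpha_H * (1.0 - phi_P) + alpha_P * phi_P
    A = np.zeros((n, n))
    b = np.full(n, eta_vi * phi_v)
    for k in range(1, n - 1):
        rp = (r[k] + h / 2) ** 2 / r[k] ** 2   # conservative flux weights
        rm = (r[k] - h / 2) ** 2 / r[k] ** 2
        A[k, k - 1] = -kappa_i * rm / h ** 2
        A[k, k + 1] = -kappa_i * rp / h ** 2
        A[k, k] = kappa_i * (rm + rp) / h ** 2 + decay[k]
    # r = 0: the spherical Laplacian tends to 3*phi'' with phi'(0) = 0
    A[0, 0], A[0, 1] = 6 * kappa_i / h ** 2 + decay[0], -6 * kappa_i / h ** 2
    # r = R: one-sided zero-flux closure
    A[-1, -1], A[-1, -2] = kappa_i / h ** 2 + decay[-1], -kappa_i / h ** 2
    return np.linalg.solve(A, b)
```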
## Data

Datasets from two clinical patients were available to verify our model. The examined patient data consist of anonymized volumetric measurements of primary tumors and metastases in patients with NSCLC and were acquired from routinely generated CT slices and computations of the secondary appraisal environment _syngo.CT_ Lung Computer Aided Detection (LCAD) workflow of Siemens Healthineers, provided in the _syngo.Via_ framework.1 As measurements, we have the manually segmented tumor volume, the response evaluation criteria in solid tumors (RECIST v1.1, [58]), and the bidimensional World Health Organization (WHO) criteria, all of which indicate the tumor response to therapy in different defined ways.

Footnote 1: The patients were treated in the Clinic of Pneumology, Thoracic Oncology, Sleep and Respiratory Critical Care of the Klinikverbund Allgäu in Germany. The ethics commission of BLAEK (Ethik-Kommission der Bayerischen Landesärztekammer), reference number 19021, approved the use of the data.

**Patient 1.** The patient data during therapy are depicted in Fig. 1A, which contains the tumor volume and the RECIST and WHO values. The tumor was diagnosed at time \(t=0\), and the therapy started at time \(t_{S}=296\) days. From this time on, the patient received Nivolumab as an immunotherapeutic antibody every two weeks, administered intravenously over 30-60 minutes. At time point (E), the patient developed a temporary lung embolism, significantly influencing the data quality and LCAD accuracy.

Figure 1: **Measured patient data from CT scans.** Top row: tumor volume. Middle row: RECIST value. Bottom row: WHO value. Vertical dashed lines indicate events that might affect the data quality or therapy. For Patient 1, **E** indicates an embolism, **P** a therapy break, and **D** the treatment with Dexamethasone. For Patient 2, **Q3W** is the beginning of a three-weekly drug administration schedule, **Q6W** the change to a six-weekly administration, **PD** the therapy end, and **EL** the Exitus Letalis.
From (P) on, immunotherapeutic drug administration was paused due to the evident presence of brain metastases. At that very time point, the treating medical doctors diagnosed a complete remission of the primary tumor in the lung. All in all, we model the therapy as \[I_{\tau}:=\left\{t:t_{S}+14\cdot k\leq t\leq t_{S}+14\cdot k+\frac{1}{24},\text{ with }t\leq t_{P},k\in\mathbb{N}\right\}. \tag{11}\]

**Patient 2.** For our second patient, the therapy starts before we have our first data point. The patient receives 200 mg of Pembrolizumab every 3 weeks. The therapy plan changes at time \(t_{Q6W}=514\) days, where the application interval is adjusted from three to six weeks. The therapy was terminated at time \(t_{PD}=1063\) days, with the patient's death (exitus letalis) at time 1306 days. The final CT image shows a very large tumor mass that has spread over the entire lung. The corresponding therapy for this patient is modeled as \[\begin{split} I_{\tau}:=&\left\{t:t_{S}+21\cdot k\leq t\leq t_{S}+21\cdot k+\frac{1}{24},\text{ with }t\leq t_{Q6W},k\in\mathbb{N}\right\}\\ &\cup\left\{t:t_{Q6W}+42\cdot k\leq t\leq t_{Q6W}+42\cdot k+\frac{1}{24},\text{ with }t\leq t_{PD},k\in\mathbb{N}\right\}.\end{split} \tag{12}\]

## Results

In this section, we discuss our simulation results. All used parameter values are summarized on the right of Table 1. In CT images, only a small part of the tumor cells is visible. To take this into account, we define the visible tumor volume of our simulation as \(V_{vis}=\int_{\Omega}\mathbf{1}_{\{x\in\Omega:\phi(x)>0.3\}}(x)\,\mathrm{d}x\) (a small quadrature sketch of this quantity follows below). Numerically, we apply a time step size of \(\nicefrac{1}{24}\) for the tumor and 32 substeps for the evolution of the immunotherapeutic agents. For the 1D model, we use 500 spatial elements in a spherical domain with radius \(4\,\mathrm{cm}\).
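As referenced above, here is a minimal sketch of how \(V_{vis}\) can be evaluated on the radial grid of the 1D model; the midpoint quadrature and the toy tumor profile are illustrative assumptions.

```python
import numpy as np

def visible_volume(r, phi, threshold=0.3):
    """Midpoint-rule approximation of V_vis = integral of 1_{phi > 0.3}
    with the spherical measure 4*pi*r^2 dr on a radial grid r."""
    r_mid = 0.5 * (r[1:] + r[:-1])
    phi_mid = 0.5 * (phi[1:] + phi[:-1])
    dr = np.diff(r)
    return np.sum((phi_mid > threshold) * 4.0 * np.pi * r_mid**2 * dr)

# toy example: a tumor of radius 1 cm inside the 4 cm domain
r = np.linspace(0.0, 0.04, 501)
phi = np.where(r < 0.01, 1.0, 0.0)
print(visible_volume(r, phi))   # approx. (4/3)*pi*0.01**3 = 4.19e-6 m^3
```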
**1D results.** We first discuss the results obtained with the simplified 1D model in spherical coordinates. Fig. 2A shows a comparison of the results obtained for the first patient. The manually calibrated parameters are \(\lambda_{P}^{\text{pro}}=0.38\), \(\lambda_{\tau}^{\text{eff}}=4.49\), \(\lambda=2\), and \(\lambda_{PN}=0.1\). We start with an initial tumor volume at sufficient temporal distance from the previous therapy window. At the top, the visible tumor volume of our simulation is shown as a dotted line, which qualitatively follows the segmented data points. The number of immunotherapeutic agents is depicted in the middle; note that after a few months, a quasi-periodic state is reached. The lower plot depicts the entire tumor mass in the whole domain and therefore includes cell concentrations that cannot be detected yet. Its evolution mainly depends on the parameter \(\lambda\), such that larger values of \(\lambda\) impede the tumor cells from spreading over the entire domain. In this case, the tumor mass curve follows the tumor volume curve. All in all, the given immunotherapy manages to efficiently eliminate the proliferative tumor cells. Since only necrotic cells remain at the end of the therapy, the tumor stays under control, even though no immunotherapeutic agents hamper its growth.

Fig. 3 shows the simulation state in spherical coordinates. In Fig. 3A, the initial state of the tumor simulation is shown. The entire tumor consists of proliferative cells, and no necrotic cells have been able to form yet. We see a large drop in the nutrient concentration due to the increased consumption of the tumor cells. In Fig. 3B, we observe the tumor just before the start of the immunotherapeutic treatment. Because of the scarcity of nutrients at the tumor core, some of the inner proliferative cells have necrotized. Since necrotic cells do not deplete nutrients, the nutrient concentration does not decrease even though the entire tumor has grown. Fig. 3C is a snapshot taken during the therapy. Since the immunotherapeutic term acts on the entire volume of proliferative cells, while the growth terms only act on the interfaces, the proliferative cells have increased their interface to the other cell species by becoming active at the tumor boundary, which leads to an unexpected increase of nutrients near the necrotic tumor core. We also observe a large undershoot of the proliferative cells inside the tumor. In Fig. 3D, the proliferative tumor shell contracts due to the therapy until the necrotic and proliferative cells form a small tumor sphere. Due to the abundance of nutrients, no conversion of proliferative to necrotic cells happens at this point. The ongoing immunotherapeutic treatment decreases the proliferative cells until, in the end, only the already present necrotic core remains, which is in agreement with our clinical observations.

Figure 2: **Patient data and simulation results.** Contains the visible tumor volume, the number of immunotherapy agents, and the tumor mass over time. A: Simulation results for Patient 1. B: Simulation results for Patient 2.

Figure 3: **Cell and nutrient fields for Patient 1 plotted over radial distance.** A: Start of the simulation. B: Just before the therapy. C: Just after the therapy starts. D: During the therapy.

Fig. 2B shows a comparison of the results obtained for the second patient. Here, the therapy is already in progress and, different from the first patient, we have no time window to observe the tumor without any therapy. The manually calibrated parameters are \(\lambda_{P}^{\text{pro}}=0.0038\), \(\lambda_{\tau}^{\text{eff}}=0.499\), \(\lambda=1\), and \(\lambda_{PN}=20\). With respect to the tumor volumes in the topmost plot, the first data point is ignored, and a plausible tumor volume more in line with the subsequent values is assumed. The visible tumor diminishes in size until it is able to sustain itself with the given nutrients despite ongoing therapy, and then stays constant. The change in the injection schedule from 3 to 6 weeks shows itself in a decreased amount of immunotherapeutic agents but has no notable impact on the visible tumor. Even though the visible tumor does not grow, its cells start to spread over the entire lung, which leads to a consistent increase of the total tumor mass in the lower plot. After the tumor has spread over the entire domain, the visible tumor starts to grow exponentially, which is in line with the observed outcome for this patient. Fig. 4 gives a more detailed view of what happens spatially. Fig. 4A describes a similar initial setup to the first patient: the entire tumor consists of proliferative cells. No necrotic core has formed yet, but the nutrient concentration is notably lower inside the tumor. After 170 days, Fig. 4B is reached, where a necrotic core has formed, the proliferative cells have developed an outer shell, and the nutrient concentration inside the tumor has stabilized. We also observe that a small amount of proliferative cells has moved away from the primary tumor and spread towards the outer boundary.
In Fig. 4C, after 600 days, the proliferative shell has collapsed, and a tiny amount of proliferative cells remains, which can sustain itself due to the increased nutrient concentration. This configuration is stable over the next year, as we see from Fig. 4D, which shows the state 1,175 days after the start of the simulation, just before the visible tumor starts to grow again. We observe that the tumor has spread over the entire domain, which leads to a visible decrease of the nutrient concentration from 1 to roughly 0.9. At this point, the phase separation of the Cahn-Hilliard equation is initiated, resulting in drastic tumor growth within a short period.

Figure 4: **Cell and nutrient fields for Patient 2 plotted over radial distance.** A: Simulation start. B: After 170 days. C: After 600 days. D: After 1,175 days.

We point out that, especially in the case of the explosive growth of the second patient, the simulation geometry does play a role. This is the main motivation behind moving from the simplified model in spherical coordinates to more elaborate 3D simulations. Under the assumption that we have meaningful parameter values for both patients, an interesting follow-up question is to investigate how changes in the therapy qualitatively affect its outcome. In Fig. 5A, we have hypothetically decreased the dosage given to Patient 1 by 77, 78, and 82 percent relative to its original concentration. At 22 and 23 percent of the original dosage, the proliferative cell mass steadily decreases during the therapy. While 23 percent is enough to defeat the tumor before the therapy stops, at 22 percent a tiny amount of proliferative cells remains. These remaining cells then lead to a relapse of tumor growth, such that the tumor exceeds its largest previous volume even before our simulation time frame is over. At 18 percent of the original dosage, we arrive at an equilibrium of the proliferative cells, in which the immunotherapy manages to control the tumor's growth but fails to eradicate it. Thus, as soon as the therapy is terminated, the proliferative tumor starts to grow again at the original rate. Finally, we note that, even considering safety margins, 50 percent of the dosage would have been more than enough to cure the cancer. For Patient 2, an interesting question is how the changes in the therapy schedule might have affected the tumor growth. In Fig. 5B, in addition to the original therapy (Q3W/Q6W), we have added the scenario where the patient is given the same dosage continuously every three weeks (Q3W) and the case where the switch to the six-weekly cycles was performed but no therapy break occurred (Q3W/Q6W ext.). We note that extending the therapy at the given point had close to no effect on the tumor growth. Even the more aggressive therapy only managed to slow down the growth dynamics by roughly three months before the tumor relapsed. It appears that, according to our model, the given tumor cannot be cured with the used drug, and the therapy only manages to prolong the patient's life by a certain time. Even significant increases in the dosage are not enough to cure the cancer, since the Hill-Langmuir term, which models the impact of the immunotherapy, approaches a stationary value for \(\phi_{\tau}\rightarrow\infty\).

Figure 5: **Patient data and simulation results.** Contains the visible tumor volume, the number of immunotherapy agents, and the tumor mass over time for different therapies. A: Different drug dosages for Patient 1. B: Different therapy schedules for Patient 2.
With a continuous three-weekly drug administration scheme, further simulations show that increasing the dosage by a factor of 2 increases the time until the explosive growth occurs by half a year; a factor of 10 could give the patient an additional year. We close this part with a short discussion of the parameter values. Note that the dosage \(d_{\tau}\), the drug's molar mass \(M_{\tau}\), and the serum half-life time \(t_{\nicefrac{1}{2}}\) are known a priori from the given therapy. The parameters \(\lambda_{P}^{\mathrm{pro}}\), \(\lambda\), \(\lambda_{PN}\), and \(\lambda_{\tau}^{\mathrm{eff}}\) were assumed to be patient-specific and vary over a wide range in our examples. In particular, \(\lambda\) has a strong influence on whether the tumor stays localized (\(\lambda=2\)) or spreads over the entire domain (\(\lambda=1\)), leading to explosive growth. The parameters \(\lambda_{P}^{\mathrm{pro}}\) and \(\lambda_{\tau}^{\mathrm{eff}}\) have to counterbalance each other; especially for Patient 2, where both growth and therapy are always active, they are hard to determine.

**3D results.** Finally, Fig. 6 shows the tumor evolution for the full 3D model applied to Patient 1. At the top, the lung mask is depicted in light gray, together with the vasculature \(\Gamma\) in red, a vertical and a horizontal slice of \(\phi_{\sigma,i}\), and the tumor in the lower right. The proliferative tumor cells are colored blue, while the necrotic cells are depicted in yellow. A threshold of 0.3 is chosen for the contours of both the proliferative and the necrotic cells. For the given snapshot in time, the shell of proliferative cells is seen to be oriented in the direction of the nutrient gradient. On the right, the nutrient concentration at different times is shown. Already at day 20, a nutrient shortage close to the tumor is observed. This area grows with the tumor until it reaches its maximal size after 140 days. After 300 days, most of the proliferative cells consuming nutrients have disappeared, and the effects of the tumor on the nutrient concentration are no longer visible. The tumor evolution at certain points in time is shown at the bottom. After 60 days, a small necrotic core has formed. It is not at the center of the tumor but slightly shifted towards the nutrient-poor domain. The tumor grows until it reaches its maximum size after 140 days, shortly before the immunotherapy starts. Again, the proliferative cells further away from the nutrients decay until only a tiny proliferative shell remains, which points towards the nutrient source. Finally, only a few proliferative cells remain inside the tumor, which mainly consists of necrotic cells. At this point, the visible tumor volume no longer changes, and the proliferative core decays further and further.

Figure 6: **3D tumor simulation.** Top-left: nutrient concentration \(\phi_{\sigma,i}\) after 240 days have elapsed. Top-right: nutrient concentrations at 20, 140, and 300 days. Bottom: tumor shapes consisting of proliferative (blue) and necrotic (orange) cells at different points in time.

## Discussion

Critical microenvironments influencing tumor formation are generally not observable [59]. Deterministic modeling approaches, on the other hand, can aid in understanding the driving dynamics by simulating growth and decline for observable tumor sizes.
It has been demonstrated that our phase-field model can qualitatively describe the tumor volume evolution of NSCLC patients in the observable window over time and can address different outcomes of immunotherapeutic treatment approaches. The simulation results of the two clinical cases were explained in a biologically meaningful manner throughout the observation period, with results that agree with multiple volumetric measurements acquired during clinical routine examinations. This is considered a preliminary proof of concept for the presented model. In future studies, where the model is calibrated on larger datasets, the parameter estimation should be addressed in particular. The parameter estimation is expected to be more stable if prior parameter distributions for general patient cohorts can be identified. We have shown that our model generalizes easily to a full simulation in 3D. Data restricted to RECIST measurements, which are clinically routine characteristics, are easily processable, but the presented model may contain significantly more clinically relevant information by providing the spatial structure of tumors under the influence of therapy. The model simulation strongly depends on patient-specific parameters that are unknown a priori and inferred over the whole simulation time, including the clinical outcomes. However, if these parameters can be estimated quickly during treatment or are based on patient-specific clinical covariates, then this approach has prognostic potential: the long-term therapeutic success and lower but still successful dosages could possibly be determined. A preview of this potential is presented in our results, as the qualitative therapeutic outcome can be determined for alternative drug dosage regimens for the two patients presented. This potential should be addressed in future work and be based on large patient datasets. Successful early estimation of these parameters would allow for an optimal treatment schedule and a minimal drug dosage regimen. In addition, it could also reduce data acquisition efforts by reducing the number of CT scans during patient treatment to the minimal amount needed for model control, re-calibration, and clinical check-ups. This has the potential to increase the patient's quality of life on an individual basis.

## Methods & Materials

We first introduce the numerical discretization of our model in space and time; afterwards, the simulation geometry obtained from the CT data is introduced.

**Numerical discretization.** As a spatial discretization, we use finite elements with piecewise-linear basis functions for \(\phi_{P}\), \(\mu_{P}\), \(\phi_{N}\), \(\phi_{\sigma,v}\), and \(\phi_{\sigma,i}\). We use a semi-implicit scheme in time for Eq. (1); the source and sink terms, as well as the mobilities, are treated explicitly. Since the Cahn-Hilliard equation does not guarantee the boundedness \(0\leq\phi_{P}\leq 1\), we use a cutoff function on all right-hand-side terms to achieve stability. For the potential \(\Psi\), we use a simple convex-concave splitting in time: the double-well potential \(\tilde{\Psi}\) can be decomposed into \(\tilde{\Psi}_{i}=\frac{3}{2}c_{\Psi}\phi^{2}\) and \(\tilde{\Psi}_{e}=c_{\Psi}\left(\phi^{4}-2\phi^{3}-\frac{1}{2}\phi^{2}\right)\). Clearly, \(\tilde{\Psi}_{i}\) is convex, while \(\tilde{\Psi}_{e}\) stays concave for \(\phi\in\left[\frac{1}{2}-\frac{1}{\sqrt{3}},\frac{1}{2}+\frac{1}{\sqrt{3}}\right]\supset[0,1]\) (a symbolic check of this splitting is sketched below).
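A minimal symbolic sanity check of this splitting, assuming the standard double well \(\tilde{\Psi}=c_{\Psi}\phi^{2}(1-\phi)^{2}\) (which is consistent with the stated split); the sympy script is illustrative and not part of the original work.

```python
import sympy as sp

phi, c = sp.symbols("phi c", positive=True)
Psi = c * phi**2 * (1 - phi)**2                             # double-well potential
Psi_i = sp.Rational(3, 2) * c * phi**2                      # convex (implicit) part
Psi_e = c * (phi**4 - 2*phi**3 - sp.Rational(1, 2)*phi**2)  # concave (explicit) part

assert sp.simplify(Psi_i + Psi_e - Psi) == 0    # the split is exact

# Psi_e'' = c*(12*phi**2 - 12*phi - 1) changes sign exactly at the interval ends
roots = sp.solve(sp.diff(Psi_e, phi, 2), phi)
print(roots)   # [1/2 - sqrt(3)/3, 1/2 + sqrt(3)/3], an interval containing [0, 1]
```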
In our case, this motivates the decomposition \(\Psi_{i}(\phi_{P},\phi_{N})=\tilde{\Psi}_{i}(\phi_{P})+\tilde{\Psi}_{i}(\phi_{P}+\phi_{N})\) and \(\Psi_{e}(\phi_{P},\phi_{N})=\tilde{\Psi}_{e}(\phi_{P})+\tilde{\Psi}_{e}(\phi_{P}+\phi_{N})\). For a fixed \(\phi_{N}\), the latter is concave if \(\phi_{P}\in\left[\frac{1}{2}-\frac{1}{\sqrt{3}},\frac{1}{2}+\frac{1}{\sqrt{3}}-\phi_{N}\right]\supset[0,1-\phi_{N}]\). In the case of no source terms, we achieve unconditional gradient stability [60] if \(\Psi_{i}\) is treated implicitly and \(\Psi_{e}\) explicitly and the bounds of \(\phi_{P}\) are satisfied. As a linear solver, we use MINRES with the block-diagonal preconditioner \[\mathcal{P}:=\begin{pmatrix}(c_{m}\tau)K_{m}+\sqrt{c_{m}\tau}\,M&\\ &\varepsilon^{2}K_{1}+\frac{6c_{\Psi}}{\sqrt{c_{m}\tau}}\,M\end{pmatrix}, \tag{13}\] which is a generalization of the preconditioner in [61]. For the inversion of the diagonal blocks, we resort to algebraic multigrid [62]. For Eq. (3), we use an explicit Euler scheme, while in Eqs. (4) and (5), \(\phi_{P}\) is kept explicit, and we solve for \(\phi_{\sigma,v}\) and \(\phi_{\sigma,i}\). Since we set \(\eta_{iv}=0\) in our studies, both equations are decoupled and can be solved separately with a Conjugate Gradient method preconditioned by algebraic multigrid [62]. Finally, we apply a simple explicit Euler scheme to the ODE model of Eq. (6). Since Eq. (6) physically runs on a much smaller time scale, we integrate it with a much smaller time step size. The coupling does not pose any difficulties, since Eq. (6) is completely decoupled and computationally cheap to solve. For our implementation, we use the FEniCS framework [63].

**Simulation geometry.** The lung geometry and vasculature structure were extracted from the patient's CT data (see Data) at the onset of the therapy. The lung geometry was extracted with _3D Slicer_ [64], simplified with _3D Builder_ by Microsoft Corporation and _blender_ [65], and then remeshed coarsely with _gmsh_ [66]. The vasculature was extracted using Slicer's _vmtk_ extension [67, 68]. The vasculature and the lung are depicted in Fig. 7A. To keep the computational costs tractable, we apply a local refinement close to the initial tumor position by subdividing the mesh several times; see Fig. 7B. This approach is motivated by the potential energy of the Cahn-Hilliard equation, which keeps the central part of the tumor localized at its initial location. Similarly, the nutrient model, which couples with the Cahn-Hilliard equation, only requires an accurate solution close to the tumor.

## Acknowledgements

The work of A. Wagner and B. Wohlmuth was partially funded by the Deutsche Forschungsgemeinschaft (WO 671/11-1). P. Schlicke was partially funded by the IGSSE/TUM Graduate School. Marvin Fritz is partially supported by the State of Upper Austria. The support of J. Tinsley Oden by the U.S. Dept. of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics Program, under Award DE-960009286 is gratefully acknowledged.
2310.16360
A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges
In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also takes a look at ethical considerations, safety concerns, regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.
Osim Kumar Pal, Md Sakib Hossain Shovon, M. F. Mridha, Jungpil Shin
2023-10-25T04:52:16Z
http://arxiv.org/abs/2310.16360v1
# A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

###### Abstract

In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also takes a look at ethical considerations, safety concerns, regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.
## 1 Introduction

Traditional methods require extensive human intervention [8]. A big difference between AI and standard cognitive algorithms is that AI can automatically extract features, replacing expensive hand-crafted feature engineering [9]. In general, an AI system may identify anomalies, anticipate future outcomes, adapt to changing circumstances, develop an understanding of complex problems requiring enormous amounts of data, and discover patterns that a person might overlook [10]. It may use and learn from the surrounding data to improve UAV maneuvering, and compared to conventional optimization techniques, AI can also manage on-board resources wisely [11]. Drones using AI typically have their functions fully or partially automated: with the help of AI, drone makers can use data from devices connected to the drone to collect and exploit information about the surroundings [12]. AI can be used to control drones automatically, including their movement and navigation; several methods, including GPS monitoring, computer vision, and machine learning algorithms, can be used to accomplish this. Voice recognition, scene identification, object detection, and picture categorization are just some of the many fields where AI is making inroads, particularly through deep-learning approaches [13]. Deep learning attempts to unearth deep characteristics in unprocessed data at various levels [14]; these characteristics are then employed to represent the real world. According to projections, the UAV market will increase from USD 26.2 billion in 2022 to USD 38.3 billion by 2027, at a 7.9% CAGR; the rising acquisition of small drones for military applications such as ISR will boost the small-drone market throughout the projected period [15]. Despite this growing interest in UAVs, many restrictions remain. Several of the weaknesses of UAVs, such as high power/energy consumption and real-time requirements, are matched by the strengths of edge computing and edge AI, such as low energy consumption and low delay [16]. Together, deep learning and UAVs have the potential to revolutionize traffic monitoring in the transportation sector [17]. This in-depth investigation explores the optimal usage of AI and deep machine learning in UAVs, both in current use and potentially in the future, as well as the large area of AI-embedded drone applications.
AI-enabled UAVs need powerful computers and sensors that consume a lot of energy, so energy-efficient components and algorithms are essential to green computing. Researchers can refine AI algorithms to lower processing demands and UAV power needs [18], and renewable energy sources like solar panels or wind turbines may charge UAV batteries more sustainably. AI is essential for UAV autonomy and navigation: AI algorithms may improve UAV flight patterns, reducing fuel consumption and emissions. Researchers, industry professionals, and government organizations may collaborate to develop green computing solutions for AI-enabled UAVs and environmentally friendly technologies for better green UAVs [19][20].

A study of deep learning methods for UAV applications has shown the limits of the present state of UAV development, but insufficient attention was devoted to discussing algorithms and the potential uses of UAVs in specialized industries [14]. An assessment of ML-based UAV communication systems was conducted using various algorithms; in that particular study, only a few of the many potential applications of UAVs were investigated [21]. Vision-based UAV navigation systems using a variety of AI technologies were also examined, and several applications of computational intelligence, including search and rescue as well as surveillance, were addressed [22]. Resource sharing, distribution, and trajectory design for smart UAV base stations were surveyed, and researchers analyzed potential AI-based rescue strategies [23]. Computer vision methods based on machine learning algorithms and UAV platforms are being studied for their potential to detect and treat agricultural diseases at an early stage; multiple elements, including weeds, pests, and plant conditions, were taken into account [24]. Researchers have also investigated the potential of AI-based UAV systems for traffic management [25], monitoring [26], control [27], and traffic detection [28].

Most recent survey studies concentrated on a single application of AI-based UAVs, provided only limited examples, and did not cover a significant portion of the AI and ML algorithmic landscape. This review evaluates the various uses of UAVs where AI is appropriate and examines the different learning algorithms that researchers are presently using. The investigation of AI-enabled UAVs is a captivating exploration of state-of-the-art technology: this overview examines current trends, predicts a future in which UAVs will reshape several sectors, and discusses the issues to be addressed, exploring the transformative effects of AI, green computing, and UAVs on our society and their potential for fostering innovation and generating beneficial outcomes. This systematic review also includes a discussion of the possible datasets for each eligible area of UAV research. The schematic representation of the review process is depicted in Figure 1. The systematic literature review is organized as follows: the research methodology utilised for this investigation is discussed in Section 2; types of UAV are described in Section 3; Section 4 offers a thorough analysis of applications, while Section 5 lists constraints and potential future research fields.

Figure 1: Outline of the structured review.

Figure 2: **Review Methodology**
Finally, Section 6 discusses the challenges and future scope of this study.

## 2 Review Methodology

This section illustrates the organized and methodical approach taken in this research. It covers the methods, processes, and tactics used to collect, process, and evaluate data in order to answer specific research questions. Figure 2 depicts the approach employed in this study; the validity, trustworthiness, and credibility of the study's conclusions depend on a well-established research process.

### Paper Selection Criteria

As with any review, we selected and included research works based on a handful of criteria. These were the standards we adhered to:

* The article should be directly related to UAV research or surveys.
* Papers on artificial intelligence, deep learning, and UAVs are also included in our selection.
* UAV-related research papers from several sub-fields are also included in our evaluation for comprehensive knowledge collection.
* We used relatively little website data, and only when statistical or authentic data was exclusively accessible on that site.

### Source of Information

A successful and instructive scientific review comprises material gathered from reliable, well-respected, and well-organized publications. Consequently, the articles and information for this review are collected from well-regarded Scopus-indexed journals, such as those of Springer Nature, IEEE, Elsevier, MDPI, Wiley, and many more. We have only acknowledged a few articles from conferences with rigorous structural criteria. A few reliable online sources provide a minimal amount of statistical data and up-to-date information.

### Area of Coverage

This study covers the time period from 2000 to the present. At the very beginning of the 2000s, several industries began implementing new applications for advanced UAVs. With the growth of technology, AI has made UAVs a necessary tool for practically every industry, improving surveillance, safety, security, and decision making.

## 3 UAV Platform Type

Unmanned aerial vehicles come in a wide variety, so the name "drone" is all-encompassing: it may refer to either an intelligent or an autonomous aircraft. Hexacopters, quadcopters, multi-copters, and winged aircraft all fall within this category [29]. The primary types of flying drones are described below.

### Fixed-Wing UAV

A fixed-wing drone features one rigid wing that is meant to appear and function like an airplane wing, providing lift without vertical-lift rotors. Therefore, this form of drone only requires energy to move forward and not to maintain its airborne position [30], making it energy-efficient. Fixed-wing drones can travel farther, map much bigger areas, and loiter for a long time while keeping an eye on their target [31]. These drones have a higher ceiling and a greater payload capacity. However, fixed-wing drones can be expensive, and flying them typically requires training. They can be used for aerial mapping, agriculture inspection, construction monitoring, and numerous other applications [32].

### Multi-rotor UAV

The simplest and least expensive method for keeping an "eye in the sky" is to use a multi-rotor drone. Multi-rotor drones also allow for more precise positioning and framing, making them ideal for aerial photography and surveillance [33].
Common types of multi-rotor aircraft include tri-copters (with three rotors), quadcopters (with four), hexacopters (with six), and octocopters (with eight). Quadrotors are the most prevalent multi-rotor drones; they offer superior aircraft control while in flight and, thanks to their improved maneuverability, can move backward, forwards, sideways, and rotate on their axis [34]. Because of their low endurance and speed, multi-rotor drones are not suited for extensive aerial mapping, long-term monitoring, or long-distance inspection of infrastructure like highways, pipelines, and electricity lines. They are inherently inefficient and need a lot of energy to defy gravity and maintain their airborne position [1].

Figure 3: Fixed-wing UAV model.

Figure 4: Multi-rotor UAV model for modern use.

### Single-rotor UAV

UAVs with a single rotor are robust and long-lasting. They resemble helicopters in construction and design: a single-rotor helicopter consists of a single rotor, similar to a large rotating wing, and a tail rotor for directional and stability control. Single-rotor helicopters are more efficient than multi-rotors, especially if they are gas-powered; their long blades spin like wings rather than propellers, making them efficient [35]. However, single-rotor drones are costly and complicated. They vibrate and are less stable or tolerant of a poor landing, and because of their technical intricacy, they need frequent maintenance [36].

## 4 Artificial Intelligence Embedded UAV

UAVs and artificial intelligence (AI) are two topics that have recently attracted the interest of researchers in academia and industry [37]. Aerial drones have increased the flexibility with which operations may be carried out and activities monitored in distant areas [38]. In addition to expanding UAV capabilities and opening up the market to a broader variety of businesses, deploying AI and machine learning has also helped lessen the number of obstacles that must be overcome [39]. The combination of UAVs with machine learning has led to both speedy and dependable outputs [40]. Figure 5 demonstrates the present use of AI across a variety of industries, including UAV applications. The use of UAVs in conjunction with artificial intelligence has been shown to be advantageous for real-time monitoring, the collection and processing of data, and prediction in a variety of contexts, including smart cities, defence, farming, and mining [41].

### Applications of AI in UAV for Traffic Monitoring

The location, speed, and direction of vehicles, as well as the number of times they traverse a particular point (such as a gate, a junction, or a crossing), are just a few of the features that UAVs can observe and determine [42][43]. These parameters are usually determined by a UAV placed over the coverage area, and changes in their values may be used to identify certain occurrences [44][45]. For instance, speeding may be detected when a vehicle's measured speed exceeds a specified limit; on the other hand, traffic bottlenecks may be identified when the average speed of many cars drops below a specific limit [46]. In a UAV-based surveillance system, the camera mounted on the UAV gathers photos of the current traffic conditions using technology associated with route planning [26][47]. The identification system carried by the UAV is then automatically fed these photographs.
This identification system's primary capability is assessing traffic congestion [48]. The recognition results are sent to a traffic-management facility, where traffic managers may readily perform further analysis on the data [49][50]. Figure 6 depicts current AI applications for traffic monitoring and how UAVs function within them as an embedded system. The classification system carried by the UAV can be broken into two parts: one for extracting features and the other for recognition [51][52]. The recorded pictures are fed into the part of the system that extracts high-level features; the recognition component then determines the final recognition results based on the extracted characteristics [53]. A residual network with convolutional layers is used as the fundamental architecture for the feature-extraction component [54]. In most cases, the residual block cuts down on the training time and the complexity involved [55]. Currently, several ResNet networks (such as ResNet-50, ResNet-101, and ResNet-110) serve as learned layers for the neural network [56]. Recently, training a 110-layer ResNet with stochastic depth has given better results than training a 110-layer ResNet with a fixed depth, and it takes much less time to train [57]. Figure 7 shows how UAVs are used in practice to track and monitor vehicles using AI methods.

Figure 5: Traffic monitoring UAV model for modern use.

Figure 6: Traffic monitoring using UAV.

YOLO is a technique that offers real-time object identification via neural networks [63]. This method is widely used because it works quickly and accurately; it has been used to monitor traffic, people, and parking meters, among other things [64][65]. YOLOv3 switched from Darknet-19 to Darknet-53 as the backbone network [66] and additionally used multi-scale predictions. A few academics now employ Spatial Pyramid Pooling (SPP) with YOLOv3 to identify traffic signs [67][68]. In traffic-monitoring systems, the YOLO algorithm is used for vehicle counting, detection, and classification, as well as for monitoring traffic signs; to determine the location of a vehicle, a YOLOv4-tiny model is used [69]. Recently, an enhanced learning algorithm called TSR-YOLO has been designed to recognize traffic signs for traffic monitoring [70]. A system capable of performing global feature extraction with a multi-branch lightweight detection head has been developed to improve the accuracy of identifying smaller traffic signs; this method is well suited to challenging weather and environments [71]. STC-YOLO, an upgraded version of YOLOv5, performs better in environments with fog, snow, noise, occlusion, and blur, and is designed to monitor tiny traffic signs and vehicles [72].
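To make the one-stage pipeline concrete, below is a minimal sketch of YOLO-based vehicle counting on a single aerial frame. It assumes the open-source `ultralytics` package and a generic pretrained checkpoint; the file names and confidence threshold are illustrative, and the cited works train custom YOLOv3/YOLOv4-tiny/YOLOv5 variants rather than using this exact API.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # assumed small pretrained model
results = model("uav_frame.jpg")           # hypothetical aerial image

vehicle_classes = {"car", "bus", "truck", "motorcycle"}
count = 0
for box in results[0].boxes:               # one box per detected object
    name = model.names[int(box.cls)]
    if name in vehicle_classes and float(box.conf) > 0.5:
        count += 1
print(f"vehicles detected: {count}")
```

In a deployed system, the same loop would run per video frame, and counts or congestion estimates would be forwarded to the traffic-management facility described above.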
The field of AI-based UAV traffic monitoring is actively adopting green computing approaches [73]. Strategies that help reach this goal include the use of energy-efficient hardware, the integration of renewable energy sources into ground stations, the optimization of AI algorithms to reduce the amount of computation needed, on-device data processing, and dynamic resource allocation. These methods aim to reduce energy consumption, decrease carbon emissions, and support environmentally sustainable aerial traffic monitoring, connecting technological advancement with environmental preservation [74]. Researchers currently employ many datasets for traffic-monitoring models; Table 1 lists the most popular datasets with brief descriptions.

Table 1: UAV traffic monitoring datasets.

| Dataset | Data Type | Short Description |
| --- | --- | --- |
| AU-AIR [58] | Raw video | Two hours of raw traffic-monitoring video with 8 object categories; GPS, time, and length metadata are also included with the video data. |
| Custom & Massachusetts roads [59] | Images | High-quality images of roads, traffic signs, road conditions, and vehicles. |
| Traffic Drone Data (BD) [60] | Images | Annotated data of various road conditions, covering buses, cars, and other vehicles. |
| Aerial Detection Dataset [61] | High-quality images | Very high-quality images of buses, trucks, cars, and more, split into training, test, and validation sets. |
| HIT-UAV [62] | High-quality images | A thermal dataset containing over forty thousand frames of vehicle images. |

Figure 7: AI-embedded UAV model.

All systems have certain flaws and scope for development. Despite its potential benefits, traffic monitoring using machine learning has limitations that are presently being studied [75]. These algorithms may produce some wrong results when counting vehicles, since they are only partially accurate [76], and the intricacy of the environment increases the likelihood of faulty monitoring results under extreme weather conditions [77]. To improve the precision and accuracy of traffic monitoring, researchers are focusing on addressing these issues [78].

### AI in UAV for Object Detection

The UAV captures imagery with its camera, and machine learning and computer vision then extract the features [79]. These algorithms are capable of detecting an item's size, form, and color, and can recognize patterns that locate the object specifically [80]. Researchers employ sensors such as synthetic aperture radar (SAR) and light detection and ranging (LIDAR) to collect visuals, and artificial intelligence is then used to extract information from those visuals to locate the item of interest. The SAR method allows researchers to improve the visual capabilities of UAVs [81]. Figure 8 illustrates existing AI applications in the domain of object detection, specifically highlighting the integration of UAVs as an intelligent system. Since these techniques are simple to employ in cloudy, dark, and wet settings, they are not reliant on the climate [82]. Among the portable tools for object identification is the RGB-D camera mounted on a UAV. The Parallel Tracking and Mapping (PTAM) method is a recent development used with UAVs for localization and navigation [83]; this strategy is appropriate for unknown environments and locations [84].
One of the precise vision techniques used by UAVs to aid object recognition is optical flow technology, which can capture images over a wide region using its long-range shooting capabilities [85]. Researchers use numerous datasets for object-detection models; Table 2 lists the most popular ones with brief descriptions. In the early days of UAV object identification, researchers relied on template-matching techniques: the system can identify recorded objects by comparing them to a template collection containing several thousand examples [91] (a minimal sketch of this classic approach is given at the end of this subsection). The template method works very well for any saved or fixed view; however, this strategy falls short for large amounts of data with thousands of categories [91][92]. Since the beginning of the 2010s [93], convolutional neural networks (CNNs) have taken over as the go-to technique for extracting image characteristics in computer vision applications [94], including image categorization, object recognition, and semantic segmentation [95][96]. Green computing is used in AI-based UAV object identification by selecting energy-efficient hardware, allocating resources intelligently, and processing data locally on the device to reduce overall energy consumption. This strategy lessens the environmental impact of object-detection missions while preserving effective surveillance, promoting sustainability in aerial monitoring activities [97]. The YOLO technique was later added to UAVs for object identification and visualisation: YOLO is a one-stage technique that can quickly process a UAV-captured picture, in line with green computing concepts. Researchers have built a YOLOv2-based ROS system that can communicate with UAVs for tasks including object identification and navigation; the method uses a simple color picture for detection [98]. To advance object detection to the next level, researchers have identified a few areas for future development. An item may be imaged from various vantage points and angles by a UAV, which may affect identification [99]. Another significant obstacle in the detection process is the deformation of moving items. Some items exhibit intra-class variance, requiring additional training in ML [100]; the detection process could be markedly more effective if this variation were considered during training [101].

Table 2: UAV object detection datasets.

| Dataset | Data Type | Short Description |
| --- | --- | --- |
| UAVDT [86] | Raw video | Ten hours of raw annotated video for detection, multiple-object tracking, and single-object tracking. |
| UAVOD [87] | Images | Ten object classes, such as building, ship, tower, pond, river, and more. |
| Manipal [88] | Images | Various small-object and person data for detection. |
| Urban Zone [89] | High-quality images | A combination of three different datasets; all images are well annotated. |
| Aerial Object [90] | High-quality images | High-quality images containing boats, ships, and many more classes. |

Figure 8: Object detection UAV model for modern use.

Scientists are striving to improve the UAV identification process using AI by addressing the problems mentioned earlier.
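As referenced above, here is a minimal sketch of the classic template-matching approach using OpenCV; the image files and similarity threshold are illustrative assumptions. It also exposes the method's main weakness discussed earlier: each template covers only one stored view of the object.

```python
import cv2
import numpy as np

frame = cv2.imread("uav_frame.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical
template = cv2.imread("vehicle_template.jpg", cv2.IMREAD_GRAYSCALE)

# normalized cross-correlation of the template against the whole frame
res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)            # keep matches above a similarity threshold

h, w = template.shape
for x, y in zip(xs, ys):                 # draw one box per match location
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
```

Scaling this to thousands of object categories requires one correlation pass per stored template and view, which is exactly why CNN-based detectors displaced it.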
\\ \hline **Object [90]** & & \\ \hline \end{tabular} \end{table} TABLE II: UAV Object Detection Dataset Figure 8: Object Detection UAV model for Modern Use identification process using AI by addressing the problems mentioned earlier. ### AI-enabled Agriculture Surveillance Surveillance UAV System Agriculture has far-reaching consequences for society since it is essential in maintaining human life by providing food, shelter, and employment opportunities and supplying crucial raw materials for producing a wide variety of goods [102].UAVs have a wide variety of uses in the field of intelligent agriculture. The use of UAVs in smart agriculture allows for the evaluation of agricultural field spots, the monitoring of sunlight following the growth of crops, the diagnosis of diseases, and the administration of preventive medication for plants [103][104].Farmers presently rely on AI-based items and applications already available for improved decision-making and earlier detection of crop conditions [105][106]. AI enables farmers to make better decisions and more accurately assess the state of their crops [106]. The first step in intelligent farming using AI-based UAVs is harvest management, also known as yield management [107]. The authorities may also benefit from an accurate estimate of the yield since it can be usedto create various strategies, including transportation needs, procurement techniques, storage facilities, and more [107][108].The picture data is collected from the field by UAV, then artificial intelligence technology is used to analyze it [109]. This approach helps with more accurately anticipating the yield, as well as with more intelligent watering. These AI systems provide essential data on the predicted output at an early stage [109][110].In the process of field analysis, ANN [111], CNN [112], and RNN [113] are often used to research the field picture to arrive at decisions and achieve precise positioning. Figure 9 shows how AI is already being used in farming tracking and monitoring, with a focus on how UAVs are being used as part of an intelligent system.For crop surveillance models, researchers use a lot of different datasets. Table 3 shows the most common datasets and gives reliable explanations for each.One use of UAV technology in modern farming is the diagnosis of crop diseases. Deep learning algorithms, such as CNN, Deep CNN [114], GoogleNet [94], VGG [115], DenseNet [116], and many others, are currently utilized in AI-based UAV intelligent agriculture systems for detecting disease and applying organic pesticides on the spot [117]. When mapping land, UAVs are often utilized in place of survey drones. Survey drones powered by AI can produce high-resolution orthomosaics and comprehensive 3D models of regions that only have access to data of poor quality, that are outdated, or that do not have any data. They make it possible to construct high-accuracy cadastral maps rapidly and straightforwardly, even in locations that are difficult to access due to their complexity [123].Combining GIS with AI-enhanced drone mapping opens up whole new avenues for robots to observe and comprehend the environment [124]. Superior capacities in geographical data collection, processing, and forecasting. In the process of GIS mapping, CNN, ANN, LSTM, and Naive Bias are used extensively for image segmentation [125]. This transforms 2D photos into 3D models with high resolution for usage in GIS applications [126]. 
Green computing is establishing a name for itself in AI-based UAV agricultural surveillance by reducing environmental effects and maximizing efficiency in resource use. This is made possible by more energy-efficient UAV hardware, onboard data processing, and renewable energy sources for ground stations. The machine learning algorithms involved are designed to reduce the amount of computing required, which saves energy during image analysis and data transfer [127]. Optimal use of resources can be ensured by dynamic resource allocation and intelligent scheduling, which also reduce idle periods and overall power usage. Agricultural surveillance UAVs can improve crop management while saving energy and supporting sustainable farming if these green computing principles are integrated into the system [128].

\begin{table} \begin{tabular}{c|c|c} \hline \hline **Dataset** & **Data Type** & **Short Description** \\ \hline **Avo-DB [118]** & Images & This dataset has RGB images, as well as annotated images, of an avocado field. \\ \hline **CoFly [119]** & Images & This dataset consists of high quality weed-field images with three different classes. \\ \hline **Paddy Field** & Images & This dataset contains different types of paddy condition data for low-height UAV image analysis. \\ \hline **PlantDet [121]** & Images & This dataset holds two different types of crop images with several leaf conditions for close UAV image inspection. \\ \hline **UAV Crop** & High Quality Images & Three types of crop images from Kazakhstan with high resolution. \\ \hline \hline \end{tabular} \end{table} TABLE III: UAV Agriculture Surveillance Dataset

Figure 9: Agriculture Surveillance UAV model for Modern Use

### _Wildlife Monitoring with UAV_

Surveying vulnerable and invasive species to get reliable population estimates is a demanding undertaking, yet it is needed to establish the ecological balance and sustainable growth of wildlife species [129]. Intelligent UAV systems are now used to surveil forests and keep track of the animals that live there [130]. UAVs that collect geo-referenced sensor data have seen rapid adoption in the last several years, particularly for ecological surveillance and animal surveying [131][132]. Figure 10 represents how AI is currently used in wildlife monitoring and tracking, with a focus on how UAVs are integrated into automated systems. Integrating green computing algorithms is pivotal in developing and implementing AI-based UAV wildlife monitoring systems. These algorithms are specifically designed to optimize energy efficiency and mitigate environmental consequences. Onboard data processing and analysis capabilities are used, reducing the need for resource-intensive data transfer to ground stations [133]. By optimizing calculations and using low-power hardware components, these algorithms contribute to conserving the energy resources of unmanned aerial vehicles (UAVs); this, in turn, extends flight duration and reduces the carbon footprint associated with monitoring activities. Successful research on and protection of wildlife habitats, and the promotion of sustainability in aerial surveillance activities, may be achieved by integrating green computing concepts into AI-based UAV wildlife monitoring [1].
Monitoring sea turtles [134], black bears [135], big land mammals (such as elephants) [136], marine mammals (such as dugongs [137]), and birds (such as flocks of snow geese [138]), as well as providing assistance for anti-poaching activities for rhinos, are all examples of ways in which unmanned aerial vehicles (UAVs) may be used for governing wildlife [139][140]. Researchers employ several datasets for wildlife monitoring, counting, and surveillance models; Table IV provides a comprehensive overview of the most often utilized ones, accompanied by reliable and accurate descriptions. UAVs are presently used extensively for wildlife surveillance, using machine learning methods. First, UAV cameras gather photo and video data from the forest. The pictures may be in black and white, color, or RGB style for better detection and identification [146]; video footage may be collected raw or in night-vision mode [147]. SPOT is a program that can instantly identify poachers in long-wave infrared (thermal) UAV footage [148]. SPOT searches for poachers in the simulated infrared heat picture, and identified poachers are marked with blue rectangles [149]. The system learns from examples to pinpoint the most important elements of photos taken in natural settings, such as forests, and of animals [150].

\begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline **Dataset** & **Data Type** & **Short Description** \\ \hline **Wildlife [141]** & Images & This dataset has wild images of various angles, animals, and conditions. \\ \hline **Fast Animal Detection** & Images & This dataset consists of different wildlife stock and animal tracking data from a wildlife reserve park. \\ \hline **WildData [143]** & Images & This dataset has high quality, wide, annotated UAV images for detection. \\ \hline **Drone Count Data [144]** & Images & This dataset holds different types of animal images for counting and analysis. \\ \hline **UAV Aided Data [145]** & High Quality Images & Various kinds of animals, trees, and other species in wildlife surveillance. \\ \hline \end{tabular} \end{table} TABLE IV: UAV Wildlife Monitoring Dataset

Figure 10: Wildlife Monitoring

### Rescue Operation Surveillance with UAV Embedded AI

Embedded systems with machine learning architectures are now being trained to search for and detect persons, threats, or dangers within a designated broad region utilizing UAVs [151]. CNNs of several types, including the T(Temporal)-CNN strategy [152][153][154] and 3D-CNN [155][156][157], were used to identify and segment the rescue zone. Figure 11 depicts how AI is already being employed in human and animal search and rescue, with an emphasis on the employment of UAVs as part of a smart system. This technique is applicable for locating individuals in large rivers, ponds, or other perilous environments. Scientists are now using a GPU-accelerated system [158], high-altitude UAV photos [159], and feature extraction based on super-pixels [160] to locate the rescue spot. The challenge of recognizing target objects of various sizes is addressed by a model suggested by researchers, which uses a single-stage feature pyramid network with a densely linked set of features [161][162][163]. Researchers employ several datasets in the development of models for human, small animal, and item search and rescue.
Table V provides an overview of the most frequently utilized datasets in this domain, accompanied by reliable descriptions. First, the UAV captures still images or video over a broad region, which are then analyzed using machine learning and deep learning algorithms [169][170]. After the picture has been segmented, the algorithm detects prospective locations where a rescue may be needed. This might occur in various environments, including a forest [171], a building, a flood [172][173], a fire [174], and many more. Green computing methods are crucial for optimizing energy consumption and reducing environmental effects in AI-based UAV rescue missions. These algorithms emphasize economy by processing data locally on the UAV, negating the need for transmission, which can be taxing on system resources [175]. In this way, the UAV may save power for crucial tasks like search and rescue, even when running on low-power hardware. UAVs with artificial intelligence become more trustworthy and environmentally friendly tools for rescue missions when they adhere to green computing principles that increase their endurance. This strategy is in keeping with the larger objective of ensuring the appropriate and sustainable use of technology during times of crisis [176]. A lightweight version of the YOLO method may be found on UAVs, in line with the green computing concept, where it can handle various data classes at a high accuracy rate [177][178]. YOLO is cutting-edge real-time software for analyzing images and videos [179], making it useful for SAR operations. YOLO uses a strategy analogous to the F(Fully)-CNN algorithm, which the SAR system utilizes. Researchers use YOLO-S [180], YOLOv4 [181], and YOLOv5 [182] on UAVs to carry out rescue operations; these techniques are readily available.

## V Challenges and Future Aspects of AI-Enabled UAVs

Utilizing AI for UAV systems has resulted in the introduction of a multitude of innovative and resourceful solutions to an interminable array of issues [183]. Drones are utilized to gather sensitive information from hazardous environments, including high winds, terrible weather, heavy rain, and multi-shaded objects [184]. Vision-based unmanned aerial vehicle (UAV) navigation systems using a variety of AI technologies were examined, and several applications of computational intelligence, including search and rescue as well as surveillance, were addressed. In addition, as these systems become increasingly autonomous and interconnected, they become potential targets for criminals looking to exploit vulnerabilities. To avoid interruptions and illegal access, it is crucial to develop solid security measures and protections. Future research in AI-based UAVs will need multidisciplinary cooperation between specialists in AI, aeronautics, ethics, law, and other fields. This cooperation is required to develop thorough regulatory frameworks that direct the use of AI-enabled UAVs while protecting against possible hazards. Research should continue to concentrate on improving AI algorithms for better navigation, judgment, and adaptability, opening the door for more advanced and trustworthy autonomous systems. Several AI algorithms are now being implemented in UAVs in order to facilitate a variety of applications: Table VI provides an overview of the various AI algorithms currently being used in UAVs, and Table VII lists the potential difficulties and tasks that might accelerate the development of AI-based UAVs.
Also, finding new ways to use UAVs with AI, such as in urban planning, environmental tracking, and crisis response, has the potential to change businesses and improve society as a whole. AI and UAVs working together is a growing area with much potential; ongoing study and development will help unlock this potential and solve problems that come up along the way. As AI technology changes and new models develop, cooperation between science and technology will be critical in determining where AI-based UAVs go and how they influence our world.

## VI Review Summary

"A Comprehensive Review of AI-enabled Unmanned Aerial Vehicles: Trends, Vision, and Challenges" delves into the developing scenario of AI-based UAV systems. Table VI presents an overview of the applications of AI in several areas, with a specific focus on the current applications of UAVs. This analysis takes a look at recent developments, potential future outlooks, and the current difficulties associated with this dynamic partnership. It illustrates how AI is vital in allowing autonomous UAV capabilities, from navigation to object identification, and addresses applications such as wildlife monitoring, precision agriculture, rescue operations, and more. In addition, the study discusses energy-efficient computing strategies, legal issues, ethical problems, and safety precautions. This in-depth assessment may be a helpful resource for grasping the complicated nature and transformational possibilities of AI-enabled UAV technology.

\begin{table} \begin{tabular}{p{42.7pt}|p{113.8pt}|p{113.8pt}} \hline \hline **Dataset** & **Data Type** & **Short Description** \\ \hline **SARD [164]** & Images & This dataset has annotated images of persons and animals for rescue. \\ \hline **Rescue Dataset** & Images & This dataset has over 200 real images of search and rescue areas and locations. \\ \hline **Dataset [166]** & Images & This dataset has high quality wide UAV images for rescue. \\ \hline **SAR Human Data [167]** & Images & This dataset holds 2000 images of human actions for rescue operations. \\ \hline **SAR Data [168]** & & \\ \hline \hline \end{tabular} \end{table} TABLE V: UAV Search and Rescue Dataset

[Table VI, an overview of AI algorithms used in UAV applications from 2000-2010 and from 2011 to the present, is only partially recoverable from the source; the legible fragments list CNN and RNN [54], together with ResNet-101 [56] and ResNet-110 [57], for AI-based UAV traffic monitoring, and CNN [94] and Fast R-CNN [95] for UAV object detection.]

## VII Conclusion

The exhaustive analysis of AI-enabled UAVs reveals the dynamic nature of this technology's landscape. The study examines emerging trends that highlight the incorporation of AI, propelling unmanned aerial vehicles (UAVs) to new heights of autonomy, efficiency, and applicability across multiple industries. The review paints a picture of a future in which AI-powered UAVs change businesses such as tracking, detection, espionage, transportation, and emergency management. However, the path towards this vision is fraught with obstacles, such as regulatory hurdles, ethical considerations, and technical complexities.
As AI and UAV technologies continue to advance in tandem, resolving these obstacles will be essential to realizing the full potential of AI-enabled UAVs for the benefit of society.
2304.12099
Investigating the Perceived Impact of Maternity on Software Engineering: a Women's Perspective
Background: Several researchers report the impact of gender on software development teams, especially in relation to women. In general, women are under-represented on these teams and face challenges and difficulties in their workplaces. When it comes to women who are mothers, these challenges can be amplified and directly impact these women's professional lives, both in industry and academia. However, little is known about women ICT practitioners' perceptions of the challenges of maternity in their professional careers. Objective: This paper investigates mothers' challenges and difficulties in global software development teams. Method: We conducted a survey with women ICT practitioners who work in academia and global technology companies. We surveyed 141 mothers from different countries and employed mixed methods to analyze the data. Results: Our findings reveal that women face sociocultural challenges, including work-life balance issues, bad jokes, and moral harassment. Furthermore, few women occupy leadership positions in software teams, and most reported that they did not have a support network during and after maternity leave, feeling overloaded. The surveyed women suggested a set of actions to reduce the challenges they face in their workplaces, such as: i) changing culture; ii) creating a code of conduct for men; iii) more empathy; iv) creating childcare within companies; and v) creating opportunities/programs for women in the software industry and academia. Conclusion: Adding to the underrepresentation of ICT roles, women also face many challenges in one important phase of women's lives, maternity. Our findings explore these challenges and can help organizations in developing policies to minimize them. Furthermore, it can help raise awareness of co-workers and bosses, toward a more friendly and inclusive workplace.
Larissa Soares, Edna Canedo, Claudia Pereira, Carla Bezerra, Fabiana Mendes
2023-04-24T13:50:33Z
http://arxiv.org/abs/2304.12099v1
# Investigating the Perceived Impact of Maternity on Software Engineering: a Women's Perspective

###### Abstract

Background: Several researchers report the impact of gender on software development teams, especially in relation to women. In general, women are under-represented on these teams and face challenges and difficulties in their workplaces. When it comes to women who are mothers, these challenges can be amplified and directly impact these women's professional lives, both in industry and academia. However, little is known about women ICT practitioners' perceptions of the challenges of maternity in their professional careers. Objective: This paper investigates mothers' challenges and difficulties in global software development teams. Method: We conducted a survey with women in the ICT field who work in academia and global technology companies. We surveyed 141 mothers from different countries and employed mixed methods to analyze the data. Results: Our findings reveal that women face sociocultural challenges, including work-life balance issues, bad jokes, and moral harassment. The prejudices they suffer make them insecure and lower their confidence in the work they perform. Furthermore, they usually do not have a support network during and after maternity leave, which culminates in them feeling overloaded. The surveyed women suggested a set of actions to reduce the challenges they face in their workplaces, such as creating a code of conduct for men and childcare within companies. Conclusion: Women face many challenges when they become mothers. Our findings explore these challenges and can help organizations develop policies to minimize them. They can also help raise awareness among co-workers and bosses, toward a friendlier and more inclusive workplace.

Software Engineering; Maternity Challenges; Pregnancy; Maternity Suggestions

## I Introduction

There is a worldwide effort to increase gender diversity in Science, Technology, Engineering, and Mathematics (STEM), most notably in Information and Communications Technology (ICT) [1]. However, the representation of women in these areas is still low [2], [3]. Canedo et al. [4] and Izquierdo et al. [5] found that women account for approximately 10% of software development teams and are largely under-represented in leadership positions. On the other hand, the National Center for Women & Information Technology (NCWIT) Scorecard shows trends in girls' and women's participation in computing in the United States (U.S.) over time. The above-mentioned report shows that in 2019 women accounted for approximately 20.9% of students completing bachelor's degrees in ICT in the U.S., against 16.7% in 2009 [6]. Numerous research studies conducted in the context of gender diversity in recent decades indicate that women in the ICT community have faced a hostile culture on the part of men that manifests itself as under-representation, disfavor, and inequality [4], [7]-[9]. The sociocultural challenges women generally face, whether in academia or the software industry, may ultimately drive them to quit their jobs, mainly when gender diversity is not a priority in their organizations [10]. Kuechler et al. [11] identified that women's distancing from their jobs is related to the jobs not being aligned with their motivations, or to unpleasant and hostile social dynamics in their workplace. Moreover, women are susceptible to becoming pregnant.
Thus, they can be single mothers, have very young children that require constant care, be the family's sole provider, or deal with numerous family problems. These scenarios became even more evident during the COVID-19 pandemic, when women working in ICT across different roles had to deal with at least two types of problems: one due to the pandemic scenario and the other due to the continued lack of consideration from male co-workers [12]. During the COVID-19 pandemic, software development teams were impacted by the migration from face-to-face work to remote work [13]-[15]. In such a scenario, mothers who were part of software development teams reported that working remotely with children was highly challenging, often carrying out their activities outside working hours [12]-[14], [16]. Furthermore, it is known that, historically, the responsibility for household and maternity activities is attributed to women, causing them to work a double shift and suffer penalties in the labor market. Trinkenreich et al. [17] identified possible actions that companies in the ICT sector can include in parental and gender-inclusive policies, such as sponsoring child care, providing adequate maternity leave beyond the relevant country's laws, and providing more flexibility in work hours. The same authors also reported that among the challenges for women in the software industry is that mothers receive [fewer] responsibilities because they have kids. Also, when returning from maternity leave, the company does not provide enough support, which usually leads the woman to ask to step down from her role. Despite numerous studies investigating gender in the software industry, we have not identified studies that deeply investigate the challenges of motherhood. Thus, this paper aims to investigate the challenges and difficulties mothers face in global software development teams. To achieve this, we address the ephemeral and critical challenges experienced by women in ICT working in either public organizations (e.g., educational, development, and research activities) or private software development companies. Hence, we performed a survey with 141 mothers from 17 different countries. The survey was composed of 55 questions: 47 closed-ended and 8 open-ended. Our findings reveal that mothers face many challenges in their workplace, such as moral harassment and a lack of empathy/sorority, work flexibility, and a support network. As a consequence, they feel overloaded, lack confidence, and are physically and mentally stressed, especially when they are single mothers and providers. It is exhausting for mothers to ask for and explain absences due to maternity, which can be many and of various types (e.g., doctor's appointments, kids' illnesses, and school events). There is a lack of empathy in companies, both from co-workers and bosses. In addition, the main suggestions to mitigate the challenges pointed out by the surveyed mothers were: changing the culture in organizations, creating a code of conduct for men, creating childcare within companies, creating opportunities/programs for women in ICT, and switching to hybrid work when needed.

## II Background and Related Work

### _Impact of gender on software development teams_

Several studies in the literature have investigated the impact of gender on software development teams, especially the participation of women [18, 19, 20]. Understanding the causes of this under-representation helps us understand the reasons for gender imbalance, whether in academia or industry.
In addition, it can help organizations devise strategies to attract and retain women in ICT. Wolff et al. [21] surveyed 252 women, who reported a lack of self-efficacy, a potential precursor of imposter syndrome. The authors reported that women feel discriminated against regarding equal opportunities, and their negative experiences in their workplace environments may affect their feelings and attitudes toward their careers. Canedo et al. [4] conducted semi-structured interviews with 17 Brazilian women, who reported dealing with hostile sexism, including discrimination and prejudice; benevolent sexism (i.e., when women are not given the most complex tasks to perform); and the glass ceiling, whereby few women hold leadership roles in their teams. Women contributing to OSS projects also reported work-related challenges, including work-life balance issues and imposter syndrome [10], lack of parity with colleagues and sexism [21], and prove-it-again [22]. Some studies also identified non-inclusive communication faced by women contributing to OSS projects, particularly technical biases against women developers, with lower code acceptance rates as well as delayed feedback during code reviews and on discussion lists [23, 24, 25]. Women also reported hostile sexism they faced during meetings with contributors.

### _Maternity in software engineering_

In the literature, we did not identify studies that directly investigated the impact of motherhood in the Software Engineering context. However, some studies report findings about challenges that mothers face in the software industry [4], [5], [7], [12], [17], [22]. Regarding the transition to remote work due to the COVID-19 pandemic, Bezerra et al. [13] and Machado et al. [26] conducted studies with Brazilian women. They reported that women faced even more work-life balance problems during the pandemic, lacking support in performing household chores and taking responsibility for their children. Unlike previous work, this paper explores the difficulties and challenges that mothers experience in software development teams and academic environments. In addition, we also aim to identify suggestions for dealing with these challenges.

## III Methodology

This study aims to investigate the impacts and challenges of motherhood in Software Engineering, both in academia and industry. In order to reach this goal, we defined the following research questions (RQs):

1. **Do mothers remain employed after becoming pregnant? If they quit their jobs, what were the motivations?** This question aims to identify the main reasons why ICT women leave their jobs after becoming mothers.
2. **How was the maternity leave affected by the job activities?** This question investigates whether mothers had to work during the leave or had to anticipate their return to work.
3. **What were the strategies employed by mothers to overcome difficulties during the COVID-19 pandemic?** The COVID-19 pandemic may have impacted the balance between work and motherhood activities. This question seeks the set of strategies that mothers used to make work possible during the pandemic.
4. **What are the harassment and prejudice related to motherhood that women face while working in the ICT area?** The ICT area is a predominantly male context, and women may suffer many prejudices related to maternity. This question details the types of harassment or prejudice that women suffer after becoming mothers.
5. **What are the difficulties perceived by mothers who occupy positions within the computing area?** This question aims to unveil the main difficulties mothers face in ICT jobs, both in industry and academia.
6. **What do mothers suggest to mitigate the difficulties faced at work related to motherhood?** This question aims to identify what actions mothers suggest to make the work environment more friendly.

The next sections detail the process of creating the survey, including the design and procedures to define the target audience, the pilot study, the survey invitation and distribution, and the strategies we employed to analyze the gathered data.

### _Target Audience_

We considered women software practitioners who work in ICT and are mothers (pregnant, biological mothers, stepmothers, and adoptive mothers). Hence, we included a control question at the beginning of the survey to filter the respondents and ensure we got responses only from our target audience.

### _Survey Design_

All the authors of this paper were involved in the design and validation of the survey questions. Three authors drafted the survey questions and the others validated them. The survey consisted of 55 questions, 47 closed-ended and 8 open-ended, grouped into 13 sections (S), as follows:

* (S1) Consent to participate in the research
* (S2) Control Question
* (S3) General Profile
* (S4) Children
* (S5) Professional Life
* (S6) Work in Industry
* (S7) Work in Academia
* (S8) Work in both Industry and Academia
* (S9) Organization
* (S10) Pregnancy
* (S11) Maternity Leave
* (S12) If not employed during maternity leave
* (S13) Difficulties, Challenges and Suggestions

At the beginning of the survey, we presented the statement of informed consent (S1), including the conditions and stipulations, along with our contact information. The survey was anonymous and respondents were not asked for any contact information. Then, we had the control question (S2). The next section (S3) comprised the general profile, with five questions; (S4) was related to children, containing three questions; and (S5) asked about professional life through four questions. From this point, we divided the survey into three sections, depending on the type of the mother's job: software industry only (S6), academia only (S7), or both industry and academia (S8). Those sections had three, two, and five questions, respectively. Section (S9) contained three questions about the organization they worked for, common to the three job types (industry, academia, or hybrid). The next four sections are related to maternity issues: (S10) consisted of nine questions about the pregnancy period; (S11) contained 14 questions about the maternity-leave period; (S12) was aimed at those who were not employed when pregnant, with four questions; and (S13) encompassed the last two questions, regarding maternity difficulties, challenges, and suggestions. The complete survey is available in our supplementary material1

Footnote 1: Survey supplementary material available at [https://zenodo.org/record/7548888](https://zenodo.org/record/7548888)

### _Pilot Study_

We conducted a pilot test round to evaluate the quality of the survey. We sent the questionnaire to three mothers who occupy positions within the computing area. Their feedback included suggestions regarding the wording of the questions and other modifications, such as including alternatives to closed questions and the researchers' contact information. We followed their advice and improved the questionnaire.
Regarding the time to complete the survey, the pilot respondents took less than 18 minutes on average. We reported that time to the respondents when the survey questionnaire was made public.

### _Survey Invitation_

We used the Google Forms platform2 to create the survey questionnaire. Next, we made it available through cards and text on different social media platforms. We used two strategies: posts and direct messages. We posted on Twitter, LinkedIn, Facebook, and Instagram, and we sent direct messages to profiles on these platforms, in addition to WhatsApp and e-mail. The questionnaire was available from November 10th to December 14th, 2022 (34 days).

Footnote 2: [https://www.google.com/forms](https://www.google.com/forms)

### _Data Analysis_

This is essentially qualitative research, using histograms and percentages to characterize the sample or rank the items most cited by the respondents. To answer the research questions, we employed elements of Grounded Theory by performing open and axial coding [27]. Grounded Theory refers to a method of inductively generating theory from data. Such studies often include unstructured text, for example, interview transcripts, field notes, and so on; however, they may also include structured text, diagrams, images, and even quantitative data [28]. In this study, the coding process was performed in three rounds. In the first one, two authors performed the open coding of all open questions: they split the data into discrete parts and labeled these parts to create codes. In the second round, two other authors performed the axial coding: they read the discrete parts of the data and the assigned codes to identify connections among the codes and group them into categories. Finally, another author reviewed and refined the categories and codes in the third round. An example of the coding process is shown in Figure 1: it presents respondent #R03's answer regarding the difficulties and challenges faced at work as a mother. The complete open and axial coding process is available in our supplementary material.

Fig. 1: Example of how the coding process was carried out.

## IV The motherhood landscape in the ICT area

The survey received 147 responses. After removing the responses from women who were not mothers (based on the control question), we obtained 141 valid responses.

### _Respondents' profile_

We received responses from 17 different countries. Overall, more than half of the mothers work in Brazil (54.6%), but we also received answers from Portugal (6.4%), the USA (4.3%), Bolivia (4.3%), Belgium (3.5%), Canada (3.5%), Spain (3.5%), Argentina (2.8%), Sweden (2.8%), Australia (2.1%), Colombia (2.1%), Mexico (2.1%), Switzerland (2.1%), Austria (1.4%), Chile (1.4%), Denmark (1.4%), and the United Kingdom (1.4%). Our research thus had the participation of mothers from different continents, mitigating the cultural factor. Furthermore, mothers of different ages participated in the survey. Most respondents were between 31 and 54 years old (83.7%); 3.5% were 21 to 25 years old; 7.1%, 26 to 30; 2.8%, 55 to 60; and 2.8% were more than 61 years old. Almost half of the respondents had 2 children (48.2%); 39% had only one child; 9.9%, 3 children; 2.1%, 4 children; and 0.7% had more than 5 children. The survey questionnaire also asked about the age of the youngest child. For 63.8% of the mothers, the youngest child was up to 6 years old; for 12.8%, between 7 and 9; and for 23.4%, over 10 years old.
Additionally, 97.2% of the respondents live with their children. Most respondents have children aged up to 10 years; this is relevant to our findings, since children in this age group need more support and dedication from their mothers. Regarding education, only one respondent had not yet finished a bachelor's course. Of those who had finished it, 36.2% pursued a Master's degree; 29.1%, a Ph.D.; and 14.9%, an MBA. Of the total respondents, 97.9% were employed when they answered the survey; among them, 42.6% had a face-to-face job, 33.3% were working in a hybrid mode (remote and face-to-face), and 22% were working fully remotely. For 42.6% of them, the family income was higher than 10 minimum wages; however, 11.3% received up to 3 minimum wages; 24.1%, up to 5; 12.8%, up to 7; and 9.2%, up to 9 minimum wages. Thus, approximately 58% of the respondents were working in a hybrid or remote mode and 42% face-to-face, and most respondents have a family income of more than 7 minimum wages. More than half of the respondents were working at private software development companies (56%); 32.7% were working in either Federal or State Public Administration; 14.2% in research/collaboration projects; 5.7% at State-owned companies (such as state banks); and 0.7% on open source software projects. The total exceeds 100% because 12 respondents marked more than one option. We can see that most of the respondents are from industry, but we also have a good representation of women who work in academia. Regarding marital status, 53.9% of the women are married; 21.3% are single; 15.5% are in a long-term relationship; 7.1% are divorced; 1.4% are separated; and 0.7% are widowed. Thus, the general profile of the mothers who participated in the survey is: Brazilian, with two children up to 6 years old, holding a master's degree in computing, employed in a face-to-face or hybrid work regime, and earning 10 minimum wages at a software development company.

### _Organization_

The respondents performed different roles in their organizations, such as Lecturer and Researcher, Software Engineer, Programmer/Developer, Project Manager, Requirements Analyst, Professor, Software Tester, Human-Computer Interaction specialist, Designer, and Researcher. Other roles, such as QA Tech Leader, Product Manager, Linguist, Infrastructure, Enterprise Architect, Engineering Manager, Director, Database Administrator, Data Modeling, Course Coordinator, Technology Coordinator, and Change Management Analyst, were mentioned only once by the participants. Figure 2 shows the main roles reported by them.

Fig. 2: Main roles performed in the organizations

For those who work in the software industry, team size is characterized as follows: 50% of respondents work in teams with more than 16 members; for 13%, the teams have 11 to 15 members; for 23%, 6 to 10; and for 14%, fewer than 5 members. For respondents who work in academia, the numbers are much higher, since we considered the number of members in their departments. Thus, 20% of respondents work in departments with up to 20 members; 23%, with 21 to 49 members; 28%, with 50 to 99; and 29%, with more than 100 members.
Although they occupy various roles in the software industry and academia and may work in huge teams or departments, 36% of respondents work in teams/departments with up to 6 women; 18% work with 7 to 15 women; and 8% work with more than 16 women. Moreover, 57% of them affirmed that they work with fewer than 3 women, as shown in Figure 3.

Fig. 3: Number of women in teams or departments

In addition, 61% of respondents stated that there is one woman or none in leadership positions in their teams/departments; 27.7% stated that there are 2 or 3 women; 4.9%, 4 or 5; and 6.4% reported more than 5 women in leadership positions in their teams or departments. This finding is similar to what was found by Canedo et al. [4] and Izquierdo et al. [5]. Overall, 93.6% of respondents work in teams with at most 5 women in leadership positions, as shown in Figure 4.

Fig. 4: Number of women in leadership positions in team or department

#### IV-B1 Industry setting

Of the total of mothers, 60.3% worked only in the software industry and 10.6% worked in both Industry and Academia. For both groups, most respondents have worked in the industry for more than 15 years (34%), as Figure 5 shows. However, respondents were quite diverse, ranging from little experience, such as less than 1 year (3%), to between 4 and 9 years (23%), and even more than 10 years (65%).

Fig. 5: Years of experience in the software industry and academia

Regarding the organization, 39% of the respondents work in large-sized companies with more than 400 employees, combining both groups (Industry and Industry & Academia), as Figure 6 shows. Furthermore, 33% work in medium-sized companies with up to 199 employees, and 10% work in companies with up to 20 employees, considered small or micro companies by the Organisation for Economic Co-operation and Development (OECD)3.

Footnote 3: classification available at: [https://data.oecd.org](https://data.oecd.org)

Fig. 6: Number of employees in the organizations

Concerning team size, of the women who work in Industry, 37% work in teams with more than 21 members; 40% in teams with 6 to 20 members; and 8% in teams with fewer than 5 people. Regarding the size of the teams of women who work in both Industry and Academia, 4% work in teams with more than 21 members; 5%, 6 to 15; and 6%, fewer than 5, as Figure 7 shows.

Fig. 7: Team size

#### IV-B2 Academia setting

Considering the total of respondents, 29.1% work exclusively in Academia, and 10.6% work in both Academia and Industry. For both groups, most respondents have worked in academia for more than 15 years (22%), as Figure 5 shows. However, respondents were quite diverse, ranging from little experience, such as less than 1 year (1%), to between 7 and 12 years (8%), and between 13 and 15 years (7%). Regarding the respondents who work in Academia, most work in educational institutions with 50 to 99 employees (16%); 13% work in institutions with 21 to 49 employees; and 10% in institutions with more than 600 employees, as Figure 6 shows.

### _Reasons to quit or change jobs (RQ[1])_

Considering the 141 respondents, 131 were employed when they last became pregnant, which is about 93% of them. Of these 131 mothers, 14 (11%) left their jobs after they became pregnant. The answer of respondent #R62 is a warning about the problems suffered by women:
_"In my first pregnancy [...] I was fired when I returned from maternity leave and it took me 2 months to find a new job."_ From the ones who indicated the reasons to quit or change their jobs, the main motivations were:

* Moral harassment (45.5%). They faced bad jokes, prejudice, and discrimination. For example, #R110 commented: _"I couldn't stand my co-workers, there were a lot of bad jokes and it lowered my self-esteem"_. Also, _"I looked for a job where people believed in me more"_ (#R119); and _"I changed job because my boss didn't think I was capable of working and taking care of the kids"_ (#R105);
* Health-related issues (18.2%). They had to deal with health problems of both mother and baby. For example, respondent #R32 said: _"my child was born with Down syndrome and this fact prevented me from working for two years and after this period, still struggling, I used to do my job during the evenings"_;
* Feeling guilty (9%). Reconciling work with motherhood is not an easy task. For example, respondent #R47 said: _"we always feel guilty for going out and leaving the children at home with the nannies"_.

Although less cited, they also reported changing their jobs because of the logistics around child rearing, or changing to a position that requires fewer hours and less stress. Additionally, 27.3% of the mothers reported that they left their jobs because they found better opportunities, for example, a better position or salary; some also reported that they switched to companies that allow remote work. For instance, respondent #R02 stated: _"I had the desire to breastfeed my baby and the company did not allow remote work"_.

**RQ[1] Summary**: 11% of the women quit their jobs after becoming mothers. The main reasons for quitting were moral harassment suffered within the company and health issues. Others changed jobs in search of a better quality of life and better opportunities. Although the percentage is small, some women needed to quit their jobs after becoming pregnant, affecting their income and profession.

### _Maternity Leave (RQ[2])_

Despite 93% of respondents stating that they were working when they became pregnant, only about 79% (111) of them were on maternity leave during their pregnancy. Of these, 89.2% maintained the same number of employment relationships. In general, 69.4% of the mothers had one job during the maternity leave; 7.2% had two employment relationships; 2.7%, three jobs; 5.4%, more than three;
Furthermore, 3% stated that they did not ask for any support network or that they only received help from their husband. When asked whether their partners shared childcare responsibilities with them during maternity leave, 41% said "Yes"; 32% "No", and 27% received partial help. The support network for these mothers shortly after maternity leave was quite diverse. Figure 8 shows that the main caregivers for the children were babysitters, relatives, and nurseries, respectively. However, some mothers dedicated themselves to this task, as shown in the speech of respondent #R64: _"myself, because I always worked from home and after maternity leave came the pandemic with the beginning of the shutdown, so I didn't have much choice"_. Others had the support of their children's parents. One even reported that in Sweden parental leave is divided between both parents and added _"My husband took 6 months to leave with the child until he was 1 year old and started school"_ (#R92). They also cited grandmothers, school, and mother-in-law as caregivers. Of those on maternity leave, only 13.5% indicated having support from more than one agent. After returning to work from maternity leave, 70% of respondents work the same amount of hours per week; 17%, fewer hours per week and 13% more hours per week. **RQ[2] Summary**: 79% of respondents were on maternity leave during their pregnancy. Of these, 15.3% had more than one job during maternity leave and 15.3% were unemployed. Some of the women (24%) had to work during maternity leave, for a variety of reasons, among them the need for money, the fear of losing their job, or feeling less capable upon returning and being asked by their boss. About 39% of respondents had no networking support during maternity leave. The main support was provided by babysitters, family members, and nurseries. ### _COVID-19 Pandemic (RQ[3])_ The COVID-19 pandemic may have changed the way people relate and communicate with each other, at work and home. Several reports of women have moved from face-to-face work to remote jobs. Some companies have adopted remote work and intend to continue even after the pandemic, others are adopting a hybrid work system. It is important to highlight that this new configuration may bring benefits to the employees, however, companies should pay attention to the challenges that the configuration can bring to mothers. For about 81% of respondents, the COVID-19 pandemic made it difficult to balance the time between work and motherhood-related activities. Many respondents reported strategies they employed to overcome those difficulties. We aggregated the reported strategies into six categories as shown in Figure 9. The category Flexibility with school tasks corresponds to strategies related to the reduction of the charge related to school activities, and the Use of devices represents an alternative to hold the attention of the kids while their mothers work. The other four categories contain sub-categories that are not presented in Figure 9. The Shared responsibilities category contains three Fig. 8: Person responsible for taking care of the baby after mother’s maternity leave Fig. 9: Strategies used to overcome difficulties during the COVID-19 pandemic sub-categories representing with whom they shared responsibility: some mothers were able to count on the help of _fathers or relatives_, others _hired a nanny_ and others could count on _teacher assistance_ to help with their children's homework. 
In the Work overload category, a significant number of mothers reported not being able to handle their work during the day and having to work _at alternative times to the children's schedule_, such as after the child sleeps. A strategy employed by some of these mothers to cope with this difficulty was _sleep deprivation_; in other words, they slept fewer hours per day to be able to work more. The Time planning and sharing category corresponds to the creation of a plan to make it possible to work from home among the children. The respondents reported four strategies: _creating a kids' schedule_, _creating a daily schedule_, _taking care of children while working_, and _taking breaks from work whenever possible_. Finally, the last category, Change at work, contains three strategies used by the mothers: _changing from face-to-face to home office_, _workload reduction_, and _one partner quitting the job_ to take care of the children. Of all the responses, only one respondent did not mention any negative aspect of remote work; according to her, working remotely gave her the opportunity to spend more time with her kids.

**RQ[3] Summary**: For most of the respondents (81%), the COVID-19 pandemic made it harder to balance time between work and motherhood activities. Mothers needed to develop a set of strategies to make work possible during this period, such as: allowing the use of devices by the kids; workload reduction; creating a time schedule; working at alternative times to the children's schedule; and hiring a nanny or teacher assistance.

### _Harassment (RQ[4])_

We also investigated whether women had suffered any harassment because of their pregnancy. One would hope that this type of prejudice would not occur in any proportion. However, the results pointed out that, of the 141 respondents, 59 had suffered some harassment, about 42%, as opposed to the other 58% (82 mothers) who had not experienced this embarrassment. Although it affects a minority, this percentage is quite relevant and points out that there is still an important path to be followed for such behaviors to be mitigated. Those who suffered harassment gave statements that reveal gender issues that need to be reflected upon, especially the moral harassment they faced. The most frequently mentioned issues were related to distrust, mean jokes, and related prejudices, as Figure 10 shows. The Distrust category represents the distrust of the boss and coworkers regarding the functional capacity of the pregnant colleague or mother, as illustrated by the statements of three of the respondents (#R143, #R130, and #R135, respectively): _"My boss said in all meetings with the team that I was not able to carry out my activities within the established deadlines due to my pregnancy"_; _"Everyone thought that my creative process would be affected by my pregnancy, that I wouldn't be creative and be able to make interesting designs"_; _"Some colleagues didn't want to be on the same team as me because they thought I wouldn't be able to work"_. The Prejudice category refers to the damages and impacts resulting from moral harassment, whether emotional, psychological, or financial. The statements brought up by the respondents show how necessary it is to discuss gender diversity in academic and professional environments, to guarantee a more equitable, fair, and healthy space for everyone.
In this category, some statements reveal the cruelty of peers and, above all, the need for reflection and actions toward significant changes in organizational environments and society in general. For example, #R114 affirmed _"My colleagues always looked at me with discrimination and as if I didn't know anything about ICT"_; also, #R131 said _"My co-workers thought that I had a disease and that I would not be able to work for the 9 months"_. Not only colleagues: ICT mothers also suffer from bosses' prejudice. Two statements stood out (#R115 and #R105, respectively): _"My boss asked me to resign. He said I couldn't take on two roles, mother and practitioner"_ and _"When I told my boss that I was pregnant, he asked the company to fire me. He said I wouldn't be able to work with kids"_. Mean jokes were also commonly reported by the respondents. This category represents professionals who, in their work environments, suffered from mean jokes throughout or after their pregnancy. For instance, #R136 affirmed _"My co-workers made a lot of jokes and started not assigning me any tasks"_ and #R55 reported _"When they found out about my pregnancy they started treating me differently as if I had a very contagious disease, I always heard giggling and small talk, my paychecks were on different days than others and my boss always referred to me as 'the pregnant woman' "_.

Fig. 10: Moral harassment identified in the survey

As a consequence of the harassment they suffered, some women reported that they began to doubt their own abilities. For instance, #R126 affirmed _"My colleagues made so many bad jokes that I began to think that I was incapable of working and being a mother"_.

**RQ[4] Summary**: 42% of the women reported that they suffered some maternity-related harassment within ICT companies/institutions. Among the most common forms of harassment were distrust, prejudice, and mean jokes. The women's statements illustrate the perversity of coworkers' and bosses' behaviors.

### _Difficulties mothers face at work (RQ[5])_

Respondents cited many difficulties and challenges they face in their day-to-day work. For example, according to respondent #R47, it might be hard _"caring, giving attention, educating, and doing all this together with an overloaded workday"_, which can be even harder for solo mothers without a support network. Figure 11 shows the identified categories of difficulties. Many of them are related to mothers' overload. For example, some mothers complained about the lack of time to study and keep up to date. Respondent _#R87_ said: _"One of the most difficult things is [...] not having time for training on professional skills"_. There is also no time for self-care, such as physical activity, mainly due to the lack of a support network. Furthermore, balancing maternity, work, and housework is not an easy task. Some respondents indicated that they had already missed opportunities for promotion and used to avoid complex projects to have more time for the children. They would also like to set limits on what time of the day they can have meetings, for example. A related challenge is the lack of work flexibility for childcare tasks. A common difficulty among the respondents was maternity-related absences. Children get sick and need to go to the doctor quite often, especially in the first years of life. Respondent #R96 said that _"[...] bosses do not understand these absences, especially those who still do not have children"_.
The mothers cited five common necessary absences, among them medical appointments, children's illnesses, childhood vaccines, and school events. Another related challenge is the lack of women at work. The ever-reduced number of women on the team means that they have no one to talk to, which makes it difficult for colleagues and bosses to understand their difficulties. Thus, they also complained about the lack of empathy or sorority, sometimes even from other women. Respondent #R87 affirmed: _"When I returned from maternity leave, the woman in charge gave me greater responsibilities, because when she returned from maternity leave, they did this to her"_. Remote work challenges, such as children requiring attention, and the strategies to overcome them, as presented in Figure 11, were also cited. Besides, the short maternity leave instituted in some countries, such as Brazil, can be challenging for mothers who need to return to work after 4 months, a period when the baby is still exclusively breastfed, as indicated by pediatricians. Furthermore, mothers have to handle bad jokes and prejudice at work. For example, respondent #R136 declared _"I heard many bad jokes from my male colleagues. Sometimes even women too [...]"_. And respondent #R107 said _"My biggest challenge is to prove that I am capable. The lack of confidence from my male colleagues makes me feel discouraged [...]"_.

The difficulties and challenges of motherhood imply a set of consequences for mothers, as Figure 12 shows. The challenges mothers face in coping with work and motherhood can have dangerous consequences. They reported being overloaded, with physical and mental exhaustion, often under sleep deprivation. Moreover, the lack of confidence in mothers' work was a theme frequently addressed in the mothers' responses. For instance, respondent #R122 affirmed that _"Sometimes my colleagues think that I am not able to fulfill my demands at work and home"_. They also reported having difficulty concentrating when the babies are so young and they need to go back to work, because maternity leave is so short. They affirmed feeling guilty about: (i) being away from the children; (ii) not having time for them; and (iii) not meeting children's attention needs.

Fig. 11: Main difficulties and challenges mothers face at work

Fig. 12: Consequences of the difficulties mothers face at work

**RQ[5] Summary**: Respondents pointed out many difficulties in reconciling work with motherhood. Most of them are related to mothers' overload and the lack of time to care for themselves and keep up to date professionally. They do not find flexibility at work for the absences that motherhood requires, such as doctor appointments. Also, the bad jokes and prejudice they suffer make it harder to balance work and maternity.

### _Suggestions to mitigate the difficulties (RQ[6])_

We also asked the participants what their suggestions would be for mitigating the difficulties (by the organization and co-workers) faced in their work environments. Table I presents all suggestions that received two or more citations. The suggestion most mentioned by the participants was the change of culture, cited 27 times by the mothers who participated in the survey. For example, #R92 affirmed that _"The culture of the team and the company (and even the country in general) must be aligned with raising a family's difficulties. I live in Sweden and here the laws make it easier to raise children, such as parental leave of 480 days shared between the guardians of the child"_.
The second most cited suggestion was to create a code of conduct for men. Mothers further suggested applying fines if men do not comply with the code, so that they stop making jokes and judging mothers' abilities. Some works in the literature have proposed developing a code of conduct as a collective policy on unacceptable behavior in interactions between members of development teams [7, 22, 24, 29, 30]. The studies report that harassment cannot be tolerated and that violations of policies should have consequences, for example by employing enforcement mechanisms such as appropriate penalties, if necessary. Other suggestions, each cited once by the respondents, included: establishing limits; reducing meetings' frequency and duration; prioritizing tasks; shorter projects and less complex tasks on the return from maternity leave; changing the structure of scientific events (for example, not setting submission deadlines on Sundays and holidays, and having a kids' space at scientific events); creating a welcome program for women; more personal days; more time off with no penalties; having professionals/colleagues who can replace mothers that need to be absent; and promoting initiatives like this survey.

Personal life and work should not be seen as opposing worlds, but as complementary ones. It is essential to promote and practice conciliation strategies so that mothers do not lose their space in the labor market. Mothers need to feel welcomed in the work environment. Therefore, it is necessary to provide alternatives that promote the reception of these mothers so that motherhood is naturalized in companies. In view of what was raised, it is possible to point out some interesting initiatives capable of contributing to the transformation of organizational culture, such as:

* _Give visibility to the subject of motherhood at work._ Companies could create educational moments for leaders and other collaborators in order to raise awareness about the topic. For example, they could invite mothers to share with the entire company their challenges and how they overcame them, or the solutions that would have gotten them through.
* _Merge work groups in such a way that every team has a female member._ It is easier to deal with situations when we experience them. Including mothers in teams, in addition to increasing diversity, can make colleagues more aware of women's difficulties.
* _Provide foster care spaces for mothers._ Companies could create discussion groups for mothers, providing exchange and a safe environment. Support groups are essential for mothers to feel welcomed in the corporate environment.
* _Create spaces for children in companies and daycare centers that can receive them._ These spaces are valuable for companies, research centers, universities, and conferences.
* _Set diversity and inclusion goals._ Companies could set inclusion goals that seek to favor minority groups like working mothers. These goals need to be monitored closely; therefore, they should be measurable and have a well-defined deadline.

**RQ[6] Summary**: The respondents feel that the work environment could be more friendly and inclusive for mothers. They suggested many actions that could be taken by the organization and co-workers, such as creating a code of conduct for men and changing the culture, extending maternity leave, talking openly about maternity, and supporting parents so they can focus on getting their work done while they are at work.
## V Threats to Validity

As with any empirical study, this work has threats to validity and limitations, which we present in this section. According to Kasunic [17], there are three important types of validity concerning survey research: construct, internal, and external validity. **Construct validity** aims to answer whether "we are measuring what we think we are measuring" [17]. The questionnaire employed in this research had never been used before. To deal with this threat, we divided the authors into two groups: those who created the questionnaire and those who reviewed it. Furthermore, before distributing the questionnaire, we ran a pilot with three people, which resulted in changes to it. The analysis of the **external validity** of a survey aims to answer the following question: "Can the results be generalized to other people, places, or times?" [17]. Despite the good number of answers (147) and the different countries that the respondents are from (17), half of the respondents of the survey are from Brazil. Finally, regarding **internal validity**, the sample characteristics might have influenced our results, for the same reason mentioned before: half of the respondents are from Brazil. Therefore, the suggestions and challenges might reflect problems and situations specific to women who live in Brazil. Furthermore, it is important to highlight that most of the authors who conducted this research are mothers (4 out of 5), which could also have influenced the results we got.

## VI Conclusion and Future Work

This paper presents the results of a survey about women's perceptions of the impact of motherhood on their careers in software engineering. We received responses from 141 mothers who work with software engineering in industry and academia in 17 countries. As women and mothers, we wanted to cast a more sensitive look at the difficulties of mothers working in this area and how companies can provide better support to them. We designed a questionnaire with 55 questions to answer six research questions. RQ[3] investigated the strategies employed by mothers to overcome difficulties during the COVID-19 pandemic. We found that most of the women (81%) reported difficulties in balancing work and motherhood activities. To overcome them, they developed strategies such as creating a routine for the children and hiring someone to help take care of them. The surveyed women also suggested many actions to deal with motherhood difficulties (RQ[6]), such as: (i) more empathy/sorority in relation to problems related to motherhood; (ii) creating a code of conduct for men; (iii) creating childcare within companies; (iv) creating opportunities/programs for women in ICT; and (v) hybrid or remote work models. This requires changing companies' organizational culture and placing more women in leadership positions. Women still suffer harassment and prejudice related to motherhood (RQ[4]), such as distrust and mean jokes. Some of them reported that they started to feel incapable of performing their work and being a mother simultaneously. RQ[2] asked how maternity leave was affected by job activities. We found out that most of the mothers (79%) could use maternity leave; however, many needed to work during it. In relation to RQ[1], we found out that 11% of women quit their jobs after becoming mothers.
Furthermore, mothers face many difficulties while working in the computing area (RQ[5]), such as the overload of having to comply with company work hours and then work overtime on household activities. They also complained that employers make maternity-related absences difficult, such as doctor appointments and children's illnesses. In summary, this paper shows that mothers perceive a social penalty for mothers in the corporate environment. They suffer discrimination in the area of Software Engineering, which can be very negative for the market as a whole, as it can discourage female talent and prevent their growth. Overall, everyone desires a better world to live in: a world where women and men are treated with equality and respect, recognizing their differences. In future work, we intend to investigate how facilitating actions directed at mothers can affect factors such as team productivity and software quality. We also intend to investigate, from the perspective of men, the impact of motherhood on software development teams.
2308.03241
Notably Inaccessible -- Data Driven Understanding of Data Science Notebook (In)Accessibility
Computational notebooks, tools that facilitate storytelling through exploration, data analysis, and information visualization, have become the widely accepted standard in the data science community. These notebooks have been widely adopted through notebook software such as Jupyter, Datalore and Google Colab, both in academia and industry. While there is extensive research to learn how data scientists use computational notebooks, identify their pain points, and enable collaborative data science practices, very little is known about the various accessibility barriers experienced by blind and visually impaired (BVI) users using these notebooks. BVI users are unable to use computational notebook interfaces due to (1) inaccessibility of the interface, (2) common ways in which data is represented in these interfaces, and (3) inability for popular libraries to provide accessible outputs. We perform a large scale systematic analysis of 100000 Jupyter notebooks to identify various accessibility challenges in published notebooks affecting the creation and consumption of these notebooks. Through our findings, we make recommendations to improve accessibility of the artifacts of a notebook, suggest authoring practices, and propose changes to infrastructure to make notebooks accessible. An accessible PDF can be obtained at https://blvi.dev/noteably-inaccessible-paper
Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff
2023-08-07T01:33:32Z
http://arxiv.org/abs/2308.03241v1
# Notably Inaccessible - Data Driven Understanding of Data Science Notebook (In)Accessibility

###### Abstract.

Computational notebooks, tools that facilitate storytelling through exploration, data analysis, and information visualization, have become the widely accepted standard in the data science community. These notebooks have been widely adopted through notebook software such as Jupyter, Datalore and Google Colab, both in academia and industry. While there is extensive research to learn how data scientists use computational notebooks, identify their pain points, and enable collaborative data science practices, very little is known about the various accessibility barriers experienced by blind and visually impaired (BVI) users using these notebooks. BVI users are unable to use computational notebook interfaces due to (1) inaccessibility of the interface, (2) common ways in which data is represented in these interfaces, and (3) inability for popular libraries to provide accessible outputs. We perform a large scale systematic analysis of 100000 Jupyter notebooks to identify various accessibility challenges in published notebooks affecting the creation and consumption of these notebooks. Through our findings, we make recommendations to improve accessibility of the artifacts of a notebook, suggest authoring practices, and propose changes to infrastructure to make notebooks accessible. An accessible PDF can be obtained at [https://blvi.dev/noteably-inaccessible-paper](https://blvi.dev/noteably-inaccessible-paper)

## 1. Introduction

Computational notebooks such as Jupyter (Datalore and Gourour, 2018) combine code, natural language, and rich representations of data, providing a ubiquitous literate programming experience (Krishnan, 2018). These notebooks are used through computational notebook systems and programming environments such as Jupyter, Google Colab, Datalore, and Noteable (among others) that abstract software setup, computational infrastructure management, and access to resources. Computational notebooks are widely used by data scientists as interactive mechanisms to process, understand, and express data, making it easier for them to collaborate, share code, and convey stories and narratives through data visualizations and text, while keeping the reproducibility of results in mind (Sudheesh, 2018; Sudheesh, 2018). The popularity of these computational notebooks, specifically Jupyter notebooks, as the go-to tool for data science can be seen in the rapid increase in published notebooks: 2.5 million public notebooks were hosted on GitHub in September 2018 (Sudheesh, 2018), a 10x increase since 2015 (Sudheesh, 2018). As of 2020, this dataset has grown to over 10 million notebooks and has been analyzed by JetBrains, a company building Integrated Development Environments (IDEs) (Sudheesh, 2018). Despite computational notebooks being popular tools, we know very little about the accessibility of these tools for developers and data scientists who are blind or visually impaired (BVI). What little has been written on the topic is found in non-peer reviewed sources such as Astronomy Notebooks for All (Sudheesh, 2018), an effort to perform accessibility auditing of the Jupyter Lab interface and contribute changes to the upstream Jupyter open source community. An early 2023 analysis of the accessibility score for Jupyter Hub graded it as a fail (F) (Friedman, 2018).
One active effort to address the inaccessibility of notebook software can be found in Microsoft's Visual Studio Code, a popular IDE among the BVI developer community, which is building a new, more accessible notebook authoring experience on top of the existing standardized notebook format by adding improved keyboard navigation and audio cues. While these are much needed improvements to the notebook IDE experience, they do not contribute to our understanding of the full variety of accessibility issues that can arise from the different ways in which computational notebooks are authored, consumed, and published. Understanding these accessibility issues can be critical to improving the accessibility of notebooks and the infrastructure that supports their creation, consumption, and distribution. We present a data-driven investigation of the accessibility of computational notebooks. Our investigation focuses on accessibility for BVI notebook _authors_ and _consumers_ (hereby referred to as BVI users or BVI notebook users). Our work answers the question of whether IDE artifacts, authoring experiences, and infrastructure to work with computational notebooks are accessible to BVI users. We answer the following specific research questions:

* _Data Artifacts_: How accessible are key data artifacts, namely _figures and tables_, to blind or visually impaired users?
* _Authoring_: How do existing notebook authoring practices impact screen reader users' ability to glance through important information and results? For example, do most notebooks make proper use of headers and other landmarks that improve navigation and glanceability?
* _Infrastructure_: How do current tools to distribute and customize notebooks impact accessibility? For example, how do different themes for coloring a notebook impact the number of accessibility errors found by automated tools assessing that notebook?

We answer these questions in the context of a large-scale, in-the-wild dataset of computational notebooks. This includes notebooks that may have been used for anything from exploratory data analysis to documents produced for public distribution. These notebooks could be written and consumed by users from a variety of disciplines, including students, coders, or data scientists, and this process should be accessible to any notebook user who may be blind or visually impaired at any stage of the notebook authoring or consumption process. Thus, we chose to assess accessibility at scale without narrowing to a specific use category. We narrow our analysis to a subset of 100000 notebooks selected at random from the JetBrains dataset of 10,000,000 (10 Million) notebooks (Han et al., 2017). Choosing such a large, random sample helps us to understand common patterns in notebook inaccessibility. We complement this with manual verification of a smaller set of 10 notebooks. Additionally, we narrow parts of our analysis to Python, the most popular language used in notebooks, so that we can perform language-specific code analysis to gain a deeper understanding of inaccessibility caused by notebook infrastructure. Our contributions are as follows:

1. We develop repeatable automated metrics that represent _optimistic upper bounds_ for estimating notebook accessibility.
2. We developed a method for the first systematic large scale analysis of the current state of accessibility of computational notebooks to blind or visually impaired notebook authors / consumers.
We open source our dataset and processing pipeline to enable researchers to build on this method and extend our results (Zhou et al., 2017).

3. We present results highlighting the overall inaccessibility of notebooks with respect to three research questions which look at the accessibility of data artifacts, notebook IDEs, and infrastructure. We also describe the programming tools most commonly used by notebook authors.
4. Based on our results, we highlight opportunities such as encouraging good ALT text authoring practices, or automatically generating an accessible table alongside a chart. We make recommendations for notebook software developers, data scientists, and accessibility researchers creating computational notebooks, to make data and the corresponding storytelling accessible through computational notebooks.

Our findings about the accessibility of computational notebooks have the potential to quicken the pace of making data science accessible by identifying the right improvements to widely used tools and notebook authoring practices, reducing the need for bespoke, custom accessibility-only solutions.

## 2. Background

Computational notebooks have risen in popularity since the inception of Jupyter in 2014, impacting many domains within and outside of computer science such as data science, machine learning, and astronomy. This impact was recognized by the Association for Computing Machinery (ACM) in 2017 with a prestigious software systems award (Bahdan et al., 2017). Often, these notebooks are authored in a web-based IDE such as Jupyter Lab and Jupyter Book, or through hosted and managed alternatives such as Google Colab and Datalore. Since their invention, millions of such notebooks have been authored for data analysis and related tasks, and it is important that we analyze this phenomenon (Zhou et al., 2017) in the context of accessibility. Rule _et al._ collected and released a dataset of one million Jupyter notebooks (Rule et al., 2017). Analyses of these notebooks have shown that the majority of the notebooks do not declare dependencies and have not been tested (Zhou et al., 2017), and that notebook users consider them to be personal and messy (Rule et al., 2017). Although many notebooks are used primarily by their authors, some notebooks are _published_. Publishing a notebook leverages tools built into the notebook IDE to generate a webpage, or sometimes a PDF or LaTeX document. In the case of web pages, these tools use web semantics such as headings and tables to structure content, and allow notebooks to be themed or otherwise decorated. To better understand what issues may be of concern, we review the literature outside of computational notebooks in three closely related areas: programming / IDE accessibility (relevant to the accessibility of the notebook _authoring_ experience, Section 2.1); data analysis and visualization accessibility (relevant to both _authors_ and _consumers_, Section 2.2); and web accessibility (relevant to _consumers_ and authors of published notebooks, Section 2.3).

### Programming/IDE Accessibility

Accessibility of web-based programming has been studied in the past in the context of high-fidelity prototyping tools, which were found to have inaccessible graphical user interface (GUI) controls, preventing BVI users of these tools from accessing the content in widgets and manipulating them on the prototyping canvas (Datalore et al., 2017).
Another useful point of comparison is the set of accessibility concerns raised in studies of other (non-web) programming environments. Mealin and Murphy-Hill published one of the first studies to understand accessibility barriers experienced by BVI developers (Muller et al., 2017). They found challenges associated with using IDEs and developing user interfaces. Other studies have found that accessibility barriers can impact navigation, debugging, and glanceability of code during the programming process (Bahdan et al., 2017; Datalore et al., 2017). Accessibility can also impact other tasks related to software development, such as information seeking and collaboration with sighted users (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017). Solutions to these accessibility concerns include improvements to IDEs, bespoke software tools, and physical tactile interfaces to make web and user interface development accessible to BVI developers (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017). Several tools present novel, accessible representations of code by repurposing familiar navigational structures such as list views, tree views, and tables to facilitate efficient screen reader navigation (Bahdan et al., 2017; Datalore et al., 2017; Zhou et al., 2017). Additionally, audio has been used as a feedback mechanism to facilitate accessible debugging of code (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017). However, the ability of these tools to accessibly support data intensive programming has not been evaluated.

### Accessibility of Data

A critical aspect of using computational notebooks is to create, consume, and collaborate on visual, tabular, and other representations of data. These representations are often generated as results of computations performed in a Jupyter notebook. Understanding prior work on data and accessibility helps contextualize the accessibility barriers that prevent notebook users from accessing data and the results of computations performed in these notebooks. Several efforts have explored making data visualizations _accessible_ through auditory representations combining speech with tones, and _interactive_ through voice commands, keyboard shortcuts, and touch screen gestures (Krause et al., 2015; Krause et al., 2016; Krause et al., 2017; Krause et al., 2018; Krause et al., 2019). Sharif _et al._ present a JavaScript plugin that makes two-dimensional data accessible to screen reader users (Sharif et al., 2017). The plugin supports both speech based summaries and sonifications to convey trends in the data, and can be used along with a screen reader. Zong _et al._ focus specifically on screen reader interactions with charts and report that improvements to structural organization, navigation, and descriptions are necessary to improve the screen readability of visualizations (Krause et al., 2018). Though not sufficient to make data visualizations accessible, they find that accompanying visualizations with tables is crucial for accessibility due to the familiarity of tables to screen reader users. These efforts assume BVI people to be non-expert consumers of data visualizations. Very few efforts have resulted in tools and interfaces for BVI people to author accessible data visualizations. As of today, the work by Cantrell _et al._, resulting in the development of the Highcharts Sonification Studio, is the only open source charting library to support accessible authoring of data sonifications (Krause et al., 2016).
Potluri _et al._ developed a data sonification toolkit centered around BVI developers' needs and attempts at understanding sensor data, enabling them to develop Internet of Things (IoT) applications (Krause et al., 2017). Computational notebooks, in addition to enabling consumption, have the potential to give BVI developers the means to produce data visualizations and enable data driven storytelling, surfacing the need for these tools to offer capabilities to produce accessible data visualizations. The very few attempts to make data representations accessible to BVI computational notebook users have resulted in libraries with very limited functionality, leaving much to be discovered about their accessibility.

### Accessibility of Published Information

As mentioned earlier, many notebooks are published, and one common format for this is to turn them into a web page. Additionally, much notebook software uses web interfaces and leverages web semantics such as headings and HTML tables to structure content and outputs. Thus, coupling findings about a domain specific tool leveraging the web as a platform, like Jupyter, with web accessibility helps us identify potential accessibility concerns that may be unique to notebooks. Studies have been conducted in the past that use web accessibility guidelines to examine and improve the accessibility of other domains. For example, Elavsky _et al._ (Elavsky et al., 2017) extend web accessibility guidelines to make data visualizations accessible. Similarly, Li _et al._ use IBM's accessibility checklist -- a set of accessibility guidelines derived from web accessibility guidelines -- to compensate for the lack of industry standards when examining the accessibility of high-fidelity prototyping tools (Li et al., 2017). Our findings from the web accessibility analysis of Jupyter notebooks contribute to this body of work.

_Summary of concerns relating to notebook accessibility._ Our literature review highlights several areas in which accessibility problems might arise, including: visualization and data table access, both of which come up frequently in notebooks; and navigation and glanceability during coding, which are relevant to the notebook authoring experience. We now describe our analysis pipeline to understand accessibility concerns with notebooks in the areas highlighted in prior literature and present our results.

## 3. Studying Notebook Accessibility

Our background section highlighted some important areas of relevance to examine to identify accessibility concerns that might impact the experience of notebook consumers and authors. Building on these observations, our study focuses on an at-scale assessment of the accessibility of the consumer and authoring experiences. We choose this approach because of its high ecological validity: while a user study must narrow its scope to a very small set of notebooks, possibly hand curated to be semi-accessible so that the study is not a waste of participants' time, a large scale study can explore a much broader range of notebooks. Below we introduce our study approach and sampling strategy. We also discuss the metadata that we extract from notebooks to prepare for our analysis of results. While there are several datasets of Jupyter notebooks available (Krause et al., 2015; Krause et al., 2016), with metadata about where they come from and how they are created, they do not fully capture the variety of contexts that computational notebooks could be used in.
Further, accessibility issues can occur in notebooks irrespective of context, and tools should inherently support the creation, consumption, and distribution of accessible notebooks. We begin by detailing our data processing pipeline, including our approach to sampling. We then explain our method of measuring accessibility. We defined our accessibility metrics to prioritize an optimistic upper bound (meaning metrics that have high sensitivity but low precision), because there does not currently exist a validated measure for automatically measuring true accessibility. As we will see, this approach ultimately allows us to confidently say that most in-the-wild notebooks are inaccessible (hinting at the fact that the situation is potentially even worse than we estimate). Put differently, our approach has lower _construct validity_ than a user study might have. To complement this automated measurement and analysis of accessibility, we also conducted experiments to manually verify notebook glanceability through screen reader testing.

### Data Sampling and Filtering

We start with the dataset provided by JetBrains that contains 10 million Jupyter notebooks (Li et al., 2017). Because of the computational and time costs of analyzing a data set of this size, we began by analyzing a sample of 10000 randomly chosen notebooks from the 10 million notebook dataset. We start with this random sample of 10000 notebooks to test our analysis pipelines and gain an understanding of the characterization of the dataset. After establishing the required analysis pipelines, we scaled our analysis by 10x and obtained a new random subset of 100000 notebooks. We observed that the results from our analysis pipeline returned similar observations in both the 10000 and 100000 notebook analyses, giving us confidence that 100000 was a sufficient sample size to draw conclusions from. Therefore, we stopped our analysis without further scaling the number of notebooks in our dataset. By chance, there was an overlap of 87 notebooks between these two data sets, which we considered small enough to be inconsequential; we retained all 87 overlapping notebooks for our analysis. It is likely that some of these notebooks may be intended for scratch use or exploratory data analysis, and might be inaccessible compared to presentation-ready notebooks. Since our work intends to explore accessibility for BVI authors as well as consumers, including these notebooks in our study is intentional. Only studying notebooks that are presentation-ready assumes BVI people's involvement only as consumers of these notebooks and limits discovery of the extent of notebook accessibility problems. Below, we present a high level diagram of our data processing pipeline in Figure 1. We will describe the steps involved in our pipeline and provide details of the data along the way. We build our data processing pipeline to (1) collect and filter a randomized subset of the notebooks from the larger JetBrains dataset (§3.1), (2) extract the required data representations from the filtered notebooks (§3.2), and (3) enrich the notebook data through analysis of transformed representations used in the notebook distribution process (§3.3).

#### 3.1.1. Notebook Validity Check

The first step of the pipeline involves coercing the computational notebook files to the latest v4 specification of the Jupyter Notebook format using the nbformat tool to ensure validity (Steintein et al., 2017). A notebook obtained in the dataset is considered valid if the file has correctly formatted JSON content according to the specified Jupyter notebook format.
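A minimal sketch of this step, assuming nbformat's public reading and validation API (the dataset path is hypothetical, and the error handling is coarser than in the full pipeline):

```python
import glob

import nbformat

valid_paths, invalid_paths = [], []
for path in glob.glob("dataset/*.ipynb"):  # hypothetical dataset location
    try:
        nb = nbformat.read(path, as_version=4)  # coerce to the v4 spec
        nbformat.validate(nb)                   # raises if the JSON violates the schema
        valid_paths.append(path)
    except Exception:                           # malformed JSON or schema violation
        invalid_paths.append(path)

print(f"kept {len(valid_paths)} notebooks, filtered {len(invalid_paths)}")
```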
This conversion resulted in a total of 99441 notebooks, filtering out 559 notebooks in the process due to conversion failures.

#### 3.1.2. Filtering for Python Notebooks

Building on previous work on notebook analysis that has established Python as the most popular language used in computational notebooks (Kleiner et al., 2017; Kliem et al., 2018), we filter the validated dataset obtained from the first step for notebooks written in Python. By removing notebooks which are not written in Python, we obtain 94722 notebooks, of which we further removed 2672 that do not contain the language information in their metadata, resulting in a total of 92050 Python based notebooks.

### Extracting Required Data Representations

Computational notebooks store source code in 'code cells' and the outputs from the execution of the code as their children, formatted as 'output cells'. Additionally, notebooks also support the usage of markdown to display text, formatted as a 'markdown cell'. These cells cannot contain child attributes related to outputs as per the notebook format specification. We process each notebook and extract information about (1) source code, and (2) outputs.

#### 3.2.1. Code Cells from Notebooks

We run the next stage of our data processing pipeline to extract information about the source code present within 'code cells' in the 92050 Python notebooks. We extract the source code and markdown text in this process, in addition to computing the number of code and markdown lines and cells in a notebook. Our analysis identifies 39540 notebooks where at least one source code cell generates a graphical output into its corresponding output cell. We removed from our analysis the 52509 (57.04%) notebooks that only contained source code and no accompanying outputs for the code segments, leaving 39540 notebooks for the next stage of the pipeline.

#### 3.2.2. Output Figures

The Jupyter notebook format stores figures generated as part of the code output as base64 encoded strings, with metadata information of the corresponding Multipurpose Internet Mail Extensions (MIME) types to identify the type of media output. By default, the notebooks support five image media MIME types, indicated by image/bmp, image/gif, image/jpeg, image/png, and image/svg+xml. Other media output types, such as PDF generated from code, are immediately converted to display as PNG format in a notebook with no additional extensions installed. We parse through all 39540 notebooks, since these are the Python notebooks containing at least one programmatically generated graphic in an output cell corresponding to a code cell. We convert the encoded base64 strings into image files and store them for further analysis, resulting in 342722 total images. As summarized in Figure 2(a), about 42.95% of notebooks have at least 1 programmatically generated figure. For notebooks that have such images, the median number of images in them is 4.0. 10% of the notebooks contain over 16 images per notebook, with a long tail where 1% of the notebooks contain over 77 images. The notebook with the most images contains 12858 images.
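A sketch of this decoding step; it assumes the v4 layout in which each rich output stores base64 strings keyed by MIME type under a data dictionary, and it skips image/svg+xml, which is stored as markup rather than base64 (the file naming is illustrative):

```python
import base64
import pathlib

import nbformat

BASE64_IMAGE_MIMES = ("image/png", "image/jpeg", "image/gif", "image/bmp")

def extract_images(nb_path, out_dir="figures"):
    """Decode the base64-encoded image outputs of one notebook to files."""
    nb = nbformat.read(nb_path, as_version=4)
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for cell in nb.cells:
        if cell.cell_type != "code":
            continue
        for output in cell.get("outputs", []):
            data = output.get("data", {})  # absent for stream/error outputs
            for mime in BASE64_IMAGE_MIMES:
                if mime in data:
                    ext = mime.split("/")[1]
                    (out / f"img_{count}.{ext}").write_bytes(
                        base64.b64decode(data[mime]))
                    count += 1
    return count
```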
#### 3.2.3. Analysis of Code Syntax

The next step in our data processing pipeline targets the contents of the source code in the 39540 Python notebooks that contain at least one programmatically generated graphical output. We chose Python because it is the most popular language used in notebooks and is studied in other large scale notebook analyses (Brock et al., 2017; Kliem et al., 2018). This allows us to perform language specific code analysis by delving deep into the library and module ecosystem of the programming language, to understand accessibility gaps in popular tools and make actionable recommendations to improve the accessibility of these notebooks. We use the abstract syntax tree (ast) module in Python to parse the contents of the source code in these 39540 notebooks. We extract information about the modules and functions being imported by notebook authors, and the functions being invoked within various cells of the notebook. This gives us information about the different libraries and function calls frequently used by notebook authors. Constructing the abstract syntax trees, however, is not as trivial as combining the code cells in a notebook before running a parser, and requires additional processing of the code in the notebooks. Notebook systems support Jupyter _magics_, functionality provided by the kernel that allows developers to call functions that simplify some Jupyter operations. For example, the IPython kernel used by Jupyter allows developers to use the %%latex magic command to render a cell in LaTeX and the %%bash magic command to run a cell in bash as a subprocess. While these are valid operations in a Jupyter environment, they are not a valid part of the syntax of the Python programming language and result in errors when parsing the syntax to construct an abstract syntax tree, even if the rest of the code in the cell is valid syntax. Therefore, constructing abstract syntax trees requires the source code snippets to be processed to remove any magic lines. We removed source code lines which begin with the % (percentage) special character, in addition to lines starting with ! (indicating the execution of a shell command) or ending with the ? help operator. We run our parsers to construct the AST over the resulting code and extract information about the functions and modules imported in the notebooks.
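A simplified sketch of the magic stripping and import extraction described above (the full pipeline also records invoked function names; the helper names here are illustrative):

```python
import ast

def strip_magics(source):
    """Drop IPython-specific lines that are not valid Python syntax."""
    kept = []
    for line in source.splitlines():
        stripped = line.strip()
        # %magics / %%cell magics, !shell commands, and trailing-? help queries
        if stripped.startswith(("%", "!")) or stripped.endswith("?"):
            continue
        kept.append(line)
    return "\n".join(kept)

def imported_modules(source):
    """Return the top-level module names imported by one code cell."""
    modules = set()
    try:
        tree = ast.parse(strip_magics(source))
    except SyntaxError:
        return modules  # cell is still unparseable; skip it
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules
```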
Figure 1: Flowchart diagram indicating the data processing pipeline presented in Sections 3.1, 3.2, and 3.3

Figure 2: Characteristics of the notebooks in the analysis. Plots show cumulative distribution functions (CDFs). The teal line in the CDF indicates the relationship between the 92050 valid Python notebooks in our study and the number of plots they contain. The magenta line indicates the distribution of the 39540 notebooks which contain at least one image. The step nature of the curve, compared to a continuous distribution, is because of the discrete number of plots which could be included in a notebook. The vertical length of the line at each \(x\) value indicates the percent of those notebooks in the dataset.

#### 3.2.4. Output Types

The extensibility of Jupyter notebooks allows notebook authors to customize the storytelling and consumption experience of the notebooks. To understand how these customizations impact accessibility, we extracted three high level categories (_application_, _image_, and _text_) based on the MIME types used in notebooks. The top 5 output types presented in Table 1 account for 98.67% of the outputs in the notebooks. Notebooks included 24 different application types; 6 text output types (e.g., HTML, LaTeX, markdown, etc.); and 4 image output types. Portable Network Graphics (PNG) images are the most commonly used image output format, making up 21.45% of the outputs in the notebooks. The detailed and complete list of output types found in our dataset is presented in Table 5 in Appendix B.

#### 3.2.5. Figure Types

To understand what types of figures are created in the notebooks, we classify the figures into 28 different image type categories, including line chart, histogram, box plot, confusion matrix, scatter plot, and others (the full list can be found in Figure 3(b)). We classify the 342722 images obtained from the figure extraction phase of our pipeline using a Fully Connected Convolutional Neural Network (FC-CNN) combined with a Fisher-Vector Convolutional Neural Network (FV-CNN) (Han et al., 2017) pretrained on the DocFigure dataset (Dosov et al., 2018). We run this inference on an AWS p2.16xlarge VM running 16 NVIDIA K80 GPUs, 64 vCPUs, and 732 GB of memory.

### Data Enrichment

The user experience of consuming and authoring Jupyter notebooks often takes place through web interfaces. Developers customize their development environment experience through themes, and many IDEs, in addition to their defaults, provide a wide variety of themes catering to developer preferences. Often these same themes are carried over to published notebooks.

#### 3.3.1. Generating HTML from notebooks

Since different themes can vary in their accessibility, we selected 6 popular themes, including the defaults provided by Jupyter for publishing HTML exports. Our selections include _solarized_ -- a theme originally written for Vim and now made available by various IDEs (Vam and others, 2017), _darcula_ (for ZSH originally, and used extensively by JetBrains) (Dosov et al., 2018), _horizon_ (default by VSCode) (Bogor et al., 2018), the _material darker_ theme (Krause et al., 2018), and the default _dark_ and _light_ theme options supported by Jupyter (Stenberg et al., 2018). To assess the accessibility of notebooks in these web interfaces, we generate HTML exports of the notebooks in our dataset with these six popular themes applied. We use the open source nbconvert tool, which supports converting notebooks into publishable HTML and other formats. nbconvert allows users to select themes and specify other parameters to control the generation of HTML output, and is the de-facto tool integrated into computational notebooks providing various export format capabilities. Once the conversion is complete, we export the notebooks into standalone HTML files which we then serve through a web server. All 100000 notebooks were exported as HTML using nbconvert, producing 589746 HTML files, 98291 per theme.
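A sketch of this export step using nbconvert's Python API. Only the built-in light and dark themes of the lab template are shown; wiring up the four third-party themes requires extra template resources whose exact configuration is not reproduced here:

```python
import nbformat
from nbconvert import HTMLExporter

nb = nbformat.read("example.ipynb", as_version=4)  # hypothetical input notebook

for theme in ("light", "dark"):
    exporter = HTMLExporter(template_name="lab")
    exporter.theme = theme  # selects the JupyterLab CSS variables for the page
    body, _resources = exporter.from_notebook_node(nb)
    with open(f"example_{theme}.html", "w", encoding="utf-8") as f:
        f.write(body)
```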
#### 3.3.2. Accessibility scans

HTML user interfaces are typically evaluated using accessibility testing and evaluation engines: software programs that evaluate the content and design of the interface and check its ability to satisfy various established accessibility guidelines. We perform accessibility scans using aXe and HTML CodeSniffer (HTMLCS) together by deploying a self hosted version of the _pa11y_ accessibility scanning infrastructure configured to use both engines (Krause et al., 2018). _pa11y_ is an LGPL licensed open source tool that tests web pages by executing a chromium process, providing the ability to run multiple accessibility engines via the same tool. We modified _pa11y_, which was last modified by the maintainers in October 2022, by building on the community's work to make the project compatible with the latest Node.js \(\geq\) v18. Our changes expose the ability to run multiple accessibility engines in their webservice subproject 1 and additionally involved fixing various security vulnerabilities 2 and overcoming engineering debt. We are currently working towards contributing these changes to pa11y.

Footnote 1: [https://github.com/pa11y/pa11y-webservice/pull/145](https://github.com/pa11y/pa11y-webservice/pull/145)

Footnote 2: [https://github.com/pa11y/pa11y-webservice/pull/145](https://github.com/pa11y/pa11y-webservice/pull/145)

For all 589746 HTML files across all themes, we extracted the type of violation (_error_, _warning_, or _notice_), the accessibility engine that detected it (aXe or HTMLCS), the specific code corresponding to the violation provided by the engines, and the selector (the HTML node where the violation was detected). We enrich our dataset by attaching this information to the name of the notebook and the theme being tested. We found a total of 238675580 combined errors, warnings, and notices across all notebooks.

#### 3.3.3. Web Semantics

While accessibility scanners provide information about standardized accessibility errors, they do not provide more nuanced information, such as the size of tables and the presence of headings, that can be critical to understanding the screen reader glanceability and navigability of notebooks. To gain this understanding, we use lxml, a library that enables efficient parsing of the HTML DOM tree, to process the HTML outputs of notebooks. We picked the light theme and used the 98291 HTML files corresponding to it for this processing. We extracted information about the type of cell, presence of images, alternative text (using the alt attribute of <img> tags), information about tables (using the <table>, <th>, and <td> tags), presence of links (using the <a> tag), and the various heading levels (using the <h1> to <h6> tags), along with the file name and the location of the cell in the file. Our observations indicate that 65.9% of the notebooks do not contain any tables in their outputs. Among the remaining notebooks, a notebook contains 3.0 (median) / 5.55 (mean) tables; however, the distribution has a long tail, with the maximum at 1181 tables and 1% of the notebooks containing over 37.0 tables. Among the 34.1% of notebooks in our dataset which contain tables, we extract additional metadata to identify the structural shape of the tables. A table rendered in the output cells of the notebooks contains 6 (median) / 15 (mean) rows, and 30 (median) / 140 (mean) columns. Figure 2(c) indicates the distribution of the number of rows and columns for all tables identified in our dataset. The largest table in the notebooks in our dataset contains 162736 rows and 1139145 columns, indicating a maximum of 185379900720 cell values.
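A sketch of this extraction for a single exported HTML file using lxml; the fields shown are a subset of what the pipeline records, and the function name is illustrative:

```python
from lxml import html

def web_semantics(html_path):
    """Count accessibility-relevant elements in an exported notebook page."""
    tree = html.parse(html_path)
    images = tree.xpath("//img")
    tables = tree.xpath("//table")
    return {
        "images": len(images),
        "images_with_alt": sum(
            1 for img in images if (img.get("alt") or "").strip()),
        "links": len(tree.xpath("//a")),
        "headings": {f"h{level}": len(tree.xpath(f"//h{level}"))
                     for level in range(1, 7)},
        "tables": len(tables),
        # structural shape: rows and header/data cells per table
        "table_rows": [len(t.xpath(".//tr")) for t in tables],
        "table_cells": [len(t.xpath(".//th | .//td")) for t in tables],
    }
```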
### Manual Screen Reader Testing

To investigate a sample of accessibility issues identified by our automated testing, and to verify whether notebooks were glanceable, two research team members experienced with using a screen reader opened a selection of notebooks in screen reader and browser combinations and verified whether a screen reader user would be able to get to all headings, images, and tables present in those notebooks. As we noticed the BVI researcher's screen reader crash several times during the exploratory phase of our research, we hypothesized that the size of the notebooks played a role in causing these crashes. We identified a list of notebooks meeting different size criteria based on the percentiles of the file sizes in our dataset. While one researcher (also a BVI screen reader user) performed the tests and reported the counts in each notebook, the second researcher visually verified the reporting, noting the observations. We performed these tests with Microsoft Edge, Google Chrome, and Firefox Nightly with its accessibility cache improvements enabled (Bordes et al., 2015), with JAWS and NVDA 2023, on a computer running Windows 10. We performed our VoiceOver tests on a Mac using both Safari and Chrome. We did not test our notebooks with Firefox on the Mac since the accessibility cache improvements for Firefox were not yet available for macOS. Similarly, we did not test using Safari on Windows since Apple ended support for it in 2015.

## 4. Results

Recall that the goal of our investigation is to understand the accessibility of data artifacts, authoring experiences, and infrastructure to work with computational notebooks for BVI users. Given the overall concerns about visualization and data accessibility discussed in Section 2.2, we must understand how accessible these are in notebooks. Thus, we begin our analysis by exploring how accessible the data artifacts in notebooks are to BVI users. Given the importance of navigation and information seeking/glanceability to IDE accessibility, it is critical that we assess this in notebooks as well. Thus, our analysis explores the proper use of headers and other landmarks that improve navigation and glanceability for screen reader users. Uniquely, notebooks are not just a programming tool; they are also a communication platform that can be customized and themed to generate final "documents" in various formats. If these documents are not accessible, the consumer experience will be impacted. Thus, our analysis investigates the impact of the tools used to customize and distribute notebooks on their accessibility. To answer our research questions, we developed metrics which represent optimistic _upper bounds_, meaning that the actual accessibility of notebooks is likely much lower than our findings in this paper indicate.

### Accessibility of Data Artifacts

At the heart of data analysis is the data, and that is typically explored through a combination of text summaries, graphical representations, and tables. The latter two artifacts, _graphics_ and _tables_, are essential to the data storytelling process used by notebook authors and may have important implications for accessibility; they help us answer our first research question (RQ1). A truly accessible notebook should support advanced, accessible, and dynamic visualization techniques (Krishnan et al., 2017). However, there are common, basic requirements for making static images and charts accessible (Krishnan et al., 2017), which guided the development of our optimistic metrics for evaluating artifact accessibility.

**Presence of ALT text**: A meaningful _ALT_ text attribute should accompany visualizations. We measure the presence or absence of ALT text in programmatically generated images and analyze the alt texts found in images that may have been manually added by notebook authors. This is an optimistic measure because the presence of ALT text does not mean it is descriptive or helpful for accessibility -- that depends heavily on the quality of the text provided. Only 0.19% of images in our data have ALT text, whose word frequency we measure and present detailed results of in Section 4.1.1, but we do not measure ALT text quality.
**Figures followed by tables**: Without ALT text, a figure can still be somewhat accessible if it is followed by a table with the equivalent data visualized in the figure. This is optimistic because such tables may not themselves be accessible, or may not be related to the figure. We present the detailed results in Section 4.1.2.

#### 4.1.1. ALT text for Static Images and Charts

Static images and charts are present in 42.95% (39540) of the 92050 Python based notebooks, most of which are PNG files (Table 1). Notebooks with these artifacts contain a median of 4.0 figures, as shown in Figure 2(a). We consider a figure to be programmatically generated if the code cell in the notebook contains a mapped output section indicating the result of the execution of the cell and containing an image type. The vast majority of the programmatically generated images (N=342102, 99.81%) do not have associated alternative text. Of the 609 images with alt text information in the notebooks, only 1 image is programmatically generated from code, while the 608 others are specified through the description attribute in markdown images included using the syntax '![description](path_to_image)'. In Figure 3(a), we present a word cloud of the alt descriptions we found, indicating that they were mostly meaningless in their current use. The most dominant words 'Open' and 'Colab' come from the markdown-included interaction button for opening notebooks with Google Colab, which is included by some notebook authors when publishing their notebooks. The word cloud also indicates poor usage of alt text whenever used in markdown, with most descriptions lackadaisically referring to the included graphic as 'image', 'png', or 'alt'.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Rank** & **Category** & **Output Type** & **Total** & **Percent** & **Cumulative** \\ \hline 1 & Text & Plain & 977328 & 61.72 & 61.72 \\ \hline 2 & Image & PNG & 339589 & 21.45 & 83.17 \\ \hline 3 & Text & HTML & 208170 & 13.15 & 96.32 \\ \hline 4 & Application & Javascript & 22842 & 1.44 & 97.76 \\ \hline 5 & Application & Jupyter JSON Widget & 14532 & 0.91 & 98.67 \\ \hline **Total Number of Outputs** & & & **1583255** & & \\ \hline \end{tabular} \end{table} Table 1. Top 5 output types in notebooks account for 98.67% of outputs

_Types of charts and libraries used._ If we understand what types of charts and figures are most used by notebook authors, this can help guide and prioritize research in accessible visualization for data analysis. Using the pre-trained FC-CNN+FV-CNN classification model (Section 3.2.5) over the programmatically generated images found in our data (N=342722) reveals line charts as the most common type of programmatically generated visualization (22.26%), followed by histograms (12.61%) and boxplots (9.45%). Figure 3(b) contains a histogram of the most popular types of figures found in the analyzed notebooks, ranked from most widely used to least widely used. Seven types of charts make up roughly 75% of the figures in the dataset: line chart, histogram, box plot, confusion matrix, scatter plot, area chart, and natural images. Some of these are understudied in the accessible visualization literature, which primarily focuses on bar, line, and pie charts, and scatter plots (Zhu et al., 2018). Histograms, box plots, confusion matrices, and area charts represent important areas to make more accessible in future research.
We also analyzed the Python import statements and function call invocations used in the 39540 notebooks with images to identify the most popular charting libraries used by notebook authors (matplotlib, followed by seaborn, as shown in Figure 4). The seaborn library is an enhanced wrapper over the underlying matplotlib library. Additionally, through code analysis and tracing the function calls, we identified the most popular functions used by notebook authors and present them in Table 2. Seven of these 10 popular functions produce visualizations through the matplotlib module, while the others target data processing through modules like numpy and pandas. Given this, we investigated matplotlib's capabilities to understand its support for ALT text. However, despite its importance in the Python community and extensive use in computational notebooks, scientific publishing, and other media, matplotlib does not support embedding ALT text descriptions for the generated graphics. Even with static figures, it is possible to include ALT text -- PNG, the most popular image format used by authors (Table 1), supports embedding image descriptions in metadata.

Figure 3. Types of visualizations used in notebooks and analysis of most frequent words used in alternate text descriptions.

Figure 4. Top 10 popular Python modules imported in notebooks, ranked by their usage frequency.

#### 4.1.2. Comparative Ordering of Tables and Figures

Unless a chart's summary, context, and ALT text convey all relevant information contained in the chart -- which our investigation of ALT text shows is clearly lacking in this data (Section 4.1.1) -- a data table or data summary is a necessary accompaniment to that chart (Kumar et al., 2018). With the use of data processing libraries like pandas, it is relatively straightforward to also render a slice of the dataset represented in a chart as an HTML-formatted table, or to provide a description in the surrounding code or markdown cells. To estimate the prevalence of these accessibility best practices, we look for code cells that programmatically output an image and filter those which (1) do not contain a heading immediately after the current output cell -- since headings typically indicate a change in context (Beng et al., 2018), (2) have a markdown cell before or after the current cell potentially indicating necessary context, or (3) have a cell with a table (either programmatically generated or via markdown) immediately before or after the cell with the image; a sketch of this neighbor-cell criterion appears below. Of the 208427 code cells across all notebooks that contain images, we find that 159341 (76.44%) cells meet our criteria for possibly being related to the image. Of these, 59166 (37.13%) cells contain markdown text (indicating possible explanations of the image) and 13037 (8.18%) cells contain a table. The 13037 cells surrounded by a data table in the neighboring cells are found in 7109 _notebooks_ in our dataset, while the 59166 cells containing markdown text after an image are present in 19028 notebooks. The existence of markdown cells in the neighboring cells but no heading in the cell immediately after does not, however, immediately imply relevance to the figure generated in the current code cell, since they could also contain markdown-included images. The existence of both markdown content and supporting tables indicates the most accessible representation of the image -- containing both the relevant tables and a description.
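As a concrete illustration of the neighbor-cell criterion just described, the following sketch flags image-producing code cells whose neighboring cells contain markdown text or a table. The helper names and the crude table and heading checks are simplifying assumptions, not our exact pipeline.

```python
import nbformat

def has_image(cell):
    return cell.cell_type == "code" and any(
        mime.startswith("image/")
        for out in cell.get("outputs", [])
        for mime in out.get("data", {})
    )

def has_table(cell):
    if cell.cell_type == "markdown":
        return "|" in cell.source  # crude markdown-table check
    return any("<table" in out.get("data", {}).get("text/html", "")
               for out in cell.get("outputs", []))

def starts_with_heading(cell):
    return cell.cell_type == "markdown" and cell.source.lstrip().startswith("#")

def related_neighbors(nb):
    """Yield (index, has_markdown, has_table) for image cells whose
    neighboring cells may explain the figure."""
    cells = nb.cells
    for i, cell in enumerate(cells):
        if not has_image(cell):
            continue
        if any(starts_with_heading(c) for c in cells[i + 1:i + 2]):
            continue  # a heading right after signals a context change
        neighbors = cells[i - 1:i] + cells[i + 1:i + 2]
        md = any(c.cell_type == "markdown" for c in neighbors)
        tbl = any(has_table(c) for c in neighbors)
        if md or tbl:
            yield i, md, tbl

nb = nbformat.read("example.ipynb", as_version=4)
for idx, md, tbl in related_neighbors(nb):
    print(f"cell {idx}: markdown={md}, table={tbl}")
```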
Only 2566 (1.23%) cells in 1795 (4.53%) notebooks with figure outputs meet this criterion and contain both markdown and tables in their neighboring cells, making those cells relatively more accessible when navigating and attempting to understand the notebook. Simply including tables after charts is a low bar; it is also important that those tables are usable with a screen reader. Despite the inclusion of tables in the notebooks, too many rows and columns can affect screen reader navigation, resulting in users losing context or requiring too many keystrokes to skip the tables or interact with elements in the table. While there is no defined threshold for the maximum number of rows or columns, screen reader users typically lose context when navigating large tables (Steinberg et al., 2016). The median row count (6 rows) is identical to the number of default rows printed by pandas' .head() (one header row and 5 data rows). The number of columns, however, grows rapidly, as shown in Figure 2(c). For example, the largest table in our dataset contains 1139145 columns and 162736 rows and may be impossible to glance at with a screen reader. An average table present in the notebooks in our study contains 140 columns and 15 rows, resulting in over 2100 cells, indicating the very high number of keyboard interactions needed by BVI users to understand and navigate the tables. In summary, we find that images and tables -- the data artifacts found in notebooks -- may not contain the necessary information for them to be accessible to BVI notebook users. The presence of tables and text descriptions indicates the possibility for these data artifacts to be accessible and opens up avenues for future investigation. However, notebook authors following this accessible practice also need to provide descriptions and use tables whose sizes take screen reader accessibility into account.

### Navigability and Accessibility for Authors

An understanding of the practices of notebook authors can guide our approach to making notebooks more accessible. Current understanding of code glanceability challenges for BVI people (_e.g._, (Bordes et al., 2016)) does not account for code representations which are interwoven with rich representations of data and web semantic structures -- such as the headings and tables supported by computational notebooks. Since Jupyter notebooks support markdown, a markup that is rendered using web semantics, authors have a great deal of control over how structural information is conveyed in their notebooks. For example, notebook authors control the presence of semantic elements such as headings, tables, links, and figures. Screen readers typically support single-key navigation among headings of different levels, links, tables, and many other elements (Steinberg et al., 2016). This can allow screen reader users to skim through a notebook and quickly understand its structure. To explore notebook glanceability through web semantics, helping answer _RQ2_, we define the following optimistic estimates of accessibility:

**Navigability of Notebooks:** One simple but important aspect of proper heading use is to use a level 1 heading (H1) in the first cell of every notebook, enabling screen reader users to easily find the start of notebook content. A majority (28414 (28.90%)) of notebooks have a heading level 1 (H1) in the first cell; a sketch of this first-cell check appears below. We speculate that IDE integrations, JupyterLab, and Google Colab's default behavior, which adds an H1 in the first cell automatically, may have resulted in this accessibility advantage.
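The first-cell check is simple to express over the notebook JSON; the following is an illustrative sketch (the regular expression for ATX-style markdown headings is an assumption about how headings are authored).

```python
import re
import nbformat

# ATX-style markdown heading, e.g. "# Title" (level 1) or "### Sub" (level 3)
HEADING = re.compile(r"^(#{1,6})\s+\S", re.MULTILINE)

def first_cell_heading_level(path):
    """Return the level of the first heading in the first cell, or None."""
    nb = nbformat.read(path, as_version=4)
    if not nb.cells or nb.cells[0].cell_type != "markdown":
        return None
    match = HEADING.search(nb.cells[0].source)
    return len(match.group(1)) if match else None

level = first_cell_heading_level("example.ipynb")
print("starts with H1" if level == 1 else f"first-cell heading level: {level}")
```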
As the first occurrence of a heading moves further away from the first cell, as indicated in Figure 5(a), it becomes likely that screen reader users will skip a number of useful cells in the notebook. Screen reader users may also be confused by, or skip, cells when authors use a heading level other than H1 for the first heading in cell 1, breaking the typically expected structure of the notebook or of web-based interactions. In Figure 5(a), we see that 10955 (11.14%) notebooks start with an H2 (row 1, column 2), 5728 (5.82%) with an H3, and so on down to H6 (0.09%). 47543 (48.36%) notebooks contain a heading of _any_ level in the first cell of the notebook, and 59.67% (28414) of them correctly match our expectation of having a heading level 1 in the first cell. The high percentage of notebooks containing a heading indicates the high potential for improving the accessibility of notebooks through improved navigability for screen reader users. The subsequent rows indicate the number of useful code cells potentially skipped during screen reader navigation and show the frequency of occurrence of the first heading type in different cells of the notebooks. Figure 5(a) presents only a partial slice of the overall occurrence of headings in the notebooks; the detailed results are presented in Appendix A.

#### 4.2.2. Finding landmarks: Tables

We found that 33517 (34.1%) of the notebooks have tables in them. Among these, a notebook contains 5.55 tables on average, with 3.0 tables at the median, and the top 1 percentile of notebooks containing at least 37.0 tables, as shown in Figure 2(b). The lack of tables in 64774 (65.9%) of notebooks consequently reduces the accessibility of notebooks and the possibility of glancing at data. The first occurrence of a table is typically in the third cell of the notebook, as shown in Figure 5(b), perhaps because the first two cells are typically used for importing the necessary modules and loading the required datasets for further analysis. This raises a further optimistic aspect of our metric -- such tables may be primarily present to check that data loaded correctly, rather than highlighting results after analysis is complete, limiting their utility for screen reader users trying to understand a notebook.

### Tooling Impacts on Notebook Accessibility

The accessibility of published notebooks for BVI consumers is impacted not only by authorship, but also by the tools used to export them. Jupyter notebooks allow developers to export a notebook into various formats such as PDF, LaTeX, or HTML, among many others. The HTML format is widely used by notebook authors when releasing their notebooks because of the ease of sharing content over the web. Many popular code hosting tools like GitHub or GitLab take the submitted raw ipynb notebook files and convert them by exporting them into HTML when navigating to the file using a web browser. Thus we focus our analysis on HTML renderings. We use a mix of manual testing and automated accessibility testing tools to assess this. We measure:

Figure 5. Presence of first navigable heading and table elements in the notebooks. Left (5(a)): shows the cell position of the first heading element and its level (\(x\) axis) from the 1st cell in the notebook to the 10th (\(y\) axis). Right (5(b)): shows the cell position (\(x\) axis) of the first table present in notebooks (\(y\) axis) with a navigable table (excluding notebooks with 0 tables).
**Reachability of structural and graphical information:** As mentioned in Section 4.2, the ability to navigate to structural elements is important for accessibility. While our previous analysis looked at whether semantic information is properly included in notebooks by authors (Section 4.2.1 and Section 4.2.2), here we test the same question once notebooks have been rendered. To assess this, we performed manual screen reader testing, the only metric that we tested at small scale (only for 10 notebooks). This is an optimistic estimate because we did not test whether the landmarks were useful, only whether a screen reader user could get to them. We find significant concerns with the scalability of notebooks for screen reader users, affecting navigability in 6/10 cases, and present the detailed results in Section 4.3.1.

**Impact of Theme Choice on Accessibility (Section 4.3.2):** This metric captures the accessibility impact of the choice of theme when generating a notebook. Many themes are also used during authoring, so this may also impact authorship. This metric is optimistic since it may not capture all errors introduced by a theme due to the use of automated testing, which is known to be an optimistic measure of website accessibility (Sandel, 2018). The best theme we tested differed from the worst by 84.95% overall; however, we found that different themes performed differently on different accessibility testers and raised different types of errors within those testers.

#### 4.3.1. Reachability of structural and graphical information

Table 3 shows the results of manual testing of the accessibility of notebooks. We sampled notebooks across a representative range of sizes, as we observed that large notebooks were crashing browsers during the exploratory phase of this research. Through this sampling-based manual analysis, we identified that notebooks of large sizes can cause accessibility breakdowns, crashing screen readers and browsers on Windows, indicated by \(\perp\) in Table 3. The results suggest that at least 1% of all notebooks are fully inaccessible due to their large sizes and cause screen readers or web browser tabs to completely crash. We speculate that the difference in how JAWS and NVDA read webpages through a virtual buffer _vs_ VoiceOver's ability to directly interact with the browser causes the difference in accessibility breakdowns visible in the table (Sandel, 2018). Addressing these breakdowns in Windows screen readers is critical due to their popularity in the BVI community. Notebook N5 is an interesting case. Although other notebooks smaller than it fail, it passes for both NVDA and JAWS on Edge and Chrome. Firefox Nightly, however, was not able to find all the required headers or tables present in the notebook, therefore marking its status as a functional but not fully glanceable notebook. We looked more deeply into this and found that one possible cause is that N5 did not contain any programmatically generated graphics. Additionally, our manual accessibility checks demonstrate that for _any notebook size_, both JAWS and NVDA with Chrome, Edge, and Firefox only detect the first programmatically generated graphic despite the existence of more than one in subsequent cells. While navigating by graphic will jump focus to other images that are manually added to markdown cells, both JAWS and NVDA do not jump to any graphics that are encoded as base64 strings in subsequent code cells, a significant accessibility problem (indicated by \(\copy\) in Table 3).
We further tested this behavior by adding synthetically generated base64-encoded images of different sizes into HTML outputs randomly chosen from our data, and into new notebooks with just two cells, and observed similar behavior. It is important for screen reader users to be informed of the presence of these images and to be able to navigate to them; current attempts at navigation using the popular NVDA and JAWS screen readers completely skip these images. VoiceOver on the Mac performs the best, with no crashes: it detects all images, headings, and tables -- satisfying the requirements for notebook glanceability we set forth (denoted by \(\checkmark\)) -- and enables screen reader users to navigate to them using the VoiceOver rotor, a single-key navigation equivalent for VoiceOver. Though the notebooks are glanceable using VoiceOver on the Mac, it is critical that Windows-based screen readers be able to do the same due to their wide adoption among BVI users (Sandel, 2018).

#### 4.3.2. Impact of Theme Choice on Accessibility

To understand the impact of the choice of theme on the accessibility of a notebook, we evaluate each notebook's accessibility using six themes, two of which are the defaults (light and dark), and four others that are popular defaults used by various IDEs. The use of color schemes to modify the visual style of the editor interface is a common practice and allows developers to improve their productivity by making code easier to visually read and understand. These visual differences are summarized for the six themes we selected in Figure 6(a). Applying color schemes such as high contrast themes to IDEs has been widely adopted by developers for both accessibility (e.g., by low vision developers) and aesthetic reasons. Although Jupyter notebooks are by default exported to HTML using the _light_ theme, notebook authors can specify a different theme to use when exporting the notebook. However, as summarized in Figure 6, these themes also differ significantly in the number of accessibility-related warnings and errors that we found when we applied automated testing to them. We found that the horizon theme, which is the default for the popular VSCode IDE, performs the best, with the fewest accessibility errors (\(\mu=67.52\), \(\sigma=138.97\)). It was 84.95% better than the default light theme provided by the Jupyter IDE, which had a mean of 335.40 (\(\sigma=939.72\)) errors. This difference is statistically significant according to a paired-samples t-test (\(t(95101)=-198.05\), \(p<.001\)). Figure 6(b) summarizes all six themes using a CDF showing the distribution of the number of errors (solid lines) and warnings (dashed lines) reported by the aXe and HTMLCS accessibility engines on the exported HTML versions of the notebooks. Looking more closely at error types can help us explore the range of ways in which notebook exports affect accessibility. We analyzed the results of our accessibility scans and identified 10 error categories reported by the aXe engine, and 9 error categories reported by the HTML CodeSniffer (HTMLCS) engine, whose total occurrences across all the notebooks differ based on the theme they were run on. In Figure 7 we present a relative heatmap of these errors, comparing the number of errors in each theme to the maximum number of errors reported for each error type. We present the details of each error code, and its impact on user accessibility, enumerated as AXE-[1-10] and HTMLCS-[1-9], in Table 4.
While our results from Figure 6(b), presented in Section 4.3.2, indicate that a change in theme would significantly improve the accessibility of the notebooks, the results in Table 4 indicate some unintended consequences of theme changes and also indicate the impact of the tools chosen to test accessibility. Of the 16 unique types of errors which differ among themes, both accessibility engines agree on only three, which are grouped together due to their similarity (AXE-E1-HTMLCS-E1, AXE-E2-HTMLCS-E2, AXE-E3-HTMLCS-E7). The aXe scanner assiduously identifies seven other issues as errors, some of which are typically considered warnings or notices by other tools or in the WCAG2AA standard specification [16]. HTMLCS identifies six other accessibility errors which are not identified by aXe. Together, the two engines uncover 6 critical accessibility error categories, eight serious ones, and one moderate and one minor error category, respectively. Our findings demonstrate the value of running multiple accessibility engines when evaluating for accessibility challenges through automated mechanisms. As is visible in Figure 7, there are some disagreements between the accessibility tools. For example, the color contrast metric in aXe ranks the darcula theme as most inaccessible, while HTMLCS considers the light theme to be the most inaccessible (AXE-E1-HTMLCS-E1). Similarly, despite being considered inaccessible due to color contrast issues with the background, the light theme performs the best when addressing the challenge of link distinguishability -- as indicated by the dark black squares in the "light" row in the AXE-E3 column in Figure 7(a), and the HTMLCS-E7 column in Figure 7(b). The differences in errors due to aria attributes such as aria-hidden and aria-parent are also striking (AXE-E9).

Table 3. Accessibility evaluation on randomly chosen notebooks of variable sizes (in bytes) ordered by percentile rank (P10-P100). For consistency we chose the _light_ theme in our evaluations. Only VoiceOver is shown for Safari, since the browser is no longer actively supported on Windows.

| Notebook | Rank | Size (Bytes) | Edge NVDA | Edge JAWS | Chrome NVDA | Chrome JAWS | Chrome VoiceOver | Firefox NVDA | Firefox JAWS | Safari VoiceOver |
|---|---|---|---|---|---|---|---|---|---|---|
| N1 | 1 (P10) | 630352 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| N2 | 2 (P25) | 634340 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| N3 | 3 (P50) | 721056 | \(\copy\) | \(\copy\) | \(\copy\) | \(\copy\) | ✓ | \(\copy\) | \(\copy\) | ✓ |
| N4 | 4 (P75) | 997861 | \(\copy\) | \(\copy\) | \(\copy\) | \(\copy\) | ✓ | \(\copy\) | \(\copy\) | ✓ |
| N5 | 5 (P85) | 1362512 | ✓ | ✓ | ✓ | ✓ |  | \(\copy\) | \(\copy\) | ✓ |
| N6 | 6 (P90) | 1900465 | \(\copy\) | \(\copy\) | \(\copy\) | \(\copy\) | ✓ | \(\copy\) | \(\copy\) | ✓ |
| N7 | 7 (P95) | 1915381 | \(\copy\) | \(\copy\) | \(\copy\) | \(\copy\) | ✓ | \(\copy\) | \(\copy\) | ✓ |
| N8 | 8 (P99) | 10955553 | ✗ | ✗ | ✗ | ✗ | ✓ | \(\perp\) | \(\perp\) | ✓ |
| N9 | 9 (P100) | 103790428 | \(\perp\) | \(\perp\) | \(\perp\) | \(\perp\) | \(\perp\) | ✓ | \(\perp\) | \(\perp\) |
✓ indicates notebooks which pass the accessibility evaluation for glanceability, \(\copy\) represents those which are functional but not fully glanceable, ✗ represents those which fail the glanceability evaluation for multiple reasons, and \(\perp\) represents notebooks which cause complete crashes of screen readers or browser tabs.

Figure 6. Customizability of notebook experiences and its accessibility implications. Left (6(a)): shows the visual difference in notebooks when applying the different themes in our evaluation. Right (6(b)): shows the distribution of the number of accessibility errors and warnings reported by accessibility engines on the same set of notebooks exported into multiple themes.

We manually inspected 5 notebooks (of the 38 notebooks which are affected) with the most AXE-E9 errors and found problems such
as an out-of-date version of MathJax, incorrect LaTeX notation for mathematical expressions, and programmatic inclusion of stale documentation. Interestingly, these are all sources outside the computational notebook ecosystem, introduced during authoring.

## 5. Discussion and Recommendations

Our work is the first large-scale analysis of computational notebooks done from an accessibility perspective. Our study of 100,000 notebooks provides insights not only about the current state of accessibility of notebooks, but also suggests directions for future research. We note that our analysis does not directly address the fact that web-based notebook editors are not accessible to BVI authors. We do not say more about this as it was not a focus of our data analysis, but it is an important domain for future work to address (Kumar et al., 2018).

### Improving Artifact Accessibility

We found that notebook images almost universally lack ALT text, and are usually created with matplotlib or a library built on top of it (seaborn). We also found a lack of accessible, tabular information associated with figures. Many of these concerns could be improved upon by providing additional options to developers using the libraries, or by minor changes to tool defaults.

#### 5.1.1. Including alternate text in images

During the programmatic creation of tables and images, there is a great deal of available knowledge that could be used to improve their accessibility. First, it is possible to check for the presence of axis labels, validate color contrast, and even generate basic ALT text that would be significantly better than what we found (Figure 2(a)), all based on information available when the chart is instantiated. For example, matplotlib, during the generation of a line chart, could include ALT text information indicating the type of chart, the color and number of lines, and the labels along the various axes. Proprietary statistics software such as SAS/STAT includes such metadata in the generated graphics based on context such as the function call or the data being visualized (Bordes et al., 2016). Second, interactive support in notebooks could be designed to help users encode image descriptions according to Lundgard _et al.'s_ four-level semantic model (Lundgard et al., 2017). Third, it is very feasible to modify a tool such as matplotlib, the most used plotting tool as observed in our analysis (presented in Table 2), and provide programmatic methods to embed ALT text in PNG images (the most common image type it produces in our data (Table 1)). ALT text could be provided manually by notebook authors, similar to the ability for web developers to customize the alt attribute in HTML <img> tags to describe images.
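As a hint of what is already possible without patching the library, matplotlib's savefig accepts a metadata dictionary for PNG output, which is written into the file's textual chunks; pairing it with a Description entry approximates embedded ALT text. This is an illustrative sketch, not the EXIF-based mechanism we describe next, and the description string is hypothetical.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, as in headless notebook runs
import matplotlib.pyplot as plt
from PIL import Image

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [1, 4, 2, 8], color="tab:blue", label="sales")
ax.set_xlabel("quarter")
ax.set_ylabel("units sold")
ax.legend()

# An ALT description assembled from information the author already has at
# chart-creation time: chart type, series count, and axis labels.
alt = ("Line chart with one blue line showing units sold per quarter, "
       "rising from 1 to 8 with a dip in quarter 2.")

# PNG supports textual metadata; matplotlib writes these entries as tEXt chunks.
fig.savefig("chart.png", metadata={"Description": alt})

# The description survives in the file and can be read back by consumers.
print(Image.open("chart.png").text["Description"])
```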
To demonstrate the feasibility of our suggestion, we updated the matplotlib backend for PNG formats, by modifying the module's source code (image.py, backend.(png|pdf).py), to use the standardized Exchangeable Image File Format (EXIF) and include ALT text information passed as image metadata, making less than 10 lines of changes to the open source code. Such image description information can be encoded in a serialized byte format against the 0x010e (270) byte delimiter following the EXIF standard. Following established, existing standards would allow consumers of notebook formats, such as Jupyter, VS Code, and various IDEs, to embed ALT text into figures and make it available to screen reader users. We further verified that descriptions included using our proposed alt argument can be picked up by existing tools like nbconvert and included in the HTML conversions of the notebook. The artifacts released as a part of this work utilize these changes and include ALT metadata descriptions in the resulting images. We leave their automatic recognition by notebook software to future open source efforts motivated by this research.

#### 5.1.2. Leveraging Interactivity on the Web

Visualizations on web pages (using libraries like d3.js, Highcharts, or jQuery charts) are often generated during the page load event, establishing a separation between the data and the functions responsible for creating and rendering the visualizations, enabling interactivity, and making it possible for extensions to build additional accessibility features (Kumar et al., 2018). Future work could explore comprehensive API designs, similar to those provided by Apple to sonify charts (Bordes et al., 2016), to replace the static base64-encoded images returned from the Jupyter Python kernel's message passing interface with a representation of only the required data, offloading the visualization capabilities to the Jupyter front end. This could allow web exports of notebooks to separate the data and functions responsible for visualizations, thus making it feasible to include _accessible_, interactive graphs.

#### 5.1.3. Presenting multiple data representations

Similar innovations could be added to the tools used in notebooks to improve table prevalence and accessibility. For example, we found that tables don't always accompany charts, despite this being a best practice (Kumar et al., 2018). It may be possible to extend modules such as matplotlib or seaborn to return a table representing the various data points in the image being created, or for notebook tools to perform code analysis of popular data processing libraries like pandas (Table 2) and automatically insert the intermediate data representations passed to the visualization functions as a table. For example, a notebook tool that understands that a code cell uses a pandas dataframe in a variable 'df' when calling lineplot(data=df, x='label', y='data') could create a snapshot of the data corresponding to the 'label' and 'data' columns in its metadata and present a table along with the resulting figure. Future accessibility efforts could develop targeted heuristics to assess the relevance of tables and other data representations in notebooks.

#### 5.1.4. Addressing Large Tables

Our characterization of tables used in computational notebooks shows that they are typically very wide (presented in Figure 2(c)).
Additionally, Wang _et al._ highlight table navigation challenges experienced by blind web users, finding that screen reader users have difficulty keeping track of context when navigating long tables (Wang et al., 2017). Programmatic support could be designed to ensure that the defaults for representing tabular data are as accessible as possible, though additional research is needed to understand the best way to accessibly represent tables for datasets with large numbers of columns. Extensive work has been done by contributors to the Notebooks4All effort to provide guidance on making table outputs accessible (Wang et al., 2017). Wang _et al._ (Wang et al., 2017) also find that screen reader users encounter incorrectly marked-up tables. Though not significant, our accessibility scan results do indicate the presence of table-related errors, possibly generated by data processing libraries such as pandas when specific constants that improve their accessibility are not set (Wang et al., 2017). Code analyses extending the one described in our pipeline (Section 3.1), combined with accessibility scans of programmatically generated tables with exhaustive combinations of parameters that affect table output, could help identify root causes and target accessibility improvements.

### Accessible Notebook Authoring

We found that many notebooks have an H1 in their first cell, but consistent and proper use of headers is by no means universal. Further, only a third of notebooks have even one table in them. Improving these authoring practices could improve notebook accessibility. Computational notebook tools could be updated to notify authors about incorrect usage of headings that violates the heading order or existence rules in the WCAG2 guidelines. Similarly, they could suggest places where authors may want to summarize what has happened so far in a table. This could be valuable for all notebook authors, regardless of disability, since tables convey valuable information about the structure of the data being processed in the subsequent cells of the notebook. These practices, along with increased figure accessibility, can improve notebook comprehensibility and glanceability. Future efforts could focus on understanding and improving notebook glanceability, further exploring what glanceability means to a BVI user based on the task at hand (authoring _vs_ consuming), and incorporating these best practices into tools such as linters for Jupyter notebooks to support accessible notebook authoring.

### Accessible Notebook Consumption

Notebooks that are converted to HTML may be accessible to consumers if that HTML is accessible. When we used the most accessible theme (horizon, which is not the default that most authors are likely to use), we found a mean of 68 accessibility errors when testing notebooks with automated tools. This was significantly better than the worst theme, which averaged 335 errors. These results are particularly striking since they imply that changing the default theme of the notebook software during export, or enabling the ability to toggle themes in the exported notebooks, could significantly improve overall accessibility. While changing the theme is not the remedy to all accessibility issues in notebooks, it is an important factor to consider when addressing the challenges of notebook accessibility. A sketch of how theme selection and automated checking could be combined at export time appears below.
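The following is a minimal sketch of this idea, assuming a recent nbconvert whose HTMLExporter exposes a theme option and a locally installed pa11y CLI with its axe and htmlcs runners; both the option name and the CLI flags should be checked against the installed versions.

```python
import os
import subprocess
from nbconvert import HTMLExporter

def export_and_scan(notebook_path, theme="dark"):
    """Export a notebook to HTML with a given theme, then run pa11y on it."""
    exporter = HTMLExporter()
    exporter.theme = theme  # assumed: the 'theme' trait of recent nbconvert
    body, _resources = exporter.from_filename(notebook_path)

    html_path = notebook_path.replace(".ipynb", f".{theme}.html")
    with open(html_path, "w", encoding="utf-8") as f:
        f.write(body)

    # Running both engines surfaces disagreements between testers,
    # mirroring our observations in Section 4.3.2.
    result = subprocess.run(
        ["pa11y", "--runner", "axe", "--runner", "htmlcs",
         "--reporter", "json", "file://" + os.path.abspath(html_path)],
        capture_output=True, text=True,
    )
    return html_path, result.stdout

path, report = export_and_scan("example.ipynb", theme="dark")
print(path, report[:200])
```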
As a first step, the Jupyter community could explore conducting an accessibility evaluation of their themes, address existing accessibility issues, and provide improved accessible defaults. Future explorations could investigate the feasibility of integrating accessibility checks such as aXe or HTMLCS into the nbconvert process or within notebook programming environments. Notebook authors hosting their notebooks on source code repository systems such as GitHub could also consider integrating accessibility engines that test the HTML exports of notebooks into their continuous integration and deployment (CI/CD) pipelines, which could highlight and prevent accessibility issues during notebook creation (Zhu et al., 2019).

### Reflections on Research Methodology

Large-scale characterization work does not frequently occur in accessibility research. Mack _et al._, in their systematic literature survey of accessibility papers, report on methodology and find that less than 6% of accessibility papers included large-scale analyses (Mack et al., 2019). Our work demonstrates the use of large-scale data collection and characterization to understand the (in)accessibility of computational notebooks, and the sources of these errors, from data that need not involve burdening people with disabilities. In choosing our approach, we were cognizant that not all accessibility issues are captured by it, and thus we drilled down into a few occurrences of accessibility issues to observe factors that may require targeted investigation. We hope that this methodology can be applied to other areas where accessibility breakdowns could happen, without placing a burden on disabled stakeholders. To facilitate further research on the accessibility of computational notebooks, we make our dataset and analysis scripts publicly available (Mack et al., 2019) and link to them in IncluSet (Mack et al., 2019) and Zenodo (Zu et al., 2019).

## 6. Conclusion

We present the first large-scale analysis of computational notebooks aimed at gaining an understanding of their accessibility. Current efforts to make computational notebooks accessible are focused on remediating the accessibility of the interface used to create these notebooks. While this is important, we identify that programming environments, authoring practices, and distribution mechanisms chosen by authors can also have an impact on the accessibility of computational notebooks. We find that the data artifacts presented in notebooks cause inaccessibility, and that the tools used to create these artifacts sometimes do not have mechanisms to make data accessible. We find that notebook interfaces can use web semantics to make data and notebooks glanceable, although these are not frequently used by authors. We observe that customizations used by authors can impact accessibility and potentially make notebook authoring and consumption more accessible. Finally, we make actionable recommendations to increase the accessibility of data artifacts, tools, and notebook infrastructures.

###### Acknowledgements.

We thank Richard Anderson and Kurtis Heimerl for the cloud infrastructure access and their thoughtful feedback through the course of this work. We thank Tim Althoff, Ken Gu, and Ashish Sharma from the Behavioral Data Science group for their initial feedback and comments related to this work. Executing this research would not have been possible without the gracious infrastructure support provided by the UW CSE Support team, especially Stephen Spencer and Aaron Timss.
We would also like to thank Aaron Golenthal for their initial work in improving pa11y for continued support on newer operating systems, which we further extended and instrumented to perform large-scale accessibility analysis. We thank the members of the University of Washington's Make4All and ICTD labs for their feedback and support through the course of this work. Venkatesh Potluri was supported by the Apple Scholars in AI/ML PhD fellowship. This work is supported by the National Science Foundation (NSF) Eng Diversity Activities (EDA) 2009977, and the Center for Research and Education on Accessible Technology and Experiences (CREATE).
2303.01724
Node-Specific Space Selection via Localized Geometric Hyperbolicity in Graph Neural Networks
Many graph neural networks have been developed to learn graph representations in either Euclidean or hyperbolic space, with all nodes' representations embedded in a single space. However, a graph can have hyperbolic and Euclidean geometries at different regions of the graph. Thus, it is sub-optimal to indifferently embed an entire graph into a single space. In this paper, we explore and analyze two notions of local hyperbolicity, describing the underlying local geometry: geometric (Gromov) and model-based, to determine the preferred space of embedding for each node. The two hyperbolicities' distributions are aligned using the Wasserstein metric such that the calculated geometric hyperbolicity guides the choice of the learned model hyperbolicity. As such our model Joint Space Graph Neural Network (JSGNN) can leverage both Euclidean and hyperbolic spaces during learning by allowing node-specific geometry space selection. We evaluate our model on both node classification and link prediction tasks and observe promising performance compared to baseline models.
See Hian Lee, Feng Ji, Wee Peng Tay
2023-03-03T06:04:42Z
http://arxiv.org/abs/2303.01724v1
# Node-Specific Space Selection via Localized Geometric Hyperbolicity in Graph Neural Networks

###### Abstract

Many graph neural networks have been developed to learn graph representations in either Euclidean or hyperbolic space, with all nodes' representations embedded in a single space. However, a graph can have hyperbolic and Euclidean geometries at different regions of the graph. Thus, it is sub-optimal to indifferently embed an entire graph into a single space. In this paper, we explore and analyze two notions of local hyperbolicity, describing the underlying local geometry: geometric (Gromov) and model-based, to determine the preferred space of embedding for each node. The two hyperbolicities' distributions are aligned using the Wasserstein metric such that the calculated geometric hyperbolicity guides the choice of the learned model hyperbolicity. As such, our model Joint Space Graph Neural Network (JSGNN) can leverage both Euclidean and hyperbolic spaces during learning by allowing node-specific geometry space selection. We evaluate our model on both node classification and link prediction tasks and observe promising performance compared to baseline models.

Graph neural networks, hyperbolic embedding, graph representation learning, joint space learning.

## I Introduction

Graph neural networks (GNNs) are neural networks that learn from graph-structured data. Many works such as Graph Convolutional Network (GCN) [1], Graph Attention Network (GAT) [2], GraphSAGE [3] and their variants operate on Euclidean space and have been applied in many areas such as recommender systems [4, 5], chemistry [6] and financial systems [7]. Despite their remarkable accomplishments, their performance is still limited by the representation ability of Euclidean space. They are unable to achieve the best performance in situations where the data exhibit non-Euclidean characteristics such as scale-free, tree-like, or hierarchical structures [8]. As such, hyperbolic spaces have gained traction in research as they have been proven to better embed tree-like, hierarchical structures compared to Euclidean geometry [9, 10]. Intuitively, encoding non-Euclidean structures such as trees in Euclidean space results in considerable distortion since the number of nodes in a tree increases exponentially with the depth of the tree, while Euclidean space only grows polynomially [11]. In such cases, hyperbolic geometry serves as an alternative for learning those structures with comparably smaller distortion, as hyperbolic space has the exponential growth property [8]. As such, hyperbolic versions of GNNs such as HGCN [12], HGNN [13], HGAT [14] and LGCN [15] have been proposed. Nevertheless, real-world graphs are often complex. They are neither solely made up of Euclidean nor non-Euclidean structures alone, but a mixture of geometrical structures. Consider a localized version of geometric hyperbolicity, a concept from geometric group theory measuring how tree-like the underlying space is for each node in the graph (refer to Section III-A for more details). We observe a mixture of local geometric hyperbolicity values in most of the benchmark datasets we employ for our experiments, as seen in Fig. 2. This implies that the graphs contain a mixture of geometries and thus, it is not ideal to embed the graphs into a single geometry space, regardless of Euclidean or hyperbolic, as it inevitably leads to undesired structural inductive biases and distortions [8].
Taking a graph containing both lattice-like and tree-like structures as an example, Fig. 1c and Fig. 1f show that 15 of the blue-colored nodes in the tree structure are calculated to have a 2-hop local geometric hyperbolicity value of zero, while 12 of the purple nodes have a value of one and the other 3 purple nodes (at the center of the lattice) have a value of two (the smaller the hyperbolicity value, the more hyperbolic). This localized metric can therefore serve as an indication during learning of which of the two spaces is more suitable for embedding the respective nodes. Here we address this mixture of geometry in a graph and propose Joint Space Graph Neural Network (JSGNN), which performs learning on a joint space consisting of both Euclidean and hyperbolic geometries. To achieve this, we first update all the node features in both Euclidean and hyperbolic spaces independently, giving rise to two sets of updated node features. Then, we employ exponential and logarithmic maps to bridge the two spaces, and an attention mechanism is used as a form of model hyperbolicity, taking into account the underlying structure around each node and the corresponding node features. The learned model hyperbolicity is guided by geometric hyperbolicity and is used to "softly decide" the most suitable embedding space for each node and to reduce the two sets of updated features into only one set. Ideally, a node should be either hyperbolic or Euclidean and not both simultaneously; thus, we also introduce an additional loss term to achieve this non-uniform characteristic. To the best of our knowledge, the closest work to ours is Geometry Interaction Learning (GIL) [11], which exploits Euclidean and hyperbolic spaces through a dual feature interaction learning mechanism and a probability assembling module. GIL has two branches in which a message-passing procedure is performed in Euclidean and hyperbolic spaces simultaneously. Dual feature interaction learning is where the node features in each of the spaces are enhanced based upon the updated features in the other space and their distance similarity. The larger the distance between the different spatial embeddings, the larger the portion of features from the other space that is summed to itself, as seen in Fig. 3. Meanwhile, probability assembling refers to learning node-level weights to determine which of the learned geometric embeddings is more critical. A weighted sum of the classification probabilities from the two spaces yields the final result. Our approach differs from [11] in some key aspects. Firstly, we leverage the distribution of geometric hyperbolicity to guide our model to learn, for each node, whether it is better embedded in Euclidean or hyperbolic space, instead of performing feature interaction learning. This is done by aligning the distributions of the learned model hyperbolicity and geometric hyperbolicity using the Wasserstein distance. Our motivation is that if a node can be best embedded in one of the two spaces, encoding it in the other, sub-optimal space would result in comparably larger distortion. Minimal information would be present in the sub-optimal space to help "enhance" the representation in the better space. Hence, promoting feature interaction could possibly introduce more noise to the branches. The ideal situation is then to learn normalized selection weights that are non-uniform for each node so that we select, for each node, the single, comparably better space's output embedding.
To achieve this, we introduce an additional loss term that promotes non-uniformity. Lastly, we do not require probability assembling since we only have one set of output features at the end of the selection process.

Fig. 1: Example graphs. (a) Lattice-like graph. (b) A tree. (c) A combined graph containing both lattice and tree structures. (d-f) The histograms reflect the geometric hyperbolicity in the respective graphs.

## II Background

In this section, we give a brief overview of the hyperbolic geometry that will be used in the paper. Readers are referred to [16] for further details. Moreover, we review GAT and its hyperbolic version.

### _Hyperbolic geometry_

A hyperbolic space is a non-Euclidean space with constant negative curvature. There are different but equivalent models describing the same hyperbolic geometry. In this paper, we work with the Poincare ball model, in which all points are inside a ball. The hyperbolic space with constant negative curvature \(c\) is denoted by \((\mathbb{D}_{c}^{n},g_{\mathbf{x}}^{c})\). It consists of the \(n\)-dimensional hyperbolic manifold \(\mathbb{D}_{c}^{n}=\{\mathbf{x}\in\mathbb{R}^{n}:c\|\mathbf{x}\|<1\}\) with the Riemannian metric \(g_{\mathbf{x}}^{c}=(\lambda_{\mathbf{x}}^{c})^{2}g^{E}\), where \(\lambda_{\mathbf{x}}^{c}=2/(1-c\|\mathbf{x}\|^{2})\) and \(g^{E}=\mathbf{I}_{n}\) is the Euclidean metric. At each \(\mathbf{x}\in\mathbb{D}_{c}^{n}\), there is a tangent space \(\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\), which can be viewed as the first-order approximation of the hyperbolic manifold at \(\mathbf{x}\) [9]. The tangent space is useful for performing the familiar Euclidean operations that are undefined in hyperbolic spaces. A hyperbolic space and the tangent space at a point are connected through the exponential map \(\exp_{\mathbf{x}}^{c}:\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\to\mathbb{D}_{c}^{n}\) and the logarithmic map \(\log_{\mathbf{x}}^{c}:\mathbb{D}_{c}^{n}\to\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\), defined as follows:

\[\exp_{\mathbf{x}}^{c}(\mathbf{v})=\mathbf{x}\oplus_{c}\Big{(}\tanh\!\left(\sqrt{c}\frac{\lambda_{\mathbf{x}}^{c}\|\mathbf{v}\|}{2}\right)\!\frac{\mathbf{v}}{\sqrt{c}\|\mathbf{v}\|}\Big{)}, \tag{1}\]

\[\log_{\mathbf{x}}^{c}(\mathbf{y})=\frac{2}{\sqrt{c}\lambda_{\mathbf{x}}^{c}}\tanh^{-1}(\sqrt{c}\|-\mathbf{x}\oplus_{c}\mathbf{y}\|)\frac{-\mathbf{x}\oplus_{c}\mathbf{y}}{\|-\mathbf{x}\oplus_{c}\mathbf{y}\|}, \tag{2}\]

where \(\mathbf{x},\mathbf{y}\in\mathbb{D}_{c}^{n}\), \(\mathbf{v}\in\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\) and \(\oplus_{c}\) is the Mobius addition. For convenience, we write \(\mathbb{D}\) for \(\mathbb{D}_{c}^{n}\) if no confusion arises. A salient feature of hyperbolic geometry is that it is "thinner" than Euclidean geometry. Visually, more points can be squeezed into a hyperbolic subspace having the same shape as its Euclidean counterpart, due to the different metrics in the two spaces. We discuss the graph version in Section III-A below.

### _Graph attention and message passing_

Consider a graph \(G=(V,E)\), where \(V\) is the set of vertices, \(E\) is the set of edges, and each node in \(V\) is associated with a node feature \(h_{v}\). Recall that GAT is a GNN that updates node representations using message passing while updating edge weights concurrently.
Specifically, for one layer of GAT [2], the node features are updated as follows:

\[h_{v}^{{}^{\prime}}=\sigma\Big{(}\sum_{j\in N(v)}\alpha_{vj}\textbf{W}h_{j}\Big{)}, \tag{3}\]
\[\alpha_{vj}=\frac{\exp(e_{vj})}{\sum_{k\in N(v)}\exp(e_{vk})}, \tag{4}\]
\[e_{vj}=\mathrm{LeakyReLU}(\textbf{a}^{\prime}[\textbf{W}h_{v}\parallel\textbf{W}h_{j}]), \tag{5}\]

where \(\parallel\) denotes the concatenation operation, \(\sigma\) denotes an activation function, **a** represents the learnable attention vector, **W** is the weight matrix of a linear transformation, and \(\alpha\) denotes the normalized attention scores. This model has proven successful in many graph-related machine learning tasks.

### _Hyperbolic attention model_

To derive a hyperbolic version of GAT, we adopt the following strategy. We perform feature aggregation in the tangent spaces of points in the hyperbolic space. Features are mapped between the hyperbolic space and tangent spaces using the pair of exponential and logarithmic functions \(\exp_{\textbf{x}}^{c}\) and \(\log_{\textbf{x}}^{c}\). With this, we denote Euclidean features as \(h_{\mathbb{R}}\) and hyperbolic features as \(h_{\mathbb{D}}\). One layer of message propagation in the hyperbolic GAT is then as follows [11]:

\[h_{v,\mathbb{D}}^{{}^{\prime}}=\sigma\Big{(}\sum_{j\in N(v)}\alpha_{vj}\log_{\textbf{o}}^{c}(\textbf{W}\otimes_{c}h_{j,\mathbb{D}}\oplus_{c}\textbf{b})\Big{)}, \tag{6}\]
\[e_{vj}=\mathrm{LeakyReLU}\Big{(}\textbf{a}^{\prime}\Big{[}\hat{h}_{v}\parallel\hat{h}_{j}\Big{]}\times d_{\mathbb{D}}(h_{v,\mathbb{D}},h_{j,\mathbb{D}})\Big{)}, \tag{7}\]
\[d_{\mathbb{D}}(h_{v,\mathbb{D}},h_{j,\mathbb{D}})=\frac{2}{\sqrt{c}}\tanh^{-1}(\sqrt{c}\|-h_{v,\mathbb{D}}\oplus_{c}h_{j,\mathbb{D}}\|), \tag{8}\]
\[\alpha_{vj}=\mathrm{softmax}_{j}(e_{vj}), \tag{9}\]

where \(d_{\mathbb{D}}\) is the normalized hyperbolic distance, \(\hat{h}_{j}=\log_{\textbf{o}}^{c}(\textbf{W}\otimes_{c}h_{j,\mathbb{D}})\), and \(\otimes_{c}\) and \(\oplus_{c}\) represent the Mobius matrix multiplication and addition, respectively.

## III Joint Space Learning

In this section, we propose our joint space learning model. The model relies on comparing two different notions of hyperbolicity: geometric hyperbolicity and model hyperbolicity. We start by introducing the former, which also serves as the motivation for the design of our GNN model.

### _Local geometry and geometric hyperbolicity_

Gromov's \(\delta\)-hyperbolicity is a mathematical notion from geometric group theory measuring how tree-like a metric space is in terms of its metric or distance structure [17, 12]. The precise definition is given as follows.

**Definition 1** (Gromov 4-point \(\delta\)-hyperbolicity [18] p.410).: _For a metric space \(X\) with metric \(d(\cdot,\cdot)\), it is \(\delta\)-hyperbolic, where \(\delta\geq 0\), if the four-point condition holds:_

\[d(x,y)+d(z,t)\leq\max\{d(x,z)+d(y,t),d(z,y)+d(x,t)\}+2\delta, \tag{10}\]

_for any \(x,y,z,t\in X\). \(X\) is hyperbolic if it is \(\delta\)-hyperbolic for some \(\delta\geq 0\)._

This condition of \(\delta\)-hyperbolicity is equivalent to the Gromov thin triangle condition. For example, any tree is (0-)hyperbolic, and \(\mathbb{R}^{n}\), where \(n\geq 2\), is not hyperbolic. However, if \(X\) is a compact metric space, then \(X\) is always \(\delta\)-hyperbolic for some \(\delta\) large enough, such as \(\delta=\mathrm{diameter}(X)\). Therefore, it is insufficient to just label \(X\) as hyperbolic or not.
We want to quantify hyperbolicity such that a space with smaller hyperbolicity resembles more of a tree. Inspired by the four-point condition, we define the \(\infty\)-version and the \(1\)-version of hyperbolicity as follows.

Fig. 2: Distributions of geometric hyperbolicity for all datasets, obtained by computing \(\delta_{G_{v},\infty}\) on each node's 2-hop subgraph.

**Definition 2**.: _For a compact metric space \(X\) and \(x,y,z,t\in X\), denote \(\inf\{\delta\geq 0:\text{(10) holds for }x,y,z,t\}\) by \(\tau_{X}(x,y,z,t)\). Define_ \[\delta_{X,\infty}=\sup_{x,y,z,t\in X}\tau_{X}(x,y,z,t),\] \[\delta_{X,1}=\mathbb{E}_{x,y,z,t\sim\mathrm{Unif}(X^{4})}[\tau_{X}(x,y,z,t)],\] _where \(\mathrm{Unif}\) represents the uniform distribution._

In order for these invariants to be useful for graphs, we require them to be almost identical for graphs with similar structures. We shall see that this is indeed the case. Before stating the result, we need a few more concepts. Let \(\mathcal{G}\) be the space of weighted, undirected simple graphs. Though for most experiments the given graphs are unweighted, aggregation mechanisms such as attention essentially generate weights for the edges. Therefore, for both theoretical and practical reasons, it makes sense to expand the graph domain to include weighted graphs. Each \(G=(V,E)\in\mathcal{G}\) has a canonical path metric \(d_{G}\), which makes \(G\) into a metric space that includes the non-vertex points on the edges. For \(\epsilon>0\), there is the subspace \(\mathcal{G}_{\epsilon}\) of \(\mathcal{G}\) consisting of graphs whose edge weights are greater than \(\epsilon\). On the other hand, there is a metric on the spaces \(\mathcal{G}\) and \(\mathcal{G}_{\epsilon}\), called the Gromov-Hausdorff metric ([18] p.72). To define it, we first introduce the Hausdorff distance. Let \(X\) and \(Y\) be two subsets of a metric space \((M,d)\). Then the Hausdorff distance \(d_{H}(X,Y)\) between \(X\) and \(Y\) is \[d_{H}(X,Y)=\max\{\sup_{x\in X}d(x,Y),\sup_{y\in Y}d(X,y)\},\] where \(d(x,Y)=\inf_{y\in Y}d(x,y)\) and \(d(X,y)=\inf_{x\in X}d(x,y)\). The Hausdorff distance measures, in the worst case, how far a point in \(X\) is from \(Y\) and vice versa. In general, we also want to compare spaces that do not a priori belong to a common ambient space. For this, if \(X,Y\) are two compact metric spaces, then their Gromov-Hausdorff distance \(d_{GH}(X,Y)\) is defined as the infimum of all numbers \(d_{H}(f(X),g(Y))\) over all metric spaces \(M\) and all isometric embeddings \(f:X\to M,g:Y\to M\). Intuitively, the Gromov-Hausdorff distance measures how far \(X\) and \(Y\) are from being isometric. The following is proved in the Appendix.

**Proposition 1**.: _Suppose \(\mathcal{G}\) and its subspaces have the Gromov-Hausdorff metric. Then \(\delta_{G,\infty}\) is Lipschitz continuous w.r.t. \(G\in\mathcal{G}\) and \(\delta_{G,1}\) is continuous w.r.t. \(G\in\mathcal{G}_{\epsilon}\) for any \(\epsilon>0\)._

Consider a graph \(G\). We fix either \(\delta_{G,\infty}\) or \(\delta_{G,1}\) as a measure of hyperbolicity, and apply it to each local neighborhood of \(G\). To be more precise, it has been observed [19, 20] that many popular GNN models have a shallow structure. It is customary to have a \(2\)-layer network, possibly due to the oversmoothing [21, 22, 23] and oversquashing [24] phenomena. In such models, each node only aggregates information in a small neighborhood.
Therefore, if we fix a small \(k\) and let \(G_{v}\) be the subgraph of the \(k\)-hop neighborhood of \(v\in V\), then it is more appropriate to study the hyperbolicity \(\delta_{v}\), either \(\delta_{G_{v},\infty}\) or \(\delta_{G_{v},1}\), of \(G_{v}\). For our experiments, the former is utilized. We call \(\delta_{v}\) the _geometric hyperbolicity_ at node \(v\). The collection \(\Delta_{V}=\{\delta_{v}:v\in V\}\) allows us to obtain an empirical distribution \(\mu_{G}\) of geometric hyperbolicity on the sample space \(\mathbb{R}_{\geq 0}\). For instance, we can build histograms to acquire the distributions, as observed in Fig. 2. We see, for example, that for Cora, a substantial number of nodes have small (local) hyperbolicity, in contrast with many works that claim Cora to be relatively Euclidean due to its high global hyperbolicity value [12, 25]. On the other hand, Airport is argued to be globally hyperbolic, but a large proportion of its nodes have large local hyperbolicity. However, this is not a contradiction, as we are considering the local structures of the graph. We call \(\mu_{G}\) the _distribution of geometric hyperbolicity_. It depends only on \(G\) and \(k\).

### _Space selection and model hyperbolicity_

In this section, we describe the backbone of our model and introduce the notion of model hyperbolicity. Our model consists of two branches, one using Euclidean geometry and the other using hyperbolic geometry. For the Euclidean part, we use GAT for message propagation, while for the hyperbolic part, we employ the HGAT of Section II-C. After the respective message propagation, we have two sets of updated node embeddings, the Euclidean embedding \(Z_{\mathbb{R}}\) and the hyperbolic embedding \(Z_{\mathbb{D}}\). The two sets of embeddings are combined into a single embedding \(Z=\{z_{v},v\in V\}\) through an attention mechanism that serves as a space selection procedure. The attention mechanism is performed in a Euclidean space. Thus, the hyperbolic embeddings are first mapped into the tangent space using the logarithmic map. Mathematically, the normalized attention scores \(\beta_{v,\mathbb{D}}\) and \(\beta_{v,\mathbb{R}}\), indicating whether a node should be embedded in the hyperbolic or the Euclidean space, are computed as follows: \[w_{v,\mathbb{R}}=\mathbf{q}^{\intercal}\tanh(\mathbf{M}z_{v,\mathbb{R}}+\mathbf{b}), \tag{11}\] \[w_{v,\mathbb{D}}=\mathbf{q}^{\intercal}\tanh(\mathbf{M}\log_{\mathbf{o}}^{c}(z_{v,\mathbb{D}})+\mathbf{b}), \tag{12}\] \[\beta_{v,\mathbb{R}}=\frac{\exp(w_{v,\mathbb{R}})}{\exp(w_{v,\mathbb{R}})+\exp(w_{v,\mathbb{D}})}, \tag{13}\] \[\beta_{v,\mathbb{D}}=\frac{\exp(w_{v,\mathbb{D}})}{\exp(w_{v,\mathbb{R}})+\exp(w_{v,\mathbb{D}})}, \tag{14}\] where \(\mathbf{q}\) refers to the learnable space selection attention vector, \(\mathbf{M}\) is a learnable weight matrix, \(\mathbf{b}\) denotes a learnable bias, and \(\beta_{v,\mathbb{D}}+\beta_{v,\mathbb{R}}=1\) for all \(v\in V\). The two sets of space-specific node embeddings can then be combined via a convex combination using the learned weights as follows: \[z_{v}=\beta_{v,\mathbb{R}}z_{v,\mathbb{R}}+\beta_{v,\mathbb{D}}\log_{\mathbf{o}}^{c}(z_{v,\mathbb{D}}),\ \forall\,v\in V. \tag{15}\] This gives one layer of the model architecture of JSGNN, as illustrated in Fig. 3. The parameter \(\beta_{v,\mathbb{R}}\), \(v\in V\), controls whether the combined output, consisting of both hyperbolic and Euclidean components, should rely more on the hyperbolic components or not.
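As an illustration of Eqs. (11)-(15), here is a minimal PyTorch sketch of the soft space selection step; the names are ours, and the hyperbolic embeddings are assumed to be already mapped to the tangent space at the origin via \(\log_{\mathbf{o}}^{c}\):

```python
import torch

def space_selection(z_euc, z_hyp_tan, M, b, q):
    """Soft space selection of Eqs. (11)-(15).
    z_euc:     (N, d) Euclidean embeddings
    z_hyp_tan: (N, d) hyperbolic embeddings after log_o^c
    M: (d', d) weight matrix, b: (d',) bias, q: (d',) attention vector
    """
    w_euc = torch.tanh(z_euc @ M.T + b) @ q      # Eq. (11)
    w_hyp = torch.tanh(z_hyp_tan @ M.T + b) @ q  # Eq. (12)
    beta = torch.softmax(torch.stack([w_euc, w_hyp], dim=-1), dim=-1)  # (13)-(14)
    z = beta[:, :1] * z_euc + beta[:, 1:] * z_hyp_tan                  # Eq. (15)
    return z, beta
```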
We call \(\beta_{v,\mathbb{R}}\) the _model hyperbolicity_ at the node \(v\). The notion of model hyperbolicity depends on the node features as well as the explicit GNN model. Similar to geometric hyperbolicity, the collection \(\Gamma_{G}=\{\beta_{v,\mathbb{R}}:\,v\in V\}\) gives rise to an empirical distribution \(\nu_{G}\) on \([0,1]\). We call \(\nu_{G}\) the _distribution of model hyperbolicity_. To motivate the next subsection, we notice from (15) that the output depends smoothly on \(\beta_{v,\mathbb{R}}\). If we wish to have similar outputs for nodes with similar neighborhood structures and features, we want their selection weights to have similar values. On the other hand, we have seen (cf. Proposition 1) that geometric hyperbolicities, which can be computed given \(G\), are similar for nodes with similar neighborhoods. This suggests that we may use geometric hyperbolicities to "guide" the choice of model hyperbolicities.

### _Model hyperbolicity vs. geometric hyperbolicity_

We have introduced geometric and model hyperbolicities in the previous subsections. In this subsection, we explore the interconnections between these two notions. Let \(\Theta\) be the parameters of a proposed GNN model. We assume that the model has the pipeline shown in Fig. 4. Given node features \(\{h_{v},v\in V\}\) and model parameters \(\Theta\), the model generates (embedding) features \(\{z_{v},v\in V\}\) and selection weights, or model hyperbolicities, \(\{\beta_{v,\mathbb{R}},v\in V\}\) in the intermediate stage. For each \(v\in V\), there is a combination function \(\phi_{v}\) such that the final output \(\{\hat{y}_{v},v\in V\}\) satisfies \(\hat{y}_{v}=\phi_{v}(z_{v},\beta_{v})\). In principle, we want to compare \(\{\beta_{v,\mathbb{R}},v\in V\}\) and \(\{\delta_{v},v\in V\}\) so that the geometric hyperbolicity guides the choice of model hyperbolicity. However, comparing \(\beta_{v}\) and \(\delta_{v}\) pairwise for each \(v\in V\) may lead to overfitting. An alternative is to compare their respective distributions \(\nu_{G}\) and \(\mu_{G}\), or even coarser statistics (e.g., the mean) of \(\nu_{G}\) and \(\mu_{G}\) (cf. Fig. 5). The latter may lead to underfitting. We perform an ablation study on the different comparison methods in Section IV-E. We advocate choosing the middle ground by comparing the distributions \(\mu_{G}\) and \(\nu_{G}\). The former can be computed readily as long as the ambient graph \(G\) is given, while the latter is a part of the model that plays a crucial role in feature aggregation at each node. Therefore, \(\mu_{G}\) can be pre-determined but not \(\nu_{G}\). We propose to use the known \(\mu_{G}\) to constrain \(\nu_{G}\) and thus the model parameters \(\Theta\). A widely used comparison tool is the Wasserstein metric.

Fig. 4: The model pipeline is shown in the (blue) dashed box, while the geometric hyperbolicity can be computed independently of the model.

Fig. 5: Different ways of comparing geometric and model hyperbolicities.

Fig. 3: Comparison between JSGNN and GIL [11] in leveraging Euclidean and hyperbolic spaces. (a) Soft space selection mechanism of JSGNN where trainable selection weights \(\beta_{v,\mathbb{R}},\beta_{v,\mathbb{D}}\) are non-uniform, effectively selecting the better of the two spaces considered.
(b) Feature interaction mechanism of GIL, where \(\zeta,\zeta^{\prime}\in\mathbb{R}\) are trainable weights and \(d_{\mathbb{D}},d_{\mathbb{R}}\) are the hyperbolic distance (cf. (8)) and the Euclidean distance, respectively. The node embeddings of both spaces in GIL are adjusted based on distance, potentially introducing more noise to the branches, as there is minimal information in the sub-optimal space to "enhance" the representation in the better space.

**Definition 3** (Wasserstein distance).: _Given \(p\geq 1\), the \(p\)-Wasserstein distance [26] measures the difference between two probability distributions [27]. Let \(\Pi(\nu_{G},\mu_{G})\) be the set of all joint distributions for random variables \(x\) and \(y\) where \(x\thicksim\nu_{G}\) and \(y\thicksim\mu_{G}\). Then the \(p\)-Wasserstein distance between \(\mu_{G}\) and \(\nu_{G}\) is as follows:_ \[W_{p}(\nu_{G},\mu_{G})=\left\{\inf_{\gamma\in\Pi(\nu_{G},\mu_{G})}\mathbb{E}_{(x,y)\thicksim\gamma}\|x-y\|^{p}\right\}^{1/p}. \tag{16}\]

Computing the Wasserstein distance exactly is costly, given that the solution of an optimal transport problem is required [28, 29]. However, for one-dimensional distributions, the \(p\)-Wasserstein distance can be computed by ordering the _samples_ from the two distributions and then computing the average \(p\)-distance between the ordered samples [28, 30]. In ideal circumstances, passing to the distributions does not lose much information. We first notice that for both \(\beta_{v,\mathbb{R}}\) and \(\delta_{v}\), a smaller value means more hyperbolic in an appropriate sense. Suppose \(\beta_{v,\mathbb{R}}\) is increasing w.r.t. \(\delta_{v}\), i.e., \(\delta_{v}\leq\delta_{u}\) implies that \(\beta_{v,\mathbb{R}}\leq\beta_{u,\mathbb{R}}\). Then \(W_{2}(\mu_{G},\nu_{G})=\sqrt{\frac{1}{|V|}\sum_{v\in V}|\beta_{v,\mathbb{R}}-\delta_{v}|^{2}}\).

### _Non-uniformity of selection weights_

A node is considered more suitable to be embedded in the hyperbolic space when \(\beta_{v,\mathbb{D}}>\beta_{v,\mathbb{R}}\). Meanwhile, when \(\beta_{v,\mathbb{D}}\leq\beta_{v,\mathbb{R}}\), the node is considered to be Euclidean. Nevertheless, to align with our motivation that each node can be better embedded in one of the two spaces, and that the less suitable space would distort the representation, we require JSGNN to learn non-uniform attention weights, meaning that each pair of attention weights \((\beta_{v,\mathbb{D}},\beta_{v,\mathbb{R}})\) should deviate significantly from the uniform distribution. This is because soft selection without a non-uniformity constraint may result in nodes being assigned as partially Euclidean and partially hyperbolic with \(\beta_{v,\mathbb{R}}\approx\beta_{v,\mathbb{D}}\approx 0.5\). Hence, we include an additional component in the standard loss function encouraging non-uniform learned weights as follows: \[L_{\text{nu}}=-\frac{1}{|V|}\sum_{v\in V}\bigl{(}\beta_{v,\mathbb{R}}^{2}+\beta_{v,\mathbb{D}}^{2}\bigr{)}. \tag{17}\] Since \(-1\leq-(\beta_{v,\mathbb{R}}^{2}+\beta_{v,\mathbb{D}}^{2})\leq-0.5\) and \(\beta_{v,\mathbb{R}}+\beta_{v,\mathbb{D}}=1\), minimizing this term favors non-uniform attention weights for each node. In summary, we combine the hyperbolicity matching discussed in Section III-C and the non-uniformity loss to form the loss function used to optimize JSGNN: \[L_{\text{overall}}=L_{\text{task}}+\omega_{\text{nu}}L_{\text{nu}}+\omega_{\text{was}}W_{2}(\nu_{G},\mu_{G}), \tag{18}\] where \(L_{\text{task}}\) is the task-specific loss, while \(\omega_{\text{nu}}\) and \(\omega_{\text{was}}\) are balancing factors.
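A minimal PyTorch sketch of the two regularizers in Eq. (18) is given below (names ours). Both collections of hyperbolicities are indexed by the nodes and hence have the same size, so the one-dimensional \(W_{2}\) reduces to sorting both samples and averaging the squared gaps, as discussed above:

```python
import torch

def wasserstein2_1d(beta_R, delta_v):
    """W_2 between the empirical distributions nu_G and mu_G (Eq. (16)):
    for 1-D samples of equal size, sort both and average squared gaps."""
    a, _ = torch.sort(beta_R)
    b, _ = torch.sort(delta_v)
    return torch.sqrt(torch.mean((a - b) ** 2))

def non_uniformity(beta_R, beta_D):
    """L_nu of Eq. (17): minimal at (0, 1) or (1, 0), maximal at (0.5, 0.5)."""
    return -torch.mean(beta_R ** 2 + beta_D ** 2)
```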
For the node classification task, \(L_{\text{task}}\) refers to the cross-entropy loss over all labeled nodes, while for link prediction, it refers to the cross-entropy loss with negative sampling. This completes the description of the JSGNN model. We speculate that the non-uniformity component \(L_{\text{nu}}\) should push the model hyperbolicities towards the two extremes \(0\) and \(1\). On the other hand, as we have seen in Section III-C, to compute \(W_{2}(\nu_{G},\mu_{G})\), we need to order \((\delta_{v})_{v\in V}\) and \((\beta_{v,\mathbb{R}})_{v\in V}\) respectively, and compute their pairwise differences. Therefore, \(W_{2}(\nu_{G},\mu_{G})\) aligns the shapes of \(\nu_{G}\) and \(\mu_{G}\).

## IV Experiments

In this section, we evaluate JSGNN on node classification (NC) and link prediction (LP) tasks against seven baselines.

### _Datasets_

A total of seven benchmark datasets are employed for both NC and LP. Specifically, three citation datasets: Cora, Citeseer, Pubmed; a flight network: Airport; a disease propagation tree: Disease; an Amazon co-purchase graph dataset: Photo; and a coauthor dataset: CS. The statistics of the datasets are shown in Table I.

### _Baselines and settings_

We compare against three Euclidean methods, GCN [1], GraphSAGE [3] and GAT [2], and four hyperbolic models, HGCN [12], HGNN [13], HGAT [14] and LGCN [15]. We also consider GIL [11], which, similar to JSGNN, leverages both hyperbolic and Euclidean spaces. For all models, the hidden units are set to 16. We set the early stopping patience to 100 epochs with a maximum limit of 1000 epochs. The hyperparameter settings for the baselines are the same as in [11] where given. The only difference is that the hyperparameter _h-drop_ for GIL in [11] (which determines the dropout applied to the weight associated with the hyperbolic space embedding) is set to 0 for all datasets, as setting a large value essentially chooses one single space explicitly. Otherwise, the hyperparameters are chosen to yield the best performance. For JSGNN, we perform a grid search on the following search spaces: learning rate: [0.01, 0.005]; dropout probability: [0.0, 0.1, 0.5, 0.6]; number of layers: [1, 2, 3]; \(\omega_{\text{nu}}\) and \(\omega_{\text{was}}\): [1.0, 0.5, 0.2, 0.1, 0.01, 0.005]; dimension of \(\mathbf{q}\) (cf. (11)): [16, 32, 64]. The Wasserstein-\(2\) distance is employed in all variants of JSGNN.

### _Node classification_

For the node classification task, each of the nodes in a dataset belongs to one of the \(C\) classes in the dataset. With the final set of node representations, we aim to predict the labels of nodes that are in the testing set. To test the performance of each model under both semi-supervised and fully-supervised settings, two data splits are used in the node classification task for the Cora, Citeseer and Pubmed datasets. In the first split, we followed the standard split for semi-supervised settings used in [1, 2, 3, 11, 22, 31, 32, 33, 34]. The train set consists of 20 training examples per class, while the validation set and test set consist of 500 samples and 1,000 samples, respectively.1 Meanwhile, in the second split, all labels are utilized and the percentages of training, validation, and test sets are set as 60/20/20%. For the Photo and CS datasets, the labeled nodes are also split into three sets, where 60% of the nodes make up the training set and the rest of the nodes are divided equally to form the validation and test sets. The Airport and Disease datasets were split following the same settings as [11].
Footnote 1: Note that the top results on [https://paperswithcode.com/sota/node-classification-on-cora](https://paperswithcode.com/sota/node-classification-on-cora) used different data splits (either semi-supervised settings with a larger number of training samples or fully-supervised settings such as the 60/20/20% split), which give much higher accuracies.

In Table II and Table III, the mean accuracy with standard deviation is reported for node classification, except for the Airport and Disease datasets, where the mean F1 score is reported. Our empirical results demonstrate that JSGNN frequently outperforms the baselines, especially HGAT and GAT, which are the building blocks of JSGNN. This shows the superiority of using both Euclidean and hyperbolic spaces. Results also show that JSGNN frequently performs better than GIL, indicating that our method of incorporating two spaces for graph learning is potentially more effective. We also observe that Euclidean models such as GCN, GAT, and GraphSAGE perform better than hyperbolic models in general on the Cora, Citeseer, and Pubmed datasets for both splits. Meanwhile, hyperbolic models achieve better results on the CS, Photo, Airport, and Disease datasets. This means that Euclidean features are more significant for representing the Cora, Citeseer and Pubmed datasets, while hyperbolic features are more significant for the others. Nevertheless, JSGNN is able to perform relatively well across all datasets. We note that JSGNN exceeds the performance of single-space baselines on all datasets except for Disease. This can be explained by the fact that Disease consists of a perfect tree and thus does not exhibit different hyperbolicities in the graph. We also particularly note that the difference in results between single-space models using only the Euclidean embedding space and hyperbolic models is not significant. This means that many of the node labels can potentially be predicted even without the best representation from the right space. This might be why embedding nodes in the better space does not yield exceptional performance gains for the node classification task. Nevertheless, we still see improvements in predictions for cases where there is a mixture of local hyperbolicities. Moreover, embedding nodes in a more suitable space can benefit other tasks that require more accurate representations, such as link prediction.

### _Link prediction_

We employ the Fermi-Dirac decoder with a distance function to model the probability of an edge based on our final output embedding, similar to [11, 12, 35]. The probability that an edge exists is given by \(\mathbb{P}(e_{vj}\in E\,|\,\Theta)=(e^{(d(x_{v},x_{j})-r)/t}+1)^{-1}\), where \(r,t>0\) are hyperparameters and \(d\) is the distance function. The edges of the datasets are randomly split into 85/5/10% for training, validation, and testing. The average ROC AUC for link prediction is recorded in Table IV. We observe that JSGNN performs better than the baselines in most cases. For the link prediction task, we notice that hyperbolic models consistently outperform Euclidean models by a significant margin. In such a situation, predicting the existence of edges seems to benefit from dual-space models, i.e., GIL and JSGNN, which potentially provide better representations with reduced distortions.
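As a concrete reference, a minimal PyTorch sketch of the Fermi-Dirac decoder is given below; the values of \(r\) and \(t\) are illustrative, and the distance function should match the space of the embeddings (e.g., the hyperbolic distance of (8) for hyperbolic embeddings):

```python
import torch

def edge_probability(z_u, z_v, r=2.0, t=1.0):
    """Fermi-Dirac decoder: P(edge) = 1 / (exp((d - r) / t) + 1).
    Euclidean distance is used here for simplicity; r, t are illustrative."""
    d = torch.norm(z_u - z_v, dim=-1)
    # 1 / (exp(x) + 1) equals sigmoid(-x)
    return torch.sigmoid(-(d - r) / t)
```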
### _Ablation study_

We conduct an ablation study on the node classification task by introducing three variants of JSGNN to validate the effectiveness of the different components introduced:

* Without the non-uniformity constraint (w/o NU): This does not enforce the model to learn non-uniform selection weights.
* Without the Wasserstein metric (w/o \(W_{2}\)): The learning of model hyperbolicity is not guided by geometric hyperbolicity.
* Without the non-uniformity loss and Wasserstein distance (w/o NU & \(W_{2}\)): Only guided by the cross-entropy loss, i.e., \(\omega_{\text{nu}}=0,\omega_{\text{was}}=0\) (cf. (18)).

Table V summarizes the results of our study, from which we observe that all variants of JSGNN with some components discarded perform worse than the full model. Moreover, JSGNN without \(W_{2}\) always achieves better results than JSGNN without NU and \(W_{2}\), signifying the importance of selecting the better of the two spaces instead of combining the features with relatively uniform weights. Similarly, JSGNN without NU performs better than JSGNN without NU and \(W_{2}\) in most cases, suggesting that incorporating geometric hyperbolicity through distribution alignment does help to improve the model. To further analyze our model, we present a study regarding our method of incorporating the guidance of geometric hyperbolicity through distribution alignment. The results are shown in Table VI. We empirically test and analyze different variants of our model based on the different comparisons shown in Fig. 5. Pairwise match indicates minimizing the mean squared error between elements of \(\Gamma_{G}\) and \(\Delta_{V}\) (without sorting), while mean match minimizes the squared loss between the means of \(\Gamma_{G}\) and \(\Delta_{V}\). We observe that comparing the distributions \(\nu_{G}\) and \(\mu_{G}\) consistently outperforms comparing their means, demonstrating the insufficiency of utilizing coarse statistics for supervision. Secondly, pairwise matching gave better results than mean matching, though still lower than distribution matching, suggesting the importance of fine-scale information, yet also the need to avoid potential overfitting.

### _Analysis of hyperbolicities_

We speculated on the effects of different components of our proposed model at the end of Section III-D. To verify that our model can learn model hyperbolicity that is non-uniform and similar in distribution to geometric hyperbolicity, we analyze the learned model hyperbolicities \((\beta_{v,\mathbb{R}})_{v\in V}\) of JSGNN and JSGNN w/o NU & \(W_{2}\) for the node classification task. Specifically, we extract the learned values from the first two layers of JSGNN and its variant for ten separate runs. The learned values from the first two layers were then averaged before determining \(W_{2}(\nu_{G},\mathrm{Unif})\) and \(W_{2}(\nu_{G},\mu_{G})\). From Fig. 6, it can be inferred that JSGNN's learned model hyperbolicity is always less uniform than that of JSGNN w/o NU & \(W_{2}\), given JSGNN's larger \(W_{2}(\nu_{G},\mathrm{Unif})\) score, demonstrating a divergence from the uniform distribution. Meanwhile, in most cases, JSGNN's \(W_{2}(\nu_{G},\mu_{G})\) is smaller than that of JSGNN w/o NU & \(W_{2}\), suggesting that the shapes of \(\nu_{G}\) and \(\mu_{G}\) are more similar for JSGNN. At times, JSGNN's \(W_{2}(\nu_{G},\mu_{G})\) is larger than that of JSGNN w/o NU & \(W_{2}\), suggesting a tradeoff between NU and \(W_{2}\) as we choose the optimal combination for the model's best performance.
## V Conclusion

In this paper, we have explored the learning of GNNs in a joint space setting, given that different regions of a graph can have different geometric characteristics. In these situations, it is beneficial to embed different regions of the graph in different spaces that are better suited to their underlying structures, to reduce the distortions incurred while learning node representations. Our method JSGNN utilizes a soft attention mechanism with a non-uniformity constraint and distribution alignment between model and geometric hyperbolicities to select the best space-specific feature for each node. This indirectly finds the space that is best suited for each node. Experimental results on node classification and link prediction demonstrate the effectiveness of JSGNN against various baselines. In future work, we aim to further improve our model with an adaptive mechanism that determines an appropriate node-specific neighborhood to account for each node's hyperbolicity.

## Acknowledgments

The first author is supported by Shopee Singapore Private Limited under the Economic Development Board Industrial Postgraduate Programme (EDB IPP). The programme is a collaboration between Shopee and Nanyang Technological University, Singapore. The last two authors are supported by the Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE-T2EP20220-0002, and the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research and Development Programme.

## Proof of Proposition 1

Proof.: We first consider \(\delta_{G,\infty}\). For two graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), let \(f_{1}:G_{1}\to M\), \(f_{2}:G_{2}\to M\) be isometric embeddings into a metric space \((M,d)\) such that \(d_{GH}(G_{1},G_{2})=d_{H}(f_{1}(G_{1}),f_{2}(G_{2}))\). Denote \(d_{GH}(G_{1},G_{2})\) by \(\eta\). For \(x,y,z,t\) in \(G_{1}\), there are \(x^{\prime},y^{\prime},z^{\prime},t^{\prime}\in G_{2}\) such that \(d(f_{1}(x),f_{2}(x^{\prime}))\), \(d(f_{1}(y),f_{2}(y^{\prime}))\), \(d(f_{1}(z),f_{2}(z^{\prime}))\), \(d(f_{1}(t),f_{2}(t^{\prime}))\) are all bounded by \(\eta\). We now estimate: \[\begin{split}& d_{G_{1}}(x,y)+d_{G_{1}}(z,t)=d(f_{1}(x),f_{1}(y))+d(f_{1}(z),f_{1}(t))\\ &\leq d(f_{2}(x^{\prime}),f_{2}(y^{\prime}))+d(f_{2}(z^{\prime}),f_{2}(t^{\prime}))+4\eta\\ &=d_{G_{2}}(x^{\prime},y^{\prime})+d_{G_{2}}(z^{\prime},t^{\prime})+4\eta\\ &\leq\max\{d_{G_{2}}(x^{\prime},z^{\prime})+d_{G_{2}}(y^{\prime},t^{\prime}),d_{G_{2}}(z^{\prime},y^{\prime})+d_{G_{2}}(x^{\prime},t^{\prime})\}\\ &\qquad+2\delta_{G_{2},\infty}+4\eta\\ &\leq\max\{d(f_{1}(x),f_{1}(z))+d(f_{1}(y),f_{1}(t)),\\ & d(f_{1}(z),f_{1}(y))+d(f_{1}(x),f_{1}(t))\}\\ &\qquad+2\delta_{G_{2},\infty}+8\eta\\ &=\max\{d_{G_{1}}(x,z)+d_{G_{1}}(y,t),d_{G_{1}}(z,y)+d_{G_{1}}(x,t)\}\\ &\qquad+2\delta_{G_{2},\infty}+8\eta.\end{split} \tag{19}\] Therefore, \(\delta_{G_{1},\infty}\leq\delta_{G_{2},\infty}+4\eta\). By the same argument, swapping the roles of \(G_{1}\) and \(G_{2}\), we have \(\delta_{G_{2},\infty}\leq\delta_{G_{1},\infty}+4\eta\). Therefore \(|\delta_{G_{1},\infty}-\delta_{G_{2},\infty}|\leq 4\eta\), and \(\delta_{G,\infty}\) is Lipschitz continuous w.r.t. \(G\). The proof of the continuity of \(\delta_{G,1}\) is more involved. Consider \(G_{1}\) and \(G_{2}\) in \(\mathcal{G}_{\epsilon}\).
Let \(f_{1},f_{2},(M,d),\eta\) be as earlier and assume \(\eta\ll\epsilon\), for example, \(\eta=\alpha\epsilon\) where \(\alpha\) is smaller than all the numerical constants in the rest of the proof. We adopt the following convention: for any non-vertex point of a graph, its degree is \(2\). By subdividing the edges of \(G_{1}\) and \(G_{2}\) if necessary, we may assume that the length of each edge \(e\) in \(E_{1}\) or \(E_{2}\) satisfies \(\epsilon/2\leq e<\epsilon\). As a consequence, for \((u,v)\) in \(E_{1}\) (resp. \(E_{2}\)), \(d_{G_{1}}(u,v)\) (resp. \(d_{G_{2}}(u,v)\)) is the same as the length of \((u,v)\).

Fig. 6: Analysis of hyperbolicities on different datasets. (a) \(W_{2}(\nu_{G},\mathrm{Unif})\). (b) \(W_{2}(\nu_{G},\mu_{G})\).

We define a map \(\phi:G_{1}\to G_{2}\) as follows. For \(v\in G_{1}\), there is a \(v^{\prime}\) in \(G_{2}\) such that \(d_{GH}(f_{1}(v),f_{2}(v^{\prime}))\leq\eta\). Then we set \(\phi(v)=v^{\prime}\). The map \(\phi\) is injective on the vertex set \(V_{1}\). Indeed, for \(u\neq v\in V_{1}\), \(d_{G_{1}}(u,v)\geq\epsilon/2\) and hence \(d_{G_{2}}(\phi(u),\phi(v))\geq\epsilon/2-2\eta>0\). The strategy is to modify \(\phi\) by a small perturbation such that the resulting function \(\psi:G_{1}\to G_{2}\) is a homeomorphism that is almost an isometry. For \(v\in V_{1}\), let \(N_{v}\) be the \(5\eta\) neighborhood of \(v\). It is a star graph and its number of branches is the same as the degree of \(v\), say \(k\). Let \(v_{1},\ldots,v_{k}\) be the endpoints of \(N_{v}\). The convex hull (of shortest paths) \(C_{v}\) of \(\{\phi(v_{1}),\ldots,\phi(v_{k})\}\) in \(G_{2}\) is also a star graph. This is because \(C_{v}\) is contained in the \(7\eta\) neighborhood of \(\phi(v)\) and it contains at most \(1\) vertex in \(V_{2}\). We claim that \(C_{v}\) has the same number of branches as \(N_{v}\). First of all, \(C_{v}\) cannot have fewer branches. For otherwise, there is a \(\phi(v_{i})\) in the path connecting \(\phi(v)\) and \(\phi(v_{j})\) for some \(j\neq i\). Hence, \[d_{G_{2}}(\phi(v_{i}),\phi(v_{j}))\leq d_{G_{2}}(\phi(v),\phi(v_{j}))\leq 7\eta\] \[<10\eta-2\eta=d_{G_{1}}(v_{i},v_{j})-2\eta.\] This is a contradiction with the property of \(\phi\). It cannot have more branches than \(k\), as it is the convex hull of at most \(k\) points. We next consider different cases for \(k\). For \(k\neq 2\), as \(C_{v}\) is a star graph, it has a unique node \(v^{\prime}\) with degree \(k\) (in \(C_{v}\)), and \(d_{G_{2}}(v^{\prime},\phi(v_{j}))>0\), \(1\leq j\leq k\). We claim that \(v^{\prime}\) has degree exactly \(k\) in \(G_{2}\). Suppose, on the contrary, that its degree in \(G_{2}\) is larger than \(k\). Then there is a branch not contained in \(C_{v}\). Let \(w^{\prime}\) be a node on the new branch such that \(6\eta\leq d_{G_{2}}(w^{\prime},\phi(v))\leq 7\eta\). Moreover, there is a node \(w\) in \(G_{1}\) such that \(4\eta\leq d_{G_{1}}(w,v)\leq 9\eta\) and \(w^{\prime}=\phi(w)\). Moreover, \(w\) is on the branch containing \(v_{j}\) for some \(j\), and hence \(d_{G_{1}}(w,v_{j})\leq 4\eta\). Therefore, \[d_{G_{1}}(w,v_{j})\leq 6\eta-2\eta\] \[<d_{G_{2}}(v^{\prime},\phi(v_{j}))+d_{G_{2}}(w^{\prime},v^{\prime})-2\eta\] \[=d_{G_{2}}(\phi(w),\phi(v_{j}))-2\eta,\] which is a contradiction. In this case, we define \(\psi(v)=v^{\prime}\in G_{2}\). If \(k=2\), so that \(N_{v}\) is a path, a similar argument shows that \(C_{v}\) is a path. We set \(\psi(v)=\phi(v)\). An illustration is given in Fig. 7.
For each \(v\in V_{1}\), we now enlarge the neighborhood and consider its \(\epsilon/6\)-neighborhood \(N_{v}^{\prime}\). It does not contain another vertex and hence is also a star graph. Moreover, if \(v\neq u\in V_{1}\), then \(N_{v}^{\prime}\cap N_{u}^{\prime}=\emptyset\), for otherwise \(d_{G_{1}}(u,v)\leq\epsilon/3\), which is impossible. We may similarly consider the \(\epsilon/6\)-neighborhoods \(C_{u}^{\prime},C_{v}^{\prime}\) of \(\psi(u)\) and \(\psi(v)\). Neither \(C_{u}^{\prime}\) nor \(C_{v}^{\prime}\) contains any vertex in \(V_{2}\) with degree \(\neq 2\). As \(N_{v}^{\prime}\) and \(C_{v}^{\prime}\) are star graphs with the same number of branches, there is an isometry, also denoted by \(\psi:N_{v}^{\prime}\to C_{v}^{\prime}\), such that \(d_{G_{2}}(\psi(w),\phi(w))\leq 2\eta\). By the disjointness of the \(\epsilon/6\)-neighborhoods, we may combine all the maps above to obtain \(\psi:\cup_{v\in V_{1}}N_{v}^{\prime}\to\cup_{v\in V_{1}}C_{v}^{\prime}\). For the rest of \(G_{1}\), consider any edge \((u,v)\in E_{1}\). Without loss of generality, let \(u_{1}\) and \(v_{1}\) be the leaves of \(N_{u}^{\prime}\) and \(N_{v}^{\prime}\) contained in \((u,v)\). We claim that the shortest open path connecting \(\psi(u_{1})\) and \(\psi(v_{1})\) is disjoint from \(\cup_{v\in V_{1}}C_{v}^{\prime}\). For otherwise, \(d_{G_{1}}(u_{1},v_{1})\leq 2\epsilon/3\), while \(d_{G_{2}}(\phi(u_{1}),\phi(v_{1}))\geq d_{G_{2}}(\psi(u_{1}),\psi(v_{1}))-4\eta\geq\epsilon/2+2\epsilon/6-4\eta\). Therefore, \(2\epsilon/3+2\eta\geq 5\epsilon/6-4\eta\), which is impossible as \(\eta\ll\epsilon\). Let \(P_{u,v}\) and \(Q_{u,v}\) be the shortest paths connecting \(u_{1},v_{1}\) and \(\psi(u_{1}),\psi(v_{1})\), respectively (illustrated in Fig. 8). Then the lengths of \(P_{u,v}\) and \(Q_{u,v}\) differ by at most \(4\eta\). We may further extend \(\psi:P_{u,v}\to Q_{u,v}\) by a linear scaling such that \(d_{G_{2}}(\psi(w),\phi(w))\leq 3\eta\) for \(w\in P_{u,v}\). For different edges \((u,v),(u^{\prime},v^{\prime})\), it is apparent that \(Q_{u,v}\) and \(Q_{u^{\prime},v^{\prime}}\) are disjoint, as the minimal distance between points on \(P_{u,v}\) and \(P_{u^{\prime},v^{\prime}}\) is at least \(\epsilon/3\). Therefore, we obtain a continuous injection \(\psi:G_{1}\to G_{2}\), which maps homeomorphically onto its image. We claim that \(\psi\) is onto. If not, there is a vertex \(v^{\prime}\in V_{2}\) that is not in \(\psi(V_{1})\) but has a neighboring vertex \(u^{\prime}=\psi(u)\). However, this implies that the degree of \(u^{\prime}\) is strictly larger than that of \(u\), which is impossible, as we have shown. In summary, \(\psi:G_{1}\to G_{2}\) is a homeomorphism such that \(|d_{G_{1}}(u,v)-d_{G_{2}}(\psi(u),\psi(v))|\leq 6\eta\) for any \(u,v\in G_{1}\). Moreover, \(\psi\) is piecewise linear; its gradient \(\psi^{\prime}\) is \(1\) in the interior of \(N_{v}^{\prime}\), \(v\in V_{1}\), and satisfies \[\frac{\frac{\epsilon}{6}-6\eta}{\frac{\epsilon}{6}}\leq\psi^{\prime}(w)\leq\frac{\frac{\epsilon}{6}+6\eta}{\frac{\epsilon}{6}}, \tag{20}\] for \(w\) contained in the interior of some \(P_{u,v}\), \((u,v)\in E_{1}\). We are ready to estimate \(|\delta_{G_{1},1}-\delta_{G_{2},1}|\). Let \(|G_{i}|\) be the total edge weight of \(G_{i}\), \(i=1,2\). For convenience, we denote a typical tuple \((u,v,w,t)\in G_{1}^{4}\) as a vector \(\mathbf{v}\), and \((\psi(u),\psi(v),\psi(w),\psi(t))\) by \(\boldsymbol{\psi}(\mathbf{v})\).
The map \(\boldsymbol{\psi}:G_{1}^{4}\to G_{2}^{4},\mathbf{v}\mapsto\boldsymbol{\psi}(\mathbf{v})\) inherits the properties of its counterpart \(\psi\), which is a piecewise linear homeomorphism. In particular, its Jacobian \(J(\mathbf{v})\) is defined almost everywhere.

Fig. 7: Illustration of \(\psi\).

Using Definition 2, we have: \[\begin{split}|\delta_{G_{1},1}-\delta_{G_{2},1}|&=\left|\int_{\mathbf{v}\in G_{1}^{4}}|G_{1}|^{-4}\tau_{G_{1}}(\mathbf{v})\,\mathrm{d}\mathbf{v}-\int_{\mathbf{v}\in G_{1}^{4}}|G_{2}|^{-4}J(\mathbf{v})\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}\,\mathrm{d}\mathbf{v}\right|\\ &\leq\sup_{\mathbf{v}\in G_{1}^{4}}\left|\tau_{G_{1}}(\mathbf{v})-\frac{|G_{1}|^{4}}{|G_{2}|^{4}}J(\mathbf{v})\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}\right|.\end{split} \tag{21}\] Similar to (19), we estimate \[\sup_{\mathbf{v}\in G_{1}^{4}}|\tau_{G_{1}}(\mathbf{v})-\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}|\leq 24\eta. \tag{22}\] Moreover, we have seen in the proof that \(\psi\) can only have distortion when restricted to \(P_{u,v}\) for \((u,v)\in E_{1}\). As \[\frac{\frac{2\epsilon}{3}-6\eta}{\frac{2\epsilon}{3}}\leq|P_{u,v}|/|Q_{u,v}|\leq\frac{\frac{2\epsilon}{3}+6\eta}{\frac{2\epsilon}{3}},\] the same bounds hold for \(|G_{1}|/|G_{2}|\). Both the upper and lower bounds can be made arbitrarily close to \(1\) if \(\eta\) is small enough. Similarly, by (20), \(J(\mathbf{v})\), as a fourth power of \(\psi^{\prime}\), can also be made arbitrarily close to \(1\). In conjunction with (21) and (22), \(|\delta_{G_{1},1}-\delta_{G_{2},1}|\) can be made arbitrarily small if \(\eta\) is chosen small enough. This proves that \(\delta_{G,1}\) is continuous in \(G\).
2307.02902
Colored delta-T noise in Fractional Quantum Hall liquids
Photons are emitted or absorbed by a nano-circuit under both equilibrium and non-equilibrium situations. Here, we focus on the non-equilibrium situation arising due to a temperature difference between the leads of a quantum point contact, and study the finite frequency (colored) noise. We explore this delta-$T$ noise in the finite frequency regime for two systems: conventional conductors described by Fermi liquid scattering theory and the fractional quantum Hall system at Laughlin filling fractions, described by the chiral Luttinger liquid formalism. We study the emission noise, its expansion in the temperature difference (focusing on the quadratic component) as well as the excess emission noise defined with respect to a properly chosen equilibrium situation. The behavior of these quantities are markedly different for the fractional quantum Hall system compared to Fermi liquids, signalling the role of strong correlations. We briefly treat the strong backscattering regime of the fractional quantum Hall liquid, where a behavior closer to the Fermi liquid case is observed.
K. Iyer, J. Rech, T. Jonckheere, L. Raymond, B. Grémaud, T. Martin
2023-07-06T10:28:47Z
http://arxiv.org/abs/2307.02902v1
# Colored delta-\(T\) noise in Fractional Quantum Hall liquids ###### Abstract Photons are emitted or absorbed by a nano-circuit under both equilibrium and non-equilibrium situations. Here, we focus on the non-equilibrium situation arising due to a temperature difference between the leads of a quantum point contact, and study the finite frequency (colored) noise. We explore this delta-\(T\) noise in the finite frequency regime for two systems: conventional conductors described by Fermi liquid scattering theory and the fractional quantum Hall system at Laughlin filling fractions, described by the chiral Luttinger liquid formalism. We study the emission noise, its expansion in the temperature difference (focusing on the quadratic component) as well as the excess emission noise defined with respect to a properly chosen equilibrium situation. The behavior of these quantities are markedly different for the fractional quantum Hall system compared to Fermi liquids, signalling the role of strong correlations. We briefly treat the strong backscattering regime of the fractional quantum Hall liquid, where a behavior closer to the Fermi liquid case is observed. ## I Introduction In recent years, the study of non-equilibrium noise in mesoscopic devices has generated new investigations, both on the experimental and on the theoretical side. Indeed, instead of using the standard method to impose a non-equilibrium situation by connecting the device to leads with different voltages, thereby generating so-called quantum shot noise, experimentalists have opened the field of "delta-\(T\) noise" by applying a thermal gradient and zero voltage drop to the device. In this situation, provided that electron-hole symmetry is respected, a finite zero-frequency non-equilibrium noise can be measured while the current flowing through the device remains zero. Voltage-bias-induced quantum noise [1; 2; 3; 4] has always been considered a crucial diagnosis of quantum transport, providing complementary information about the charge of the current carriers or their statistics. Early theoretical works on delta-\(T\) noise suggest that it is also relevant to characterize nanoscopic devices. [5; 6] In particular, delta-\(T\) noise in one-dimensional correlated systems, such as quantum Hall devices, clearly depends on the dimension of the operators which describe the elementary excitations of the system, suggesting that it could provide information about anyonic statistics. On the experimental side, delta-\(T\) noise has been studied in atomic break junctions representing quantum point contacts, [5] tunnel junctions, [7] and integer quantum Hall effect edge channels, [8] under a weak or a strong temperature bias. Also, it has recently been employed to study the heat transport along the edges. [9] On the theoretical side, delta-\(T\) charge noise (and in some instances heat noise [10; 11]) has already been studied in a vast variety of systems, ranging from quantum point contacts/tunnel junctions, [6; 12; 13] resonant levels or quantum dots in the Kondo regime, [14] Fractional Quantum Hall systems, [15; 16; 17] bosonic systems and quantum spin Hall systems, [18] to normal metal/superconductor junctions. [19]
All of these studies have focused uniquely on zero frequency noise, the experimental regime where the "white noise" has a weak dependence on the frequency because this frequency scale is sufficiently high so that 1/f noise can be neglected, but also sufficiently low to avoid specific features associated with the non-equilibrium conditions which are imposed on the device. Voltage-induced non-equilibrium noise at high frequency, sometimes dubbed "colored noise", [20; 21] was discussed and introduced theoretically about a quarter of a century ago. [22; 23] It was pointed out that its measurement requires a quantum treatment of both the noise detector and the nanoscopic device under study. It is therefore considered a subtle quantity because of the necessity to distinguish emission noise, where the nanoscopic device emits microwave photons to the quantum detector, from absorption noise, where the detector (which in practice has photon occupations specified by the Bose-Einstein distribution, for instance) emits photons which are absorbed by the nanoscopic device. Voltage-induced finite frequency (colored) noise in normal metal junctions is characterized by cusps in the emission and absorption noise located at frequencies corresponding to the Josephson frequency associated with the electron charge. Experimentally, measuring colored noise has long daunted experimentalists because of the inherent difficulties of the measurement scheme, but successes have been achieved in superconducting hybrid junctions. [24; 25] With the refinement of experimental detection techniques, the Josephson frequency [26] of fractional quasiparticles of the (Laughlin) fractional quantum Hall effect was recently measured, [27] constituting the first finite frequency measurement of noise in a correlated electron system, and an alternative diagnosis of the fractional charge (as compared to the measurement of the Fano factor). The question which we want to address in the present work is simple, but the answer may not be so obvious: what is the frequency spectrum of photons emitted/absorbed from a nanoscopic device when the non-equilibrium condition is imposed solely by a temperature gradient? Does finite frequency delta-\(T\) noise have specific signatures which can be tied to the scaling dimension of the operators describing the elementary excitations, and thus their statistics? Similar questions have been addressed in recent works [11; 28] for normal metal leads connected by a quantum dot. As a starting point, we explore the physics of finite frequency delta-\(T\) noise in a (normal metal) Fermi liquid system. This will subsequently be used as a benchmark to study finite frequency delta-\(T\) noise in the fractional quantum Hall effect, the focus of this article. The paper is organized as follows: in Sec. II we introduce the emission and absorption noise, as well as the excess emission noise and the thermal-like contribution of finite frequency noise; in Sec. III we discuss finite frequency noise for Fermi liquids; in Sec. IV we focus on the Fractional Quantum Hall effect regime; and we conclude in Sec. V. ## II Emission, Absorption and Excess Noise As explained in the literature, [2] when considering finite frequency noise, the quantum nature of the noise detector needs to be described on the same footing as the device under study.
There exist typically two coupling schemes between the two circuits: an inductive coupling scheme, [22; 23] where microwave photons are exchanged between the device and a resonant (LC) circuit, or a capacitive coupling scheme, [29] where photons emitted/absorbed by the device trigger inelastic transitions in a nearby measuring circuit where current is measured. As a result, in full generality, two distinct correlators need to be defined in order to describe the physically measured noise. The emission noise describes the spectrum of microwave photons emitted to the (quantum) noise detection device: \[S_{+}(\omega)=\int_{-\infty}^{+\infty}\mathrm{d}\tau\ \langle\delta I(0)\delta I(\tau)\rangle\ \mathrm{e}^{i\omega\tau}, \tag{1}\] where \(\delta I(\tau)=I(\tau)-\langle I(\tau)\rangle\) describes the deviation of the current operator from the stationary current \(\langle I(\tau)\rangle=\langle I\rangle\). The absorption noise describes the absorption of microwave photons emitted from the detector: \[S_{-}(\omega)=\int_{-\infty}^{+\infty}\mathrm{d}\tau\ \langle\delta I(\tau)\delta I(0)\rangle\ \mathrm{e}^{i\omega\tau}. \tag{2}\] They are related by the equation \(S_{+}(\omega)=S_{-}(-\omega)\), which allows us to consider only the emission noise from this point onward. We focus on a situation where the two lead reservoirs have the same chemical potential, while the left (right) reservoir is at temperature \(T_{L}\) (\(T_{R}\)). In this situation, and in the presence of electron/hole symmetry, no net current flows (\(\langle I\rangle=0\)), but the (non-equilibrium) emission noise \(S_{+}(\omega,T_{R},T_{L})\neq 0\) depends on both temperatures. We now introduce the "thermal-like" contribution to the noise: \[S_{+}^{\mathrm{th}}(\omega,T_{R},T_{L})=\frac{1}{2}S_{+}(\omega,T_{R},T_{R})+\frac{1}{2}S_{+}(\omega,T_{L},T_{L}), \tag{3}\] which reduces exactly to the finite frequency Johnson-Nyquist thermal equilibrium emission noise when \(T_{R}=T_{L}\). Following Ref. [14], it is then convenient to define the excess emission noise according to: \[\Delta S_{+}(\omega,T_{R},T_{L})=S_{+}(\omega,T_{R},T_{L})-S_{+}^{\mathrm{th}}(\omega,T_{R},T_{L}), \tag{4}\] where we have subtracted the thermal contributions of both leads from the emission noise. This quantity is measurable experimentally, and reduces to the sole out-of-equilibrium contribution in the non-interacting regime, even when the transmission probability is energy-dependent. ## III Fermi Liquids We start by analyzing the general non-equilibrium noise in a system of non-interacting fermions. Consider a two-terminal phase coherent system composed of fermionic reservoirs separated by a scattering region specified by a scattering matrix \(\mathcal{S}\) (which contains the amplitudes for a particle from reservoir \(L\) (\(R\)) to be transmitted/reflected into reservoir \(R\) or \(L\); for simplicity, we choose both leads to bear only a single channel).

Figure 1: The processes giving rise to the emission/absorption noise, that is, the transmission of electrons from the right Fermi lead to the left lead (or vice versa) accompanied by the emission (subfigures a,b) or absorption (subfigures c,d) of photons of energy \(\hbar\omega\). \(T_{R/L}\) denote the temperatures of the Fermi leads, where we assume \(T_{L}>T_{R}\), and \(\epsilon_{F}\) denotes the chemical potential, which is the same for both leads.

Each reservoir is described by a Fermi distribution function: \[f_{p}(\omega)=\frac{1}{e^{\hbar\omega/k_{B}T_{p}}+1}, \tag{5}\]
where \(p\) is the lead index. Noise in such fermionic systems is caused by the transmission of electrons from the left/right lead to the right/left lead, accompanied by the absorption or emission of photons, as depicted in Fig. 1. When considering emission noise, an electron (top left panel) from the tail of the left (high temperature) Fermi function can lose energy and end up in the vicinity of the Fermi level because there are free states available. This can also happen in reverse (top right panel), but to a lower extent, due to the thermal broadening of the Fermi functions. We emphasize that the latter channel for emission noise is specific to temperature-biased junctions. It is absent for zero-temperature, voltage-biased junctions since there are no states available below the Fermi level. The two lower panels refer to absorption noise processes, and both electron transfer processes due to photon absorption are also present for pure voltage-biased junctions. Our starting point is the general formula for finite frequency emission noise: [2] \[S_{+}(\omega)=\frac{4e^{2}}{h}\int dE\sum_{pp^{\prime}}\left[\delta_{Lp}\delta_{Lp^{\prime}}-s_{Lp}^{*}(E)s_{Lp^{\prime}}(E-\hbar\omega)\right]\left[\delta_{Lp^{\prime}}\delta_{Lp}-s_{Lp^{\prime}}^{*}(E-\hbar\omega)s_{Lp}(E)\right]f_{p}(E)\left[1-f_{p^{\prime}}(E-\hbar\omega)\right], \tag{6}\] where \(p\) and \(p^{\prime}\) are lead indices. The scattering matrix is described by the minimal parametrization: \[\mathcal{S}=\begin{pmatrix}s_{LL}&s_{RL}\\ s_{LR}&s_{RR}\end{pmatrix}=\begin{pmatrix}i\sqrt{1-\mathcal{T}}&\sqrt{\mathcal{T}}\\ \sqrt{\mathcal{T}}&i\sqrt{1-\mathcal{T}}\end{pmatrix}, \tag{7}\] where \(\mathcal{T}(E)\) is the energy-dependent transmission probability. In the context of scattering theory, assuming that the measurement frequency \(\omega\) can be neglected in the scattering matrix elements [\(s_{pp^{\prime}}(E-\hbar\omega)\approx s_{pp^{\prime}}(E)\)], the emission noise can be split into thermal (equilibrium) and non-equilibrium (excess) contributions. The thermal-like contribution of the emission noise, given by Eq. (3), reads \[S_{+}^{\mathrm{th}}(\omega,T_{R},T_{L})=\frac{2e^{2}}{h}\int dE\ \mathcal{T}(E)\left\{f_{R}(E)\left[1-f_{R}(E-\hbar\omega)\right]+f_{L}(E)\left[1-f_{L}(E-\hbar\omega)\right]\right\}, \tag{8}\] while the excess emission noise given by Eq. (4) reads \[\Delta S_{+}(\omega,T_{R},T_{L})=\frac{2e^{2}}{h}\int dE\ \mathcal{T}(E)\left[1-\mathcal{T}(E)\right]\left[f_{R}(E)-f_{L}(E)\right]\left[f_{R}(E-\hbar\omega)-f_{L}(E-\hbar\omega)\right], \tag{9}\] which naturally implies that it describes a purely off-equilibrium quantity, as \(\Delta S_{+}(\omega,T,T)=0\). Assuming particle-hole symmetry, and using the basic properties of the Fermi distribution, one can prove the following identities: \[\Delta S_{+}(\omega,T_{R},T_{L}) = \Delta S_{+}(-\omega,T_{R},T_{L}) \tag{10}\] \[\int_{-\infty}^{+\infty}d\omega\ \Delta S_{+}(\omega,T_{R},T_{L}) = 0 \tag{11}\] The result of Eq. (10) shows that the excess emission noise is symmetric with respect to frequency (parity rule), so that it does not distinguish between emission and absorption processes. This symmetry property then allows us to obtain the result of Eq. (11): the excess emission noise also satisfies a sum rule, where the noise integrated over all frequencies is zero. These features of the excess emission noise will be later examined for fractional quantum Hall liquids.
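As a cross-check of these expressions, the excess emission noise of Eq. (9) can be evaluated by direct quadrature. Below is a minimal Python sketch for an energy-independent transmission, with the prefactor \(2e^{2}/h\) omitted; the energy grid and cutoff are illustrative choices of ours:

```python
import numpy as np
from scipy.special import expit

kB, hbar = 1.380649e-23, 1.054571817e-34  # SI units

def fermi(E, T):
    """Fermi distribution, energies measured from the chemical potential.
    1 / (exp(x) + 1) is computed as expit(-x) for numerical stability."""
    return expit(-E / (kB * T))

def excess_noise(omega, TR, TL, transmission=0.01, n=200001):
    """Eq. (9) for a constant transmission, in units of 2e^2/h."""
    Emax = 40 * kB * max(TR, TL) + 2 * hbar * abs(omega)
    E = np.linspace(-Emax, Emax, n)
    dE = E[1] - E[0]
    g = fermi(E, TR) - fermi(E, TL)
    g_shift = fermi(E - hbar * omega, TR) - fermi(E - hbar * omega, TL)
    return transmission * (1 - transmission) * np.sum(g * g_shift) * dE
```

With this sketch, the parity rule of Eq. (10) and the sum rule of Eq. (11) can be verified numerically on a frequency grid.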
In the remainder of this section, we shall assume the transmission coefficient to be constant, \(\mathcal{T}(E)=\mathcal{T}\), as the scattering theory result will be compared to the (Fermi) filling fraction \(\nu=1\) of the quantum Hall effect, where, in the wide band limit, Ohm's law is satisfied and a constant transmission coefficient is implicit. Note that in this situation, at equilibrium (\(T_{L}=T_{R}=T\)), the thermal contribution of the emission noise has the analytical expression: \[S_{+}^{th}(\omega,T,T)=\frac{4e^{2}}{h}\mathcal{T}\frac{\hbar\omega}{\exp(\hbar\omega/k_{B}T)-1}, \tag{12}\] which, by the definition of Eq. (3), yields the usual zero frequency Johnson-Nyquist thermal noise as \(\omega\to 0\). ### Small temperature gradient We define the temperature difference \(\Delta T=T_{R}-T_{L}\) and the average temperature \(T_{\rm avg}=(T_{R}+T_{L})/2\). Working up to lowest order in the transmission amplitude, we ignore the \(\mathcal{T}^{2}\) term in the non-equilibrium part of the noise for later comparison with the weak backscattering regime of the fractional quantum Hall effect. The full emission noise (\(S_{+}\)) is plotted in Fig. 2, for a fixed gradient (\(\Delta T=10mK\)) and several average temperatures. We note that in the small \(\Delta T\) regime, \(S_{+}(\omega,T_{R},T_{L})\) is almost equal to \(S_{+}^{\rm th}(\omega,T_{\rm avg},T_{\rm avg})\), given by Eq. (12), the difference between the two being only of order \(O\left(\frac{\Delta T}{T_{\rm avg}}\right)\). Going from the left to the right of the plot, the \(S_{+}\) corresponding to different \(T_{\rm avg}\) are equal for large, negative \(\omega\) and decrease linearly. As we get closer to \(\omega=0\), \(S_{+}\) keeps decreasing, but the curves corresponding to different \(T_{\rm avg}\) branch off, and the curves with higher \(T_{\rm avg}\) decay at a slower rate. The temperature-dependent decay continues for \(\omega>0\), and eventually, for large, positive \(\omega\), all the curves vanish. These features can be understood as a consequence of the thermal broadening of the Fermi distributions, by looking at Fig. 1, where \(\omega>0\) corresponds to subfigures (a), (b) (emission processes) and \(\omega<0\) to subfigures (c), (d) (absorption processes). A greater number of higher energy states are occupied as \(T_{\rm avg}\) is increased; hence, for \(\omega>0\), \(S_{+}\) decays slower as a function of frequency until it ultimately vanishes, corresponding to energies where the state occupation is negligible. Likewise, for \(\omega<0\), the slower decay of \(S_{+}\) for higher \(T_{\rm avg}\) can be understood in a similar fashion. For large negative \(\omega\), the distinction between Fermi distributions corresponding to different \(T_{\rm avg}\) is negligible, and the noise is essentially the same, caused by the absorption of high-frequency photons by the low energy states. This picture is better understood from the inset of Fig. 2, which shows the difference between the emission noise at a given temperature and the same quantity evaluated at zero temperature. This reflects precisely the change in the occupation of the levels due to a non-zero \(T_{\rm avg}\). As pointed out earlier, the non-equilibrium consequences of the temperature difference are completely masked by the equilibrium thermal noise for \(\Delta T\ll T_{\rm avg}\). This can also be checked by plotting the emission noise for a fixed \(T_{\rm avg}\) and different \(\Delta T\), where one finds that the curves almost all collapse onto the pure equilibrium thermal noise. This motivates us to look at the _excess emission noise_ (\(\Delta S_{+}\)) given by Eq. (9), which has been designed specifically to get rid of the thermal contributions in the non-interacting regime and isolate the non-equilibrium contributions to the noise arising from the temperature difference. [14]

Figure 2: Emission noise at different \(T_{\rm avg}\), in the regime \(\Delta T\ll T_{\rm avg}\), for fixed \(\Delta T=10mK\) (\(\Delta T=0\) for \(T_{\rm avg}=0mK\)). The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). The decay rate of the spectrum as \(\omega\) increases is related to the average temperature of the leads. Higher \(T_{\rm avg}\) leads to a slower decay, which is a consequence of the broadening of the Fermi distributions. In the inset, we subtracted the zero temperature emission noise from the finite temperature one.

Figure 3: (i) Excess emission noise at a fixed \(\Delta T=20mK\) and different \(T_{\rm avg}\); (ii) excess emission noise at the same \(T_{\rm avg}=100mK\) but different \(\Delta T\), both for \(\Delta T\ll T_{\rm avg}\). The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). The spread of the excess noise spectrum scales linearly with the average temperature of the leads, whereas the magnitude of the excess noise increases quadratically with the temperature difference.
This can also be checked by plotting the emission noise for a fixed \(T_{\rm avg}\) and different \(\Delta T\) where one finds that the curves almost all collapse with the pure equilibrium thermal noise. This motivates us to look at the _excess emission noise_ (\(\Delta S_{+}\)) given by Eq. (9), which has been designed specifically to get rid of the thermal contributions in the non-interacting regime and isolate the non-equilibrium contributions to the noise arising from the temperature difference [14]. \(\Delta S_{+}\) is displayed in Fig. 3, the top panel showing Figure 3: (i) Excess emission noise at a fixed \(\Delta T=20mK\) and different \(T_{\rm avg}\), (ii) Excess emission noise at the same \(T_{\rm avg}=100mK\) but different \(\Delta T\) – both for \(\Delta T\ll T_{\rm avg}\). The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). The spread of the excess noise spectrum scales linearly with the average temperature of the leads whereas the magnitude of the excess noise increases quadratically with the temperature difference. Figure 2: Emission noise at different \(T_{\rm avg}\), for the regime \(\Delta T\ll T_{\rm avg}\), for fixed \(\Delta T=10mK\) (\(\Delta T=0\) for \(T_{\rm avg}=0mK\).) The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). The decay rate of the spectrum as \(\omega\) increases is related to the average temperature of the leads. Higher \(T_{\rm avg}\) leads to a slower decay which is a consequence of the Fermi distributions broadening. In the inset, we subtracted the zero temperature emission noise from the finite temperature one. the excess emission noise for a fixed temperature gradient and several average temperatures, while the bottom panel corresponds to a fixed average temperature but several values of the temperature gradient. In all cases, \(\Delta S_{+}\) is characterized by a central peak at \(\omega=0\), where the noise is positive. Indeed, for small frequencies, the \(\Delta T\)-biased system is noisier than the corresponding equilibrium system averaged over the two temperatures. \(\Delta S_{+}\) then decreases gradually, bearing negative values for intermediate positive/negative frequencies, reaching a minimum whose position scales with the average temperature. Negative noise in the intermediate frequency regime suggests that there is _less_ noise in the \(\Delta T\) non-equilibrium scenario compared to an equilibrium situation of equal temperatures on both the leads. Finally, for large positive or negative frequencies, the excess noise vanishes meaning that the temperature difference does not modify the noise substantially compared to the equilibrium noise in this regime. The change in the sign of \(\Delta S_{+}\) for different frequency regimes can again be understood as a consequence of the difference in the occupation of the left and right Fermi leads. In the delta-\(T\) biased regime, for small frequencies, there is a higher number of processes contributing to the noise compared to an equilibrium situation, making the excess noise positive. On the contrary, for intermediate frequencies, there are fewer processes contributing to the noise, giving a negative excess noise. As predicted by the parity rule of Eq. (10), \(\Delta S_{+}\) is an even function of frequency, and the sum rule Eq. (11) has been checked numerically to be satisfied. 
We note that, for a fixed \(\Delta T\), the peak and minima are more pronounced when the average temperature is small. This goes together with an overall spread in frequency which increases linearly as \(T_{\rm avg}\) increases. However, for a fixed \(T_{\rm avg}\) and increasing \(\Delta T\), while the spread of the spectrum remains the same, the size of the peak in \(\Delta S_{+}\) increases quadratically. It follows that the spread in frequency of the \(\Delta S_{+}\) spectrum seems to be governed by the average temperature \(T_{\rm avg}\) of the system, while the magnitude of the excess noise, which reflects the _degree of non-equilibriumness_ of the noise, seems to be largely dictated by the temperature difference \(\Delta T\). We express this quantitatively as \(\Delta S_{+}(\omega,T_{R},T_{L})\sim(\Delta T^{2}/T_{\rm avg})\,\mathcal{S}\left(\omega/T_{\rm avg}\right)\).

### Large temperature gradient

We next consider a non-equilibrium scenario where \(T_{R}\ll T_{L}\), such that we can essentially consider \(T_{R}\sim 0\). Our consideration of this regime is motivated by its relatively easy experimental accessibility.[7] We again find that the decay rate of the emission noise is controlled strongly by the average temperature of the system (see Fig. 4): the higher the average temperature, the slower the decay.

Figure 4: Emission noise at different \(T_{\rm avg}\), for the regime \(T_{R}\ll T_{L}\) or \(\Delta T\sim T_{\rm avg}\). The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). Like in the small \(\Delta T\) regime, we find that the noise spectrum decays more slowly at higher average temperature.

Figure 5: (i) Excess emission noise for a fixed \(\Delta T\) (\(=60mK\)) and different \(T_{\rm avg}\); (ii) excess emission noise at the same \(T_{\rm avg}\) (\(=100mK\)) but different \(\Delta T\). The noise is computed for a transmission \(\mathcal{T}=0.01\), and expressed in units of \(S_{+}^{(0)}=e^{2}\mathcal{T}/(2\hbar)\). For both plots we consider \(T_{L},T_{R}\) or \(\Delta T\sim T_{\rm avg}\). This regime gives a behavior of the noise spectra quite similar to the small temperature difference regime.

The excess emission noise in this large temperature difference regime displays a behavior analogous to the small \(\Delta T\) case. Again, the spread of the excess noise spectrum depends strongly on the average temperature while the magnitude of the noise is fixed by the temperature difference, as can be seen in Fig. 5. However, in contrast with the small \(\Delta T\) case, \(S_{+}\) may substantially differ from \(S_{+}^{\rm th}\) in this regime since, for large enough \(\Delta T\), the magnitude of \(\Delta S_{+}\) may be comparable to that of \(S_{+}^{\rm th}\).

## IV Fractional Quantum Hall Effect

### Luttinger Liquid model

We now turn to a Hall bar in the fractional quantum Hall (FQH) regime, with a Laughlin filling factor, i.e., \(\nu=1/(2n+1)\) (\(n\in\mathbb{Z}^{+}\)). We want to analyze the behavior of delta-\(T\) noise in these systems, which constitutes the central part of this work. FQH systems host edge states that can be described by a chiral Luttinger liquid Hamiltonian given by [2; 30]

\[H_{0}=\frac{v_{F}}{4\pi\nu}\int dx\left[(\partial_{x}\phi_{R})^{2}+(\partial_{x}\phi_{L})^{2}\right] \tag{13}\]

where \(\phi_{R/L}\) are chiral bosonic fields that describe the right/left moving modes, propagating with velocity \(v_{F}\).
The bosonic fields are quantized by the commutation relation \([\phi_{R/L}(x),\phi_{R/L}(y)]=\pm i\pi{\rm sgn}(x-y)\) and are related to the quasiparticle operators on the edge through the identity:

\[\psi_{R/L}(x,t)=\frac{U_{R/L}}{\sqrt{2\pi a}}e^{\pm ik_{F}x}e^{-i\sqrt{\nu}\phi_{R/L}(x,t)}, \tag{14}\]

where \(a\) is a short-distance cutoff, \(U_{R/L}\) are the Klein factors, and \(k_{F}\) is the Fermi momentum. We further equip the Hall bar with a quantum point contact (QPC), placed at position \(x=0\), allowing tunneling between the counter-propagating edges. Working in the weak backscattering regime, where quasiparticles are allowed to tunnel between the edges, we need to add a tunneling term to the Hamiltonian

\[H_{\rm WB}(t)=\Gamma_{0}\psi_{R}^{\dagger}(0,t)\psi_{L}(0,t)+{\rm H.c.} \tag{15}\]

where \(\Gamma_{0}\) is the tunneling amplitude. With this, the tunneling current operator can be calculated to be

\[I_{T}(t)=ie^{*}\Gamma_{0}\psi_{R}^{\dagger}(0,t)\psi_{L}(0,t)+{\rm H.c.} \tag{16}\]

where \(e^{*}=\nu e\) is the quasiparticle charge. We compute the delta-\(T\) emission noise associated with the backscattering current at the QPC using the Keldysh formalism, to lowest order (\(\Gamma_{0}^{2}\)) in the tunneling amplitude: [2]

\[S_{+}(\omega,T_{R},T_{L})=\left(\frac{e^{*}\Gamma_{0}}{\hbar\pi a}\right)^{2}\int d\tau\ e^{i\omega\tau}e^{\nu\mathcal{G}_{R}(-\tau)+\nu\mathcal{G}_{L}(-\tau)} \tag{17}\]

where \(T_{R},T_{L}\) are the temperatures of the right- and left-moving edges respectively, \(\omega\) is the frequency at which the noise is measured, and \(\mathcal{G}_{R/L}\) are the finite-temperature Green's functions of the bosonic fields \(\phi_{R/L}\), typical of the chiral Luttinger liquids modeling the FQHE:

\[\mathcal{G}_{R/L}(\tau)=\ln\left[\frac{\sinh\left(i\pi\frac{k_{B}}{\hbar}T_{R/L}\tau_{0}\right)}{\sinh\left(\pi\frac{k_{B}}{\hbar}T_{R/L}(i\tau_{0}-\tau)\right)}\right] \tag{18}\]

with \(\tau_{0}=a/v_{F}\) being a short time cutoff. For \(T_{R}=T_{L}=T_{\rm avg}\) in Eq. (17), the thermal equilibrium emission noise can be evaluated analytically and is given by

\[S_{+}^{\rm th}(\omega,T_{\rm avg},T_{\rm avg})=\left(\frac{e^{*}\Gamma_{0}}{\hbar\pi a}\right)^{2}\tau_{0}\left(\frac{2\pi k_{B}T_{\rm avg}}{\hbar}\tau_{0}\right)^{2\nu-1}\exp\left(-\frac{\hbar\omega}{2k_{B}T_{\rm avg}}\right)\frac{\left|\Gamma\left(\nu+\frac{i\hbar\omega}{2\pi k_{B}T_{\rm avg}}\right)\right|^{2}}{\Gamma(2\nu)} \tag{19}\]
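Eq. (19) is straightforward to evaluate numerically, since SciPy's gamma function accepts complex arguments. A minimal sketch in our own notation, dropping the overall temperature-dependent prefactor, which also checks the large-negative-frequency power law discussed next:

```python
import numpy as np
from scipy.special import gamma as cgamma  # accepts complex arguments

def s_plus_th_fqh(theta, nu):
    """Equilibrium FQH emission noise of Eq. (19) versus the reduced
    frequency theta = hbar*omega/(2*pi*kB*Tavg), dropping the overall
    temperature-dependent prefactor."""
    theta = np.asarray(theta, dtype=float)
    return (np.exp(-np.pi * theta)
            * np.abs(cgamma(nu + 1j * theta))**2 / cgamma(2.0 * nu))

nu = 1.0 / 3.0
# Large negative frequencies: a pure power law |theta|^(2*nu - 1).
r = s_plus_th_fqh(-40.0, nu) / s_plus_th_fqh(-20.0, nu)
print(r, 2.0**(2 * nu - 1))   # both ~ 0.79
```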
### Small temperature gradient

We now discuss the properties of delta-\(T\) noise in the strongly correlated regime of the Laughlin fractional quantum Hall effect. The delta-\(T\) emission and excess noise, for the weak backscattering regime where anyons tunnel across the QPC, are plotted in Fig. 6, for several values of the fractional filling factor, \(\nu=1/3,1/5,1/7\). For the sake of comparison with the Fermi liquid results of the previous section, we use here the same convention for the excess emission noise, which was defined in Eq. (4). Note that, although this definition ensures that, in the non-interacting regime, all thermal contributions are filtered out, leaving only the purely non-equilibrium contributions to the noise, such a cancellation is not guaranteed in the FQH regime and may only be partial. In the FQH regime, similarly to the Fermi liquid case, the full emission noise (\(S_{+}\)) is almost equal to the equilibrium thermal noise (\(S_{+}^{\rm th}\)) given in Eq. (19). However, the general behavior is quite different from that of the Fermi liquid case, as the emission noise now shows a central asymmetric peak at small negative frequencies, then decreases for large positive/negative frequencies. The sharp decrease for positive frequencies is reminiscent of the Pauli blocking which restricts the emission of photons due to the presence of a Fermi sea. On the other hand, the slow decrease of \(S_{+}\) for negative frequencies has no Fermi liquid equivalent. This behavior at high frequency can be readily understood by considering the asymptotics of Eq. (19). For large, positive frequencies, one has \(S_{+}^{\rm th}(\omega\rightarrow\infty,T_{\rm avg},T_{\rm avg})\sim\omega^{2\nu-1}\exp\left(-\frac{\hbar\omega}{k_{B}T_{\rm avg}}\right)\), thus explaining the rapid exponential decay with frequency. In the limit of large negative frequency, it instead reduces to a simple power law, \(S_{+}^{\rm th}(\omega\rightarrow-\infty,T_{\rm avg},T_{\rm avg})\sim|\omega|^{2\nu-1}\). This power law behavior is directly related to the scaling dimension of the tunneling operator, and has been checked numerically. Interestingly, the noise spectrum always satisfies the inequality \(S_{+}(-\omega,T_{R},T_{L})\geq S_{+}(\omega,T_{R},T_{L})\), independently of the temperatures of the incoming edge states. This property can be proven exactly in the case of Fermi liquid leads and holds irrespective of the details of the junction or the temperature difference. It amounts to stating that the rate at which the system absorbs energy from the electromagnetic field is always greater than or equal to the rate at which it transfers energy to the field.[28] This is typically interpreted in terms of processes involving electrons and holes being scattered in the conductor before recombining to emit or absorb the energy of a photon.[31] It is quite striking to observe that this generalizes to the case of FQH devices, suggesting a similar interpretation based on quasiparticle-quasihole pairs. Contrary to the Fermi liquid case, the excess emission noise \(\Delta S_{+}\) is asymmetric in frequency for nontrivial Laughlin fractions, which constitutes another example of the role of electronic correlations in the FQH regime. This breaks the parity rule of Eq. (10), departing from the Fermi liquid picture. However, and quite importantly, the excess emission noise still satisfies the sum rule of Eq. (11), despite its asymmetry in frequency. This can be readily understood upon integrating the expression of Eq. (17) over the whole frequency range, noticing from Eq. (18) that \(\mathcal{G}_{R/L}(\tau=0)=0\), so that the integrated emission noise reduces to a constant, independent of the temperatures of the leads. This result for the sum rule has also been checked numerically. We now look at the behavior of \(\Delta S_{+}\), focusing on the \(\nu=1/3\) FQH filling factor, first as a function of the average temperature (\(T_{\rm avg}\)) for a fixed temperature difference (\(\Delta T\)), and then as a function of \(\Delta T\) for a fixed \(T_{\rm avg}\). The results are displayed in Fig. 7. First, we find that the \((\Delta T,T_{\rm avg})\) dependence of \(\Delta S_{+}\) for the \(\nu=1/3\) FQH is largely similar to that of the Fermi liquid regime.
Figure 6: (i) Emission and (ii) excess noise (rescaled) at a QPC comprising two FQH edges in the weak backscattering regime, held at a \(10\ mK\) temperature difference, at an average temperature of \(100\ mK\). The noise is here expressed in units of \(\tilde{S}_{+}^{(0)}=\left(\frac{e^{*}\Gamma_{0}}{\pi\hbar v_{F}}\right)^{2}\left(2\pi\frac{k_{B}}{\hbar}\right)^{2\nu-1}\frac{\tau_{0}^{2\nu}}{\Gamma(2\nu)}\) and computed for a small time cutoff \(\tau_{0}\) such that \(k_{B}\tau_{0}/\hbar=10^{-5}K^{-1}\). Note that the emission noise looks the same as the equilibrium noise since the consequences of the temperature difference are \(10^{3}\) times smaller than the equilibrium contributions.

Figure 7: (i) Excess emission noise at a fixed \(\Delta T=20mK\) and different \(T_{\rm avg}\) for FQH \(\nu=1/3\) edges; (ii) excess emission noise at the same \(T_{\rm avg}=100mK\) but different \(\Delta T\) for FQH \(\nu=1/3\) edges – both for \(\Delta T\ll T_{\rm avg}\). The noise is here expressed in units of \(\tilde{S}_{+}^{(0)}=\left(\frac{e^{*}\Gamma_{0}}{\pi\hbar v_{F}}\right)^{2}\left(2\pi\frac{k_{B}}{\hbar}\right)^{2\nu-1}\frac{\tau_{0}^{2\nu}}{\Gamma(2\nu)}\) and computed for a small time cutoff \(\tau_{0}\) such that \(k_{B}\tau_{0}/\hbar=10^{-5}K^{-1}\). Similar to the Fermi liquid regime, the spread of the excess noise spectrum is dictated by the average temperature of the leads, whereas the magnitude of the excess noise is fixed largely by the temperature difference.

Indeed, even in this strongly correlated system, we find that the spread in frequency of the noise spectrum is a function of the average temperature of the entire system, whereas the magnitude of the excess noise depends primarily on the temperature difference between the two FQH edges. Other filling fractions display the same behavior (not shown). This behavior can be described quantitatively, for small \(\Delta T\), by an expression of the form \(\Delta S_{+}\left(\omega,T_{R},T_{L}\right)\sim T_{\rm avg}^{2\nu-3}\Delta T^{2}\mathcal{S}\left(\omega/T_{\rm avg}\right)\). As \(\Delta T\) increases, the higher-order contributions become more important and we observe deviations from this behavior. Comparing the results of Fig. 5 to those of Fig. 7, one first notices an overall sign flip of the excess emission noise, with a similar-looking structure involving three extrema. The central zero-frequency peak of the Fermi liquid case is now shifted toward negative frequency, signaling a strong reduction of the absorption. While the side peaks are also present, they differ from the Fermi liquid case in that they are no longer symmetric, occurring at frequencies that are seemingly unrelated, with a bigger amplitude at positive frequencies, corresponding to a stronger enhancement of the emission. This behavior is qualitatively reminiscent of the one observed for a resonant level asymmetrically coupled to Fermi liquids [28], and as such could be related to the nontrivial energy dependence of the scattering at the QPC. For completeness, we now consider the strong backscattering regime of the FQHE, where electrons, instead of anyons, tunnel across the QPC. This regime can be accessed by invoking the standard duality properties of the chiral Luttinger liquid description of the edge states, which amounts to simply replacing \(\nu\to 1/\nu\) and \(e^{*}\to e\) in Eq. (17). We show the emission and excess noise for several values of the filling factors in Fig. 8.
The emission noise \(S_{+}\) in the strong backscattering regime is closer to the one of Fermi liquids than to that of the FQH weak backscattering regime, rapidly decaying for \(\omega>0\) and growing for \(\omega<0\), without any significant features. For negative frequencies, the emission noise now grows as a power law \(|\omega|^{2/\nu-1}\), as opposed to the simple linear-in-frequency behavior observed in the Fermi liquid case. Again, a simple interpretation of this behavior of the emission noise is hard to come by, but one may point out that it is associated with the scaling dimension of the tunneling operator, which now involves electrons rather than anyons. The excess noise \(\Delta S_{+}\) in the strong backscattering regime is rather featureless, only showing a reduction of the absorption. Note that a careful examination of the excess noise at extremely high frequency does show a peculiar behavior. This is actually an artifact of the calculation, as it happens for frequencies beyond the scale set by the cutoff of the theory, namely \(\omega\gg v_{F}/a\). These results are unphysical and only signal a breakdown of the chiral Luttinger liquid description at such high energies. Lastly, we note that the Luttinger liquid results map back exactly to the Fermi liquid results if one sets \(\nu=1\), as expected. This has been dealt with analytically in Appendix A.

Figure 8: (i) Emission and (ii) excess noise (rescaled) at a QPC comprising two FQH edges in the strong backscattering regime, held at a \(10\ mK\) temperature difference, at an average temperature of \(100\ mK\). The noise is here expressed in units of \(\tilde{S}_{+}^{(0)}=\left(\frac{e^{*}\Gamma_{0}}{\pi\hbar v_{F}}\right)^{2}\left(2\pi\frac{k_{B}}{\hbar}\right)^{2\nu-1}\frac{\tau_{0}^{2\nu}}{\Gamma(2\nu)}\) and computed for a small time cutoff \(\tau_{0}\) such that \(k_{B}\tau_{0}/\hbar=10^{-5}K^{-1}\).

### Temperature gradient expansion

Unfortunately, Eq. (17) is not analytically tractable in its full form, motivating us to treat it perturbatively in the small temperature gradient limit. Following the zero frequency delta-\(T\) noise analysis of Ref. [15], starting from \(T_{R/L}=T_{\text{avg}}\pm\frac{\Delta T}{2}\) where \(\Delta T\ll T_{\text{avg}}\), we expand the exponentiated Green's function perturbatively up to second order in \(\Delta T/2\), giving us (we assume \(\hbar=k_{B}=1\) in this section to declutter the equations)

\[S_{+}(\omega,T_{R},T_{L})=S_{0}(\omega,T_{\text{avg}})\left[1+\left(\frac{\Delta T}{2T_{\text{avg}}}\right)^{2}C_{2}(\omega,T_{\text{avg}})\right] \tag{20}\]

where

\[S_{0}(\omega,T_{\text{avg}})=\left(\frac{e^{*}\Gamma_{0}}{\pi a}\right)^{2}\int d\tau\,e^{i\omega\tau}\left[\frac{\sinh\left(i\pi T_{\text{avg}}\tau_{0}\right)}{\sinh\left(\pi T_{\text{avg}}(i\tau_{0}+\tau)\right)}\right]^{2\nu} \tag{21}\]

and

\[C_{2}(\omega,T_{\rm avg})=\frac{1}{S_{0}(\omega,T_{\rm avg})}\left(\frac{e^{*}\Gamma_{0}}{\pi a}\right)^{2}\int d\tau\ e^{i\omega\tau}\left[\frac{\sinh\left(i\pi T_{\rm avg}\tau_{0}\right)}{\sinh\left(\pi T_{\rm avg}(i\tau_{0}+\tau)\right)}\right]^{2\nu}\nu\left[\frac{\left(\pi T_{\rm avg}(i\tau_{0}+\tau)\right)^{2}}{\sinh^{2}\left(\pi T_{\rm avg}(i\tau_{0}+\tau)\right)}-\frac{\left(i\pi T_{\rm avg}\tau_{0}\right)^{2}}{\sinh^{2}\left(i\pi T_{\rm avg}\tau_{0}\right)}\right] \tag{22}\]

Here, \(S_{0}(\omega,T_{\rm avg})\equiv S_{+}^{\rm th}(\omega,T_{\rm avg},T_{\rm avg})\) is just the equilibrium thermal noise, already evaluated in Eq. (19). Both integrals in Eq. (22) can also be evaluated analytically.
The details of the calculation are summarized in Appendix B. The result for \(C_{2}\) then reads:

\[C_{2}\left(\frac{\omega}{2\pi T_{\rm avg}}\right)=\nu\left[-1+\frac{\left|\nu+\frac{i\omega}{2\pi T_{\rm avg}}\right|^{2}}{2\nu(2\nu+1)}\left(\pi^{2}+4\pi{\rm Im}\left[\psi\left(\nu+1+\frac{i\omega}{2\pi T_{\rm avg}}\right)\right]+4\left\{{\rm Im}\left[\psi\left(\nu+1+\frac{i\omega}{2\pi T_{\rm avg}}\right)\right]\right\}^{2}-2{\rm Re}\left[\psi^{\prime}\left(\nu+1+\frac{i\omega}{2\pi T_{\rm avg}}\right)\right]\right)\right] \tag{23}\]

where \(\psi\) is the digamma function and the prime indicates a derivative. The \(C_{2}\) coefficient in Eq. (23) is obtained directly from an expansion of the emission noise, following in that respect the convention adopted in earlier works.[15; 18] This corresponds to a slightly different definition of the excess noise compared to the one used so far and defined in Eq. (4): here the reference noise is chosen to be the equilibrium noise at the average temperature, i.e., \(C_{2}\) measures the relative deviation \(\left[S_{+}(\omega,T_{R},T_{L})-S_{+}^{\rm th}(\omega,T_{\rm avg},T_{\rm avg})\right]/S_{+}^{\rm th}(\omega,T_{\rm avg},T_{\rm avg})\), normalized by \((\Delta T/2T_{\rm avg})^{2}\) [cf. Eq. (20)]. While one could equally introduce an equivalent coefficient by expanding in \(\Delta T\) the excess noise \(\Delta S_{+}\) defined in Eq. (4), this is merely a matter of convention, and ultimately allows one to highlight different properties. Here, we resort to the present choice since it readily distinguishes the weak and strong backscattering regimes,[15] which is not so clear with other conventions. Interestingly, it turns out that \(C_{2}(\omega,T_{\rm avg})\) does not depend separately on frequency and temperature, but only on the ratio \(\omega/T_{\rm avg}\). The behavior of this \(C_{2}\) coefficient, which encodes the relevant "non-equilibrium" information, is plotted in Fig. 9 as a function of \(\theta=\hbar\omega/(2\pi k_{B}T_{\rm avg})\) in the case of weak backscattering at the QPC. There is a clear distinction between the behavior for the Laughlin fractions and the one for the trivial integer case. While for \(\nu=1\) the \(C_{2}\) coefficient increases monotonically, it displays a dip, crossing into negative values for frequencies close to zero, for the Laughlin fractions. The value of the minimum is only marginally affected by the filling factor (within the Laughlin sequence); however, the frequency range over which \(C_{2}<0\) is \(\nu\)-dependent and shrinks with the filling factor. In all cases, the \(C_{2}\) coefficient grows as a power law at high frequency; nevertheless, its contribution to the emission noise is washed out by the exponential decay of the equilibrium thermal noise. In the strong backscattering regime, which is accessed by making use of the duality properties and simply replacing \(\nu\to 1/\nu\) in Eq. (23), we find that the curves for the Laughlin FQH show a strong resemblance to the \(\nu=1\) curve, monotonically increasing as a function of \(\omega/T_{\rm avg}\), with no dips to negative values, as shown in Fig. 10.

Figure 9: The \(C_{2}\) coefficient for FQH edges, plotted with respect to the dimensionless quantity \(\theta=\frac{\hbar\omega}{2\pi k_{B}T_{\rm avg}}\), with the QPC operating in the weak-backscattering regime. The coefficient displays dips to negative values for Laughlin fractions.
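For completeness, a hedged sketch of how Eq. (23) can be evaluated, using mpmath for the digamma and trigamma functions at complex argument (all names are ours); it reproduces the negative dip near \(\theta=0\) for \(\nu=1/3\) and the positive value for \(\nu=1\) visible in Fig. 9.

```python
from mpmath import psi, mpc, pi, im, re

def c2(theta, nu):
    """C2 coefficient of Eq. (23), theta = omega / (2*pi*Tavg), hbar = kB = 1.
    psi(0, z) is the digamma and psi(1, z) the trigamma function."""
    z = mpc(nu + 1, theta)
    ip = im(psi(0, z))        # Im psi(nu + 1 + i*theta)
    rp1 = re(psi(1, z))       # Re psi'(nu + 1 + i*theta)
    mod2 = nu**2 + theta**2   # |nu + i*theta|^2
    return nu * (-1 + mod2 / (2 * nu * (2 * nu + 1))
                 * (pi**2 + 4 * pi * ip + 4 * ip**2 - 2 * rp1))

# Weak backscattering: for nu = 1/3 the coefficient dips below zero near
# theta = 0 (cf. Fig. 9); for nu = 1 it stays positive there.
print(float(c2(0.0, 1.0 / 3.0)))   # < 0
print(float(c2(0.0, 1.0)))         # > 0
```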
## V Conclusions

This work dealt with the finite frequency spectrum of photons emitted by the non-equilibrium transport generated by a thermal gradient, in both Fermi and quantum Hall junctions. The finite frequency noise was characterized here by the emission noise as well as by the excess emission noise, which has solely non-equilibrium origins in the Fermi picture, as the thermal noise of each reservoir is subtracted. For electron-hole symmetric Fermi junctions, the Landauer-Büttiker formalism can be employed, and the excess noise does not distinguish between emission and absorption processes, as it is an even function of frequency. The excess emission noise of the Fermi liquid thus has a central positive peak, changes sign at moderate frequencies, acquires a minimum, and then vanishes. The height of the peak is controlled by the temperature difference and its width is determined by the average temperature. For a QPC in the fractional quantum Hall regime, we employed the chiral Luttinger liquid theory to compute the emission and excess noise when the two edges have different temperatures. We started with the weak backscattering regime, which is dominated by quasiparticle tunneling and where new physics is expected. The emission noise vanishes not only for large positive frequencies but also for large negative frequencies, which departs strongly from the Fermi liquid picture, and it exhibits a central, asymmetric peak at small negative frequencies. The excess noise contains a minimum for small negative frequencies, in sharp contrast with the Fermi liquid case, and it is also asymmetric. The excess noise can be explored by varying both the average temperature and the temperature gradient. The emission noise in the strong backscattering regime, where only electrons can tunnel between the two semi-infinite Hall fluids, strongly resembles the Fermi liquid case (it decays at positive frequencies and grows at negative frequencies), but it follows a Luttinger liquid power law (rather than the linear behavior predicted by Fermi liquid theory) at negative frequencies. It seemed judicious to follow Ref. [15] and explicitly perform a thermal gradient expansion of the emission noise (in the weak backscattering regime) to characterize the coefficient \(C_{2}\) of the quadratic term in the gradient, which was obtained analytically as a function of the ratio between the frequency and the average temperature. \(C_{2}\) is negative and has a minimum for small negative frequencies (in accordance with the zero frequency result); it grows for positive frequencies and decays to zero for negative frequencies. \(C_{2}\) plotted as a function of frequency allows one to further point out the differences with Fermi liquid theory. In the strong backscattering regime, \(C_{2}\) behaves roughly as in the Fermi liquid case, monotonically increasing, with no minima or negative contributions. This work opens the path to the investigation of finite frequency noise in mesoscopic systems driven out of equilibrium by a thermal gradient. While the regime of Fermi liquids was used here primarily as a benchmark and a point of comparison, we believe that our study of the strongly correlated regime of the fractional quantum Hall effect deserves attention, as in many instances departures from the Fermi liquid picture are observed.

###### Acknowledgements.
This work received support from the French government under the France 2030 investment plan, as part of the Initiative d'Excellence d'Aix-Marseille Université - A*MIDEX. We acknowledge support from the institutes IPhU (AMX-19-IET-008) and AMUtech (AMX-19-IET-01X).
2303.17146
A Deep Learning Approach to Extracting Nuclear Matter Properties from Neutron Star Observations
Understanding the equation of state of dense QCD matter remains a major challenge in both nuclear physics and astrophysics. Neutron star observations from electromagnetic and gravitational wave spectra provide critical insights into the behavior of dense neutron-rich matter. The next generation of telescopes and gravitational wave observatories will offer even more detailed observations of neutron stars. Utilizing deep learning techniques to map neutron star mass and radius observations to the equation of state allows for its accurate and reliable determination. This work demonstrates the feasibility of using deep learning to extract the equation of state directly from neutron star observational data, and to also obtain related nuclear matter properties such as the slope, curvature, and skewness of the nuclear symmetry energy at saturation density. Most importantly, we show that this deep learning approach is able to reconstruct \textit{realistic} equations of state, and deduce \textit{realistic} nuclear matter properties. This highlights the potential of artificial neural networks in providing a reliable and efficient means to extract crucial information about the equation of state and related properties of dense neutron-rich matter in the era of multi-messenger astrophysics.
Plamen G. Krastev
2023-03-30T04:48:59Z
http://arxiv.org/abs/2303.17146v1
# A Deep Learning Approach to Extracting Nuclear Matter Properties from Neutron Star Observations ###### Abstract Understanding the equation of state of dense QCD matter remains a major challenge in both nuclear physics and astrophysics. Neutron star observations from electromagnetic and gravitational wave spectra provide critical insights into the behavior of dense neutron-rich matter. The next generation of telescopes and gravitational wave observatories will offer even more detailed observations of neutron stars. Utilizing deep learning techniques to map neutron star mass and radius observations to the equation of state allows for its accurate and reliable determination. This work demonstrates the feasibility of using deep learning to extract the equation of state directly from neutron star observational data, and to also obtain related nuclear matter properties such as the slope, curvature, and skewness of the nuclear symmetry energy at saturation density. Most importantly, we show that this deep learning approach is able to reconstruct _realistic_ equations of state, and deduce _realistic_ nuclear matter properties. This highlights the potential of artificial neural networks in providing a reliable and efficient means to extract crucial information about the equation of state and related properties of dense neutron-rich matter in the era of multi-messenger astrophysics. neutron stars, equation of state, dense matter, deep learning

## I Introduction

The quest to determine the equation of state (EOS) of dense neutron-rich matter is a paramount challenge facing modern physics and astrophysics, representing one of the most pressing and critical unanswered questions [1; 2; 3]. The EOS has significant implications for a broad range of phenomena, including heavy ion collision dynamics, binary neutron star mergers, supernovae, and gravitational waves. Both the nuclear physics (see, for example, Refs. [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]) and astrophysics (see, for example, Refs. [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]) communities have made it a priority to investigate this fundamental problem and have established a wide range of research facilities, including telescopes, observatories, and gravitational-wave detectors, in order to advance our understanding of the EOS [1; 2]. The nucleonic component of the EOS of cold neutron star matter can be expressed in terms of the energy per nucleon \(E(\rho,\delta)\) as [36]

\[E(\rho,\delta)=E_{SNM}(\rho)+E_{\rm sym}(\rho)\delta^{2}, \tag{1}\]

where \(E_{SNM}(\rho)\) is the energy per nucleon of symmetric nuclear matter (SNM), \(E_{\rm sym}(\rho)\) is the nuclear symmetry energy, and \(\delta=(\rho_{\rm n}-\rho_{\rm p})/\rho\) is the isospin asymmetry. In the above equation, \(\rho_{\rm n}\), \(\rho_{\rm p}\), and \(\rho\) represent the neutron, proton, and total density, respectively. Currently, the EOS of cold nuclear matter under extreme conditions is still uncertain and controversial, especially at supra-saturation densities, mainly because of the unknown high-density behavior of the nuclear symmetry energy \(E_{sym}(\rho)\)[4; 5]. To obtain the EOS of nuclear matter from first principles, one must solve quantum chromodynamics (QCD), the fundamental theory of strong interactions. However, current model-independent results are limited to a narrow density range.
At low densities around \(1-2\rho_{0}\) (where \(\rho_{0}=0.16\) fm\({}^{-3}\) is the saturation density of symmetric nuclear matter), _ab initio_ methods can be combined with nuclear interactions derived from Chiral Effective Theory (\(\chi\)EFT) with controlled uncertainty estimates [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. For densities \(\rho\gtrsim 50\rho_{0}\), perturbative QCD calculations provide reliable results [47; 48; 49; 50; 51; 52; 53]. However, for intermediate densities between \(2-10\rho_{0}\), no reliable QCD predictions exist [54]. To determine the EOS in this region, non-perturbative methods such as Monte Carlo simulation of QCD on a lattice (lattice QCD) must be developed, which face significant challenges such as the sign problem in finite-density systems [55]. Consequently, the construction of the EOS at intermediate densities still relies on phenomenological approaches using many-body methods and effective interactions, such as relativistic mean field (RMF) theory and density functionals based on Skyrme, Gogny, or Similarity Renormalization Group (SRG) evolved interactions. In recent years, there has been significant progress in determining the EOS at high densities from both nuclear laboratory experiments and multi-messenger astrophysical (MMA) observations of neutron stars (NSs). The experimental data from heavy-ion reactions collected from intermediate to relativistic energies, specifically related to nucleon collective flow and kaon production, have already significantly constrained the EOS of symmetric nuclear matter up to around 4.5 \(\rho_{0}\)[6]. The cooperation between the nuclear physics and astrophysics communities has resulted in substantial advancements in constraining the symmetry energy around and below the saturation density of nuclear matter using a combination of terrestrial nuclear experiments and astrophysical observations [56, 57, 58, 59, 10, 11, 56, 5]. However, the density dependence of the nuclear symmetry energy \(E_{sym}(\rho)\) at supra-saturation densities and the possible hadron-to-quark phase transition remain the most uncertain aspects of the high-density EOS [21, 22, 24, 4, 5]. The presence of new particles, such as hyperons and resonances, is also highly dependent on the high-density trend of \(E_{sym}(\rho)\)[60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]. The recent MMA observations of NSs have offered a unique opportunity to explore the high-density EOS. This represents an alternative way to independently extract the EOS by means of statistical approaches (as highlighted in Refs. [76, 77, 78, 79, 80, 81]). These observations encompass a wide range of methods, such as the Shapiro delay measurements of massive \(\sim\)2\(M_{\odot}\) pulsars [82, 83, 84], the radius measurement of quiescent low-mass X-ray binaries and thermonuclear bursters [76, 77, 78, 85, 86], X-ray timing measurements from the NICER mission [87, 88, 89], and the detection and inference of gravitational waves from compact binary mergers involving NSs [90, 91, 92] (by the LIGO/VIRGO/KAGRA [93, 94, 95] collaboration). Common observables for NSs include the mass \(M\), radius \(R\), moment of inertia \(I\), quadrupole moment \(Q\), dimensionless tidal deformability \(\Lambda\) (and related quantities such as the Love number \(k_{2}\) and the tidal deformability \(\lambda\)), and compactness \(M/R\).
The NICER mission, for instance, targets the compactness \(M/R\) of neutron stars by measuring the gravitational lensing effect of the thermal emission from the star's surface. Meanwhile, gravitational-wave (GW) observations of binary neutron star (BNS) and neutron star-black hole (NSBH) mergers provide information about the tidal disruption of the star in the presence of its companion, which is quantified through the tidal deformability parameter \(\lambda\). There exist various statistical approaches to determine the most likely EOS from neutron star observational data. Of these, the use of Bayesian inference is widespread [76, 77, 78, 79, 80]. Gaussian processes also provide a non-parametric representation of the EOS [81]. However, the uncertainty in Bayesian analyses raises questions regarding the true nature of the dense matter EOS [54]. In light of this, alternative model-independent methods are being sought. Deep neural networks (DNNs) [96, 97] have garnered attention in the research community, where deep learning (DL) algorithms have displayed exceptional proficiency in tasks such as image recognition [98] and natural language processing [99]. Furthermore, these techniques have been applied to various physics and astrophysics domains, including the analysis of GW data for detection [100, 101, 102, 103, 104, 105, 106, 107], parameter estimation [108, 109], and denoising [110]. In previous works we employed Convolutional Neural Network (CNN) [111] algorithms to detect and infer GW signals from BNS [112, 113] and, very recently, from NSBH [114] mergers. Additionally, the use of DNNs as a tool to extract the dense matter EOS from neutron star observations has been explored in a growing number of studies [115, 116, 117, 118, 119, 54, 55, 120]. In a recent investigation [121], we presented an innovative approach to determine the nuclear symmetry energy, \(E_{sym}(\rho)\), by utilizing DL techniques in conjunction with astronomical observations of neutron stars. Our results demonstrate that deep neural networks have the capacity to accurately extract \(E_{sym}(\rho)\) from a set of \(M-R\) or \(M-\Lambda\) NS observations. This approach offers a promising avenue for exploring the high-density behavior of \(E_{sym}(\rho)\), which remains a challenging task in nuclear physics. In this paper, we extend our DL approach to determining the EOS of dense matter and associated nuclear properties using mass-radius \(M(R)\) measurements of neutron stars. In particular, we pay special attention to deducing the slope, curvature, and skewness of the nuclear symmetry energy, in addition to the EOS. Our results demonstrate that DL algorithms can accurately and reliably extract the NS EOS and nuclear matter properties from observational data. Moreover, we find that our DL approach can successfully reconstruct _realistic_ EOSs and nuclear matter properties, which brings us one step closer to revealing the true nature of the EOS of dense, neutron-rich matter. The remainder of this paper is organized as follows. In Section II, we present the main aspects of our formalism, encompassing a comprehensive overview of the essential characteristics of the EOS employed in our analysis, along with the details of our DL algorithms, such as data generation, neural network architectures, and training methodologies. In Section III, we present our results and their implications.
Finally, in Section IV, we summarize our findings and outline directions for future research.

## II Formalism

In this section, we present the methodologies utilized in our study. First, we provide a comprehensive overview of the key characteristics and specifications of the EOS used. Subsequently, we briefly outline the procedure for solving the static NS structure equations. In Section II.3, we discuss the DL approach adopted in mapping the NS mass-radius \(M(R)\) observations to the EOS, as well as the procedure for mapping the reconstructed EOS to selected nuclear matter properties.

### Equation of State

The equation of state plays a critical role in determining the properties of neutron stars, such as the mass \(M\) and radius \(R\). To determine the nuclear matter EOS, two main theoretical approaches are commonly used: phenomenological and microscopic methods. Phenomenological approaches rely on effective interactions to describe the ground state of finite nuclei. These methods, including those based on Skyrme interactions [122; 123] and relativistic mean-field (RMF) models [124], have been widely used in the study of low-density nuclear systems. However, they are not well suited for systems with high isospin asymmetry, and at large densities, where experimental data to constrain such interactions are unavailable, predictions based on these methods can be far from realistic [125]. On the other hand, microscopic approaches use realistic two-body and three-body nucleon forces to describe the behavior of nucleons. These interactions can be based on meson-exchange theory [126; 127] or on the more recent \(\chi\)EFT [128; 129; 42; 130]. Microscopic many-body methods, such as the Brueckner-Hartree-Fock (BHF) approach [131], the Dirac-Brueckner-Hartree-Fock (DBHF) theory [132; 133], the variational approach [134], the Quantum Monte Carlo technique and its derivatives [135; 136], the self-consistent Green's function technique [137], \(\chi\)EFT [45], and the \(V_{low;k}\) approach [138] are based on these interactions. The major challenge for these methods is the treatment of the short-range repulsive core of the nucleon-nucleon interaction, which distinguishes the different techniques from each other. The nucleonic component of the EOS can be described by two quantities: the binding energy of symmetric nuclear matter, \(E_{SNM}(\rho)\), and the symmetry energy, \(E_{sym}(\rho)\) (see Eq. (1)). These two quantities can be expanded as Taylor series around \(\rho_{0}\):

\[E_{SNM}(\rho)=E_{0}+\frac{K_{0}}{2}x^{2}+\frac{J_{0}}{6}x^{3}, \tag{2}\]

\[E_{sym}(\rho)=S_{0}+Lx+\frac{K_{sym}}{2}x^{2}+\frac{J_{sym}}{6}x^{3}, \tag{3}\]

where \(x\equiv(\rho-\rho_{0})/3\rho_{0}\). The coefficients of these expansions can be related to various physical properties of nuclear matter and can be experimentally constrained. They have the following meanings [139]: \(E_{0}\equiv E_{SNM}(\rho_{0})\), \(K_{0}\equiv[9\rho^{2}d^{2}E_{SNM}/d\rho^{2}]_{\rho_{0}}\), and \(J_{0}\equiv[27\rho^{3}d^{3}E_{SNM}/d\rho^{3}]_{\rho_{0}}\) are the binding energy, incompressibility, and skewness of SNM; \(S_{0}\equiv E_{sym}(\rho_{0})\), \(L\equiv[3\rho dE_{sym}/d\rho]_{\rho_{0}}\), \(K_{sym}\equiv[9\rho^{2}d^{2}E_{sym}/d\rho^{2}]_{\rho_{0}}\), and \(J_{sym}\equiv[27\rho^{3}d^{3}E_{sym}/d\rho^{3}]_{\rho_{0}}\) are the magnitude, slope, curvature, and skewness of the nuclear symmetry energy at saturation density.
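Eqs. (1)-(3) translate directly into code. A minimal sketch (function names are ours), using as defaults the central parameter values quoted in the next paragraph:

```python
import numpy as np

RHO0 = 0.16  # saturation density of SNM, fm^-3

def e_snm(rho, e0=-15.9, k0=240.0, j0=0.0):
    """Energy per nucleon of symmetric nuclear matter, Eq. (2), in MeV."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return e0 + k0 * x**2 / 2.0 + j0 * x**3 / 6.0

def e_sym(rho, s0=31.7, L=58.7, ksym=0.0, jsym=0.0):
    """Nuclear symmetry energy, Eq. (3), in MeV."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return s0 + L * x + ksym * x**2 / 2.0 + jsym * x**3 / 6.0

def e_nucleonic(rho, delta):
    """Parabolic approximation of the nucleonic EOS, Eq. (1)."""
    return e_snm(rho) + e_sym(rho) * delta**2

rho = np.linspace(0.5 * RHO0, 5.0 * RHO0, 50)
print(e_nucleonic(rho, delta=1.0)[:3])  # pure neutron matter, MeV
```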
Currently, the most likely values of these coefficients are known within certain ranges: \(E_{0}=-15.9\pm 0.4\) MeV, \(K_{0}=240\pm 20\) MeV, \(-300\leq J_{0}\leq 400\) MeV, \(S_{0}=31.7\pm 3.2\) MeV, \(L=58.7\pm 28.1\) MeV, \(-400\leq K_{sym}\leq 100\) MeV, and \(-200\leq J_{sym}\leq 800\) MeV, as reported in Ref. [140]. Several of these parameters have rather moderate uncertainty. For example, the binding energy \(E_{0}\) is estimated to be \(-15.9\pm 0.4\) MeV, while the magnitude of the symmetry energy \(S_{0}\) is \(31.7\pm 3.2\) MeV. However, many of them still have significant uncertainty, such as the curvature of the symmetry energy \(K_{sym}\), which could range from \(-400\) MeV to \(100\) MeV, and the higher order coefficients, \(J_{0}\) and \(J_{sym}\), with even wider uncertainty ranges. Although the Taylor expansions given by Equations (2) and (3) are known to diverge at higher densities [141], these expressions can also be viewed as parameterizations with free parameters [140]. This duality means that, for systems with low isospin asymmetries, the Taylor expansions are valid near saturation density, while for highly neutron-rich systems at supra-saturation densities, Equations (2) and (3) should be treated as parameterizations [140]. For further information on the relationship between the Taylor expansions and the parameterizations, we refer the reader to Ref. [140]. These expressions are frequently utilized in modeling the NS EOS, and they have been applied, for instance, in solving the inverse structure problem of NSs and constraining the high-density symmetry energy through NS observational data [140; 142]. In addition, the NS EOS _metamodel_ has been used in Bayesian analyses to determine the most likely values of high-density EOS parameters through inference from NS data [143]. Compared to the widely used piecewise _polytropes_, these parameterizations have the advantage of including isospin dependence and composition information throughout the density range, while still allowing for modeling a wide range of EOSs from various many-body approaches. This feature is particularly important for deducing the high-density \(E_{sym}(\rho)\), as the parameterizations clearly separate the contribution of the symmetry energy to the EOS. For example, these parameterizations were instrumental in our previous work [121] for extracting the nuclear symmetry energy directly from NS observational data via deep neural networks. In this study, we utilize an EOS metamodel to facilitate the extraction of the EOS and selected nuclear matter properties from NS observational data. By varying the parameters of the EOS, we can generate numerous EOSs and the corresponding sequences of mass and radius (\(M-R\)) by solving the NS structure equations. The matter in the core of the neutron star is modeled as a mixture of protons, neutrons, electrons, and muons in beta-equilibrium (referred to as the \(npe\mu\)-model). We use the expressions for \(E_{SNM}(\rho)\) and \(E_{sym}(\rho)\) from Equations (2) and (3) to calculate \(E(\rho,\delta)\) through Equation (1). The pressure of the neutron star matter in \(\beta\)-equilibrium,

\[P(\rho,\delta)=\rho^{2}\frac{d\left[\varepsilon(\rho,\delta)/\rho\right]}{d\rho}, \tag{4}\]

can then be determined from the energy density \(\varepsilon(\rho,\delta)=\rho[E(\rho,\delta)+M_{N}]+\varepsilon_{l}(\rho,\delta)\), where \(M_{N}\) is the average nucleon mass and \(\varepsilon_{l}(\rho,\delta)\) is the lepton energy density.
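A self-contained sketch of Eq. (4) (names are ours), evaluating the baryonic pressure by a finite-difference derivative; for brevity it omits the lepton contribution \(\varepsilon_{l}\) and the \(\beta\)-equilibrium determination of \(\delta\), both of which the full calculation includes:

```python
import numpy as np

RHO0, M_N = 0.16, 939.0   # fm^-3, MeV

def e_per_nucleon(rho, delta):
    """Eqs. (1)-(3) with the central coefficient values used in the text."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return (-15.9 + 240.0 * x**2 / 2.0) + (31.7 + 58.7 * x) * delta**2

def pressure(rho, delta, h=1e-6):
    """Baryonic pressure from Eq. (4), P = rho^2 d(eps/rho)/d rho, with
    eps/rho = E(rho, delta) + M_N; lepton term omitted in this sketch."""
    def eps_over_rho(r):
        return e_per_nucleon(r, delta) + M_N
    return rho**2 * (eps_over_rho(rho + h) - eps_over_rho(rho - h)) / (2 * h)

print(pressure(2 * RHO0, 1.0))  # pure neutron matter at 2*rho0: ~30 MeV fm^-3
```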
Further details on calculating \(\varepsilon_{l}(\rho,\delta)\) can be found in, e.g., Ref. [144]. When the density of the neutron star matter falls below approximately \(0.07~{}fm^{-3}\), the core EOS is complemented by a crustal EOS, which is more appropriate for lower density regions. For the inner crust, we use the EOS provided by Pethick et al. [145] and, for the outer crust, the EOS by Haensel and Pichon [146]. In our analysis, we use Equations (2) and (3) as parameterizations, together with the parabolic approximation of the nucleonic EOS given by Equation (1). The values of \(E_{0}\) and \(S_{0}\) are fixed at their most probable current values, which have been obtained through a combination of nuclear laboratory experiments and theoretical calculations. We subsequently vary the rest of the parameters, \(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\), to generate many samples of the EOS. With the expanded parameter space, compared to the one considered in our previous work [121], we are able to model a wider class of EOSs, as predicted by various many-body approaches and models of the nuclear interaction. The effect of varying the individual parameters is shown in Figure 1.

Figure 1: (**Upper left**) Energy per particle of SNM as a function of the reduced density \(\rho/\rho_{0}\) for various values of \(K_{0}\), with \(E_{0}=-15.9\) MeV and \(J_{0}=0\) MeV. (**Upper middle**) Same as the upper left window but for various values of \(J_{0}\), with \(E_{0}=-15.9\) MeV and \(K_{0}=240\) MeV. (**Upper right**) Symmetry energy \(E_{sym}\) as a function of \(\rho/\rho_{0}\) for various values of \(L\), with \(S_{0}=31.7\) MeV, \(K_{sym}=0\) MeV, and \(J_{sym}=0\) MeV. (**Lower left**) Same as the upper right window but for various values of \(K_{sym}\), with \(S_{0}=31.7\) MeV, \(L=58.7\) MeV and \(J_{sym}=0\) MeV. (**Lower right**) Same as the previous two windows but for various values of \(J_{sym}\), with \(S_{0}=31.7\) MeV, \(L=58.7\) MeV and \(K_{sym}=0\) MeV. See text for details.

While, in principle, these parameters are free, the asymptotic boundary conditions of the EOS near \(\rho_{0}\) and \(\delta=0\) provide some prior knowledge of their ranges. Their ranges are further restricted by imposing the requirements that the EOSs satisfy causality and the microscopic stability condition, and that the resultant NS models can support a maximal mass of at least the 2.14 M\({}_{\odot}\) of the heaviest pulsar observed so far [84]. The ranges of \(E_{SNM}\) and \(E_{sym}(\rho)\) satisfying all constraints are shown in Figure 2.

### Structure Equations of Static Neutron Stars

In this section, we briefly revisit the procedure for calculating the mass \(M\) and radius \(R\) of static neutron stars. For a spherically symmetric relativistic star, Einstein's field equations can be simplified to the Tolman-Oppenheimer-Volkoff (TOV) equation [147]:

\[\frac{dP(r)}{dr}=-\frac{\varepsilon(r)m(r)}{r^{2}}\left[1+\frac{P(r)}{\varepsilon(r)}\right]\left[1+\frac{4\pi r^{3}P(r)}{m(r)}\right]\left[1-\frac{2m(r)}{r}\right]^{-1}, \tag{5}\]

where the mass within a sphere of radius \(r\) is determined by

\[\frac{dm(r)}{dr}=4\pi\varepsilon(r)r^{2}. \tag{6}\]

To solve the above equations, one needs to supplement them with the EOS in the form \(P(\varepsilon)\).
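A minimal sketch of this integration (our own names and units, with \(G=c=1\) and lengths in km); the toy constant-density EOS is purely illustrative, standing in for a tabulated \(\varepsilon(P)\) built from the metamodel:

```python
import numpy as np

MSUN_KM = 1.4766  # solar mass in km (geometrized units, G = c = 1)

def tov_solve(eps_of_p, p_c, dr=1e-3, r_max=30.0):
    """Integrate the TOV equations (5)-(6) outward for a central pressure
    p_c.  eps_of_p maps pressure to energy density (both in km^-2).
    Returns the radius R (km) and gravitational mass M (solar masses).
    A fixed-step Euler sketch; production codes use adaptive high-order
    steppers and treat the r -> 0 limit more carefully."""
    eps_c = eps_of_p(p_c)
    r, p = dr, p_c
    m = 4.0 / 3.0 * np.pi * eps_c * r**3   # regular small-r behavior
    while p > 0.0 and r < r_max:
        eps = eps_of_p(p)
        dpdr = (-(eps * m / r**2) * (1 + p / eps)
                * (1 + 4 * np.pi * r**3 * p / m) / (1 - 2 * m / r))
        m += 4 * np.pi * eps * r**2 * dr
        p += dpdr * dr
        r += dr
    return r, m / MSUN_KM

# Toy incompressible star (constant energy density), purely illustrative.
R, M = tov_solve(lambda p: 1e-3, p_c=1e-4)
print(R, M)  # a few km and of order a solar mass
```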
Starting with the initial conditions \(m(r=0)=0\) and \(\varepsilon_{c}=\varepsilon(r=0)\) at the NS center (\(r=0\)), the integration of Equations (5) and (6) is carried out until the pressure \(P\) reaches zero, marking the edge of the star. Some care should be taken at \(r=0\), since the above equations are singular at the center. The point \(r=R\) where \(P\) vanishes determines the NS radius, and \(M=m(R)=4\pi\int_{0}^{R}\varepsilon(r^{\prime})r^{\prime 2}dr^{\prime}\) its gravitational mass. For a given EOS, there is a unique relationship between the stellar mass and the central density \(\varepsilon_{c}\). Thus, for a particular EOS, there is a unique sequence of NSs parameterized by the central density (or, equivalently, the central pressure \(P_{c}=P(0)\)). In Figure 3 we show the range of possible EOSs (\(P(\rho)\)) satisfying all constraints (left window), and the resultant \(M-R\) NS sequences (right window).

### Artificial Neural Networks

In this section, we briefly discuss the basic setup, structure, and workflow associated with implementing DNNs for our specific application. For more extensive discussions, the reader is referred to a number of machine learning articles [96; 148] and textbooks [97; 149]. We apply a combination of two DNNs with similar architectures to first extract the EOS of dense neutron-rich matter from a set of mass-radius NS measurements, and then deduce selected nuclear matter properties from the \(\beta\)-equilibrium NS EOS. We refer to the first neural network as the EOS DNN (equation of state deep neural network), and to the second one as the NuPRO DNN (nuclear matter properties deep neural network). The procedure of using these DNNs to extract the EOS and selected nuclear matter properties is illustrated schematically in Figure 4.

Figure 2: Range of the energy of symmetric nuclear matter \(E_{SNM}\) (**left window**) and the nuclear symmetry energy \(E_{sym}\) (**right window**). The \(E_{SNM}\) and \(E_{sym}\) are plotted as functions of the reduced density \(\rho/\rho_{0}\).

#### II.3.1 EOS Network (EOS DNN)

To extract the EOS, in this analysis we apply a supervised DL approach and formulate a regression problem, where the input to the DNN consists of \(M(R)\) sequences (sets of points representing pairs of NS mass-radius measurements), while the output consists of EOS (\(P(\varepsilon)\)) estimates. Accordingly, the datasets used for the training, validation, and testing of the EOS DNN consist of \(M(R)\) sequences and \(P(\varepsilon)\) samples. We use the EOS metamodel discussed in Section II.1, and vary the parameters in Equations (2) and (3) to generate many samples of the EOS and, subsequently, by solving the NS structure equations, the corresponding \(M-R\) sequences. Specifically, we set \(E_{0}=-15.9\) MeV and \(S_{0}=31.7\) MeV, and vary the rest of the parameters by randomly sampling their values from their respective ranges: \(K_{0}=240\pm 20\) MeV, \(-300\leq J_{0}\leq 400\) MeV, \(L=58.7\pm 28.1\) MeV, \(-400\leq K_{sym}\leq 100\) MeV, \(-200\leq J_{sym}\leq 800\) MeV. Recently, the latest results of the PREX collaboration suggested a rather high value of \(L\), with an upper limit at 143 MeV [150]. Examining the effect of higher \(L\) values is left to following works. The resultant EOSs \(P(\varepsilon)\) are checked regarding whether they satisfy (i) the microscopic stability condition, i.e., \(\frac{dP}{d\varepsilon}\geq 0\), and (ii) the causality condition, i.e., the speed of sound \(c_{s}\equiv c\sqrt{\frac{dP}{d\varepsilon}}\leq c\).
In addition, the resultant NS models must be able to sustain a maximal mass of at least 2.14 M\({}_{\odot}\)[84]. These constraints restrict the values of \(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\) and \(J_{sym}\), and the final EOS samples.

Figure 3: **(Left window)** Range of the EOS incorporating all constraints: total pressure \(P\) as a function of the reduced density \(\rho/\rho_{0}\). **(Right window)** Range of the mass–radius relation: corresponding \(M-R\) sequences of the NS models computed with the EOSs considered in this study. The mass ranges of the three heaviest pulsars known at present [82; 83; 84] are indicated in the right window.

Figure 4: Using deep neural networks to extract the EOS of dense neutron-rich matter and nuclear matter properties from neutron star mass-radius measurements. The EOS DNN takes as input a set of points from a genuine \(M-R\) curve, and returns as output a set of points representing the EOS, \(P(\varepsilon)\). Subsequently, these are fed into the NuPRO DNN, which outputs selected nuclear matter properties (\(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\)). See text for details.

To simulate NS observational data, from a given genuine \(M-R\) sequence we randomly choose 50 points in the range of 1 M\({}_{\odot}\) to the M\({}_{max}\) supported by the given EOS [167]. Each input sample is then an array of dimension \(2\times 50\) consisting of 50 pairs of (\(M\), \(R\)) values. The values of \(M\) and \(R\) are scaled by dividing them by 3 and 20, respectively, to ensure that the input data are in the \((0,1)\) range. Similarly, each output sample is an array of dimension \(2\times 50\) consisting of 50 pairs of estimated (\(P\), \(\varepsilon\)) values, representing the EOS in the density range from \(\sim 0.4\rho_{0}\) to \(5\rho_{0}\). In this respect, the DNN maps an input \(M(R)\) sequence to an output EOS, \(P(\varepsilon)\). In supervised learning, the data are divided into training, validation, and testing data sets. The training data set is used by the DNN to learn from, the validation data are used to verify whether the network is learning correctly, and the testing data are used to assess the performance of the trained model. Here, the training dataset consists of 120,000 independent \(M(R)\) sequences, representing the DNN inputs, and 120,000 matching EOS samples, \(P(\varepsilon)\), representing the DNN outputs. From each \(M(R)\) sequence we further draw 50 ensembles, each containing 50 randomly selected (\(M\), \(R\)) pairs. In this way, each EOS sample in the training data set is represented by 50 different random ensembles drawn from the same genuine \(M(R)\) curve. The final training data set therefore consists of \(6\times 10^{6}\) samples. Similarly, the final validation data set consists of 250,000 samples, where each of 5,000 independent output EOS samples is represented by 50 different random ensembles drawn from the same \(M(R)\) sequence. Finally, the testing data set consists of 5,000 unique input and output samples, not used in the training and validation of the DNN.
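The construction of one input sample can be sketched as follows (names are ours; the \(M(R)\) curve below is a synthetic stand-in for a TOV-generated sequence):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_input_sample(masses, radii, n_points=50):
    """Build one EOS-DNN input from a genuine M(R) sequence: randomly pick
    n_points (M, R) pairs with M >= 1 Msun, then scale M by 3 and R by 20
    so that both features lie in the (0, 1) range, as described above."""
    keep = masses >= 1.0
    m, r = masses[keep], radii[keep]
    idx = rng.choice(m.size, size=n_points, replace=False)
    return np.stack([m[idx] / 3.0, r[idx] / 20.0])  # shape (2, n_points)

# Synthetic M(R) curve standing in for a TOV-generated sequence.
masses = np.linspace(0.8, 2.2, 300)   # Msun
radii = 12.0 + 0.5 * np.sin(masses)   # km, purely illustrative
print(make_input_sample(masses, radii).shape)  # (2, 50)
```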
#### II.3.2 Nuclear Matter Properties Network (NuPRO DNN)

Once the \(\beta\)-stable matter EOS becomes available, we can proceed with the extraction of chosen nuclear matter properties. To achieve this goal, we have trained another DNN, which we refer to as the NuPRO DNN. The input to the DNN consists of EOS samples represented as \(P(\rho)\), i.e., sets of 50 equally spaced points within the interval \(\rho=[0.08,0.8]~{}fm^{-3}\), where the input data are converted to decimal logarithm values. The output corresponds to estimates of selected nuclear matter properties for each input EOS sample. Our aim is to learn the mapping \(\mathbf{y(x)}\) with \(\mathbf{x_{i}}=[P_{\beta}(\rho_{1}),P_{\beta}(\rho_{2}),...,P_{\beta}(\rho_{50})]\) and \(\mathbf{y_{i}}=[K_{0},J_{0},L,K_{sym},J_{sym}]\) being the corresponding set of parameters. We utilized a training dataset comprising 120,000 samples of the equation of state \(P(\rho)\), each with an accompanying set of parameters (\(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\)) matching the specific EOS realization. Additionally, we constructed separate validation and testing datasets, each consisting of 5,000 data samples. The NuPRO DNN model that we have implemented involves a feedforward structure with five hidden layers, each being dense with dimensions 200, 200, 200, 100, and 50, respectively. We have utilized the _ReLU_ activation function for each of these layers. As with the EOS DNN, the architecture was optimized through an iterative process involving multiple experiments and tuning of the hyper-parameters. The neural network's input layer has a dimension of 50, corresponding to the 50 uniformly distributed data points representing the equation of state, \(P(\rho)\), while the output layer has a dimension of 5 and returns the estimated nuclear matter parameters. A summary of the network architecture is provided in Table 2.

\begin{table} \begin{tabular}{c c c c} & Layer & Activation & Size \\ \hline & Input & – & 50 \\ 1 & Dense & ReLU & 200 \\ 2 & Dense & ReLU & 200 \\ 3 & Dense & ReLU & 200 \\ 4 & Dense & ReLU & 100 \\ 5 & Dense & ReLU & 50 \\ & Output & – & 5 \\ \hline \end{tabular} \end{table} Table 2: The NuPRO DNN architecture comprises an input layer with 50 dimensions, corresponding to the 50 equally spaced points of the EOS \(P(\rho)\). This is followed by five dense fully connected layers of varying dimensions, culminating in an output layer that returns the estimated nuclear matter parameters \(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\). The total number of trainable parameters in this DNN model is 118,555. Further information about this architecture can be found in the text.

We used Keras and TensorFlow to develop and train our neural network. Similarly to before, we employed stochastic gradient descent with an adaptive learning rate through the ADAM method [152], which was further modified with the AMSgrad technique [153]. For training the DNN, we selected a batch size of 1000 and initialized the learning rate at 0.001. Additionally, we set a limit of 5000 epochs for each training session, or until the validation error reached a minimum. Using a checkpoint callback, we selected the model with the lowest loss value on the validation dataset. The cost function used for this task is the mean-squared error (MSE), defined as the sum of the squared differences between the predicted values of the DNN model, \(\hat{y}_{i}\), and the actual or "true" values, \(y_{i}\), divided by the number of samples, \(n\):

\[MSE=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}. \tag{8}\]
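The NuPRO DNN described above maps directly onto a few lines of Keras. A sketch under the stated hyper-parameters (the exact trainable-parameter count of Table 2 depends on details not spelled out in the text):

```python
import tensorflow as tf

def build_nupro_dnn():
    """NuPRO DNN per Table 2: 50 -> 200 -> 200 -> 200 -> 100 -> 50 -> 5,
    dense layers with ReLU, Adam with AMSgrad, and an MSE loss."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(200, activation="relu", input_shape=(50,)),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(5),  # K0, J0, L, Ksym, Jsym
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001,
                                                     amsgrad=True),
                  loss="mse")
    return model

model = build_nupro_dnn()
# ~1.2e5 trainable parameters by this count (Table 2 quotes 118,555).
# Training would then follow, e.g.:
# model.fit(x_train, y_train, batch_size=1000, epochs=5000,
#           validation_data=(x_val, y_val),
#           callbacks=[tf.keras.callbacks.ModelCheckpoint(
#               "nupro.keras", save_best_only=True)])
```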
## III Results

### Extracting the EOS

We first examine the ability of the DNN to reconstruct the EOS, \(P(\varepsilon)\), from a set of mass and radius \(M-R\) measurements such as may result from electromagnetic observations of neutron stars, for instance those from the NICER mission. In particular, we apply the trained EOS DNN, described in the previous section, to a test dataset containing \(\sim\)5000 simulated \(M(R)\) sequences, and compare the corresponding estimated output EOS with the exact EOS for each sample. In Figure 5, we show results for five representative examples from the test dataset. It is seen that the EOS (broken colored lines) for each input \(M-R\) sequence matches almost exactly the "true" EOS (solid black lines) over the entire density range considered here. For the purpose of presentation, the EOS is shown in the form \(P(\rho)\). The results are very similar for the rest of the test data samples. Quantitatively, the mean absolute error over the whole test dataset is 0.5 MeV fm\({}^{-3}\), with a standard deviation of 1.3 MeV fm\({}^{-3}\). Choosing different ensembles of randomly selected points from the genuine \(M(R)\) curves does not appreciably alter the accuracy with which the EOS is estimated. Realistic NS observations inevitably carry uncertainties, which result in corresponding uncertainties in the estimated EOS. To investigate the effect of the observational uncertainties on extracting the EOS via our DL approach, we prepared a test dataset assuming that the NS mass and radius measurements are subject to a measurement error. After randomly selecting \(N\) points (\(M_{i}\), \(R_{i}\)) from a genuine \(M(R)\) curve, with \(\{i=1,2,...,N;N=50\}\), we draw the actual \(M\) and \(R\) values from normal distributions, \(\mathcal{N}(\mu_{M},\sigma_{M})\) and \(\mathcal{N}(\mu_{R},\sigma_{R})\) respectively, where \(\mu_{M}=M_{i}\) and \(\mu_{R}=R_{i}\). Specifically, we examined the effect of smaller and larger observational errors by choosing \(\sigma_{M}=0.02\) M\({}_{\odot}\) and \(\sigma_{R}=0.1\) km to simulate smaller uncertainties, and \(\sigma_{M}=0.1\) M\({}_{\odot}\) and \(\sigma_{R}=1\) km to model larger uncertainties. To quantify the effect of observational uncertainties for each sample in the test dataset, for each case we draw 100 ensembles from the respective normal distributions and calculate the mean absolute errors. In Figure 6, we illustrate the effect of measurement errors for a representative example from the test dataset. It is seen that for smaller observational uncertainties (\(\sigma_{M}=0.02\) M\({}_{\odot}\), \(\sigma_{R}=0.1\) km) the estimated EOS \(P(\rho)\) (broken green line) matches almost exactly the "true" EOS (solid black line). The greenish shaded band represents the corresponding mean absolute errors.
Figure 5: Example input \(M(R)\) sequences (**left window**) and corresponding estimated \(P(\rho)\) (**right window**). The input samples consist of 50 randomly selected points, denoted by the "\(o\)" symbols, from the genuine \(M(R)\) curves, denoted by the solid lines, in the range of 1–\(M_{max}\) M\({}_{\odot}\). The output data samples consist of 50 \(P(\rho)\) points in the range of \(\sim\)0.4–5 \(\rho_{0}\). Broken colored lines in the right window denote the estimated EOS and the solid lines represent the "true" EOS. The same curve colors in both windows denote pairs of input \(M(R)\) sequences and corresponding output EOS samples.

### Application to realistic EOSs

To further test the performance of the trained EOS DNN model, we apply it to several _realistic_ EOSs from the CompOSE repository [154] ([https://compose.obspm.fr](https://compose.obspm.fr)). CompOSE is an online tool that provides data tables containing state-of-the-art EOSs that can be readily used for various applications in astrophysics and nuclear physics. For the purpose of our analysis we chose several EOSs within the range of the parameter space of the DNN training dataset: APR [134], BL [155], QMC-RMF2 [156], QMC-RMF3 [156], SK255 [157], and SK272 [157]. We generated the required input data following the procedure outlined in Section II.3.1. For each EOS (Figure 7, left window), we calculated the \(M-R\) relation (Figure 7, right window), and then drew \(10^{5}\) random ensembles of 50 \((M_{i},R_{i})\) points to determine the mean and MAE of the reconstructed EOSs. The results of this test are shown in Figure 8, and demonstrate the ability of the EOS DNN to reconstruct _realistic_ EOSs from \(M(R)\) data. In all frames, the solid blue lines denote the ground-truth EOS and the red dots represent the mean of the DNN predictions. The error bars represent the MAE with which the EOS is reconstructed in each case due to the random drawing of the \(M(R)\) data; in order to clearly separate the error contribution of this effect alone, they do not include the effect of assumed observational uncertainties.
Among the realistic EOSs we have considered, it is seen that the predicted \(P(\rho)\) relation matches almost exactly the ground-truth values for the APR, BL, QMC-RMF2 and QMC-RMF3 models, while the SK255 and SK272 EOSs are reconstructed less accurately, but still within the reconstruction errors.

Figure 6: Example EOS (\(P(\rho)\)) predictions of the trained DNN model illustrating the effect of observational uncertainties included in the test dataset (simulated \(M(R)\) measurements with errors). The broken green and red lines represent the mean of the extracted EOS for smaller and larger measurement errors respectively, while the greenish and reddish shaded bands denote the corresponding mean absolute errors. The magnitude of the errors introduced in the input \(M(R)\) measurements is controlled by the values of \(\sigma_{M}\) and \(\sigma_{R}\) defining the normal distributions from which \(M\) and \(R\) are drawn. Specifically, to model smaller uncertainties, we choose \(\sigma_{M}=0.02\) M\({}_{\odot}\) and \(\sigma_{R}=0.1\) km. Similarly, to model larger uncertainties, we choose \(\sigma_{M}=0.1\) M\({}_{\odot}\) and \(\sigma_{R}=1\) km. The solid black line denotes the ground-truth EOS. See text for details.

Figure 7: Pressure as a function of density, \(P(\rho)\) **(left window)**, and mass-radius relation, \(M(R)\) **(right window)**, for the realistic EOSs considered in this study. The shaded regions denote the range of the parameter space of the DNN training dataset.

Here we briefly recall the main features of the realistic EOSs used in our analysis. The APR EOS is calculated using variational approaches with the A18 + delta v + UIX* interaction [134]. The BL EOS is obtained using realistic two-body and three-body nuclear interactions derived in the framework of \(\chi\)EFT and including the \(\Delta(1232)\) isobar intermediate state [155]. This EOS has been derived using the Brueckner-Bethe-Goldstone quantum many-body theory in the Brueckner-Hartree-Fock (BHF) approximation with the continuous choice for the auxiliary single particle potential. The QMC-RMF2 and QMC-RMF3 EOSs are computed using a relativistic mean-field (RMF) theory constrained by \(\chi\)EFT calculations of pure neutron matter (from 0.08 fm\({}^{-3}\) to 0.32 fm\({}^{-3}\)) and by properties of isospin-symmetric nuclear matter around \(\rho_{0}\) [156]. The SK255 and SK272 EOSs are unified models by Gulminelli and Raduta [157] computed with the SK255 and SK272 effective interactions [158]. The APR and BL EOSs are microscopic, while the rest of the EOSs are based on phenomenological models. These results clearly demonstrate that a DNN, trained on a relatively simple dataset generated with the EOS metamodel discussed in Section II.1, is able to generalize the task of reconstructing the EOS and accurately predict realistic EOSs.

### Deducing Nuclear Matter Properties

#### Performance on the Test Dataset

In the following analysis, we examine the effectiveness of the trained NuPRO DNN model in extracting particular nuclear matter properties from the EOS of \(\beta\)-equilibrium NS matter. After the model is trained and the optimal architecture is determined (as shown in Table 2), we assess its final performance by evaluating it on a test dataset composed of 5,000 samples of \(P(\rho)\) and corresponding sets of selected nuclear matter properties matching each EOS sample.
Figure 8: Reconstructed EOSs from \(M(R)\) data for several realistic EOS models from the CompOSE repository [154].

To evaluate the model's performance, we compute the standard deviation, \(\sigma_{\epsilon_{i}}\), of the residuals, \(\epsilon_{i}=Q_{i}^{DNN}-Q_{i}\), for each of the nuclear matter parameters: \(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\). Here \(Q_{i}\) is one of the five selected nuclear matter properties. By examining the standard deviation, we can determine the degree of accuracy and precision of the model's predictions for each of these parameters.

In Figure 9, we present the results of our evaluation of the performance of the trained NuPRO DNN model in extracting selected nuclear matter properties from the EOS of \(\beta\)-equilibrium NS matter. We provide scatter plots of the distribution of the residuals for each of the nuclear matter parameters, along with the numerical values for the mean \(\mu_{\epsilon}\) and the standard deviation \(\sigma_{\epsilon}\). These results clearly demonstrate that the trained NuPRO DNN model achieved a high degree of accuracy in extracting the nuclear matter parameters. In particular, we observed that the lower-order terms were extracted with higher accuracy, which can be attributed to their smaller range of possible values compared with the higher-order terms, and the correspondingly better interpolation precision of the model. These results are also summarized in Table 3.

Figure 9: Residuals of the model for each of the selected nuclear matter parameters along with the numerical values for the mean, \(\mu\), and standard deviation, \(\sigma\). As observed, for the lower-order parameters (\(K_{0}\) and \(L\)), the mean values of the residuals are less than 0.5 MeV (with \(|\mu|\approx 0.2\) MeV for both cases). Additionally, it can be seen that the standard deviation is comparatively smaller for the lower-order parameters. This can be attributed to the fact that the range of possible values for the lower-order parameters is smaller, resulting in better interpolation precision. On the other hand, the higher-order parameters exhibit larger values of \(\sigma\), owing to the larger range of possible values. For further information, refer to the text.

\begin{table} \begin{tabular}{c c c} \(Q_{i}\) & \(\mu_{\epsilon_{i}}\) & \(\sigma_{\epsilon_{i}}\) \\ \hline \(K_{0}\) & -0.22 & 2.77 \\ \(J_{0}\) & 0.79 & 4.09 \\ \(L\) & 0.23 & 0.49 \\ \(K_{sym}\) & 0.67 & 3.37 \\ \(J_{sym}\) & 0.57 & 5.34 \\ \hline \end{tabular} \end{table} Table 3: Mean, \(\mu_{\epsilon_{i}}\), and standard deviation, \(\sigma_{\epsilon_{i}}\), of the residuals, \(\epsilon_{i}=Q_{i}^{DNN}-Q_{i}\), of the trained NuPRO DNN model, determined on the test dataset. All values are given in MeV. See text for details.
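The statistics reported in Figure 9 and Table 3 reduce to the following computation; a minimal sketch, assuming `y_pred` and `y_true` are arrays of shape (5000, 5) holding the predicted and true parameter sets in MeV.

```python
import numpy as np

PARAMS = ["K0", "J0", "L", "Ksym", "Jsym"]

def residual_stats(y_pred, y_true):
    """Mean and standard deviation of the residuals eps_i = Q_i^DNN - Q_i,
    per nuclear matter parameter."""
    eps = y_pred - y_true
    return {p: (float(eps[:, i].mean()), float(eps[:, i].std()))
            for i, p in enumerate(PARAMS)}
```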
#### Reconstructing \(E_{sym}(\rho)\)

Next, we focus on evaluating the ability of the trained NuPRO DNN model to accurately extract the nuclear symmetry energy parameters, namely \(L\), \(K_{sym}\), and \(J_{sym}\), which are used to reconstruct \(E_{sym}(\rho)\). The nuclear symmetry energy is a crucial yet uncertain component of the high-density equation of state, and it is imperative to explore whether DL techniques could offer a viable means of deducing it from astrophysical observations of neutron stars.

In our previous work [121], we pioneered the use of DL methods for extracting the nuclear symmetry energy from a set of neutron star observations in the \(M-R\) or \(M-\Lambda\) planes. In that "proof-of-concept" study [121], our main focus was on the extraction of \(E_{sym}(\rho)\), and thus we generated our datasets by holding all parameters in Equation (2), representing the energy of symmetric nuclear matter, constant and varying only the nuclear symmetry energy parameters \(L\) and \(K_{sym}\) in Equation (3). We demonstrated that, under the given model assumptions, DNNs could extract \(E_{sym}(\rho)\) effectively and accurately, directly from astronomical observations of neutron stars.

In our present investigation, we have advanced our deep learning methodology for extracting the nuclear symmetry energy \(E_{sym}(\rho)\) by significantly enlarging the parameter space of our neural network training dataset. To achieve this, we have kept only the parameters \(E_{0}\) and \(S_{0}\) fixed at their most probable values while varying the other parameters, namely \(K_{0}\), \(J_{0}\), \(L\), \(K_{sym}\), and \(J_{sym}\), in Equations (2) and (3), to generate numerous samples of \(P(\rho)\). As depicted in Figure 7, the augmented parameter space of the neural network training datasets also encompasses the predictions of modern _realistic_ equations of state, which satisfy the constraints from recent mass-radius observations of neutron stars.

In Figure 10 we show results for five selected instances from the test dataset. It can be seen that the reconstructed nuclear symmetry energy (shown as broken colored lines) for each input \(P(\rho)\) sample agrees nearly perfectly with the true \(E_{sym}(\rho)\) (depicted by solid black lines). Similar outcomes are obtained for the remaining test data samples. We emphasize that the reconstructed nuclear symmetry energy is deduced by substituting the estimated values of the nuclear symmetry energy parameters, namely \(L\), \(K_{sym}\), and \(J_{sym}\), predicted by the NuPRO DNN, into Equation (3). Moreover, we assume that the \(\beta\)-equilibrium NS EOS, \(P(\rho)\), is already known with a certain level of precision, for instance extracted from mass-radius observations of neutron stars using the trained EOS DNN model.
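For reference, the reconstruction step just described, substituting the predicted \(L\), \(K_{sym}\), and \(J_{sym}\) into Equation (3), can be sketched as below. Equation (3) itself is defined earlier in the paper and is not reproduced in this excerpt, so the expansion form and the fixed \(\rho_{0}\) and \(S_{0}\) values used here are assumptions for illustration only.

```python
import numpy as np

RHO0 = 0.16  # fm^-3; assumed saturation density
S0 = 31.7    # MeV; assumed fixed value of E_sym at saturation (see text)

def esym(rho, L, Ksym, Jsym, s0=S0, rho0=RHO0):
    """E_sym(rho) from the standard expansion assumed to underlie Eq. (3):
    E_sym = S0 + L*x + (Ksym/2)*x**2 + (Jsym/6)*x**3, x = (rho - rho0)/(3*rho0)."""
    x = (np.asarray(rho, dtype=float) - rho0) / (3.0 * rho0)
    return s0 + L * x + 0.5 * Ksym * x**2 + Jsym * x**3 / 6.0
```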
#### Model Uncertainty

Realistic observations of neutron stars unavoidably harbor uncertainties, which in turn give rise to uncertainties in the inferred EOS of \(\beta\)-stable NS matter, and consequently in the extracted nuclear matter parameters and the symmetry energy \(E_{sym}(\rho)\). To investigate the impact of errors in the reconstruction of the EOS on the inferred nuclear matter parameters and symmetry energy, we have incorporated "noise" into the \(P(\rho)\) data samples that portray the EOS, and evaluated the nuclear matter parameters and \(E_{sym}(\rho)\). We have conducted experiments varying the level of noise and investigated the resulting effect on the accuracy of the extracted nuclear matter parameters and symmetry energy. In Figure 11, we illustrate the effect of introducing a 20% uncertainty to the input \(P(\rho)\) data samples on the reconstructed \(E_{sym}(\rho)\). In the left window, we show the exact EOS (represented by the solid blue line) and an EOS data sample containing 20% uncertainty (indicated by the red broken line). The reddish colored band denotes the uncertainty of \(P(\rho)\).

In order to assess the uncertainty in determining the nuclear matter parameters and the symmetry energy, we generate \(10^{5}\) random sets of 50 equally spaced points in \(\rho\), \(P(\rho_{i})\), where \(i=1,2,...,50\), lying within the uncertainty band. We subsequently compute the mean and standard deviation for each of the nuclear matter parameters for each set. Thereafter, with every estimated set of nuclear matter parameters, we determine \(E_{sym}(\rho)\) through Equation (3).

Figure 10: Reconstructed \(E_{sym}(\rho)\) from the \(\beta\)-equilibrium NS EOS, \(P(\rho)\), for several representative samples from our test dataset. The black solid curves represent the ground-truth symmetry energy and the broken colored lines denote the DNN predictions. The predicted symmetry energy is obtained through Equation (3) with the parameters \(L\), \(K_{sym}\) and \(J_{sym}\) estimated by the trained NuPRO DNN model.

The reconstructed symmetry energy is illustrated in the right window of Figure 11. The reddish colored band represents the mean absolute error (MAE) in deducing \(E_{sym}(\rho)\), the solid line depicts the exact symmetry energy, and the red broken line indicates the mean symmetry energy. As anticipated, since the inferred symmetry energy is reconstructed via Equation (3), it closely follows the exact \(E_{sym}(\rho)\) in a qualitative manner. Quantitatively, the estimated values begin to deviate moderately from the exact ones at approximately \(\rho\geq 2\rho_{0}\); however, they remain within the range specified by the mean absolute errors of the model for the assumed uncertainty of the input EOS. The mean, standard deviation, and MAE for each of the nuclear matter parameters for the specific example shown in Figure 11 are presented in Table 4. The results are highly analogous for the remainder of the data samples from our test dataset.

It is important to note that the uncertainties presented in our analysis are solely introduced to the test dataset, and the trained DNN model does not possess any prior knowledge of uncertainties in the training data. Despite this, our findings demonstrate that the neural network is capable of accurately extracting the nuclear matter parameters and reconstructing \(E_{sym}(\rho)\), even when faced with moderately noisy input EOS data. To further improve the performance of the DNN, it may be beneficial to introduce uncertainties into the training dataset as well. Although measurement uncertainties are not the primary focus of the current study, systematic investigations of their impact, including the effect of introducing "noise" to the training data, are left for future works and may provide further insight into the behavior of the DNN and help to enhance its performance in future applications.

Figure 11: **(Left window)** EOS data sample, \(P(\rho)\), with added 20% uncertainty. The ground-truth equation of state, \(P(\rho)\), is represented by the blue solid line, while the red dashed line shows an instance of data with random noise added within the range of uncertainty. The uncertainty of the input equation of state is illustrated by the reddish band. **(Right window)** Estimated nuclear symmetry energy, \(E_{sym}(\rho)\).
The precise value of the nuclear symmetry energy is indicated by the blue solid line, while the red dashed line represents the average value of the derived \(E_{sym}(\rho)\), and the reddish colored band indicates the mean absolute error (MAE). The calculation of \(E_{sym}(\rho)\) is done using Equation (3) with the nuclear matter parameters \(L\), \(K_{sym}\), and \(J_{sym}\) extracted through the NuPRO DNN. Further information can be found in the text.

\begin{table} \begin{tabular}{c c c c c c} \(Q_{i}\) & Exact & Predicted & \(\mu_{i}\) & \(\sigma_{i}\) & MAE \\ \hline \(K_{0}\) & 259.19 & 256.49 & 292.18 & 55.66 & 50.50 \\ \(J_{0}\) & -78.38 & -73.22 & -69.91 & 69.05 & 56.66 \\ \(L\) & 58.90 & 58.99 & 71.38 & 12.79 & 13.35 \\ \(K_{sym}\) & -225.75 & -223.30 & -227.52 & 51.41 & 42.23 \\ \(J_{sym}\) & 520.61 & 516.33 & 531.51 & 226.82 & 179.65 \\ \end{tabular} \end{table} Table 4: Values for the exact, predicted, mean \(\mu_{i}\), standard deviation \(\sigma_{i}\), and mean absolute error (MAE) of the nuclear matter parameters \(Q_{i}=[K_{0},J_{0},L,K_{sym},J_{sym}]\) for the example illustrated in Figure 11. All values are given in MeV. Please see the text for further details.

## IV Application to Realistic Nuclear Models

So far we have demonstrated that the trained NuPRO DNN model performs with high accuracy on the test dataset. Having established this, we now proceed to applying the model to a set of _realistic_ EOSs, which were previously discussed in Section III.2. However, before we discuss the results, it is important to highlight the complexity of the inference task and the limitations of our model assumptions. Firstly, the DNN model was trained on a dataset that assumes the EOS of nuclear matter depends on the matter isospin asymmetry via a quadratic dependence only, as specified in Equation (1). Secondly, for the hadronic component of the EOS, we used parameterizations given by Equations (2) for symmetric nuclear matter and (3) for the nuclear symmetry energy \(E_{sym}(\rho)\) in the density range from approximately 0.04 fm\({}^{-3}\) to 0.8 fm\({}^{-3}\). It is important to note that beyond the saturation density \(\rho_{0}\), these expressions should be regarded solely as parameterizations, and not as Taylor expansions. Thirdly, we made the assumption that neutron star matter is composed of nucleons, electrons, and muons in \(\beta\)-equilibrium. This assumption is made to simplify the problem, and it may not accurately represent the composition of matter in neutron stars, which may include other exotic particles, such as hyperons, or quark matter.

For the purpose of our analysis, in Figure 12, we present the residuals of the nuclear matter parameters \(K_{0}\), \(L\), and \(K_{sym}\) of the trained NuPRO DNN model for each _realistic_ EOS considered in this study. These residuals correspond to the differences between the predicted values of the parameters and their true values obtained from the CompOSE repository. We note that \(K_{sym}\) values were not available for the APR and BL EOSs, and hence, we do not show the residuals for these models. The standard deviations, \(\sigma_{i}\), of the residuals were also calculated to assess the uncertainty associated with the estimation of the nuclear matter parameters from a real \(\beta\)-equilibrium NS EOS. The standard deviations for \(K_{0}\), \(L\), and \(K_{sym}\) are 30.18 MeV, 11.22 MeV, and 19.09 MeV, respectively. These values are smaller than the reported uncertainties in the literature [159].
It is important to emphasize that our model assumptions and limitations should be taken into account when interpreting these results. Therefore, the applicability of our results to other types of matter, such as hyperonic matter or quark matter, remains an open question. Nonetheless, our results demonstrate the potential of using DNN models to extract nuclear matter parameters from astrophysical observations of neutron stars.

Precise measurements of the masses and radii of a sufficient number of neutron stars would ultimately allow for the accurate determination of the EOS of \(\beta\)-stable matter by converting the \(M(R)\) curve, via various methods, to the underlying EOS [119]. However, extracting the nuclear matter properties from the \(\beta\)-equilibrium EOS poses a further challenge in itself, since the interior composition of a neutron star is unknown, and even the determination of the proton fraction is highly challenging [119]. For instance, in a Bayesian approach presented in Ref. [160], the authors were unable to deduce the nuclear matter properties from the \(\beta\)-stable matter EOS. Similarly, the authors of Ref. [161] demonstrated the existence of multiple solutions for the determination of the NS interior composition from the \(\beta\)-stable matter EOS, owing to the high level of degeneracy. Furthermore, the determination of the nuclear symmetry energy from the \(\beta\)-equilibrium EOS requires an accurate knowledge of the EOS of symmetric nuclear matter [162], which is necessary for determining the proton fraction in the NS interior. These considerations underscore the importance and potential of the DL methods presented in this work, as they provide a model-independent avenue for deducing the EOS of \(\beta\)-stable matter, and in turn, the nuclear matter parameters and \(E_{sym}(\rho)\).

Figure 12: NuPRO DNN model residuals for \(K_{0}\) **(left window)**, \(L\) **(middle window)**, and \(K_{sym}\) **(right window)** for the EOSs considered in our analysis. Note that \(K_{sym}\) values are not available for the APR and BL EOSs and therefore residuals for these models are not shown in the figure. See text for details.

## V Summary and Outlook

In this study, we have demonstrated the feasibility of using a DL approach to directly extract the EOS of dense neutron-rich matter from observational data of neutron stars. Through analysis of simulated mass and radius measurements of neutron stars, we have shown that deep neural networks can accurately extract the EOS of \(\beta\)-stable NS matter. Furthermore, we have illustrated the ability of a trained DNN model to deduce selected nuclear matter properties, including \(E_{sym}(\rho)\). Most importantly, we have demonstrated that our DL approach can accurately extract _realistic_ EOSs and nuclear matter properties from NS observational data. These results represent an important step towards the ultimate goal of determining the EOS of dense nuclear matter, and highlight the potential of DL-based techniques in the era of multi-messenger astrophysics, where a growing volume of NS observational data is rapidly becoming available. In the near future, we plan to systematically examine the uncertainties associated with the NS observational data and the DNN model, and their impact on the model's performance. By understanding the effect of these uncertainties, we aim to explore potential approaches to further enhance the model's reliability and performance.
In particular, in order to apply our approach in practice, it is essential to consider the empirical errors and uncertainties and incorporate them consistently into the formalism. A possible strategy to achieve this is to recast the regression problem of extracting the EOS and nuclear matter properties into a probabilistic framework. Specifically, in subsequent research, we intend to use Bayesian neural networks to perform the inference task. In this paradigm, instead of obtaining deterministic values, the network weights are characterized by probability distributions obtained by placing a prior over them [163]. In future studies, we also plan to apply our DL approach to real observational data of neutron stars, which would enable the extraction of a model-independent EOS, nuclear matter properties, and symmetry energy.

Finally, we also plan to investigate likelihood-free inference methods using normalizing flows [164]. These techniques are able to model complex posteriors by applying nonlinear transformations to a simple posterior shape, such as a multivariate Gaussian, without evaluating the likelihood directly. This approach has already generated considerable interest in the scientific community, and it has been successfully applied in multiple research domains. For instance, a recent study [165] applied a likelihood-free inference method using normalizing flows to rapidly estimate the parameters of eight GW binary black hole (BBH) events in the first LIGO Gravitational Wave Transient Catalog, GWTC-1 [166]. The next generation of space telescopes and GW detectors will be sensitive enough to detect and observe compact binary collisions and neutron stars throughout the history of the universe, identifying over a million events per year, including thousands of BNS and NSBH detections; it will therefore be crucial to process the incoming observational data quickly and accurately. In this context, it is important to emphasize that traditional Bayesian inference methods are not scalable to the study of thousands of BNS and NSBH events per year, and modern normalizing flow models, and similar approaches, could play a critical role in accurately and promptly extracting important NS parameters. In the end, with the increasing number of observed events involving neutron stars, these contemporary data-driven techniques will enable us to rapidly process the growing volume of neutron star observational data and accurately determine the equation of state of dense nuclear matter and the nuclear symmetry energy.

**Data Availability:** Codes and data from this analysis are available upon request from the author.

## Acknowledgements

The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
2308.15423
Defining and Constraining the Electrical Cardinality of Multiport Converter Mission Profiles
Mission profiles describe a representative set of conditions that a power converter is designed to operate under, and are known to be more complicated for multiport converter applications due to a wider range of combinations of powers that can be transferred between ports. This paper studies the properties of mission profiles derived from operational optimization of multiport converters in distribution system applications (e.g., soft open points). The electrical cardinality of the mission profile is introduced as a useful, naturally varying property of multiport mission profiles derived from optimal operation within distribution system, with the cardinality equal to the number of non-zero power transfers at a given time. Furthermore, it is shown that the cardinality can be conveniently constrained within the framework of conventional mixed-integer conic optimization problems, yielding a family of mission profiles that can enable converter designers to simplify multiport designs via converter reconfiguration. Results demonstrate the potential to reduce the cardinality in a four-terminal multiport converter from four to two whilst still effectively supporting congestion management and achieving 91.7% of the loss reduction capabilities of a conventional design.
Matthew Deakin
2023-08-29T16:29:19Z
http://arxiv.org/abs/2308.15423v1
# Defining and Constraining the Electrical Cardinality of Multiport Converter Mission Profiles

###### Abstract

Mission profiles describe a representative set of conditions that a power converter is designed to operate under, and are known to be more complicated for multiport converter applications due to a wider range of combinations of powers that can be transferred between ports. This paper studies the properties of mission profiles derived from operational optimization of multiport converters in distribution system applications (e.g., soft open points). The electrical cardinality of the mission profile is introduced as a useful, naturally varying property of multiport mission profiles derived from optimal operation within distribution systems, with the cardinality equal to the number of non-zero power transfers at a given time. Furthermore, it is shown that the cardinality can be conveniently constrained within the framework of conventional mixed-integer conic optimization problems, yielding a family of mission profiles that can enable converter designers to simplify multiport designs via converter reconfiguration. Results demonstrate the potential to reduce the cardinality in a four-terminal multiport converter from four to two whilst still effectively supporting congestion management and achieving 91.7% of the loss reduction capabilities of a conventional design.

Mission Profile, Multiplexed Soft Open Point, Multiport Converter, Power Converter Reconfiguration, Hybrid AC/DC distribution systems.

## I Introduction

Applications of power electronics-based power converters in power systems today are rapidly growing, with industry requiring new methods of integrating electric transportation and renewable generation into the ac grid. Multiport converters are an important class of grid-connected power converters, allowing power to be efficiently transferred between different feeders in a power distribution system. For example, a three-terminal multiport converter can act as a soft open point (SOP), connecting two ac distribution feeders through a dc link with an integrated energy storage system that can provide temporal flexibility.

A mission profile is a representative time series collecting the parameters that affect the key performance indices for a given power converter (e.g., reliability, lifetime or efficiency). This will typically consist of at least the thermal or electrical stresses that the converter will be under, but can also include other relevant environmental parameters such as humidity or vibration. Accurate mission profile simulation can be complex and time consuming, and so there has been significant research on this topic. For example, recent works focus on fast emulation of thermal and electrical aspects of mission profiles including realistic switching profiles [1], or on considering comprehensive sets of parameters, such as switching frequency or the direction of real power transfer, to increase accuracy [2].

The electric power distribution community is also concerned with the development of mission profiles, but typically from the point of view of deriving optimal power transfers for a given converter design and optimization approach, rather than simulating the impacts on long-term converter reliability or state of health.
For example, a three-terminal ac/dc/ac multiport, operated as an integrated SOP-energy storage device, is scheduled to provide loss reduction, price arbitrage and robust congestion management in [3], with the main focus of the work exploring impacts of uncertainty on the feeder voltages. Similarly, a robust strategy is developed for a five-terminal SOP in [4], with the main contribution being a two-stage optimization combining both semidefinite programming and droop control, again considered over a representative day.

Of those works that do consider properties of mission profiles that result from distribution system optimization, the metrics considered typically do not clearly link to potential impacts on converter design. For example, average and time-varying utilization of power transferred through individual converters are presented to summarise performance in [5] for a two-terminal phase changing SOP. Similarly, in [6], the utilization of individual power converters' mission profiles is studied as a function of variable converter sizes. The variability of the electrical parameters of the mission profile is shown to influence efficiency in [7], which proposes redeployment of modular converters within a multiport converter. The dimension and volume of a multiport's capability chart is studied in [8], where it is recognized that multiport converters with degenerate capability charts (i.e., with a capability chart hypervolume of zero) can still yield good performance within a network. Indeed, this final observation clearly motivates the study of the cardinality of the power transfers within mission profiles, to explore and understand how prevalent these simpler designs might be in the future [8, 9].

The contribution of this paper is to introduce the electrical cardinality of a mission profile, providing an intuitive property that can be used to study interactions between a distribution network's requirements and the design of reconfigurable multiport converters. The proposed definition is particularly useful as it conveniently links to cardinality constraints that can be imposed in the mixed-integer conic formulations commonly used in distribution optimal power flow problems. The cardinality is subsequently used to explore the properties of optimal mission profiles of a given dimension, with case studies demonstrating a halving of the cardinality whilst still achieving 91.7% of the potential system benefits.

The paper is structured as follows. In Section II, the electrical cardinality is introduced, with a short discussion highlighting why this naturally varies with the outputs of optimal distribution network scheduling, then showing how reconfigurable power converters can exploit mission profiles with reduced cardinality. Subsequently, cardinality constraints are proposed in Section III to demonstrate how a family of mission profiles can be developed that could allow a converter designer to explore a range of potential reconfigurable designs with varying complexity. Two case studies are presented and solved in Section IV to demonstrate qualitative and quantitative attributes of mission profiles with varying cardinality constraints. Conclusions are drawn in Section V.

## II Mission Profile Electrical Cardinality

The cardinality of a vector is the number of non-zero elements of that vector. In this section, the electrical cardinality of a mission profile is introduced from this concept, with it demonstrated how it can be determined from the powers that are transferred to and from the ac distribution grid.
Reconfigurable multiplexed soft open points (MOPs) are then introduced as a specific multiport topology that can take advantage of reduced cardinality, highlighting a direct application of the electrical cardinality.

### _Defining Electrical Cardinality_

In this work, the electrical cardinality of a mission profile is defined as the cardinality of the powers transferred by a multiport converter into an ac distribution grid at a given time. It is referred to as the electrical cardinality so as to distinguish it from the thermal aspects of the mission profile, which are also of significant interest (but are not considered further in this work). For example, consider apparent powers transferred by an \(m\)-terminal SOP over \(T\) time steps collected in a matrix \(S_{\mathrm{MP}}\in\mathbb{R}^{T\times m}\). In this case, the electrical cardinality (EC) at a given time instant \(\tau\in\{1,\dots,T\}\) is defined as \[\mathrm{EC}[\tau]=\mathrm{nnz}(S_{\mathrm{MP}}[\tau,:])\,, \tag{1}\] where \(\mathrm{nnz}\) returns the number of non-zero elements of a vector, with matrix \(S_{\mathrm{MP}}\) indexed using 'Matlab' notation. The EC can then be used to also define the maximum electrical cardinality (\(\mathrm{MEC}\)) of the mission profile as \[\mathrm{MEC}=\max_{\tau}\,\mathrm{EC}[\tau]\,. \tag{2}\] When the EC needs to be calculated in practice (e.g., from a mission profile \(S_{\mathrm{MP}}\) returned by a numerical optimization routine), it can be calculated based on a tolerance \(\epsilon\), e.g., \[\mathrm{EC}[\tau]=\sum_{i=1}^{m}I_{\epsilon}(S_{\mathrm{MP}}[\tau,i])\,, \tag{3}\] where the indicator function \(I_{\epsilon}\) returns 1 if its argument is greater than \(\epsilon\), and zero otherwise. A relative value of \(\epsilon\) of \(10^{-5}\) times the total converter capacity is used in this work.

There are several reasons that the power transferred through a given terminal of a multiport converter might be zero, and therefore yield a time-varying electrical cardinality \(\mathrm{EC}\). If a terminal is connected to a variable generator (such as solar) then the generator might have zero output for significant time periods. A distribution feeder might also be out of service. If a multiport is used most of the time for system loss reduction and arbitrage, the losses within the converter might be sufficiently high that it is not cost-effective to run the converter. In fact, converter losses tend to produce optimal solutions that are sparse in the apparent power transfers \(S_{c}\). For example, for converter losses of an \(m\)-terminal multiport \(P_{\mathrm{Loss}}^{\mathrm{Conv.}}\in\mathbb{R}^{m}\), loss coefficient \(k\) and linear model [8, 10] \[P_{\mathrm{Loss}}^{\mathrm{Conv.}}[i]=kS_{c}[i]\ \forall\ i\in[1,\,m]\,, \tag{4}\] the solution of optimization problems that involve minimization of total converter losses will tend to be sparse in \(S_{c}\) (i.e., to have few non-zero entries). This is because (4) acts as a 'shrinkage operator' on \(S_{c}\) [11]. Increasing values of \(k\) will yield increasingly sparse solutions in \(S_{c}\) [12, Ch. 6.2].

### _Exploiting Partial Electrical Cardinality_

The electrical cardinality \(\mathrm{EC}\) can take integer values between 0 and \(m\) (for a multiport connected to \(m\) ac feeders), with the cases of most interest in this work being those where the \(\mathrm{MEC}\) is less than \(m\).
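As a concrete illustration of Equations (1)-(3), the EC and MEC of a sampled mission profile can be computed directly from the matrix \(S_{\mathrm{MP}}\); the sketch below is ours, using the relative tolerance of \(10^{-5}\) times the total converter capacity adopted in this work.

```python
import numpy as np

def electrical_cardinality(S_MP, S_total, rel_tol=1e-5):
    """EC per Eq. (3): count entries of each row of S_MP (T x m) exceeding
    the tolerance eps = rel_tol * S_total."""
    eps = rel_tol * S_total
    return (np.abs(S_MP) > eps).sum(axis=1)  # EC[tau] for each time step

def max_electrical_cardinality(S_MP, S_total, rel_tol=1e-5):
    """MEC per Eq. (2): the maximum EC over all time instants."""
    return int(electrical_cardinality(S_MP, S_total, rel_tol).max())
```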
An \(\mathrm{MEC}\) of less than \(m\) is of particular interest because it implies that a multiport converter with \(m\) legs has a degree of redundancy, insofar as there is potential for the exploitation of reconfigurable multiplexed soft open point (MOP) designs with a reduced number of ac/dc legs [8]. For example, three multiport designs are plotted in Fig. 1, both with and without MOP reconfiguration stages. If the maximum cardinality \(\mathrm{MEC}\) has value two (i.e., at some time periods power must be transferred through point of common coupling (PCC) 1 and PCC 2 simultaneously), then the conventional design (Fig. 1(a)) or the dual converter MOP design (Fig. 1(b)) can be considered. In contrast, if the \(\mathrm{MEC}\) has value unity, such that the solar power is exported only through PCC 1 or PCC 2 (or reactive power is never required for voltage control concurrently in both feeders), then the simpler MOP design of Fig. 1(c) can also be considered. Such a design only requires a single ac/dc converter and can inject a full 1 pu into either PCC 1 or PCC 2, and so has advantages over both the conventional SOP and the dual converter MOP in at least one sense.

Fig. 1: Single line diagram of a conventional SOP (with integrated DG connected to the DC link), (a), as compared to reconfigurable multiplexed SOP (MOP) multiport designs with two converters (b), or a single reconfigurable ac/dc converter (c). All designs have the same pu ac/dc converter capacity, but different capability charts and varying complexity of construction in terms of the number and type of components.

## III System Modelling

In the previous section, the electrical cardinality and its maximum value \(\mathrm{MEC}\) were introduced as metrics that can be used to explore the potential degree of redundancy of non-reconfigurable multiport converters. In this section, we first consider an additional benefit of defining the electrical cardinality: this cardinality can be varied in a computationally efficient way, enabling a multiport converter designer to explore the performance of a family of mission profiles. Cardinality constraints are introduced and then formulated using a big-\(M\) procedure, so that these constraints can be appended to existing convex optimization problems. Subsequently, a mixed-integer second order cone program is presented that integrates these cardinality constraints into an exemplar optimal power flow problem for a distribution system, utilizing idealised MOP designs to bound the performance of all profiles of a given cardinality.

### _Cardinality Constraints Formulated Using the Big-M Method_

For an optimization decision variable \(y\in\mathbb{R}^{p}\) for some integer \(p\), a cardinality constraint [13] \[\mathrm{nnz}(y)\leq q\,, \tag{5}\] for some \(q<p\) can be recast using a big-\(M\) formulation (this reformulation is required as the constraint as written in (5) cannot be incorporated into integer optimization packages). Specifically, if \(y\geq 0\) and \(M\) is an upper bound for each element of \(y\), then \(p\) auxiliary binary variables \(z_{i}\in\mathbb{B}\) can be introduced alongside the constraint \[y[i]\leq Mz_{i}\;\forall\;i\,, \tag{6}\] such that if the \(i\)th element of \(y\) is non-zero then \(z_{i}\) must take the value one, while \(z_{i}\) remains free to be zero whenever \(y[i]\) is zero. These auxiliary variables allow (5) to then be rewritten as \[\sum_{i=1}^{p}z_{i}\leq q\,.
\tag{7}\] Therefore, an optimization problem with constraints (6), (7) is equivalent to one with constraint (5). These equations are linear in the binary variables \(z_{i}\), and so can be included directly in conventional mixed-integer conic optimization packages.

### _System model_

For a reconfigurable MOP connected to an \(m\)-feeder node within a distribution network, the goal is to schedule the transferred real and reactive power flows \(P_{c}\in\mathbb{R}^{m}\), \(Q_{c}\in\mathbb{R}^{m}\) to minimize network and converter losses, subject to congestion constraints. The optimization developed to solve this problem builds on the approach outlined in [8], with a number of substantive changes as follows.

* The primary topic of this work is the exploration of the solution as the cardinality varies, and so a cardinality constraint of the form (5) is added, with the problem subsequently solved for a range of cardinality values \(q\) up to the number of terminals \(m\).
* Instead of considering large numbers of variable MOP or SOP designs for a given converter capacity, idealised, fully reconfigurable designs are considered within the modelling [8]. This provides a convenient upper bound for the performance of a design with a given cardinality, although it is possible the mission profiles may not be realisable as designs (this is discussed further in Section IV-C).
* To model network congestion, a linearized model is used that maps power injections to changes in voltage magnitudes. The 'First Order Taylor' method from [14] is used for linearization, at the no-load solution.
* Finally, the model of losses in power injections is based only on a quadratic model developed on the linearization of complex voltages [14] around the no-load point (rather than calculating a linearization at all loading conditions separately).

With these changes, the optimization problem can be written as follows.

\[\min_{P_{c},\,Q_{c}}P_{\mathrm{Loss}}^{\mathrm{Ntwk.}}+\sum_{i}P_{\mathrm{Loss}}^{\mathrm{Conv.}}[i] \tag{8}\]
\[\mathrm{s.t.}\;S_{c}[i]\geq\sqrt{P_{c}^{2}[i]+Q_{c}^{2}[i]}\;\forall\;i\in[1,\,m]\,, \tag{9}\]
\[\sum_{i}P_{\mathrm{dc}}[i]=0\,, \tag{10}\]
\[P_{\mathrm{dc}}[i]+P_{\mathrm{Loss}}^{\mathrm{Conv.}}[i]=P_{c}[i]\;\forall\;i\in[1,\,m]\,, \tag{11}\]
\[P_{\mathrm{dc}}[m+1]=P_{\mathrm{DER}}\,, \tag{12}\]
\[P_{\mathrm{Loss}}^{\mathrm{Conv.}}[i]=kS_{c}[i]\;\forall\;i\in[1,\,m]\,, \tag{13}\]
\[P_{\mathrm{Loss}}^{\mathrm{Ntwk.}}=x^{\mathrm{T}}\Lambda x+\lambda x+\sigma\,, \tag{14}\]
\[x=[P_{c}^{\mathrm{T}},\,Q_{c}^{\mathrm{T}}]^{\mathrm{T}}\,, \tag{15}\]
\[V=Kx+b\,, \tag{16}\]
\[V_{-}\leq V\leq V_{+}\,, \tag{17}\]
\[\sum_{i}S_{c}[i]\leq S_{c}^{\mathrm{Total}}\,, \tag{18}\]
\[\mathrm{EC}\leq n\,. \tag{19}\]

The objective (8) is to schedule the converter real \(P_{c}\) and reactive \(Q_{c}\) power flows to minimize converter losses \(P_{\mathrm{Loss}}^{\mathrm{Conv.}}\) and network losses \(P_{\mathrm{Loss}}^{\mathrm{Ntwk.}}\). The apparent power of each converter \(S_{c}[i]\) must be bounded by the capacity connected to that converter leg (9). The powers injected into the dc node \(P_{\mathrm{dc}}\) must balance (10), and the power injected by a distributed energy resource \(P_{\mathrm{DER}}\) is the final element of the vector of powers at the dc node (12). The power must balance across the lossy converters (11), with converter losses modelled as being linear in apparent power (13) for loss coefficient \(k\).
Network losses are quadratic in power injections (14), with \(x\) the stacked vector of real and reactive power injections (15) and \(\Lambda,\,\lambda,\,\sigma\) the parameters of the quadratic model. Voltage magnitudes \(V\) are linear in the injections (16), with \(K,\,b\) the sensitivity and offset components of the power-voltage affine model, and upper and lower bounds on voltages \(V_{+},\,V_{-}\) imposed via (17). The total ac/dc converter capacity \(S_{c}^{\mathrm{Total}}\) limits the sum of power transfers into the ac feeders (18), based on the idealised MOP of [8]. Finally, the number of non-zero power injections (i.e., the electrical cardinality) is less than or equal to \(n\) (19). Note that the full optimization formulation includes auxiliary variables to include the cardinality constraint (19) as described in Section III-A (these are not written explicitly for brevity). Additionally, the quadratic network losses (14) are converted to a relaxed second order cone constraint, as in [8].

## IV Case Studies

In the previous section, the operational optimization method was developed to optimally dispatch an idealised reconfigurable power converter, subject to a cardinality constraint that ensures the electrical cardinality is no greater than a given value. In this section, the approach is demonstrated on a pair of distribution network models, simulated over the course of a full calendar year. Fig. 2 shows the topology of the interconnected feeders of the two networks studied, the 75 Bus UK Generic Distribution System (GDS) HV UG network [15] and the IEEE 33 Bus Network [16]. Both of these models have a significant wind and solar generator connected within the network. Additionally, to illustrate how injections into the dc link of a multiport can influence the mission profile, the GDS network also has a 0.8 MW solar generator connected to the dc link. Each load is allocated an individual demand profile from real measured annual profiles from a utility in the UK [17], and embedded wind and solar profiles are taken from [18] to represent the renewable generation profiles. A converter loss coefficient \(k\) of 1% is used (i.e., the ac-dc-ac efficiency is 98%), as in [8].

The solution approach has been developed using the Mosek Fusion API [19]. The conic relaxation of (14) is accurate across all cases and time periods to within a relative error of \(3\times 10^{-5}\). Relative and absolute integer optimization gaps of \(10^{-4}\), \(10^{-5}\) are used, with all problems then solved to optimality. The quadratic loss model (14) performs well as compared to the true non-linear solution obtained from OpenDSS: for example, for the 33 Bus system with unconstrained cardinality (Section IV-B), the modelled and true loss changes have a correlation coefficient of 94.4% and a slope (as determined via linear regression) of 0.914. The linearized voltage magnitudes are even more accurate: relative errors across the whole year are less than 3.5% for Case Study 1 (in terms of the 2-norm of the changes in voltages from the linear model and OpenDSS).

### _Case Study 1: UKGDS Distribution System_

The first case considered is the UKGDS system (Fig. 1(a)), with total idealised converter capacity \(S_{c}^{\mathrm{Total}}\) of 3200 kVA. Fig. 3 plots the operation of the device for five representative days in the summer, showing the real and reactive flows of the two multiport converters with unrestricted cardinality (Fig.
3(a)), the power flows for the converter with a cardinality constraint (Fig. 3(b)), the additional losses in the system as compared to a system with no converter or the 0.8 MW additional generator (Fig. 3(c)), and finally normalised values of the range and quartiles of the demand, the solar profile and the wind profile (Fig. 3(d)). It can be observed in this figure that, as expected, when the cardinality is unrestricted, power can be transferred between feeders in any combination. In contrast, when the cardinality EC is constrained, real and reactive powers can only be transferred into one feeder at a time (Fig. 3(a)). This has a substantial effect on power flows and losses, as there is voltage congestion at Bus 1132 (Fig. 3(a)) due to the large wind generator on Feeder 6 (for the purposes of this work, a limit of 1.045 pu is considered). For the restricted case with \(n=1\), when there are high winds and solar generation (as on June 25th and 26th), the converter must connect to Feeder 6 to lower the voltage by drawing reactive power; however, this means that real power is also injected from the solar plant, and so further reactive power must be drawn to reduce the voltage even further. In contrast, the mission profile with unrestricted cardinality can simply transfer the surplus real power from the wind generator from Feeder 6 to Feeder 3. Despite these increased system losses, congestion is still effectively managed by the power converter, and so there are no infeasible points (i.e., time periods in which the voltage is outside of the limits defined by the utility). Across the year, the increase in losses is 19.1 MWh for the converter with constrained cardinality as compared to the converter with unconstrained cardinality. However, this may be offset by the reduced complexity of the mission profile, as shown in Fig. 4. A monolithic ac/dc converter with capacity 3200 kVA could be designed with a reconfiguration output stage to meet the mission profile, rather than a more complex ac/dc/ac converter.

Fig. 2: The topology and generators installed for the two case studies, with the UKGDS having a three-port MOP with two ac feeders and a generator connected to the dc link (similar to Fig. 1); and the 33 Bus system with a 4-terminal MOP.

### _Case Study 2: IEEE 33 Bus Network_

The results with the IEEE 33 Bus system are plotted in Fig. 5 for the representative days of August 1st-3rd, with a total ac/dc converter capacity \(S_{c}^{\mathrm{Total}}\) of 750 kVA. Real power transfers are plotted for the unrestricted design in Fig. 5(a) and for the design with cardinality \(n\) restricted to 2 in Fig. 5(b); the
The results of Fig. 5 show clearly the fact that the cardinality EC can vary significantly as a function of time (calculated according to (3)). As a result, during the time periods with an EC of 2, all three of the converter models perform the identically. Over the course of the full year (17520 half-hours), the losses for systems with 2, 3 or no cardinality constraint values are shown in Fig. 6, as is the distribution of EC values. In this system, more than 4% of time periods have no power injections (i.e., an EC value of zero), even with the low ac/dc conversion loss coefficient of 1%. ### _Discussion_ This work has considered the construction of mission profiles with a constrained cardinality to explore the properties of these solutions. However, in general the idealised MOP Fig. 4: Scatter plot of the mission profile for the unrestricted and constrained mission profiles, demonstrating cardinality EC of the mission profile has been reduced from 2 (a) to 1 (b). The thin red dashed line indicates the apparent power constraint for the 3200 kVA converter considered. Fig. 5: Results for Case 2. (a, b) plot real power transfers for the unrestricted case (\(n=4\)) and for \(n=2\); (c) plots the loss reduction, and (d) plots the equivalent fraction of loss reduction against the restricted cases. Finally, (e) plots the number of non-zero power transfers, whilst (f) plots the generation profiles and demand quartiles for this case. Fig. 3: Power transfers (a, b), changes to losses (c) and generation profiles alongside the range, interquartile range and median demand (d) for Case 1 (UKGDS HVUG network, Fig. 1(a)). These subfigures show that power flow is feasible for the case with cardinality restricted to \(n=1\), but the additional reactive power that must be drawn to manage voltages increases losses substantially. model [8], as used in the optimization (18), may or may not be realisable with fixed converter sizes (as modelled in, e.g., [8]). Nevertheless, the proposed approach can show if there are significant potential benefits of reduced cardinality, thereby motivating a future method to consider an optimal sizing strategy for individual ac/dc legs for a given cardinality. Furthermore, this work has focussed on the _electrical_ cardinality, as this drives electrothermal stresses and is therefore the primary component of a mission profile. However, the thermal aspect of a mission profile has not been modelled explicitly, and the impact of changing cardinality may be considerable if a reconfigurable model is designed which substantially changes converter utilization. Conversely, distribution optimization problems could also be developed that aim to directly avoid unnecessary thermal cycling of power converter modules to improve reliability and lifetime of these devices. ## V Conclusion Multiport mission profiles have more complex mathematical structure than two-terminal power converters, and so require more sophisticated techniques to model, design and simulate. The electrical cardinality of a multiport's mission profile has been introduced, based on the outputs of distribution system optimal power flow, and has been shown to have temporal variability within systems composed of power converters embedded within electrical distribution networks. Furthermore, a method to include cardinality constraints within optimal power flow problems has been proposed that enables a family of mission profiles to be provided to a power converter designer for further analysis. 
Results show that converters with restricted cardinality can still provide effective congestion management and provide grid services such as loss reduction. A case study demonstrating a halving of the cardinality only reducing the potential losses reduction by 91.7% as compared to an unrestricted case. Power flows in distribution systems are increasingly becoming controllable and variable, whilst developments in power electronics are resulting in increasing power density and reliability of converters. To maximise the potential benefits of these devices within systems, much higher levels of dialogue will be required between power converter designers and network operators. The electrical cardinality of the mission profile provides one useful piece of information that can be clearly communicated that affects both the power converter and its operation within a network. It is concluded that new integrated power distribution-power electronics design and modelling approaches will be necessary to most effectively realise the potential of power converters to support the integration of low carbon technologies in power distribution.
2308.05630
Reactor Antineutrino Spectral "Bump": Cumulative Fission Yields of Irradiated U-235 and Pu-239 Measured by HPGe Gamma-Ray Spectroscopy
Recent measurements of the reactor antineutrino emission show that there exists a spectral excess (the "bump") in the 5-7 MeV region when compared to the Huber-Muller prediction based on the conversion method. Analysis within an alternate prediction technique, the summation method, suggests that the bump could be due to excess contributions from a certain few of the beta-decaying fission products. However, it has been shown that when updated fission yield values are used in the summation method, the predicted excess vanishes. In the present preliminary study, fission yields for nuclides suspected of causing the neutrino spectral bump are investigated using gamma-ray spectroscopy of U-235 and Pu-239 samples freshly irradiated using the High Flux Isotope Reactor. For several of the suspect nuclides, the derived fission yields are consistent with the JEFF3.3 fission yield library. The exception is the case of Cs-140 from Pu-239, where the discrepancy between the fitted and expected values suggests a potential error in the fission yield library. This highlights the importance of using accurate nuclear data libraries in the analysis of the reactor antineutrino spectra, and the need for ongoing efforts to improve these libraries.
Samuel Kim, C. J. Martoff, Michael Dion, David Glasgow
2023-08-10T15:18:20Z
http://arxiv.org/abs/2308.05630v1
Reactor Antineutrino Spectral "Bump": Cumulative Fission Yields of Irradiated \({}^{235}\)U and \({}^{239}\)Pu Measured by HPGe Gamma-Ray Spectroscopy

###### Abstract

Recent measurements of the reactor antineutrino emission show that there exists a spectral excess (the "bump") in the 5-7 MeV region when compared to the Huber-Muller prediction based on the conversion method. Analysis within an alternate prediction technique, the summation method, suggests that the bump could be due to excess contributions from a certain few of the \(\beta\)-decaying fission products. However, it has been shown that when updated fission yield values are used in the summation method, the predicted excess vanishes. In the present preliminary study, fission yields for nuclides suspected of causing the neutrino spectral bump are investigated using gamma-ray spectroscopy of \({}^{235}\)U and \({}^{239}\)Pu samples freshly irradiated using the High Flux Isotope Reactor. For several of the suspect nuclides, the derived fission yields are consistent with the JEFF3.3 fission yield library. The exception is the case of \({}^{140}\)Cs from \({}^{239}\)Pu, where the discrepancy between the fitted and expected values suggests a potential error in the fission yield library. This highlights the importance of using accurate nuclear data libraries in the analysis of the reactor antineutrino spectra, and the need for ongoing efforts to improve these libraries.

+ Footnote †: Currently at Thomas Jefferson National Accelerator Facility

## I Introduction

Nuclear reactors are intense sources of electron antineutrinos, and are therefore widely used to study the complex properties of these intriguing particles. From one fission, approximately 6 \(\bar{\nu}_{e}\) are produced, and a 1 GW thermal reactor emits about \(10^{20}\)\(\bar{\nu}_{e}\) per second [1]. Fission fragments are neutron-rich, resulting in \(\beta\) decays. Recent large-scale antineutrino spectral measurements [2; 3] show that there is a spectral bump in the 5 to 7 MeV region of \(\bar{\nu}_{e}\) that is not predicted by the \(\beta\)-conversion method of predicting the expected neutrino spectrum from measured beta spectra (Huber-Muller method). The aggregated beta spectrum is made up of thousands of decay channels with different end-point beta energies. Conversion to an antineutrino spectrum is performed by fitting the measured electron spectrum with a superposition of 30 end-point beta energies and using the kinematics of \(\beta\)-decay to obtain the corresponding neutrino spectra [4]. In the summation method, an alternate approach is used to evaluate the \(\bar{\nu}_{e}\) spectrum. Nuclear data files such as the Evaluated Nuclear Data File (ENDF) library and the Joint Evaluated Fission and Fusion File (JEFF) library are used to estimate the associated neutrino spectrum using all the relevant tabulated fission yields and \(\beta\)-decay parameters. Based on the ENDF/B-VII library, the summation method suggests that the spectral bump could be due to excess yields of eight particular \(\beta\)-decaying fission products, which give a combined 42% of the total decay rate in the \(\beta\)-energy region of 4 to 6 MeV (\(\bar{\nu}_{e}\) energy region of 5 to 7 MeV) [5]. Table 1 lists the same eight fission products discussed in Ref. [5] as the primary contributors to the spectral bump.
\begin{table} \begin{tabular}{c c c c} isotope & Half life (s) & Gamma Energy (keV) & Intensity \\ \hline \({}^{93}\)Rb & 5.84(2) & 432.61(3) & 0.202(14) \\ \({}^{100}\)Nb & 1.4(2) & 535.66(14) & 0.46(6) \\ \({}^{140}\)Cs & 63.7(3) & 602.25(5) & 0.53(3) \\ \({}^{95}\)Sr & 23.90(14) & 685.6 & 0.226 \\ \({}^{92}\)Rb & 4.49(3) & 814.98(3) & 0.032(4) \\ \({}^{96}\)Y & 5.34(5) & 1750.4(2) & 0.0235(24) \\ \({}^{97}\)Y & 3.75(3) & 3287.6(4) & 0.181(19) \\ \({}^{142}\)Cs & 1.68(14) & 359.598(14) & 0.27(3) \\ \end{tabular} \end{table} Table 1: Decay data for the 8 nuclides singled out in Ref. [5], from the ENDF/B-VIII decay data sublibrary, including the decay-chain gamma ray with the strongest intensity selected for the present analysis. Uncertainties are given in parentheses.

In a follow-up study, Sonzogni et al. [6] demonstrate that reproduction of the bump based on the summation method is due to errors in the fission yield values contained in the ENDF/B-VII library. When corrected and improved fission yield values are used, no excess contributions from the eight nuclides are observed. In the present work, fission is induced in \({}^{235}\)U and \({}^{239}\)Pu samples by neutron irradiation in the High Flux Isotope Reactor (HFIR), and the resulting gamma-ray spectra are measured by a high purity germanium (HPGe) detector after rapid transport out of the core. The measured spectra are compared to predictions based on data from the JEFF3.3 fission yield library and the ENDF/B-VIII decay data sublibrary.

## II Experiment

The \({}^{235}\)U sample consists of 252.72 nanograms of natural uranium nitrate in an Inductively Coupled Plasma calibration solution. The \({}^{239}\)Pu sample consists of 301.3 nanograms of National Institute of Standards and Technology (NIST) Certified Reference Material (CRM-137). The samples are irradiated using the PT-2 pneumatic tube of the HFIR at the Neutron Activation Analysis (NAA) laboratory of Oak Ridge National Laboratory. The measured thermal and epithermal neutron fluxes at the irradiation location are 4.59\(\times\)10\({}^{13}\) n/cm\({}^{2}\)/sec and 1.96\(\times\)10\({}^{11}\) n/cm\({}^{2}\)/sec respectively for \({}^{235}\)U, and 4.43\(\times\)10\({}^{13}\) n/cm\({}^{2}\)/sec and 3.24\(\times\)10\({}^{11}\) n/cm\({}^{2}\)/sec respectively for \({}^{239}\)Pu. The neutron fluxes are measured using manganese and gold activation foils. Each sample is irradiated for 30 seconds, and then transported to the detector chamber using the pneumatic tube transfer system [7], which introduces a 20-second delay prior to the gamma-ray measurement. This delay is problematic for the short-lived \({}^{97}\)Y and \({}^{142}\)Cs. Future work is planned to reduce the delay and improve detection sensitivity. Fig. 1 shows the measured gamma-ray spectra of the irradiated \({}^{235}\)U and \({}^{239}\)Pu. The gamma rays are measured with a 44% relative efficiency ORTEC p-type coaxial HPGe detector with an aluminum end cap. Each sample is placed at 33 cm above the detector and measured for 30 seconds. For the analysis presented in this work, only the \(\beta\)-decay path for the parent-daughter chains is used, neglecting \(\beta\)-delayed neutron emission channels.

## III Calculated gamma rays

The expected gamma-ray yield calculation starts by determining the number of \({}^{235}\)U and \({}^{239}\)Pu nuclides initially present in the sample from the sample mass (m), Avogadro's number (\(N_{A}\)) and the molar weight (M).
Total fission production and decay of the gamma emitter during the irradiation (\(N_{\rm fd}\)) is determined using Eq. (1).

\[N_{\rm fd}={\rm IFY}\,\sigma_{f}\,\phi\,\frac{mN_{A}}{M}(1-e^{-\lambda t}) \tag{1}\]

The equation includes the independent fission yield (IFY) of a specific nuclide, the thermal neutron cross section (\(\sigma_{f}\)), the thermal neutron flux (\(\phi\)), and the irradiation time, \(t\) [8]. The IFY is obtained from the JEFF3.3 library, and the neutron cross section is based on the ENDF/B-VIII.0 neutron cross section standard sublibrary. The JEFF3.3 and ENDF/B-VIII.0 fission yield libraries contain different IFY values for certain nuclides. This is demonstrated using the \({}^{140}\)Cs decay chain in Table 2. In this example, JEFF3.3 does not have an IFY for \({}^{140}\)Sb, so the IFY value from ENDF/B-VIII.0 is used instead in our analysis. Thus, \(N_{\rm fd}\) will be different depending on the fission library used. In addition, the \(N_{\rm fd}\) of each precursor of the gamma emitter needs to be determined and \(\beta\)-decayed to properly account for the total number of the gamma emitter produced at the end of the irradiation. The cumulative yield (CY) used in this study is described in Eq. (2).

\[CY_{i}=[N_{\rm fd}]_{i}+\sum_{j}{\rm Decay}\big([N_{\rm fd}]_{j}\big) \tag{2}\]

The second term describes the total number of gamma emitter \(i\) resulting from the \(\beta\)-decay of each precursor \(j\) of gamma emitter \(i\). Each decay chain leading from primary fission products to a gamma-ray emitter measured in this study is described by a set of coupled linear differential equations describing the radioactive decays. These equations are reformulated as a set of matrices and solved using Matlab. The solution to each decay chain gives the number of gamma emitters resulting from the decay of their precursors during the 30-second irradiation. The resulting total CY is further decayed for 20 seconds, modeling the RABBIT transport delay. Expected gamma-ray yields during the subsequent (delayed) 30-second measurement time are calculated as the product of the decayed CY, the decay constant (\(\lambda\)), the absolute efficiency (\(\epsilon\)) of the HPGe, and the gamma emission intensity (\(I_{\gamma}\)). The yield calculation is aided by the Radiation Intensity Calculator (RadICal), a python-based application developed by Pacific Northwest National Laboratory (PNNL) researchers [9]. Its solutions are built on the Laplace transform of the Bateman equations. The half-lives of the nuclides investigated here are much shorter than the detector measurement time; therefore, it is necessary to decay-correct the measured peak counts for the count time. The ANSI standard for the correction factor is described in Ref. [10]. The largest uncertainty contribution comes from the uncertainties associated with the IFY. The relative uncertainty [11] of the IFY for each nuclide is: \({}^{93}\)Rb (18%), \({}^{100}\)Nb (35%), \({}^{140}\)Cs (24%), \({}^{95}\)Sr (10%), \({}^{92}\)Rb (18%), \({}^{96}\)Y (24%), \({}^{97}\)Y (15%) and \({}^{142}\)Cs (20%).
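To make the chain of Eqs. (1)-(2) concrete, the sketch below solves the coupled decay equations for a short linear chain by matrix exponentiation; the paper reports an equivalent Matlab implementation, so this Python version, its production rates and its first two half-lives are purely illustrative (only the 63.7 s half-life and the 0.53 intensity are taken from Table 1).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative three-member chain ending in a 63.7 s emitter (cf. 140Cs, Table 1);
# the first two half-lives and all production rates are placeholders.
lam = np.log(2.0) / np.array([0.86, 13.6, 63.7])     # decay constants, 1/s

# Constant production rates during irradiation, R_i = IFY_i * sigma_f * phi * m*N_A/M
r = np.array([1.0e4, 5.0e4, 2.0e4])                  # atoms/s, placeholders

# Bateman matrix for dN/dt = A N + r, with feeding from the previous chain member
A = -np.diag(lam)
for i in range(1, len(lam)):
    A[i, i - 1] = lam[i - 1]

t_irr, t_delay, t_count = 30.0, 20.0, 30.0           # irradiation, RABBIT transfer, count
E = expm(A * t_irr)
# Constant-production solution: N(t) = e^{At} N0 + A^{-1}(e^{At} - I) r, with N0 = 0
N_end = np.linalg.solve(A, (E - np.eye(len(r))) @ r)

# Free decay during the 20 s pneumatic transfer
N_meas = expm(A * t_delay) @ N_end

# Expected photopeak counts from the last chain member during the count
# (feeding during the count is neglected here; cf. the ANSI decay correction [10])
eff, I_gamma = 5.9e-4, 0.53                          # efficiency (placeholder), Table 1 intensity
decays = N_meas[-1] * (1.0 - np.exp(-lam[-1] * t_count))
print(f"expected peak counts ~ {decays * eff * I_gamma:.1f}")
```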
## IV Measured gamma rays

The energy and full width at half maximum (FWHM) calibrations of the HPGe detector have previously been determined by analyzing known gamma-ray peaks. The absolute efficiency of the detector is estimated using the Geometry and Tracking (GEANT4) [12] simulation package. In the simulation, 17 gamma-ray energies are selected to cover the energy range from 50 keV to 3.5 MeV. Each gamma-ray simulation was performed using \(10^{6}\) photons to determine the efficiency of the detector at each photon energy. The detector model in GEANT4 includes all the details of the detector construction, including a 0.1 cm thick aluminum window on the endcap of the detector and a 0.07 cm thick dead layer on the surface of the HPGe crystal. Dimensions of the HPGe were taken to be 6.5 cm in diameter and 6.45 cm in length based on published ORTEC documents [13]. According to the ANSI/IEEE standard 325 [14], the relative efficiency of an HPGe is defined by Eq. (3). The absolute efficiency of the HPGe at 1.33 MeV is measured with a source-to-detector distance of 25 cm. The relative efficiency is the ratio of this HPGe absolute efficiency to the absolute efficiency of a 3-inch by 3-inch NaI(Tl) at 1.33 MeV measured at 25 cm (1.2E-3).

\[\mathrm{Relative\ efficiency}=\frac{\mathrm{Absolute\ efficiency}}{1.2\times 10^{-3}} \tag{3}\]

To establish a benchmark, a GEANT4 simulation was performed for a point source placed at the standard distance of 25 cm from the detector. The absolute efficiency at 1.33 MeV was expected to be 5.3E-4 for a 44% relative efficiency HPGe [15]. The simulated absolute efficiency was 5.9E-4(7.7E-5). Fig. 2 shows the simulated detector efficiency. The efficiency is fitted using the parametric equation given in the RADWARE software package [16]. Above 150 keV, the efficiency is fitted with the parameters (D, E and F) in the form of:

\[\mathrm{Efficiency}=e^{D+Ey+Fy^{2}} \tag{4}\]

where \(y=\ln(E_{\gamma}/1000)\) and \(E_{\gamma}\) is the gamma-ray energy in keV. Peaks were analyzed from the measured energy spectra using two methods: non-linear fitting and a simple summation. The ANSI standard for the summation method is given in [17; 10; 18], and a detailed explanation is given in [19; 20]. The fit function was a combination of a Gaussian and a linear continuum. Fit analysis was performed using the GF3M program from the RADWARE package [16] and an open-source software, GNUPLOT [21]. Details for the fitting method are described in the references.

\begin{table} \begin{tabular}{c c c c c c} IFY (\({}^{235}\)U) & \({}^{140}\)Sb & \({}^{140}\)Te & \({}^{140}\)I & \({}^{140}\)Xe & \({}^{140}\)Cs \\ \hline JEFF3.3 & No data & 6.57E-08 (2.26E-08) & 3.03E-04 (1.03E-04) & 1.25E-02 (3.10E-03) & 1.84E-02 (3.85E-03) \\ ENDF/B-VIII.0 & 2.82E-09 (1.81E-09) & 9.04E-06 (5.78E-06) & 1.11E-03 (7.13E-04) & 2.59E-02 (1.04E-03) & 3.05E-02 (1.83E-03) \\ \hline \hline IFY (\({}^{239}\)Pu) & \({}^{140}\)Sb & \({}^{140}\)Te & \({}^{140}\)I & \({}^{140}\)Xe & \({}^{140}\)Cs \\ \hline JEFF3.3 & No data & 2.33E-07 (8.06E-08) & 4.77E-04 (1.63E-04) & 1.83E-02 (4.06E-03) & 2.18E-02 (4.52E-03) \\ ENDF/B-VIII.0 & 5.61E-11 (3.59E-11) & 1.41E-06 (9.02E-07) & 5.94E-04 (3.80E-04) & 1.54E-02 (4.31E-04) & 2.28E-02 (3.64E-03) \\ \hline \hline \end{tabular} \end{table} Table 2: Examples from the \({}^{140}\)Cs decay chain, showing the differing IFY of \({}^{235}\)U and \({}^{239}\)Pu from the JEFF3.3 and ENDF/B-VIII.0 fission yield libraries. The uncertainty of each IFY is indicated in parentheses.

Figure 1: Measured gamma-ray spectra from freshly irradiated \({}^{235}\)U and \({}^{239}\)Pu are plotted. See text for details.

Fig. 3 and Fig. 4 show the data and fits for fission product gamma-ray peaks of interest resulting from neutron irradiation of \({}^{235}\)U and \({}^{239}\)Pu, respectively. Table 3 and Table 4 summarize the fitting statistics for \({}^{235}\)U and \({}^{239}\)Pu. In general, the fitted peak energies are consistent with tabulated values for both \({}^{235}\)U and \({}^{239}\)Pu.
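As an illustration of the fit model just described (a Gaussian photopeak on a linear continuum), the following sketch uses scipy in place of GF3M/GNUPLOT; the parameterization and the Poisson weighting are our assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_model(e, area, centroid, fwhm, b0, b1):
    """Gaussian photopeak on a linear continuum, parameterized by the net
    area, the centroid (keV) and the FWHM (keV)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = (area / (sigma * np.sqrt(2.0 * np.pi))) * np.exp(-0.5 * ((e - centroid) / sigma) ** 2)
    return gauss + b0 + b1 * e

def fit_peak(e, counts, guess):
    """Weighted least-squares fit of a spectrum slice; returns the best-fit
    parameters, their 1-sigma errors, and the reduced chi-square."""
    w = np.sqrt(np.maximum(counts, 1.0))             # Poisson uncertainties
    popt, pcov = curve_fit(peak_model, e, counts, p0=guess, sigma=w)
    chi2 = np.sum(((counts - peak_model(e, *popt)) / w) ** 2)
    return popt, np.sqrt(np.diag(pcov)), chi2 / (len(e) - len(popt))

# Synthetic demo: a window around the 602 keV line of 140Cs
rng = np.random.default_rng(2)
e = np.linspace(590.0, 615.0, 100)
counts = rng.poisson(peak_model(e, 3000.0, 602.3, 2.9, 50.0, -0.02)).astype(float)
popt, perr, red_chi2 = fit_peak(e, counts, guess=(2500.0, 602.0, 2.0, 40.0, 0.0))
```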
However, the p-values suggest that a single Gaussian may be a poor model for some peaks, likely indicating interference from additional unidentified gamma rays. This could be clarified with greatly improved statistics. As shown in Fig. 1, \({}^{239}\)Pu generally generates more gamma-ray activity than \({}^{235}\)U, suggesting more interference. This fact appears to be consistent with all p-values being lower for \({}^{239}\)Pu compared to \({}^{235}\)U. The fitted net counts for \({}^{93}\)Rb, \({}^{92}\)Rb and \({}^{96}\)Y appear to be consistent with the expected net counts; however, the results are below statistical significance and are inconclusive. The contribution from interference is estimated as described below. For \({}^{93}\)Rb, 6 nuclides (A = 90, 134, 138, 143, 144, 145) produce a similar or larger order of magnitude of gamma-ray yield (\(I_{\gamma}\)\(\times\) total fission yield) in the 432 keV region in our data [9; 22]. Based on the estimate of the gamma rays having a measurable effect, the proportion of the gamma-ray yield of \({}^{93}\)Rb with respect to the 6 nuclides is about 6%, which is consistent with the expected net count of \({}^{93}\)Rb. In addition, \({}^{93}\)Rb (432.61 keV, \(I_{\gamma}\) = 0.202) and \({}^{143}\)Ba (431.2 keV, \(I_{\gamma}\) = 0.0276) are expected to produce approximately the same number of counts in our data. The 431.2 keV gamma-ray peak was fitted to obtain its net count, which shows consistency with \({}^{93}\)Rb. As for \({}^{92}\)Rb, 13 nuclides (A = 82, 91, 92, 101, 132, 133, 132, 136, 137, 139, 140, 144, 147) produce a similar or larger order of magnitude of gamma-ray yields than \({}^{92}\)Rb in the 815 keV energy region [9; 22]. The proportion of measured peak counts from \({}^{92}\)Rb with respect to the thirteen nuclides is about 1.4%, which is consistent with the expected net count of \({}^{92}\)Rb. For \({}^{96}\)Y, the strongest gamma-ray energy is at 1750.4 keV with \(I_{\gamma}\) = 0.0235, with interference from \({}^{96m}\)Y (1750.06 keV, \(I_{\gamma}\) = 0.88). The proportion of the gamma-ray yield from \({}^{96}\)Y with respect to the total gamma-ray yields from both \({}^{96}\)Y and \({}^{96m}\)Y is about 3%, which is consistent with the expected net count of \({}^{96}\)Y. Fig. 5 and Fig. 6 show the main results of the present study. The net peak counts, the decision and detection limits (\(L_{c}\) and \(L_{d}\)), and the expected counts calculated from the JEFF3.3 fission yields and the detector simulations, are plotted vs. gamma-ray energy. For both \({}^{235}\)U and \({}^{239}\)Pu, the measured gamma rays for \({}^{100}\)Nb and \({}^{95}\)Sr are statistically significant (\(\alpha\) = 0.05), are above the minimum detection limit (\(\beta\) = 0.05), and are fully consistent with the expected counts. This indicates that the independent fission yields from the JEFF3.3 fission yield library and the gamma-ray intensities from the ENDF/B-VIII.0 decay data sublibrary are reliable for these nuclides. For \({}^{140}\)Cs, the measured gamma-ray yield is statistically significant, and for the case of \({}^{235}\)U its value is consistent with that expected. But for the case of \({}^{239}\)Pu, the measured count is 35% larger than the expected value. This suggests a possible problem with the fission yield value in JEFF3.3, which should be confirmed by a follow-up study.
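The decision and detection limits \(L_{c}\) and \(L_{d}\) are not defined in this excerpt; the sketch below uses the standard Currie formulas for \(\alpha=\beta=0.05\), which may differ in detail from the ANSI prescription the authors followed.

```python
import numpy as np

def currie_limits(b_counts: float):
    """Currie decision limit Lc and detection limit Ld (in net counts) above
    a background of b_counts, for alpha = beta = 0.05 and a paired-blank
    background estimate."""
    lc = 2.33 * np.sqrt(b_counts)
    ld = 2.71 + 4.65 * np.sqrt(b_counts)
    return lc, ld

# Example: a continuum of 400 counts under the peak region
lc, ld = currie_limits(400.0)
print(f"Lc = {lc:.1f} counts, Ld = {ld:.1f} counts")
# A fitted net count must exceed Lc to claim detection (alpha = 0.05), and
# the expected signal must exceed Ld to be reliably detectable (beta = 0.05).
```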
A possible complication for the present type of measurement was pointed out by Hayes et al. [1] in the form of a potential contribution from epithermal-neutron-induced fission. The measured epithermal neutron flux at HFIR for this study is 0.4% (0.7%) of the thermal neutron flux for \({}^{235}\)U (\({}^{239}\)Pu). In this analysis, the contribution from epithermal neutrons is too low to make any significant difference in the data. The role of epithermal neutrons should be further investigated in an actual reactor environment where the fuel composition is precisely known.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \({}^{235}\)U & \(\chi^{2}\)/DOF & Calibrated Centroid & Fitted Centroid & Calibrated FWHM & Fitted FWHM & p-value \\ \hline \({}^{140}\)Cs & 1.63 & 602.33 & 602.28(4) & 1.93 & 2.87(8) & 0.05 \\ \({}^{95}\)Sr & 1.43 & 685.57 & 685.43(6) & 2.01 & 2.50(11) & 0.03 \\ \({}^{100}\)Nb & 1.26 & 535.61 & 535.4(3) & 1.87 & 2.4(10) & 0.02 \\ \({}^{93}\)Rb & 1.21 & 432.67 & 430.38(15) & 1.76 & 3.4(3) & \(<\) 0.01 \\ \({}^{92}\)Rb & 2.26 & 814.98 & 814.6(5) & 2.12 & 1.1(12) & \(<\) 0.01 \\ \({}^{96}\)Y & 1.62 & 1750.50 & 1749.4(3) & 2.73 & 1.9(6) & \(<\) 0.01 \\ \hline \hline \end{tabular} \end{table} Table 3: Best-fit values and fitting statistics for the fission products from \({}^{235}\)U are summarized. Fitted energies are consistent with the calibrated energies, but fitted FWHMs show some divergence. The fit statistics (p-values) indicate acceptable fits for \({}^{100}\)Nb, \({}^{140}\)Cs and \({}^{95}\)Sr (see the Results section for details).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \({}^{239}\)Pu & \(\chi^{2}\)/DOF & Calibrated Centroid & Fitted Centroid & Calibrated FWHM & Fitted FWHM & p-value \\ \hline \({}^{140}\)Cs & 1.90 & 602.33 & 602.03(5) & 1.93 & 3.41(11) & \(<\) 0.01 \\ \({}^{95}\)Sr & 1.37 & 685.57 & 685.45(6) & 2.01 & 2.41(11) & \(<\) 0.01 \\ \({}^{100}\)Nb & 2.74 & 535.61 & 534.75(10) & 1.87 & 2.09(23) & \(<\) 0.01 \\ \({}^{93}\)Rb & 2.92 & 432.68 & 434(10) & 1.76 & 5(18) & \(<\) 0.01 \\ \({}^{92}\)Rb & 1.89 & 814.96 & 815.4(3) & 2.12 & 1.9(10) & \(<\) 0.01 \\ \({}^{96}\)Y & 1.9 & 1750.50 & 1750.19(13) & 2.73 & 2.23(23) & \(<\) 0.01 \\ \hline \hline \end{tabular} \end{table} Table 4: Best-fit values and fitting statistics for the fission products from \({}^{239}\)Pu are summarized. Similar to the case of \({}^{235}\)U, fitted energies are consistent with the calibrated energies, while fitted FWHMs show some divergence. Unlike \({}^{235}\)U, the fit statistics (p-values) indicate poor fit quality for \({}^{100}\)Nb, \({}^{140}\)Cs and \({}^{95}\)Sr due to interference from other gamma rays (see the Results section for details).

## VI Conclusion

The summation method [4] used to estimate the \(\bar{\nu}_{e}\) spectrum depends on accurate data values in the nuclear libraries. To check the fission yield library values, \({}^{235}\)U and \({}^{239}\)Pu samples are irradiated using HFIR, and gamma-ray spectroscopy is used to evaluate the gamma rays from the 8 short-lived fission products which were suggested [5] as a possible source of the spectral bump in the reactor neutrino spectrum in the 5 to 7 MeV region. The gamma-ray yields for \({}^{100}\)Nb and \({}^{95}\)Sr from both \({}^{235}\)U and \({}^{239}\)Pu, as well as \({}^{140}\)Cs from \({}^{235}\)U, are found to be statistically significant and consistent with expectation based on the JEFF3.3 fission yield library. However, an inconsistent result was found for \({}^{140}\)Cs from \({}^{239}\)Pu, which suggests that the JEFF3.3 fission yield value for this nuclide may be incorrect.
The results for the remaining fission products are inconclusive due to insufficient statistics. A follow-on experiment with increased sample sizes and a faster reactor-to-counting-station transfer is under discussion. Overall, the present study underscores the importance of continuous improvement and refinement of nuclear data libraries. Additional experimental data and continued analysis of existing data are important for verifying and improving fission yield values.

Figure 4: Data and fits for the six gamma-ray peaks of interest from \({}^{239}\)Pu fission are shown. The measured spectra are shown by the circled dots with 1-\(\sigma\) uncertainties, and the fits by solid lines. Due to low yield, environmental backgrounds, and Compton scattering from higher-energy photons, gamma-ray peaks from \({}^{97}\)Y and \({}^{142}\)Cs are not detectable, and are omitted in this figure.

Figure 5: Fitted and expected net counts and statistical limits and uncertainties for \({}^{235}\)U. The yields of \({}^{100}\)Nb, \({}^{140}\)Cs and \({}^{95}\)Sr are shown to be consistent with the expected values. \({}^{93}\)Rb, \({}^{92}\)Rb and \({}^{96}\)Y are below the statistical limit of detection, and are excluded from the plot for clarity.

Figure 6: Fitted and expected net counts and statistical limits and uncertainties for \({}^{239}\)Pu. The yields of \({}^{100}\)Nb and \({}^{95}\)Sr are plotted and shown to be consistent with the expected values. The fitted \({}^{140}\)Cs is about 35% larger than the expected value, suggesting a possible problem with the JEFF3.3 fission yield library. \({}^{93}\)Rb, \({}^{92}\)Rb and \({}^{96}\)Y are below the statistical limit of detection, and are excluded from the plot for clarity.

###### Acknowledgements.

The authors would like to thank A. Sonzogni and E. McCutchan for valuable comments. This work was funded by the U.S. National Science Foundation under grants NSF-PHY1242611, NSF-1812504, 1747523, and 1314483. The work at Brookhaven National Laboratory was sponsored by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.
2303.07355
Intensity-based dynamic speckle method using JPEG and JPEG2000 compression
Statistical processing of speckle data enables observation of speed of processes. In intensity-based pointwise dynamic speckle analysis, a map related to speed's spatial distribution is extracted from a sequence of speckle patterns formed on an object under coherent light. Monitoring of time evolution of a process needs storage, transfer and processing of a large number of images. We have proposed lossy compression of these images using JPEG and JPEG2000 formats. We have compared the maps computed from non-compressed and decompressed synthetic and experimental images, and we have proven that both compression formats can be applied in the dynamic speckle analysis.
Elena Stoykova, Blaga Blagoeva, Natalya Berberova-Buhova, Mikhail Levchenko, Dimana Nazarova, Lian Nedelchev, Joongki Park
2023-03-13T12:56:48Z
http://arxiv.org/abs/2303.07355v1
# Intensity-based dynamic speckle method using JPEG and JPEG2000 compression

###### Abstract

Statistical processing of speckle data enables observation of speed of processes. In intensity-based pointwise dynamic speckle analysis, a map related to speed's spatial distribution is extracted from a sequence of speckle patterns formed on an object under coherent light. Monitoring of time evolution of a process needs storage, transfer and processing of a large number of images. We have proposed lossy compression of these images using JPEG and JPEG2000 formats. We have compared the maps computed from non-compressed and decompressed synthetic and experimental images, and we have proven that both compression formats can be applied in the dynamic speckle analysis. (c) 2022 Optica Publishing Group

## 1 Introduction

The dynamic speckle method (DSM) is growing in popularity due to simple data acquisition, high sensitivity and applicability to three-dimensional (3D) objects [1-3]. The method is evolving rapidly thanks to advances in computers and 2D optical sensors [4]. Most frequently, the DSM relies on i) acquisition of time-varying speckle patterns formed on the surface of diffusely reflecting objects under laser illumination, and ii) statistical processing of the speckle intensity data. The aim is to obtain information about the speed of the process that has caused the change in the speckle patterns. To name a few of the reported applications, the DSM has been applied for monitoring of blood flow perfusion in human tissues [5-7], penetration of cosmetic ingredients in human skin [8], ear biometrics [9], bacterial response [10,11], plant growth and leaf chemical contamination [12,13], seed viability [14,15], animal reproduction [16], food quality assessment [17-19], and drying of paints, coatings and polymer thin films [20,21]. The strong impact of micro-changes in topography or refractive index on speckle formation ensures high DSM sensitivity for evaluation of the speed of processes. The speed, which may vary across the object surface, is encoded in the rate of speckle intensity fluctuations in time. A 2D map is built to represent the speed distribution across the object by pointwise computation of a given statistical parameter from a sequence of temporally correlated speckle images. The map is mostly known as an activity map, since it presents an instant picture of areas of faster or slower intensity changes across the object surface. The accuracy of the DSM increases, while the temporal resolution decreases, with the number of images, \(N\), needed for a single map. This number depends on the tested object, but usually it is rather large, e.g. it may reach 64, 128 or 256 images. Thus, building a set of maps at consecutive instants entails capture, storage and transfer of a huge number of images. Data compression becomes a mandatory step for the DSM implementation. Compression of coherent optical signals can be done for real and complex-valued signals [22-24]. In the intensity-based DSM, different algorithms are applied to intensity data [4, 25-29]. The relevant information is not the intensity value itself but its change in time. Based on this DSM feature, we proposed and analyzed several approaches for lossy data compression. As a first solution, we transformed 8-bit encoded speckle images into binary images with only two intensity levels [30]. As a binarization threshold, we applied the 2D distribution of the pointwise estimate of the time-averaged intensity calculated for each pixel.
Using this pointwise threshold, we obtained activity maps which were as informative as the ground truth map (GTM) computed from 8-bit encoded bmp images. The advantage of binarization is its efficiency in processing for non-uniform illumination, when the speckle has spatially varying statistics. The drawback is the need for preprocessing before the binarization. We continued studying speckle data compression by analysis of coarse quantization of intensity [31, 32]. We studied the case when the data were quantized directly at acquisition, without calculation of the average intensity and normalization. We considered as coarse any quantization with fewer levels than at acquisition, e.g. fewer than 256 levels. Efficiency of the coarse quantization depends on the intensity histogram within the acquired images. We proposed usage of uniform scalar quantization for symmetric intensity distributions when the speckle contrast is low. We proved the efficiency of non-uniform quantization for an asymmetric intensity distribution observed for a high-contrast speckle and uniform illumination. We studied also the case of non-uniform illumination, when the histogram of intensity is asymmetric, the average and the variance of the intensity vary across the object, and it is necessary to introduce normalization in the processing algorithm after the quantization. In all analyzed cases, we obtained activity maps comparable to the GTM for a number of levels going down from 256 to 8 or 16, depending on the tested objects. This paper addresses the applicability of two common JPEG compression standards, JPEG and JPEG2000, to speckle images acquired in the intensity-based DSM. Both standards are well recognized and supported and provide significant data compression in image processing. The conventional JPEG format may still claim the most widespread use, while JPEG2000 is well accepted for medical image coding [33]. Both standards are realized by different algorithms and so exhibit different artifacts. Despite the substantial differences between them, they distort the recorded images in a similar way from the DSM point of view, i.e. they change the spatial correlations within each image. This distortion inevitably affects the activity map. JPEG Pleno is not considered in this paper, as it is being developed for light field, holography and point cloud data compression [34]. We focus on the lossy compression scheme, in view of the fact that the original speckle images are not needed at the processing stage and that lossless compression provides rather modest compression ratios. The objective of the paper is to prove that the images compressed by using JPEG or JPEG2000 are useful for the pointwise DSM. Analysis of both compression schemes is done by comparing activity maps built from decompressed grayscale synthetic images and color experimental images to the GTMs from images in bmp format. We limit the analysis to the case of 256 quantization levels, as acceptable for most of the DSM applications. As figures of merit of the studied compression schemes, we use the probability density function (PDF) of the estimate of the statistical parameter characterizing the activity, as well as the structural similarity index (SSI) between the maps built from decompressed and original images. We study compression of low and high contrast speckle for uniform/non-uniform illumination.
The paper is organized as follows: in Sec. 2 we give a brief description of the intensity-based pointwise DSM and analyze distortions in the decompressed speckle images stored in JPEG and JPEG2000 formats. The study presented in Sec. 3 focuses on distortions induced in the activity maps. Both Sec. 2 and Sec. 3 present results for simulated grayscale speckle images. Applicability of both compression schemes is analyzed in Sec. 4 for the case of experimentally acquired images in two dynamic speckle measurements.

## 2 Distortion analysis of decompressed speckle images

### Intensity-based pointwise dynamic speckle method

For convenience, we briefly explain the capture and processing steps of the intensity-based pointwise DSM. The capture is shown in Fig. 1. A 3D object on a vibration-isolated table is illuminated with linearly polarized laser light. The light beam is spatially filtered, expanded and collimated. A 2D optical sensor is focused on the object and captures the light reflected from the object surface. The processes in the object cause phase changes in the complex amplitude of light, which enters the sensor aperture. The sensor records images of intensity distributions that fluctuate in space and time. The images have \(N_{x}\times N_{y}\) pixels at a pixel interval, \(\Delta\). The time interval, \(\Delta t\), between two consecutive images provides correlation of intensities in a sequence \(I_{kl,n}=I\big(k\Delta,l\Delta,n\Delta t\big),\ n=1..N\), recorded at pixel \(\big(k\Delta,l\Delta\big)\) at \(N\) instants. The images are stored using JPEG formats. The processing step of the method is given in Fig. 2. A single sequence of \(N\) correlated speckle images is used to calculate an activity map. A set of \(P\) such sequences for instants \(t_{1},t_{2}...t_{P}\) is processed, with \(t_{i}\) being the time instant of recording the first image in the \(i\)-th sequence. Then, \(P\) activity maps are produced for the processes ongoing in the object. The choice of \(t_{1},t_{2}...t_{P}\) depends on the task, so the image sequences may overlap or be separated by equal/unequal time intervals. In the 3D volume \(N_{x}\times N_{y}\times N\) of speckle data in each image sequence, the data are weakly correlated in space and show some spatial distribution of the temporal correlation radius, \(\tau_{c}\big(k,l\big)=\tau_{c}\big(k\Delta,l\Delta\big)\), of intensity fluctuations. The larger the temporal radius, the larger the correlation, the slower the process and the lower the activity. Different correlation-based algorithms are applied to build the set of activity maps to show the evolution of \(\tau_{c}\big(k,l\big)\) in time. We give in Fig. 2 activity maps for a synthetic object with two circular regions surrounded by a background with much slower intensity fluctuations at \(\tau_{c}=50\Delta t\). The maps were calculated with the modified structure function (MSF) [28]:

\[S_{1}\big(k,l,m\big)=\frac{1}{N-m}\sum_{i=1}^{N-m}\big|I_{kl,i}-I_{kl,i+m}\big| \tag{1}\]

where the integer \(m\) shows the time lag, \(\tau=m\Delta t\), between the compared intensities.

Figure 1: Schematic representation of the intensity-based DSM.

The radius \(\tau_{c}\big(k,l\big)\) for the larger circular region takes values of 8, 12 and 20\(\Delta t\) at \(t=t_{1}\), \(t_{2}\) and \(t_{3}\); for the smaller region, \(\tau_{c}\) is equal to 14, 20 and 30\(\Delta t\) at these moments. The time lag is \(\tau=10\Delta t\). The number of images is 256.
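A minimal numpy sketch of the pointwise MSF of Eq. (1), applied to a stack of \(N\) frames; the function name and the random demo stack are ours, for illustration only.

```python
import numpy as np

def msf_map(stack: np.ndarray, m: int) -> np.ndarray:
    """Pointwise modified structure function of Eq. (1).

    stack: float array of shape (N, Ny, Nx) holding N temporally correlated
    speckle images; m: time lag in frames (tau = m * dt).
    Returns the activity map S1(k, l, m) of shape (Ny, Nx).
    """
    n = stack.shape[0]
    return np.abs(stack[: n - m] - stack[m:]).sum(axis=0) / (n - m)

# Demo on a random stack of 256 frames of 256x256 pixels, lag m = 10
rng = np.random.default_rng(0)
stack = rng.random((256, 256, 256))
activity = msf_map(stack, m=10)   # higher values = faster intensity changes
```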
The images have been simulated as 8-bit encoded bitmap images, as described below, for \(N_{x}\times N_{y}=256\times 256\) and a wavelength of 532 nm. The maps clearly demonstrate the spatial regions of different activity and the MSF decrease at lower activity.

### Distortions in decompressed synthetic speckle patterns

An effective way to study the impact of various compression schemes is to process synthetic speckle patterns with a controllable spatial distribution of activity. In the simulation, we accepted the model of scattering centers which change their positions normal to the object surface. Mutual independence was assumed between the amplitude and the phase of light scattered from a given center, and between the amplitudes and phases at any two centers. Thus, the normally distributed phase change, \(\Delta\varphi_{m}^{kl}\), at point \(\left(k\Delta,l\Delta\right)\) between the moments separated by a time lag \(\tau=m\Delta t,m=1,2...N_{s}<N\), leads to temporal fluctuations of intensity in the optical sensor with a normalized correlation function \(\rho_{kl}\left(\tau=m\Delta t\right)=\exp\left(-\sigma^{2}\left\langle\Delta\varphi_{m}^{kl}\right\rangle\right)\) [35]. Here \(\sigma^{2}\left\langle\Delta\varphi_{m}^{kl}\right\rangle\) is the variance of the phase change. Different models can be used for \(\rho_{kl}\left(\tau=m\Delta t\right)\), but we chose an exponentially decreasing function \(\rho_{kl}\left(\tau\right)=\exp\left[-\tau\,/\,\tau_{c}\left(k,l\right)\right]\) as appropriate for the description of many processes. The standard deviation of the phase variation between any two successively captured images is given by \(\sigma\left\langle\Delta\varphi_{m=1}^{kl}\right\rangle=\sqrt{\Delta t/\tau_{c}\left(k,l\right)}\). The simulation included the following steps:

Step 1: generation of 2D spatial distributions of delta-correlated random phase on the object surface \(\varphi\left(k\delta,l\delta,i\Delta t\right),\ k=1..2N_{x},\ l=1..2N_{y},\ i=1..N\), with \(\delta=\Delta/2\), by using a 2D array of random phase values with uniform distribution from 0 to 2\(\pi\).

Step 2: computation of the phase distribution at \(i\Delta t\) from

\[\varphi\left(k\delta,l\delta,i\Delta t\right)=\varphi\left[k\delta,l\delta,\left(i-1\right)\Delta t\right]+\sqrt{\Delta t/\tau_{c}\left(k,l\right)}\,N(0,1),\]

where \(N(0,1)\) is a newly generated (for each pixel and each instant) normally distributed random number with zero mean and variance equal to 1, \(k=1..N_{x},l=1..N_{y},i=1..N\).

Step 3: generation of the complex amplitude on the object surface for intensity distribution \(I_{0}\left(k\delta,l\delta\right)\) of the laser beam at the instant \(i\Delta t\), \(U_{S}=\sqrt{I_{0}\left(k\delta,l\delta\right)}\exp\left\{-\,j\varphi\left(k\delta,l\delta,i\Delta t\right)\right\}\).

Step 4: generation of the complex amplitude of the light field on the sensor aperture, \(U_{cam}=FT^{-1}\left\{H\cdot FT\left\{U_{S}\right\}\right\}\), with \(H\) given by a _circ_ function for a diffraction-limited 4\(f\) capture system [36] and \(FT\left\{\cdot\right\}\) denoting the Fourier transform.

Step 5: summation of the intensity values \(\left|U_{cam}\right|^{2}\) in a window of size 2\(\times\)2 pixels for simulating integration of the speckle by the camera pixels; time averaging of the speckle at acquisition is not simulated.
Step 6: recording of the generated speckle images of \(N_{x}\times N_{y}\) pixels at a pixel interval, \(\Delta\), as 8-bit encoded grayscale images in bitmap format (file extension bmp), JPEG format (file extension jpg) at a given quality, Q, and JPEG2000 format (file extension jp2) at a given compression ratio, \(\eta\) (a code sketch of this pipeline is given at the end of this subsection).

Generation of the same sequence of correlated in time speckle images in the three formats allowed for adequate comparison of the activity maps from the compressed data to the GTMs from the bitmap images. The histogram of intensity in the original bitmap image is adjusted to cover the entire range of 256 levels. The wavelength was 532 nm. Analysis of compression efficiency was done for symmetric (low contrast speckle) and asymmetric (high contrast speckle) intensity distributions in the speckle images. The speckle contrast was controlled by the transfer function, \(H\). We performed step 4 simultaneously for a uniform and a Gaussian beam, \(I_{0}\left(k\Delta,l\Delta\right)=\exp\left\{-\left[\left((k-N_{x}/2)\Delta\right)^{2}+\left((l-N_{y}/2)\Delta\right)^{2}\right]/\Omega^{2}\right\}\), \(k=1,2...N_{x},\ l=1,2...N_{y}\). The parameter \(\Omega\) gives the spread of the laser beam on the object surface. Study of the case of non-uniform illumination, causing spatial variation of speckle statistics, is critical for strongly compressed images. For non-uniform illumination, the histogram of the intensity within the images is asymmetric independently of the speckle contrast, and it is not an estimate of the intensity PDF.

Figure 2: Capture of a set of sequences of speckle images at instants \(t_{1},t_{2}\) and \(t_{3}\) (top) and computation of a set of MSF maps related to these instants (bottom); the maps are plotted with the same color scale and show a decrease of the MSF from left to right.

Figure 3: Sections of synthetic grayscale speckle images (64\(\times\)64 pixels) in bmp, jpg and jp2 formats (top) and intensity distributions along column 32 in the presented image sections.

In order to study the distortions introduced by compression, we focused first on distortions in the speckle images. For the purpose, we generated images of size 256\(\times\)256 pixels and recorded them at different degrees of compression. We compared in Fig. 3 a section of an 8-bit encoded grayscale bitmap image with asymmetric intensity distribution for uniform illumination to its jpg version recorded with Q = 10 and its jp2 version recorded at \(\eta=6\). Both compressed images are 6 times smaller in size (about 10 KB) than the bitmap image (65 KB). The intensity distributions along column 32 in the presented sections are also given in Fig. 3. We especially chose to depict in Fig. 3 the case of a strongly compressed grayscale image. As is seen, the image decompressed from the jpg format clearly shows the blocking artifact typical for this type of compression. The section decompressed from the jp2 image shows greater resemblance to the bmp section. We built the intensity histograms for the bmp and the decompressed images. Again, we present in Fig. 4 the results only for the case of a high degree of compression (Q = 10 and \(\eta\) = 6). The histograms are shown for low and high speckle contrast under uniform and Gaussian illumination. For the latter case, we took \(\Omega=400\Delta\) for the image size 256\(\times\)256 pixels. This choice of \(\Omega\) corresponds to a rather small drop of the intensity at the image periphery compared to the center. Nevertheless, even such a slight non-uniformity severely increased the blocking artifacts in the jpg images at a large degree of compression, as is seen from Fig. 4(b) and Fig. 4(d).
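For concreteness, here is the promised compact sketch of Steps 1-6 under uniform illumination; the circ-filter cut-off, the frame count and the use of Pillow (whose JPEG2000 encoder requires the OpenJPEG library) are our illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
Nx = Ny = 256; N = 64; dt_over_tau = 1.0 / 20.0      # Delta_t / tau_c(k, l)

# Step 1: delta-correlated random phase on the twice-finer grid (delta = Delta / 2)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(2 * Ny, 2 * Nx))

# Transfer function H: circ aperture of a diffraction-limited 4f system
fy, fx = np.meshgrid(np.fft.fftfreq(2 * Ny), np.fft.fftfreq(2 * Nx), indexing="ij")
H = (np.hypot(fx, fy) < 0.1).astype(float)           # illustrative cut-off radius

for i in range(N):
    # Step 2: random-walk phase evolution giving an exponential correlation
    phi += np.sqrt(dt_over_tau) * rng.standard_normal(phi.shape)
    # Steps 3-4: object field (uniform illumination, I0 = 1) filtered to the sensor
    U_cam = np.fft.ifft2(H * np.fft.fft2(np.exp(-1j * phi)))
    # Step 5: 2x2 binning emulates integration by the camera pixels
    I = np.abs(U_cam) ** 2
    I = I.reshape(Ny, 2, Nx, 2).sum(axis=(1, 3))
    # Step 6: 8-bit quantization and storage in the three formats
    img = Image.fromarray(np.uint8(255 * I / I.max()))
    img.save(f"speckle_{i:03d}.bmp")
    img.save(f"speckle_{i:03d}.jpg", quality=10)                              # JPEG, Q = 10
    img.save(f"speckle_{i:03d}.jp2", quality_mode="rates", quality_layers=[6])  # eta = 6
```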
The JPEG and JPEG2000 compression schemes change the temporal correlation within the intensity sequences used to form the entries for an activity map. We generated 256 speckle images with the same correlation radius in all points, \(\tau_{c}=20\Delta t\), and estimated the mean normalized temporal correlation function from

\[\hat{\rho}(m)=\frac{1}{N_{x}N_{y}\left(N-m\right)}\sum_{k=1}^{N_{x}}\sum_{l=1}^{N_{y}}\frac{1}{\sigma_{kl}^{2}}\sum_{i=1}^{N-m}\left(I_{kl,i}-\bar{I}_{kl}\right)\left(I_{kl,i+m}-\bar{I}_{kl}\right) \tag{2}\]

\[\sigma_{kl}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(I_{kl,i}-\bar{I}_{kl}\right)^{2},\qquad\bar{I}_{kl}=\frac{1}{N}\sum_{i=1}^{N}I_{kl,i} \tag{3}\]

The images were generated both for speckle with low and high contrast, under uniform and Gaussian illumination. Note that the estimate (2) is biased with respect to the real normalized function, \(\rho_{kl}(\tau)=\exp[-\tau/\tau_{c}\left(k,l\right)]\), which is the same for all points. The bias is due to determination of the estimates (3) of the mean value, \(\bar{I}_{kl}\), and the variance, \(\sigma_{kl}^{2}\), at each point from a finite and rather short, with respect to \(\tau_{c}\), sequence of intensity values. We show in Fig. 5 the fall of \(\hat{\rho}(m)\) with the time lag for bmp images, jpg images with Q = 10 and jp2 images with \(\eta\) = 6. The presented curves correspond to the high contrast speckle under uniform illumination. The curves for the low contrast speckle are practically the same. The same is true for Gaussian illumination, taking into account that the normalization in (2) is pointwise. The result in Fig. 5 proves that both JPEG schemes change the temporal correlation for the points of successively acquired correlated speckle images.

## 3 Distortions in Activity Maps from Decompressed Speckle Images

### Analysis at constant activity

Raw speckle data lead to strong fluctuations of the activity estimates from point to point and spread their PDFs, thus worsening the sensitivity of the method. If the simulation is made at spatially constant activity, i.e. with the same value of the temporal correlation radius in all pixels, \(\tau_{c}\left(k,l\right)=const\), it is possible to build a histogram of the estimate used for activity evaluation. For an image containing 256\(\times\)256 points, the histogram is built from 65536 entries and can be considered as a good approximation to the PDF of the estimate. We simulated sequences of correlated images at constant activity to study the impact of data compression on the PDFs of the estimates. In Fig. 6, we plotted the histograms obtained from activity maps built at a time lag \(\tau=10\Delta t\) from decompressed jpg images at Q = 70 (image size 32.4 KB), 30 (19.5 KB) and 10 (9.77 KB) and from decompressed jp2 images at \(\eta=2\) (31.4 KB), 3 (20.7 KB) and 6 (10 KB). In this way, we compare the maps and the histograms for compressed images of almost equal size. The size of the bmp images was 65 KB. For all simulated cases, \(\tau_{c}=20\Delta t\). The presented histograms correspond to processing of high contrast speckle for illumination at 532 nm. The histograms in Fig. 6(a) and Fig. 6(b) correspond to uniform illumination and hence to the estimate \(S_{1}\) given by Eq. (1).
Figure 4: Histograms of intensity distributions within a grayscale bmp speckle image with 256\(\times\)256 pixels and its jpg and jp2 versions with 6 times smaller size; (a) low contrast speckle, uniform illumination, (b) low contrast speckle, Gaussian illumination, (c) high contrast speckle, uniform illumination, (d) high contrast speckle, Gaussian illumination.

Figure 5: Estimates of the normalized temporal correlation function of intensity fluctuations for a sequence of 256 images recorded in bmp, jpg and jp2 formats; high contrast speckle, uniform illumination.

This algorithm, however, is not appropriate for the case of Gaussian illumination, and we used the following algorithm:

\[S_{2}\big(k,l,m\big)=\frac{1}{N-m}\sum_{i=1}^{N-m}\frac{\big|I_{kl,i}-I_{kl,i+m}\big|}{I_{kl,i}+I_{kl,i+m}+q} \tag{4}\]

The parameter \(q\) in (4) stabilizes the algorithm for strongly compressed data. We used \(q\) = 1 in the results presented below. The parameter \(\Omega\) was equal to \(400\Delta\). As is seen, both compression schemes shift the histograms of the activity estimates with respect to the histogram obtained for the bmp images. As a whole, compression narrows the histograms for the estimate \(S_{1}\), and the opposite holds for \(S_{2}\). The compression impact is greater for the JPEG compression scheme.

### Analysis at spatially varying activity

The objective of the DSM is to indicate areas of faster or slower changes on the object surface due to some processes of various origin. The studied compression schemes modify substantially the recorded 8-bit images. Given the fact that absolute values of intensity are not important, the lossy data compression is acceptable if it keeps information about i) the spatial variation of the speed of the processes across the object and ii) the evolution of this speed in time. We first analyzed the efficiency of lossy JPEG or JPEG2000 compression for the task of visualization of the spatial distribution of activity. This was done by simulation for an object with two compact regions of a rapidly evolving process that are buried in a background with slower temporal variation of intensity. As in our previous studies [32], we formed the higher activity regions as the logos "IOMT" and "ETRI" of both research institutions involved in the current analysis. A test object thus created is suitable for checking the efficiency of the DSM for detection of relatively small areas of higher activity with sharp borders. We simulated both high and low contrast speckle, but here we present only the high contrast case. The simulation parameters were as follows: temporal correlation radii for the logos and the background equal to \(\tau_{cl}=10\Delta t\) and \(\tau_{cb}=40\Delta t\) respectively, time lag \(\tau=10\Delta t\) (\(m=10\)), image size \(256\times 256\) pixels, \(N=256\), wavelength 532 nm. The maps from the bmp images are the GTMs. They are shown in Fig. 7 for uniform and Gaussian illumination and the estimators \(S_{1}\) and \(S_{2}\). For illustration, we included in Fig. 7(b) the map obtained with \(S_{1}\) from patterns acquired under non-uniform illumination. No smoothing was applied to the maps to decrease the fluctuations. We see that the maps in Fig. 7(a) and Fig. 7(c) correctly reflect the activity in the object. The sharp borders of the logo regions are also well reconstructed due to the pointwise processing. The result in Fig. 7(b) reflects the non-uniform intensity distribution in the laser spot.
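A sketch of the normalized estimator of Eq. (4), in the same vectorized style as the \(S_{1}\) sketch above, with \(q=1\) as in the text.

```python
import numpy as np

def s2_map(stack: np.ndarray, m: int, q: float = 1.0) -> np.ndarray:
    """Pointwise normalized estimator of Eq. (4); the additive constant q
    stabilizes the ratio for strongly compressed, near-constant sequences."""
    n = stack.shape[0]
    num = np.abs(stack[: n - m] - stack[m:])
    den = stack[: n - m] + stack[m:] + q
    return (num / den).sum(axis=0) / (n - m)
```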
The maps from the decompressed images for uniform illumination are shown in Fig. 8 for 2, 3 and 6 times smaller size of the compressed images compared to the bmp images. For better evaluation of the compression impact, we computed maps of the SSI (Fig. 9) between the activity maps from bmp and decompressed jpg and jp2 images. The same scale of the maps in Fig. 8 and Fig. 9 helps to visualize better the impact of the degree of compression. The higher activity in the regions of the logos is properly detected even at a large degree of compression. Halving the image size with JPEG or JPEG2000 keeps the similarity of the activity maps from the decompressed images to the GTM rather high. Further decrease of the image size leads to artifacts in the activity maps, especially to the blocking artifact typical for JPEG compression.

Fig. 6: Histograms of the activity estimates for uniform illumination, \(S_{1}\) (a,b), and Gaussian illumination, \(S_{2}\) (c,d), determined from bmp images and decompressed JPEG (a,c) and JPEG2000 (b,d) images at different degrees of compression and constant activity with \(\tau_{c}=20\Delta t\); \(N=256\), \(\tau=10\Delta t\).

Fig. 7: Ground truth activity maps for an object with two logos and uniform background: (a) \(S_{1}\), uniform illumination, (b) \(S_{1}\), Gaussian illumination, (c) \(S_{2}\), Gaussian illumination; \(\tau_{cl}=10\Delta t\), \(\tau_{cb}=40\Delta t\), \(N=256\), \(\tau=10\Delta t\).

Fig. 8: Activity maps from decompressed images for an object with two logos and a uniform background under uniform illumination; the type and degree of compression are given under each map; \(\tau_{cl}=10\Delta t\), \(\tau_{cb}=40\Delta t\), \(N=256\), \(\tau=10\Delta t\).

The similarity of the activity map to the GTM is better for the JPEG2000 algorithm and is greater in the regions of higher activity (regions of the logos). It remains comparatively high in these regions even for low quality of JPEG compression or a high compression ratio of JPEG2000. The mean SSI for comparing the maps from decompressed jpg images with Q = 70, 30 and 10 to the GTM is equal to 0.955, 0.783 and 0.503, respectively. The same index in the case of the jp2 format with \(\eta\) = 2, 3 and 6 is equal to 0.992, 0.925 and 0.639, respectively. Actually, similarity between the GTM and the map from decompressed images is not so critical. The vital requirement is to differentiate between the regions of different activity. The observed artifacts are stronger in the background region, but they do not interfere with the correct description of activity for the used test object. This means that lossy JPEG or JPEG2000 compression is fully appropriate for tasks dedicated to visualization of spatial regions with different speed of the ongoing processes. The results of normalized processing of decompressed images for Gaussian illumination are shown in Fig. 10. The reconstruction of the logos is good, even when the image size decreases from 65 to 10 KB. The blocking artifacts are removed by the normalization in Eq. (4). The mean SSI for comparison of the maps computed from jpg images with Q = 70, 30 and 10 to the GTM is 0.645, 0.467 and 0.410, respectively. Compared to uniform illumination, the SSI decrease is greater. This index is again larger for the JPEG2000 compression: it achieves 0.881, 0.646 and 0.486 for \(\eta\) = 2, 3 and 6, respectively. Despite the increasing dissimilarity of the decompressed jpg and jp2 images to the original bmp images for non-uniform illumination, lossy compression using the jpg and jp2 formats is completely applicable for the DSM.
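The SSI comparisons quoted above can be reproduced with, e.g., scikit-image's structural_similarity; a sketch, with the data range taken from the two activity maps themselves.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssi_between_maps(gtm: np.ndarray, test: np.ndarray):
    """Mean SSI and the pixelwise SSI map between a ground-truth activity map
    and a map computed from decompressed images (cf. Fig. 9)."""
    lo = min(gtm.min(), test.min())
    hi = max(gtm.max(), test.max())
    mean_ssi, ssi_map = structural_similarity(gtm, test,
                                              data_range=float(hi - lo), full=True)
    return mean_ssi, ssi_map
```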
## 4 Compression of experimental speckle images

We proved the applicability of JPEG and JPEG2000 compression in real dynamic speckle measurements by processing data from two drying experiments. The first experiment was conducted with a metal coin covered by a non-transparent paint. The main advantage of this object is the complicated relief formed by grooves and embossments. The different speed of paint evaporation over these relief elements and the flat coin surface makes possible an effective evaluation of the detrimental effect of compression on activity visualization. The second experiment included capture of speckle images for drying of a droplet of water and methanol solutions of the azopolymer poly[1-[4-(3-carboxy-4-hydroxy-phenylazo)benzene-sulfonamido]-1,2-ethanediyl, sodium salt] at different temperatures. We use the abbreviation PAZO as a short name for this polymer from Sigma Aldrich. We use thin layers of this anisotropic material for polarization holographic recording. The DSM is highly suitable for monitoring of drying of deposited thin layers of PAZO. The observed droplet was an object with a smoothly changing thickness. The activity maps obtained for the droplet may be spatially inhomogeneous and may evolve in time. This makes the drying droplet a suitable object for checking whether processing of the decompressed jpg or jp2 images provides the same results as those obtained from the bmp images. The experiments were done using the acquisition set-up in Fig. 1. A color CMOS camera X06c-s (Baumer) recorded images with 780\(\times\)582 pixels at a pixel pitch of 8.3 \(\mu\)m. The images were recorded with an exposure time of 20 \(\mu\)s at \(\Delta t\) equal to 250 ms. The environmental temperature was 25\({}^{\circ}\)C. A He-Ne laser emitting at 632.8 nm illuminated the objects on a vibration-isolated table. Linear polarization of the light was checked with a PAX5710VIS-T polarimeter (Thorlabs). We present here the results obtained for the PAZO water solution. For the experiment, 20 mg of PAZO were dissolved in 400 \(\mu\)l of water to obtain the concentration used usually for deposition of PAZO thin films. A 10 \(\mu\)l droplet was spread on a microscope glass slide, placed on a hot stage THMS 600 (Linkam Scientific). The stage kept the object temperature at a pre-set value. The glass slide stayed for 5 minutes on the hot stage to reach thermal equilibrium before deposition of the droplet. We recorded 10 sets of 256 speckle patterns each at 30, 40, 50 and 60\({}^{\circ}\)C, with a 2-minute interval between two consecutively recorded sets. The measurement at each temperature was done with a new droplet.

Figure 9: Maps of SSI for comparison of activity maps from decompressed and bmp images for an object with two logos and uniform background under uniform illumination; the type and degree of compression are given under each map; \(\tau_{cl}=10\Delta t\), \(\tau_{cb}=40\Delta t\), \(N=256\), \(\tau=10\Delta t\).

Figure 10: Activity maps from decompressed images for an object with two logos and a uniform background under Gaussian illumination; the type and degree of compression are given under each map; \(\tau_{cl}=10\Delta t\), \(\tau_{cb}=40\Delta t\), \(N=256\), \(\tau=10\Delta t\).
Exemplary speckle images for the coin and the droplet of a polymer solution in the bmp format and after decompression of the jpg and jp2 versions are given in Fig. 11. The images are plotted with the same scale. The size of the bmp image is 1.29 MB; the jpg and jp2 images are 138 KB and 132 KB respectively, i.e. about 10 times smaller. The GTM and the maps from the decompressed jpg and jp2 images in the coin experiment are shown in Fig. 12 and Fig. 13. As is seen in Fig. 11, the coin surface is not uniformly illuminated, and to extract properly the activity map, we used the normalized estimate

\[S_{1}^{\prime}(k,l,m)=\frac{1}{N-m}\sum_{i=1}^{N-m}\frac{1}{\sigma_{kl}}\left|I_{kl,i}-I_{kl,i+m}\right| \tag{5}\]

where the estimate of the standard deviation, \(\sigma_{kl}\), is given by formula (3). This normalization provided better results than the estimate (2). The files in jpg format were recorded at Q = 95, 90, 70, 50 and 30, thus producing compressed image sizes equal to 138 KB, 80.8 KB, 36.4 KB, 23.9 KB and 15.0 KB respectively. The obtained activity maps resemble the coin relief with good spatial resolution. Note that this is an activity map, and not a 3D reconstruction of the object. The activity maps are satisfactory for Q \(\geq\) 50. Compression is about 55 times at Q = 50. Further decrease of the compressed image size worsens the map beyond acceptance. The chosen normalization of the estimate masks the blocking artifacts. The activity maps for jp2 in Fig. 13 correspond to \(\eta\) = 20, 30, 40, 50 and 60. The maps are plotted with the same scale as in Fig. 12: the normalized estimate (5) varies from 0.7 to 1.3. The maps at the highest compression ratios, \(\eta\) = 50 and 60, exhibit acceptable quality of visualization. As a whole, one may conclude that JPEG2000 outperforms JPEG for activity visualization. Both compression schemes provide good activity maps at a rather high degree of compression. We computed the activity maps for the droplet for all sets of images acquired at 30, 40, 50 and 60\({}^{\circ}\)C using the estimates \(S_{1}\) and \(S_{2}\). The second estimate is more appropriate in this case due to the non-uniformity of intensity in the laser spot and the lower reflectivity of the droplet. However, we present in the paper the non-normalized estimator to visualize the artifacts induced by compression. In Fig. 14, we compare the GTM for one of the sets acquired at 40\({}^{\circ}\)C with the maps from the decompressed jpg and jp2 images with a 10, 20 and 30 times smaller size. The maps from the decompressed images are informative regardless of the observed artifacts. The mean SSI for comparing the maps from the decompressed images and the GTM is 0.924, 0.846 and 0.739 for the jpg compression and 0.941, 0.892 and 0.791 for jp2 images at 10, 20 and 30 times decrease of the size.

Fig. 11: Image plots for speckle data recorded in bmp, jpg and jp2 formats in the drying experiments with a coin covered with paint (top) and a droplet of a polymer solution on a hot stage (bottom); wavelength 632.8 nm.

Fig. 12: Activity maps for a coin covered with paint; maps are built from 256 decompressed jpg images recorded with different quality and compared to the GTM from images in the bmp format; \(\tau=10\Delta t\).
Fig. 13: Activity maps for a coin covered with paint; maps are built from 256 decompressed jp2 images recorded with different compression ratios and compared to the GTM from images in the bmp format; \(\tau\) = 10 \(\Delta t\). For all sets of speckle images acquired at 30, 40, 50 and 60\({}^{\circ}\)C, we determined the time dependence of the average MSF estimate \(S_{1}\) at lag \(\tau=10\,\Delta t\) by averaging over an area of 100 by 100 pixels on the activity maps around the center of the droplet. This was done for the maps from sequences of 256 bmp and decompressed images of 10 and 20 times smaller size. The 10 000 values of the estimate in the chosen spatial area lead to a smooth decrease of the average estimate with time. To illustrate the evolution of the activity maps, we present in Fig.15 the maps from bmp, jpg and jp2 images at three starting times, \(t_{1}\), \(t_{2}\) and \(t_{3}\), of image acquisition. The images were acquired at 60\({}^{\circ}\)C. In Fig.15, the size of the compressed images is 10 times smaller than the 1.29 MB size of the bmp images. At \(\eta=10\), the maps from the decompressed images strongly resemble the GTM. We plotted in Fig.16 the average MSF estimate as a function of time for 60\({}^{\circ}\)C. The values of the average MSF for the bmp and jpg images are rather close, although the difference between the curves increases at \(\eta=10\) and decreases at \(\eta=20\). For the jp2 compressed images, the averaged MSF is higher than for the bmp images. The difference between the curves for the bmp and jp2 images increases with \(\eta\) and slightly with time. Despite the difference between the bmp and jp2 curves, this compression scheme still enables correct characterization of the polymer droplet drying process, because the relative changes of the estimate, and not its absolute value, are informative. Figure 14: Comparison of \(S_{1}\) activity maps from decompressed images and from bmp images for a droplet of polymer solution; the numbers in the brackets show the ratio between the bmp and the compressed image sizes. Figure 15: Comparison of \(S_{1}\) activity maps from decompressed images and from bmp images for a droplet of polymer solution at a compression ratio equal to 10; the times in minutes show the start of acquisition with respect to the beginning of the experiment. Figure 16: Decrease of the activity estimate \(S_{1}\) in time for a drying droplet of a polymer solution at 60\({}^{\circ}\)C for processing of bmp and decompressed jpg (top) and jp2 (bottom) images at compression ratios 10 and 20. ## 5 Conclusions In summary, we proposed JPEG and JPEG2000 lossy compression for 2D intensity-based dynamic speckle analysis. The input data for this analysis are sequences of temporally correlated speckle images acquired for laser-illuminated objects. The output is a 2D activity map, which shows the regions of fast or slow intensity changes on the object surface and thus the speed of the process causing these changes. The sequence needed for a single activity map comprises dozens of speckle images. A large number of maps is required to analyze the evolution of a process. Data compression thus becomes a necessity, especially in view of the data redundancy characterizing the DSM. How effective the JPEG and JPEG2000 formats are for compressing dynamic speckle data is not a trivial question. On the one hand, this analysis is predominantly qualitative: it simply indicates areas with different process speeds. As we have proven by processing synthetic and experimental speckle data, both
compression formats provide high-quality activity visualization at comparatively high compression ratios, e.g., a 50-fold decrease of size for color images. On the other hand, the DSM has potential for quantitative characterization. Being transform-based approaches, JPEG and JPEG2000 change the temporal correlation between the intensity values at a point to an extent that depends on the compression ratio. This inevitably changes the functional dependence of the estimate statistics on time when observing a certain process. We limited our analysis of this issue to evaluating the change inflicted by compression on the average value of the activity estimate for a drying droplet of a polymer solution. We observed a change in the time dependence of the average with respect to the result for the bmp images, but this change was rather small. Given the significant savings in storage provided by JPEG or JPEG2000 compression, our assessment is that the distortion of time dependencies is of secondary importance. We plan, however, to study more thoroughly the impact of JPEG compression on the statistical properties of the estimates for the case of time-varying activity. The main result of the performed study is that the JPEG and JPEG2000 formats can be recommended for storage of images in the DSM. Similarly to medical imaging, JPEG2000 slightly outperforms the conventional JPEG. **Funding** Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea Government (MSIT) [2019-0-00001, Development of Holo-TV Core Technologies for Hologram Media Services] **Acknowledgments** E. Stoykova thanks the European Regional Development Fund within the Operational Programme "Science and Education for Smart Growth 2014-2020" under the Project CoE "National centre of Mechatronics and Clean Technologies" BG05M2OP001-1.001-0008. M. Levchenko thanks the 2020 Plenoptic Imaging project for supporting his PhD training. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 956770. **Disclosures** The authors declare no conflicts of interest related to this article. **Data Availability** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2310.10711
Double Trouble: Two Transits of the Super-Earth GJ 1132 b Observed with JWST NIRSpec G395H
The search for rocky planet atmospheres with JWST has focused on planets transiting M dwarfs. Such planets have favorable planet-to-star size ratios, enhancing the amplitude of atmospheric features. Since the expected signal strength of atmospheric features is similar to the single-transit performance of JWST, multiple observations are required to confirm any detection. Here, we present two transit observations of the rocky planet GJ 1132 b with JWST NIRSpec G395H, covering 2.8-5.2 $\mu$m. Previous HST WFC3 observations of GJ 1132 b were inconclusive, with evidence reported for either an atmosphere or a featureless spectrum based on analyses of the same dataset. Our JWST data exhibit substantial differences between the two visits. One transit is consistent with either a H$_2$O-dominated atmosphere containing ~1% CH$_4$ and trace N$_2$O ($\chi^{2}_{\nu}$ = 1.13) or stellar contamination from unocculted starspots ($\chi^{2}_{\nu}$ = 1.36). However, the second transit is consistent with a featureless spectrum. Neither visit is consistent with a previous report of HCN. Atmospheric variability is unlikely to explain the scale of the observed differences between the visits. Similarly, our out-of-transit stellar spectra show no evidence of changing stellar inhomogeneity between the two visits - observed 8 days apart, only 6.5% of the stellar rotation rate. We further find no evidence of differing instrumental systematic effects between visits. The most plausible explanation is an unlucky random noise draw leading to two significantly discrepant transmission spectra. Our results highlight the importance of multi-visit repeatability with JWST prior to claiming atmospheric detections for these small, enigmatic planets.
E. M. May, Ryan J. MacDonald, Katherine A. Bennett, Sarah E. Moran, Hannah R. Wakeford, Sarah Peacock, Jacob Lustig-Yaeger, Alicia N. Highland, Kevin B. Stevenson, David K. Sing, L. C. Mayorga, Natasha E. Batalha, James Kirk, Mercedes Lopez-Morales, Jeff A. Valenti, Munazza K. Alam, Lili Alderson, Guangwei Fu, Junellie Gonzalez-Quiles, Joshua D. Lothringer, Zafar Rustamkulov, Kristin S. Sotzen
2023-10-16T18:00:00Z
http://arxiv.org/abs/2310.10711v1
# Double Trouble: Two Transits of the Super-Earth GJ 1132 b Observed with JWST NIRSpec G395H ###### Abstract The search for rocky planet atmospheres with JWST has focused on planets transiting M dwarfs. Such planets have favorable planet-to-star size ratios, enhancing the amplitude of atmospheric features. Since the expected signal strength of atmospheric features is similar to the single-transit performance of _JWST_, multiple observations are required to confirm any detection. Here, we present two transit observations of the rocky planet GJ 1132 b with _JWST_ NIRSpec G395H, covering 2.8-5.2 \(\,\mu\)m. Previous _HST_ WFC3 observations of GJ 1132 b were inconclusive, with evidence reported for either an atmosphere or a featureless spectrum based on analyses of the same dataset. Our _JWST_ data exhibit substantial differences between the two visits. One transit is consistent with either a H\({}_{2}\)O-dominated atmosphere containing \(\sim 1\%\) CH\({}_{4}\) and trace N\({}_{2}\)O (\(\chi^{2}_{\nu}=1.13\)) or stellar contamination from unocculted starspots (\(\chi^{2}_{\nu}=1.36\)). However, the second transit is consistent with a featureless spectrum. Neither visit is consistent with a previous report of HCN. Atmospheric variability is unlikely to explain the scale of the observed differences between the visits. Similarly, our out-of-transit stellar spectra show no evidence of changing stellar inhomogeneity between the two visits -- observed 8 days apart, only 6.5% of the stellar rotation rate. We further find no evidence of differing instrumental systematic effects between visits. The most plausible explanation is an unlucky random noise draw leading to two significantly discrepant transmission spectra. Our results highlight the importance of multi-visit repeatability with _JWST_ prior to claiming atmospheric detections for these small, enigmatic planets. + Footnote †: journal: ApJL ## 1 Introduction The quest to detect atmospheres on rocky exoplanets requires pushing our observatories to their limits. Even for rocky planets transiting M dwarfs -- the most promising for atmospheric detections -- transmission spectra features have an expected amplitude of \(\lesssim\) 20 ppm. Atmospheric features of such planets are thus perilously close to the pre-launch expected noise floor of JWST instruments (\(\sim\) 20 ppm for NIRISS, \(\sim\) 9 ppm for NIRCam, and \(<\) 10 ppm for NIRSpec; Greene et al., 2016; Schlawin et al., 2021; Rustamkulov et al., 2022). This comparable size of features and noise therefore calls for extra care before claiming an atmospheric detection. In particular, the repeatability of any signal between visits with the same instrument is critical to confirm that observed features are astrophysical in nature.
There has been no definitive or non-disputed detection of an atmosphere on a rocky exoplanet to date. Rocky bodies in the solar system exhibit a "cosmic shoreline", which divides bodies with and without atmospheres according to the prevalence of atmospheric escape processes (Zahnle and Catling, 2017). Such a classification scheme may be a logical starting point for selecting promising rocky exoplanets for atmospheric characterization. However, planets orbiting M dwarfs may exhibit a significantly different cosmic shoreline, or no cosmic shoreline at all, due to the higher stellar activity and extreme-UV (1-912 Å) flux levels compared to the Sun, which can strip away planetary atmospheres (e.g., Owen and Jackson, 2012; Becker et al., 2020; Dong et al., 2020; do Amaral et al., 2022). In fact, recent secondary eclipse observations with the Mid-Infrared Instrument (MIRI) on _JWST_ of TRAPPIST-1 b and TRAPPIST-1 c, two rocky M-dwarf planets, are consistent with no (or minimal) atmosphere (Greene et al., 2023; Zieba et al., 2023). Several _JWST_ Cycle 1 programs are surveying rocky planets orbiting M dwarfs, aiming to assess the survivability of their atmospheres (e.g., _JWST_ GO #1981, 2512, 2589). Through _JWST_ GO #1981 (PIs: Stevenson and Lustig-Yaeger), we are observing five rocky planets around M dwarfs with orbital and planetary properties close to the Solar System's cosmic shoreline. Previous results from this program include two transits of LHS 475 b, which are consistent with no atmosphere or a high-altitude cloud deck (Lustig-Yaeger and Fu et al., 2023), and two transits of GJ 486 b, which are consistent with a H\({}_{2}\)O-dominated atmosphere or a spectrum contaminated by stellar activity (Moran and Stevenson et al., 2023). Here we present transmission spectra observations for the third planet in the program, GJ 1132 b. GJ 1132 b (Berta-Thompson et al., 2015) is a rocky super-Earth (1.30 R\({}_{\oplus}\), 1.66 M\({}_{\oplus}\), \(T_{\rm eq}=529\) K; Bonfils et al., 2018) orbiting an M dwarf (0.2105 R\({}_{\odot}\), 3261 K; Bonfils et al., 2018). With a favorable planet-star radius ratio, corresponding to a transit depth of \(\sim\)0.3%, GJ 1132 b has previously been suggested as a good target for atmospheric characterization (e.g. Schaefer et al., 2016). Assuming a representative atmospheric mean molecular mass of \(\mu=\) 10 AMU, GJ 1132 b should have transmission spectra features of \(\sim\) 20 ppm. Previous transmission spectra observations of GJ 1132 b have a colorful history of claimed atmospheric detections. Southworth et al. (2017) observed nine transits with the MPG 2.2 m telescope and suggested that deeper transit depths in the \(z\) and K bands were caused by a H\({}_{2}\)-dominated atmosphere with H\({}_{2}\)O and/or CH\({}_{4}\). Diamond-Lowe et al. (2018) revisited GJ 1132 b with five optical transits (0.64-1.04 \(\mu\)m) with the Magellan Clay telescope LDSS-3C instrument, finding a featureless transmission spectrum ruling out the previously-claimed atmosphere. Swain et al. (2021) analyzed five near-infrared (1.1-1.7 \(\mu\)m) Hubble Space Telescope Wide Field Camera 3 (WFC3) transits (_HST_ GO #14758, PI: Berta-Thompson), finding evidence for a spectral slope and a feature near 1.53 \(\mu\)m suggestive of a H\({}_{2}\)-dominated atmosphere with aerosols, HCN, and CH\({}_{4}\). However, using the same _HST_ data, Mugnai et al. (2021) and Libby-Roberts et al. (2022) do not find these features and instead prefer featureless near-infrared spectra.
In this Letter, we present _JWST_ transmission spectra observations of GJ 1132 b. Our analysis provides a cautionary tale for the challenge of confirming astrophysical signals when searching for rocky exoplanet atmospheres. In Section 2, we describe the observations. In Section 3, we overview the three independent analysis pipelines used to extract the transmission spectrum. Section 4 describes our interpretation. Finally, we discuss the implications of these results in Section 5. ## 2 JWST Observations of GJ 1132 b We observed two transits of GJ 1132 b with the _JWST_ Near Infrared Spectrograph (NIRSpec, Jakobsen et al., 2022; Birkmann et al., 2022) G395H instrument on 2023 February 25 and 2023 March 5 as a part of GO #1981 (PIs: Stevenson and Lustig-Yaeger). Our data have a spectral resolving power of \(\lambda/\Delta\lambda\sim\) 2,700 from 2.8-5.2 \(\mu\)m. Each observation lasted 3.06 hrs, resulting in 814 integrations each with 14 groups up the ramp. The observations were designed to maximize observing efficiency while remaining below the \(\sim\)80% full well threshold to avoid the worst impacts of detector non-linearity. ## 3 Data Reduction To ensure reproducibility of our results, we perform three independent data analyses with the Eureka! (Bell et al., 2022), FIREFLy (Rustamkulov et al., 2022, 2023), and ExoTiC-JEDI (Alderson et al., 2022, 2023) pipelines. Previous analyses within _JWST_ GO #1981 have shown Eureka! and FIREFLy to agree well for small planetary signals (Lustig-Yaeger and Fu et al., 2023; Moran and Stevenson et al., 2023). While Eureka! and ExoTiC-JEDI have been shown to agree well for larger planetary signals (e.g. Alderson et al., 2023), here we compare them for signals pushing detection limits. Below, we provide a high-level overview of the data reduction steps from each pipeline. Figure 1 shows the final R\(\sim\)100 spectra from all three pipelines for both visits. ### Eureka! The Eureka! package reduces _JWST_ time-series data starting from uncal _JWST_ data through light curve fitting. Stages 1 and 2 of the Eureka! pipeline are primarily a wrapper for Stages 1 and 2 of the jwst pipeline (Bushouse et al., 2022), while also implementing several custom steps, most importantly custom group-level background subtraction. This removes the striping due to 1/f noise at the group level to significantly improve precision (e.g., Rustamkulov et al., 2022; Lustig-Yaeger and Fu et al., 2023). This step first identifies the center of light in each column and masks values within 8 pixels on either side. We estimate the background as the mean of all remaining pixels in a column with a 3\(\sigma\) outlier threshold. We skip the jump detection step in Stage 1, as we find it only adds noise to the extracted light curves. In Stage 2, we skip the flat field and photom steps, as they result in a conversion to physical flux units that is unnecessary for the relative flux measurements we require. These steps also add noise to the resulting light curves due to the current level of accuracy provided by the available detector flat fields. Additionally, the limited region of the detector that is converted in these steps also worsens the precision on background removal.
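A schematic NumPy version of this column-by-column, group-level background subtraction is sketched below. This is our illustration, not the Eureka! implementation: the array shapes and names are assumptions, while the \(\pm\)8 pixel trace mask and 3\(\sigma\) clip mirror the description above.

```python
# Sketch of group-level 1/f destriping (assumed shapes; not Eureka! itself):
# for each column, mask +/-8 pixels around the trace, sigma-clip the rest,
# and subtract the column's mean background from every pixel in that column.
import numpy as np

def destripe_group(group: np.ndarray, trace_rows: np.ndarray,
                   half_width: int = 8, clip: float = 3.0) -> np.ndarray:
    """group: (nrows, ncols) counts for one group; trace_rows: (ncols,)
    row index of the centre of light in each column."""
    nrows, ncols = group.shape
    out = group.astype(float)
    rows = np.arange(nrows)
    for j in range(ncols):
        bkg = out[np.abs(rows - trace_rows[j]) > half_width, j]
        med, std = np.median(bkg), bkg.std()
        bkg = bkg[np.abs(bkg - med) < clip * std]   # 3-sigma outlier rejection
        out[:, j] -= bkg.mean()                      # remove 1/f column offset
    return out
```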
Stage 3 of the Eureka! pipeline performs spectral extraction after a second round of background subtraction. We use optimal spectral extraction (Horne, 1986) after correcting the curvature of the trace by measuring the center of light in each column and rolling each column by an integer pixel value to align the trace along the same pixel. The second round of background subtraction considers only the region more than 10 pixels from the trace, with an outlier threshold of 3\(\sigma\) in the spatial direction and 10\(\sigma\) in the temporal direction. When constructing the median frame for optimal spectral extraction, we employ an outlier rejection threshold of 10\(\sigma\) for NRS1 and 15\(\sigma\) for NRS2. For spectral extraction, we use an aperture half-width of 2 pixels on either side of the center pixel (for a total of 5 pixels), which is selected to achieve the best precision possible by minimizing background noise while maximizing the stellar flux extracted. During spectral extraction, we use an outlier rejection threshold of 7\(\sigma\) for NRS1 and 19\(\sigma\) for NRS2. In Stage 4, we generate light curves at the native pixel resolution and at a lower resolution of R\(\sim\)100. To determine the best orbital parameters in Stage 5 of Eureka!, we first perform joint white light curve fitting across both visits but independently for the two detectors (i.e., NRS1 Visits 1 and 2 are fit jointly). We fit for a linear trend in time combined with a transit function (batman; Kreidberg, 2015). Limb darkening is fixed to quadratic values obtained with the ExoTiC-LD package (Grant and Wakeford, 2022) using 3D stellar models (Magic et al., 2015) and assuming stellar values of T\({}_{\rm eff}\) = 3261 K, log \(g\) = 5.02 (both from Stassun et al., 2019), and a metallicity [Fe/H] = -0.12 (Berta-Thompson et al., 2015). We fit for \(R_{p}/R_{\star}\), the center of transit, \(a/R_{\star}\), orbital inclination, and the linear term of the temporal ramp. The planet orbital period is fixed to 1.628931 days (Bonfils et al., 2018). We then adopt the weighted mean of the fitted orbital parameters from the NRS1 and NRS2 joint visit fits as our orbital parameter solution. The resulting best fit values are given in Appendix A.3, Table 2, and are held constant in all spectroscopic light curve fits. The only free parameters are then \(R_{p}/R_{\star}\) and the linear temporal ramp terms for both the native pixel and R\(\sim\)100 resolutions. All fits use emcee (Foreman-Mackey et al., 2013) and are run sufficiently long to ensure chain convergence.1 Footnote 1: Eureka! control files (ecf) to reproduce these results are available on Zenodo: doi.org/10.5281/zenodo.10002089.
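To make this fitting setup concrete, here is a minimal, self-contained sketch (ours, with simulated data and made-up parameter values; not the Eureka! Stage 5 code) of a batman transit model multiplied by a linear ramp, sampled with emcee:

```python
# Illustrative transit + linear-ramp fit with batman and emcee.
# All numerical values are placeholders, not the fitted GJ 1132 b solution.
import numpy as np
import batman
import emcee

t = np.linspace(-0.06, 0.06, 814)                 # days from mid-transit
params = batman.TransitParams()
params.t0, params.per = 0.0, 1.628931             # period fixed (days)
params.rp, params.a = 0.05, 16.0                  # Rp/R*, a/R* (illustrative)
params.inc, params.ecc, params.w = 88.9, 0.0, 90.0
params.limb_dark, params.u = "quadratic", [0.1, 0.2]
model = batman.TransitModel(params, t)

flux = model.light_curve(params) * (1 + 1e-4 * t) + 5e-5 * np.random.randn(t.size)
err = 5e-5 * np.ones_like(t)

def log_prob(theta):
    rp, c0, c1 = theta                            # depth + ramp coefficients
    if not 0 < rp < 0.2:
        return -np.inf
    params.rp = rp                                # mutate shared params (serial use)
    m = model.light_curve(params) * (c0 + c1 * t)
    return -0.5 * np.sum(((flux - m) / err) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([0.05, 1.0, 0.0]) + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=True)
```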
### FIREFLy The FIREFLy package undertakes the complete process of _JWST_ time-series data analysis from uncal data files through spectroscopic light curve fitting. We use Stages 1 and 2 of the jwst reduction pipeline for group-level and integration-level detector and instrument corrections, respectively. In Stage 1, the data quality initialization and saturation steps are first applied to the uncal fits files. Unlike the reduction of GJ 486 b (Moran and Stevenson et al., 2023), we do not apply a custom superbias scaling step. In this previous study, we investigated how the bias levels change over the course of the observation. We found that subtracting a scaled superbias file at the group level improved the agreement between the white light curve depths of the two detectors (i.e., helped account for a possible offset between detectors). The scaling factor was calculated by finding the median background per column in each group, dividing this by the median per-column superbias value, and averaging all columns to get a single scaling factor per group. For GJ 1132 b, we instead elect to use the single default jwst superbias file, as this results in the most consistent white light curve transit depths between the four datasets (NRS1 and NRS2 for Visits 1 and 2). We do, however, implement a custom background subtraction to reduce 1/f noise at the group level in Stage 1. We then apply the reference pixel correction and linearity step while skipping the dark current step. We also skip the jump step, which is only applied in the FIREFLy pipeline if the number of groups per integration is larger than 25, as fewer groups per integration (here, 14) lowers the risk of cosmic ray hits. After ramp fitting and the gain-scale step, in Stage 2 we use only the assign WCS step (skipping the flat field step) before proceeding to FIREFLy's stellar extraction. For stellar extraction, we first clean bad pixels using lacosmic (van Dokkum, 2001). We determined the bad pixel map by flagging pixels with sharp variance spikes and manually checking known bad pixels in NIRSpec G395H. We apply another background subtraction at the integration level, measure the x- and y-shifts, and finally extract the 1D stellar spectrum. We use a pre-calculated trace and an aperture full-width of 5.93 pixels (optimized from previous NIRSpec G395H observations; e.g., Lustig-Yaeger & Fu et al. 2023, Moran & Stevenson et al. 2023) and compute a box extraction. We fit GJ 1132 b's light curves with batman (Kreidberg, 2015), both at the native pixel resolution and at R\(\sim\)100. For the R\(\sim\)100 case, we trim the first 150 columns of NRS1. Figure 1: **Transmission spectra of GJ 1132 b from all three data reduction pipelines and both visits.** Upper panel: the Eureka! (yellow), FIREFLy (pink), and ExoTiC-JEDI (blue) reductions are shown for both visits at R\(\sim\)100. Also shown are the differences between reductions (solid, dashed, and dotted gray lines) in units of \(\sigma\), demonstrating the agreement between pipelines within a single visit. A \(\pm\)1\(\sigma\) shaded region is overlaid. Bottom panel: the Eureka! reduction for both visits, with the NIRSpec detector gap denoted by the shaded gray region. The shaded yellow regions show two important wavelength ranges that differ the most between the visits (see Sections 4.1 and 4.4.1 for further discussion on these differences). To fit the white light curve, we fix the orbital period to 1.629 days (Bonfils et al., 2018) and fit for \(R_{p}/R_{\star}\), the mid-transit time \(T_{0}\), \(a/R_{\star}\), the impact parameter \(b\), the quadratic limb darkening coefficients, and several systematics parameters. We fit each of the four datasets individually, using the Bayesian Information Criterion (BIC) to determine the best-fit systematics model in each case. Therefore, different models can be used for different datasets. Specifically, we use a linear ramp in time for NRS1 (both visits) and NRS2 Visit 1. For NRS2 Visit 2, we use only the y-shift. After light curve fitting using these optimized systematics, we refit all four datasets using a weighted average of \(a/R_{\star}\), \(b\), and the limb darkening coefficients.
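Returning to the superbias scaling factor described at the start of this subsection, a NumPy paraphrase of that calculation might read as follows; the array names, shapes, and background mask are our assumptions, not the FIREFLy source.

```python
# Sketch of the per-group superbias scaling factor described above
# (our paraphrase in NumPy; array names and shapes are assumptions).
import numpy as np

def superbias_scale(groups: np.ndarray, superbias: np.ndarray,
                    bkg_mask: np.ndarray) -> np.ndarray:
    """groups: (ngroups, nrows, ncols) counts; superbias: (nrows, ncols);
    bkg_mask: (nrows, ncols) True for unilluminated background pixels.
    Returns one scaling factor per group."""
    scales = np.empty(groups.shape[0])
    bias_col = np.nanmedian(np.where(bkg_mask, superbias, np.nan), axis=0)
    for g, frame in enumerate(groups):
        bkg_col = np.nanmedian(np.where(bkg_mask, frame, np.nan), axis=0)
        scales[g] = np.nanmean(bkg_col / bias_col)   # average over columns
    return scales
```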
To extract the transmission spectrum, we fit the spectroscopic light curves (holding the orbital parameters and limb darkening coefficients fixed to their white light curve values) with only \(R_{P}/R_{\star}\) and the corresponding systematics as free parameters. While FIREFLy utilizes emcee (Foreman-Mackey et al., 2013) when fitting the white light curve, the spectroscopic fits use least-squares fitting with lmfit (Newville et al., 2014). This method is much faster and does not impact the resulting spectrum. We extensively compared both techniques at the native pixel level and at R\(\sim\)100 and found no meaningful changes when using least-squares fitting. ### ExoTiC-JEDI The Exoplanet Timeseries Characterisation - JWST Extraction and Diagnostic Investigator (ExoTiC-JEDI) package2 performs a full extraction, reduction, and analysis of _JWST_ time-series data from _JWST_ uncal files to light curve fitting. For NIRSpec G395H observations, we treat the NRS1 and NRS2 datasets independently. We use the jwst pipeline tools to perform linearity, dark current, and saturation corrections, with the jump detection threshold set to 15, and a custom destriping routine to remove 1/f noise at the group level using a second-order polynomial and a 15\(\sigma\) threshold fit to the background region. These steps are followed by a standard ramp fitting routine. ExoTiC-JEDI is also able to perform custom bias subtraction, but we find that it does not improve the precision of the data in this case. We extract our Stage 2 products, the 2D wavelength array, and exposure times using the standard jwst pipeline. Footnote 2: [https://github.com/Exo-TiC/ExoTiC-JEDI](https://github.com/Exo-TiC/ExoTiC-JEDI) Stage 3 of the ExoTiC-JEDI package performs pixel corrections, additional background and 1/f noise removal, and spectral extraction. Using the data quality flags provided from the jwst pipeline, we replace pixels identified as bad, saturated, low quantum yield, hot, dead, or no gain with the median of the surrounding pixels. To remove additional bad pixels due to cosmic rays or other phenomena, we identify both spatial and time-series outliers in the data cube with a 20\(\sigma\) threshold in time and a 6\(\sigma\) threshold spatially, replacing any identified pixels with the median of that pixel in the surrounding 10 integrations or 20 pixels in that row. Any remaining 1/f noise is removed by masking the illuminated region of the detector and calculating the median of the unilluminated pixels in each column. To extract the 1D stellar spectrum, we fit a Gaussian to each column of the data followed by a fourth-order polynomial to the trace center and widths (\(\sim 0.7\) pixels wide). The trace centers and widths are then smoothed with a median filter and used to determine a simple aperture region 5\(\times\) the trace FWHM (\(\approx 7\) pixels). We use intrapixel extraction to obtain our 1D stellar spectrum. At Stage 3, we also measure the trace position movement on the detector in the x- and y-position for detrending at later stages. We perform light curve fitting on broadband NRS1 and NRS2 spectra, as well as spectroscopically across the full wavelength range. Using the broadband spectra for each detector and visit, we fit for the planetary system inclination and \(a/R_{\star}\) while fixing the period (1.628931 days) and eccentricity (0.0) to literature values presented by Bonfils et al. (2018). As the eccentricity value quoted there is an upper limit, we test the fit fixed at both e = 0.0 and e = 0.22 and find no impact on the resultant transmission spectrum.
These parameters, along with the center of transit time, are held constant in the spectroscopic light curve analysis. Stellar limb-darkening coefficients are calculated using the ExoTiC-LD package with a custom model input using a PHOENIX stellar model (Husser et al., 2013; \(T_{\rm eff}\)=3300 K, log \(g\)=5.0, [Fe/H]=0.0) and the non-linear limb-darkening law. Limb darkening values are then fixed in our light curve analysis. We use a least-squares optimizer with a batman (Kreidberg, 2015) transit model to fit for the transit depth in each bin. We simultaneously fit a series of systematic models to the data and determine the optimal model based on the negative log-likelihood, which incorporates a penalization in complexity based on the AIC (Akaike Information Criterion). We find that the best systematic model, \(S(\lambda)\), corrects for a linear trend in time, \(t\), plus the change in x-position, \(x_{s}\), multiplied by the absolute magnitude of the y-positional change, \(|y_{s}|\), such that \(S(\lambda)=s_{0}+(s_{1}\times x_{s}|y_{s}|)+(s_{2}\times t)\), where \(s_{0},s_{1},s_{2}\) are coefficient terms.
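As an illustration of this kind of fit, the following self-contained sketch (ours, with simulated data and made-up parameter values; not the ExoTiC-JEDI implementation) fits a batman transit model multiplied by the systematics model \(S=s_{0}+s_{1}x_{s}|y_{s}|+s_{2}t\) with a least-squares optimizer:

```python
# Illustrative least-squares spectroscopic fit with the systematics model
# S = s0 + s1 * xs*|ys| + s2 * t described above (placeholder data/values).
import numpy as np
import batman
from scipy.optimize import least_squares

t = np.linspace(-0.06, 0.06, 814)                # time from mid-transit (days)
xs = 1e-3 * np.random.randn(t.size)              # x-position shift (pixels)
ys = 1e-3 * np.random.randn(t.size)              # y-position shift (pixels)

params = batman.TransitParams()
params.t0, params.per, params.rp = 0.0, 1.628931, 0.05
params.a, params.inc, params.ecc, params.w = 16.0, 88.9, 0.0, 90.0
params.limb_dark, params.u = "nonlinear", [0.4, -0.2, 0.3, -0.1]
model = batman.TransitModel(params, t)

flux = model.light_curve(params) + 1e-4 * np.random.randn(t.size)

def residuals(theta):
    rp, s0, s1, s2 = theta
    params.rp = rp                               # only Rp/R* varies per bin
    systematics = s0 + s1 * xs * np.abs(ys) + s2 * t
    return flux - model.light_curve(params) * systematics

fit = least_squares(residuals, x0=[0.05, 1.0, 0.0, 0.0])
print("fitted Rp/R* =", fit.x[0])
```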
### Agreement between Pipelines and Resolutions Figure 1 shows the R\(\sim\)100 transmission spectra for all three reductions and both visits, demonstrating excellent agreement between the pipelines. We find that our three R\(\sim\)100 reductions agree better than the native resolution light curve fits, particularly at the red end of the spectrum where the signal-to-noise ratio (SNR) decreases due to a combination of decreased throughput and decreased stellar signal (not shown here). While previous work has found that noise is reduced by performing light curve fits at native pixel resolution prior to spectrally binning the data (Espinoza et al., 2023), our results suggest that low-SNR transit signals can impart biases that change the shape of the spectrum when fit at native pixel resolution. Lustig-Yaeger and Fu et al. (2023) showed that the precision improvement from fitting at the native pixel resolution can be negated by sufficiently removing 1/f noise at the group level, suggesting that binning light curves prior to fitting is acceptable. Our results here show that binning light curves prior to fitting may be preferable for low-SNR targets (specifically when the spectrophotometric scatter approaches the transit depth). We further discuss the impact of native resolution light curve fitting in Section A.1. ## 4 Interpretation To interpret our GJ 1132 b transmission spectra, we first explore the consistency between visits and then conduct stellar and planetary atmosphere forward modeling and retrievals on both visits independently. ### Differences Between Visits While the data reduction pipelines show good agreement, the transmission spectra show notable differences between Visit 1 and Visit 2 (see Figure 1). We begin our analysis by investigating the statistical significance of these differences against the null hypothesis of a flat, featureless transmission spectrum. We approach this in two ways, as described below. First, we assess how expected or unexpected our measurements would be under the assumption that the spectrum is featureless. We report in Table 1 the reduced chi-squared, \(\chi^{2}_{\nu}\), for the featureless spectrum (flat line model) that best fits each dataset. We then calculate the distribution of expected \(\chi^{2}_{\nu}\), under the assumption that the null hypothesis is true, using 100,000 randomly generated synthetic featureless spectra with the same uncertainties as the observed data. We calculate the probability that \(\chi^{2}_{\nu}\) would be at least as extreme as the observed value under the assumption that the null hypothesis is correct (i.e., the "p-value"). Table 1 reports the p-values and corresponding "sigma" significance with which the null hypothesis is disfavored by the test. Second, following Moran and Stevenson et al. (2023), we fit each spectrum with a Gaussian model (representing an agnostic spectral feature) and compare it to the featureless model. Both the Gaussian and featureless models are fit using the Dynesty nested sampling code (Speagle, 2020), which returns the Bayesian evidence for each fit. From the evidence, we calculate a Bayes factor and convert it into a "sigma" value (Trotta, 2008) representing the significance of the Gaussian feature model over the featureless model. These results are reported in Table 1, where positive values denote evidence favoring the Gaussian model and negative values favor the featureless model. We note that these results can differ from the first statistical test, depending on how well a single Gaussian with varying wavelength center, width, and amplitude can actually fit the spectrum. The conclusions from our two statistical tests are consistent, despite having slightly different numerical results. In general, Visit 1 contains marginal evidence to reject the null hypothesis and favor a non-flat spectrum, while Visit 2 is statistically consistent with a flat line. For Visit 1, the FIREFLy reduction has a \(\chi^{2}_{\nu}\) near unity (indicating it favors the null hypothesis), which disagrees with that of Eureka! and ExoTiC-JEDI--likely owing to the slightly larger uncertainties in the FIREFLy reduction. However, all three reductions agree well for Visit 2 and favor a featureless spectrum. We also investigated the sensitivity of our results to a potential transit depth offset between the NRS1 and NRS2 detectors for NIRSpec G395H. The second set of rows in Table 1 shows the results for the same tests as the top three rows, but now allowing NRS2 to shift vertically relative to NRS1 to account for a potential systematic offset between the two detectors. These results show that including an offset for NRS2 erodes the statistical significance with which to reject the null hypothesis for Visit 1, while leaving the results for Visit 2 largely unchanged. In Appendix B, Figure 8, we provide a visual example of these null hypothesis tests. These results demonstrate that the two visits are inconsistent unless we allow for a transit depth offset between the two detectors. There is, however, no strong evidence for the necessity of a significant detector offset, since there was no need for a superbias correction step (unlike for GJ 486 b, where this was the primary driver for a detector offset; see Moran and Stevenson et al. 2023). Even when including an offset, Visit 1 is still more consistent with spectral features than Visit 2.
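The first of the two statistical tests above can be sketched in a few lines of NumPy; the depth array and its uncertainties are placeholders (our assumptions), and the Monte Carlo loop mirrors the 100,000 synthetic featureless spectra described in the text.

```python
# Sketch of the flat-line null-hypothesis test (our illustration):
# chi^2_nu of the best-fitting featureless spectrum, calibrated against
# synthetic flat spectra with the same per-bin uncertainties.
import numpy as np

rng = np.random.default_rng(0)
depth = 2800 + 60 * rng.standard_normal(61)     # placeholder depths (ppm)
err = 60 * np.ones(61)                          # per-bin uncertainties (ppm)

def flat_chi2nu(d, e):
    best = np.sum(d / e**2) / np.sum(1 / e**2)  # inverse-variance weighted mean
    return np.sum(((d - best) / e) ** 2) / (d.size - 1)

observed = flat_chi2nu(depth, err)

# Null distribution: 100,000 featureless spectra with identical error bars
sims = np.array([flat_chi2nu(2800 + err * rng.standard_normal(err.size), err)
                 for _ in range(100_000)])
p_value = np.mean(sims >= observed)             # how extreme is the data?
print(f"chi2_nu = {observed:.2f}, p = {p_value:.4f}")
```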
### Evidence for Variable Star Heterogeneity? A natural physical explanation for different transmission spectra for a planet orbiting an M dwarf is stellar variability (e.g., Rackham et al., 2018). To investigate the possibility of changing starspot coverage, we first perform forward modeling of the out-of-transit extracted, flux-calibrated stellar spectra for both visits. We compute multi-component stellar forward models using the Allard et al. (2012) PHOENIX models. We seek plausible evidence of evolution or rotation of features onto or off of the observed disk by quantifying the spot and faculae covering fractions of the stellar surface for Visits 1 and 2. We followed a similar procedure to Moran and Stevenson et al. (2023), employing a weighted linear combination of three PHOENIX models to represent the background photosphere, spots (\(T_{\rm eff}\leq T_{\rm eff,\,photosphere}\) - 100 K), and faculae (\(T_{\rm eff}\geq T_{\rm eff,\,photosphere}\) + 100 K). We assume that all spots have a common \(T_{\rm eff}\), log(\(g\)), and metallicity (as do the faculae), and constrain each feature to not exceed 45% of the stellar surface. The grid of PHOENIX models used for our analysis covers \(T_{\rm eff}\) = 2500-4500 K, log(\(g\)) = 4-5.5 cm s\({}^{-2}\), and [Fe/H] = -0.5-0, which provides extensive coverage of possible spot and faculae temperatures for GJ 1132. Finally, we assume photospheric values near the literature-quoted \(T_{\rm eff}\) = 3270 \(\pm\) 140 K (Bonfils et al., 2018), log(\(g\)) = 4.88 \(\pm\) 0.07 cm s\({}^{-2}\) (Southworth et al., 2017), and [Fe/H] = -0.12 \(\pm\) 0.15 (Berta-Thompson et al., 2015). To compare with the observed baseline spectra, we first convert the native model wavelengths (\(\rm\AA\)) and flux densities (erg s\({}^{-1}\) cm\({}^{-2}\) cm\({}^{-1}\)) to \(\mu\)m and mJy. We also scaled the models by R\({}_{\star}^{2}/d^{2}\) using literature values for GJ 1132: R\({}_{\star}\) = 0.21 R\({}_{\odot}\) (Bonfils et al., 2018) and \(d\) = 12.61 pc (Gaia Collaboration et al., 2021). We smoothed and interpolated the models to the same resolution as the observations before calculating a \(\chi_{\nu}^{2}\). In our \(\chi_{\nu}^{2}\) calculations, we considered 3206 wavelength points for each visit and eight fitted parameters (the \(T_{\rm eff}\), log(\(g\)), and [Fe/H] of the photosphere, the \(T_{\rm eff}\) and coverage fraction of both spots and faculae, and a scaling factor). The scaling factor was multiplied by the R\({}_{\star}^{2}/d^{2}\) term to account for uncertainty in either measured quantity and varied from 0.9 to 1.1. Our out-of-transit stellar spectra and best-fitting models for both visits are shown in Figure 2. The preferred models for Visit 1 and Visit 2 both have a background photosphere with \(T_{\rm eff}\) = 3200 K, log(\(g\)) = 4.5 cgs, and [Fe/H] = 0, spots with \(T_{\rm eff}\) = 2900 K, and faculae with \(T_{\rm eff}\) = 3500 K (top panels, Figure 2). The preferred model for Visit 1 is 33% photosphere, 40% spots, and 27% faculae (\(\chi_{\nu}^{2}\) = 1.22). The preferred model for Visit 2 is 35% photosphere, 39% spots and 26% faculae (with a \(\chi_{\nu}^{2}\) of 1.16). The largest spectral deviations between visits occur from 2.75-3.3 \(\mu\)m and 4.3-5.3 \(\mu\)m for both the observations and the models (bottom panel, Figure 2). However, a 1% difference in spot and faculae coverage is negligible, considering general model uncertainties and error bars on the observations. Therefore, it is unlikely that the differences between visits are caused by the evolution of surface features or rotation onto or off of the visible disk.
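The three-component combination and grid search just described can be paraphrased as follows; this is our sketch with synthetic spectra standing in for the PHOENIX models and observations, and the grid resolutions are assumptions.

```python
# Sketch of the three-component stellar fit described above (assumed inputs:
# phot, spot, fac are model fluxes interpolated onto the data wavelength
# grid; obs and err are the flux-calibrated stellar spectrum).
import numpy as np
from itertools import product

def composite(phot, spot, fac, f_spot, f_fac, scale):
    """Weighted linear combination of photosphere, spot, and faculae models,
    multiplied by an overall (R*/d)^2 scaling factor."""
    return scale * ((1 - f_spot - f_fac) * phot + f_spot * spot + f_fac * fac)

def best_fit(obs, err, phot, spot, fac, n_params=8):
    fractions = np.linspace(0.0, 0.45, 10)       # covering fractions <= 45%
    scales = np.linspace(0.9, 1.1, 11)           # uncertainty in R*^2/d^2
    best = (np.inf, None)
    for f_s, f_f, s in product(fractions, fractions, scales):
        model = composite(phot, spot, fac, f_s, f_f, s)
        chi2nu = np.sum(((obs - model) / err) ** 2) / (obs.size - n_params)
        if chi2nu < best[0]:
            best = (chi2nu, (f_s, f_f, s))
    return best

# toy demonstration with synthetic spectra
wl = np.linspace(2.8, 5.2, 3206)
phot = 1.0 + 0.05 * np.sin(wl)
spot, fac = 0.8 * phot, 1.2 * phot
obs = composite(phot, spot, fac, 0.40, 0.27, 1.0) + 1e-3 * np.random.randn(wl.size)
print(best_fit(obs, 1e-3 * np.ones_like(wl), phot, spot, fac))
```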
We further note that there is no evidence for occulted starspots in either of the visits, which could otherwise explain the different transmission spectra due to the impacts on the morphology of the transit light curves themselves. Figure 6 in Appendix A.2 shows the Eureka! white light curves for both visits, showing the lack of obvious spot occultations in either visit (occulted starspots would result in a brief decrease in transit depth during the transit event). \begin{table} \begin{tabular}{l||c|c|c||c|c|c||c|c|c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{**Visit 1**} & \multicolumn{3}{c}{**Visit 2**} & \multicolumn{3}{c}{**Combined**} \\ Reduction & \(\chi_{\nu}^{2}\) & p-value (\(\sigma\)) & Feature? & \(\chi_{\nu}^{2}\) & p-value (\(\sigma\)) & Feature? & \(\chi_{\nu}^{2}\) & p-value (\(\sigma\)) & Feature? \\ \hline Eureka! & 1.57 & 0.32\% (2.9\(\sigma\)) & 3.4\(\sigma\) & 0.78 & 89\% (0.13\(\sigma\)) & -2.5\(\sigma\) & 1.48 & 0.92\% (2.6\(\sigma\)) & 1.0\(\sigma\) \\ FIREFLy & 1.08 & 32\% (1.0\(\sigma\)) & -2.4\(\sigma\) & 0.76 & 91\% (0.11\(\sigma\)) & -2.7\(\sigma\) & 1.14 & 21\% (1.3\(\sigma\)) & -2.4\(\sigma\) \\ ExoTiC-JEDI & 1.54 & 0.47\% (2.8\(\sigma\)) & 4.3\(\sigma\) & 0.89 & 72\% (0.36\(\sigma\)) & -2.6\(\sigma\) & 1.50 & 0.84\% (2.6\(\sigma\)) & 3.8\(\sigma\) \\ \hline Eureka!\({}^{a}\) & 1.23 & 10.5\% (1.6\(\sigma\)) & 1.4\(\sigma\) & 0.78 & 90\% (0.13\(\sigma\)) & -2.3\(\sigma\) & 1.27 & 7.7\% (1.8\(\sigma\)) & -1.0\(\sigma\) \\ FIREFLy\({}^{a}\) & 1.02 & 44\% (0.77\(\sigma\)) & -2.0\(\sigma\) & 0.76 & 92\% (0.10\(\sigma\)) & -2.6\(\sigma\) & 1.11 & 26\% (1.1\(\sigma\)) & -2.4\(\sigma\) \\ ExoTiC-JEDI\({}^{a}\) & 1.03 & 42\% (0.8\(\sigma\)) & -1.8\(\sigma\) & 0.86 & 76\% (0.30\(\sigma\)) & -2.5\(\sigma\) & 1.12 & 25\% (1.1\(\sigma\)) & -2.3\(\sigma\) \\ \hline \end{tabular} Note. --\(\chi_{\nu}^{2}\) is the reduced chi-squared resulting from the best-fitting featureless fit (null hypothesis) to the observed spectrum; “p-value” refers to the probability that \(\chi_{\nu}^{2}\) would be at least as extreme as the observed value under the assumption that the null hypothesis is correct and is displayed along with the corresponding “sigma” value; and the “Feature?” column shows the level of confidence in the detection of an agnostic Gaussian absorption feature in the spectrum over the null hypothesis (negative values denote preference for the featureless model). \({}^{a}\)Fits allowing a vertical transit depth offset for NRS2 relative to NRS1 (Section 4.1). \end{table} Table 1: Is it Flat? However, because occulted starspots typically have a minimal impact on the visible shape of a light curve at these wavelengths, we also consider how the fitted orbital parameters may change due to such features. FIREFLy and ExoTiC-JEDI fit the white light curves from the two visits independently, and while the independent fit values are not reported here, they are consistent within uncertainties. ### An Atmosphere Around GJ 1132 b? We next assess possible atmospheric explanations for GJ 1132 b's transmission spectrum. As in our previous studies (Lustig-Yaeger & Fu et al. 2023; Moran & Stevenson et al. 2023), we compare each reduction to a set of simple forward models to explore possible atmospheric compositions and a no-atmosphere scenario. We first generate forward model atmospheres using either thermochemical-equilibrium CHIMERA (Line & Yung 2013; Line et al. 2014) models or simple one- or two-gas isothermal atmosphere PICASO (Batalha et al. 2019) models. We then compute model transmission spectra using PICASO's radiative transfer module.
We bin the resulting model spectra to the resolution of each reduction and compute a \(\chi^{2}_{\nu}\) (with 60 degrees of freedom (dof) for the Eureka! and FIREFLy reductions and 59 for the ExoTiC-JEDI reduction) to assess goodness of fit. We summarize our results in Figure 3. First, we run a set of thermochemical equilibrium models with CHIMERA. We include opacities from H, collision-induced absorption (CIA), H\({}_{2}\), He, H\({}_{2}\)O, CH\({}_{4}\), CO, CO\({}_{2}\), NH\({}_{3}\), N\({}_{2}\), HCN, and H\({}_{2}\)S and use the parameterized temperature-pressure profile of Guillot (2010) with an equilibrium temperature of 530 K. We run these forward models at 100\(\times\) to 1000\(\times\) solar metallicities, finding that the scale height of the atmosphere is so large in all cases that we obtain poor fits for all reductions in Visit 1 (\(\chi^{2}_{\nu}\gtrsim 1.43\)). Therefore, we rule out clear hydrogen-dominated atmospheres in thermochemical equilibrium at moderate confidence (\(\gtrsim\)2.5\(\sigma\)). For Visit 2, these confidences are decreased for the 1000\(\times\) solar metallicity case (down to 1.3\(\sigma\) for the FIREFLy reduction), but a hydrogen-dominated atmosphere is never the statistically preferred scenario compared to our forward models. A long-lived hydrogen-dominated atmosphere would be unexpected given the planet's density and radius (Luger et al., 2015; Rogers, 2015; Estrela et al., 2020; Rogers et al., 2021), so our forward model limits here fit expectations better than some previous results for GJ 1132 b (Southworth et al., 2017; Swain et al., 2021). Figure 2: **Out-of-transit stellar spectra of GJ 1132 compared to heterogeneous stellar models**. Top panel: the extracted GJ 1132 spectra from Visit 1 (blue errors) and Visit 2 (black errors) are compared to best-fitting multi-component PHOENIX models (yellow for Visit 1; pink for Visit 2). The best-fitting Visit 1 model has 40% spot coverage and 27% faculae coverage, compared to 39% spot coverage and 26% faculae coverage for Visit 2. Middle panels: zoom-ins on the gray highlighted regions in the top panel. Bottom panel: deviations between visits for the models (yellow) and observations (black). The largest deviations for both models and observations occur near 3 \(\mu\)m and long-ward of 4.3 \(\mu\)m. For our simpler, non-self-consistent PICASO models, we examine whether 1 bar, isothermal atmospheres of pure CH\({}_{4}\), pure CO\({}_{2}\), or pure H\({}_{2}\)O agree with the data from each reduction. We find that CH\({}_{4}\)-dominated atmospheres (dark blue dashed line in Figure 3) are the most strongly ruled out for both visits, to at least 4.2\(\sigma\) across all reductions. Although in Visit 1 there is a rise in transit depth at the strong CH\({}_{4}\) absorption feature centered at 3.3 \(\mu\)m, the lack of strong CH\({}_{4}\) absorption at the wavelengths probed by NRS2 results in a poor fit. Similarly, CO\({}_{2}\)-dominated atmospheres poorly fit Visit 1. However, H\({}_{2}\)O-rich atmospheres (thick solid lines in Figure 3) provide better fits to Visit 1 due to the broad H\({}_{2}\)O absorption slope at the bluest wavelengths probed by NIRSpec G395H. In Visit 1, an uptick in transit depth at \(\sim\) 4.5 \(\mu\)m is also noticeable. Multiple molecules, including O\({}_{3}\), CS\({}_{2}\), and N\({}_{2}\)O, have an absorption band around this wavelength (e.g., Schwieterman et al., 2022), but of these N\({}_{2}\)O has the best-matching feature center and width. Therefore, in addition to our pure H\({}_{2}\)O atmosphere, we also generate atmospheric models with H\({}_{2}\)O as the background gas and either 10% N\({}_{2}\)O or 10% CH\({}_{4}\). These result in visually better fits at the two increases in transit depth at 3.3 \(\mu\)m and 4.5 \(\mu\)m. A flat-line fit is slightly disfavored for Visit 1 (rejected between 1-3\(\sigma\)) -- consistent with our previous statistical tests in Section 4.1 -- in favor of H\({}_{2}\)O-rich atmospheres. In summary, our forward models prefer a H\({}_{2}\)O-dominated atmosphere for Visit 1. While adding 10% CH\({}_{4}\) or N\({}_{2}\)O visually explains the observed features, these additions do not improve \(\chi^{2}_{\nu}\) (but we only consider a single mixing ratio for both species). We explore the full range of possible mixing ratios consistent with Visit 1 with atmospheric retrievals in Section 4.4.
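The model-to-data comparison underlying these \(\chi^{2}_{\nu}\) values can be sketched generically as below; this is our illustration with placeholder arrays, not the CHIMERA or PICASO code, and the mapping from p-value to "sigma" is one common two-sided convention.

```python
# Sketch of the forward-model comparison (our illustration): bin a
# high-resolution model spectrum to the observed wavelength bins, then
# convert chi^2 into a rejection significance. Array names are assumptions.
import numpy as np
from scipy import stats

def bin_model(wl_model, depth_model, bin_edges):
    """Average the model within each observed wavelength bin."""
    idx = np.digitize(wl_model, bin_edges) - 1
    return np.array([depth_model[idx == i].mean()
                     for i in range(len(bin_edges) - 1)])

def rejection_sigma(obs, err, model, dof):
    chi2 = np.sum(((obs - model) / err) ** 2)
    p = stats.chi2.sf(chi2, dof)                 # chi-squared survival function
    return stats.norm.isf(p / 2)                 # two-sided "sigma" equivalent

# toy demonstration
wl_model = np.linspace(2.8, 5.2, 5000)
depth_model = 2800 + 50 * np.exp(-0.5 * ((wl_model - 3.3) / 0.1) ** 2)
edges = np.linspace(2.8, 5.2, 62)                # 61 bins, roughly R~100
binned = bin_model(wl_model, depth_model, edges)
obs = binned + 25 * np.random.randn(binned.size)
print(rejection_sigma(obs, 25 * np.ones_like(obs),
                      np.full_like(obs, 2800.0), dof=60))
```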
Figure 3: **Transmission spectra of GJ 1132 b compared to atmospheric forward models.** The Eureka! reduction (black circles) is shown for both visits at R\(\sim\)100, with Visit 1 on the left and Visit 2 on the right. Also shown are a series of end-member atmospheric forward models generated with PICASO to illustrate the goodness of fit (\(\chi^{2}_{\nu}\)) of various scenarios for each visit, shown after each model for Visits 1 and 2, respectively. Dashed lines show poorer overall fits, including a 1000\(\times\) solar atmosphere (dashed purple), a pure 1 bar CH\({}_{4}\) atmosphere (dashed navy), and a pure 1 bar CO\({}_{2}\) atmosphere (dashed orange). Better fits include pure H\({}_{2}\)O atmospheres (blue) or water atmospheres with 10% N\({}_{2}\)O (yellow) or 10% CH\({}_{4}\) (pink), or a flat line indicative of no atmosphere or a high-altitude opaque aerosol layer (gray dashed). In Visit 1, water-rich atmospheres with other species are preferred, but the features driving this fit disappear in Visit 2, so that an atmosphere-free model is the best fit. For Visit 2, we find that a flat line -- indicative of either no atmosphere or a high-altitude aerosol layer -- produces the lowest \(\chi^{2}_{\nu}\) (consistent with Section 4.1). At GJ 1132 b's \(\sim\)500 K equilibrium temperature, while condensate clouds are unlikely (given the lack of condensable species), photochemical hazes could form in a variety of atmospheres (e.g., Horst et al., 2018; He et al., 2018; Gao et al., 2020). Moreover, we cannot rule out the H\({}_{2}\)O-dominated atmosphere preferred by Visit 1 from the Visit 2 data, given its low \(\chi^{2}_{\nu}\) (\(<1\)). ### Retrieval Analysis: An Atmosphere or Starspots? Our analysis thus far has yielded two key results: (i) GJ 1132's out-of-transit stellar spectrum strongly favors a heterogeneous star with constant spot and faculae properties between the two visits, and (ii) GJ 1132 b's Visit 1 spectrum can be explained by an atmosphere, but Visit 2 is statistically flat. Here we attempt to reconcile these insights via retrieval modeling of GJ 1132 b's transmission spectrum considering both atmospheric and unocculted starspot scenarios. Our retrieval results are summarized in Figure 4. #### 4.4.1 Atmosphere Scenario We first explore the range of atmospheres consistent with GJ 1132 b's transmission spectrum via separate retrievals of our two visits using the open source POSEIDON code (MacDonald & Madhusudhan, 2017; MacDonald, 2023).
We considered 11 potential gases to span a wide parameter space of plausible atmospheric compositions: N\({}_{2}\), H\({}_{2}\), H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\), N\({}_{2}\)O, NO\({}_{2}\), HCN, NH\({}_{3}\), SO\({}_{2}\), and PH\({}_{3}\). The opacities used for the retrieval forward model are described in MacDonald & Lewis (2022). The mixing ratios of these gases can range from \(10^{-12}\) to 1, using centered log-ratio priors as in Lustig-Yaeger & Fu et al. (2023) and Moran & Stevenson et al. (2023). The other free parameters are (priors in brackets) the atmospheric temperature (\(\mathcal{U}\)(400 K, 900 K)), the atmosphere radius at the 10 bar reference pressure (\(\mathcal{U}\)(0.85 \(R_{\rm p,\,obs}\), 1.15 \(R_{\rm p,\,obs}\))), the haze power-law exponent (\(\mathcal{U}\)(-20, 2)) and log-Rayleigh enhancement factor (\(\mathcal{U}\)(-4, 8)) -- defined as in MacDonald & Madhusudhan (2017) -- and the log-pressure of an opaque cloud/surface (\(\mathcal{U}\)(-7, 2), in bar). We calculate transmission spectra via opacity sampling at a resolving power of \(R\) = 20,000 from 0.6-5.2 \(\mu\)m, before convolving the model with the instrument point spread function and binning to the resolution of the observations. We sample this 15-parameter space using the PyMultiNest (Feroz et al., 2009; Buchner et al., 2014) package with 2,000 live points.
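For orientation, a PyMultiNest run of this kind has the skeleton below, reduced here to a runnable two-parameter toy problem (a baseline depth plus one feature amplitude) with synthetic data; the real 15-parameter POSEIDON likelihood is far richer, and all names and values are our assumptions.

```python
# Schematic PyMultiNest skeleton (toy problem, not the POSEIDON retrieval).
# Requires the MultiNest library to be installed alongside pymultinest.
import numpy as np
import pymultinest

rng = np.random.default_rng(1)
wl = np.linspace(2.8, 5.2, 61)
data = (2800 + 20 * np.exp(-0.5 * ((wl - 3.3) / 0.1) ** 2)
        + 10 * rng.standard_normal(wl.size))     # synthetic depths (ppm)
err = 10 * np.ones_like(wl)

def prior(cube, ndim, nparams):
    # map the unit hypercube to physical priors (uniform here)
    cube[0] = 2500 + 600 * cube[0]               # baseline depth (ppm)
    cube[1] = 100 * cube[1]                      # feature amplitude (ppm)

def loglike(cube, ndim, nparams):
    model = cube[0] + cube[1] * np.exp(-0.5 * ((wl - 3.3) / 0.1) ** 2)
    return -0.5 * np.sum(((data - model) / err) ** 2)

pymultinest.run(loglike, prior, 2, n_live_points=2000,
                outputfiles_basename="gj1132b_toy_", resume=False)
```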
We perform retrievals on each GJ 1132 b visit separately to consider how their different morphology (discussed in Section 4.1) affects atmospheric inferences. Our retrievals here focus on the Eureka! reduction, since we found consistent retrieval results with ExoTiC-JEDI and FIREFLy. We do not consider a free detector offset between the NRS1 and NRS2 spectra in these retrievals, given the lack of evidence for a superbias correction during the data reduction. Our Visit 1 retrieval (left panels of Figure 4) favors a H\({}_{2}\)O-dominated atmosphere (Bayes factor = 6.5 / 2.5\(\sigma\)) with trace amounts of CH\({}_{4}\) (Bayes factor = 27 / 3.1\(\sigma\)). Figure 4 demonstrates that the evidence for H\({}_{2}\)O is driven by the slope seen in the NRS1 data, while a feature near 3.3 \(\mu\)m is attributed to CH\({}_{4}\). A weak feature near 4.5 \(\mu\)m is best fit by N\({}_{2}\)O absorption, but our Bayesian model comparison indicates insufficient evidence for N\({}_{2}\)O (Bayes factor = 1.2 / 1.4\(\sigma\)). The volume mixing ratio posterior distributions in Figure 4 show a H\({}_{2}\)O abundance consistent with 100% (\(\log X_{\rm H_{2}O}=-0.01^{+0.01}_{-0.43}\)), a CH\({}_{4}\) abundance of \(\sim\) 1% (\(\log X_{\rm CH_{4}}=-2.37^{+1.13}_{-0.69}\)), and an N\({}_{2}\)O abundance of \(\sim\) 100 ppm (\(\log X_{\rm N_{2}O}=-4.64^{+1.54}_{-4.24}\)). We note that our Visit 1 retrieval finds no evidence of HCN, which was previously suggested from the _HST_ WFC3 analysis of GJ 1132 b by Swain et al. (2021). However, our G395H data do not rule out a scattering slope below 2 \(\mu\)m (as shown by the wide 2\(\sigma\) confidence region in Figure 4) similar to that inferred by Swain et al. (2021) (but cf. Libby-Roberts et al., 2022 and Mugnai et al., 2021). In contrast, our Visit 2 retrieval (right panels of Figure 4) is consistent with a flat line with no constraints on the atmospheric composition. Such a flat spectrum can be explained by many degenerate atmospheric properties, including a high mean molecular weight, low temperature, low surface pressure, and/or a high-altitude aerosol layer. Given that our data are sufficiently precise to differentiate between several high mean molecular weight atmospheres with high surface pressures / deep clouds (as shown by our inference of a H\({}_{2}\)O-dominated atmosphere from Visit 1), our favored explanation for the featureless Visit 2 spectrum is a high-altitude aerosol layer. However, a wide range of cloud-top pressures is permitted (\(\log P_{\rm cloud}<-1.3\) to 1\(\sigma\); see Figure 4) after marginalization over all the possible combinations of atmospheric temperature and background gases with higher mean molecular weight than H\({}_{2}\)O. Assuming GJ 1132 b's transmission spectra are explained by a planetary atmosphere, our retrievals thus suggest that the cloud opacity would need to significantly increase between Visits 1 and 2 to explain our different spectra. As further discussed in Section 4.5, it is highly improbable that an atmosphere can change from a relatively clear state in one visit to host a global high-altitude cloud in the next visit. Furthermore, GJ 1132 b's equilibrium temperature places it in a parameter space without obvious condensable material for clouds to form (e.g., Gao et al., 2021), which would suggest a photochemical haze as the cause of this aerosol layer. Similarly, a transition from a relatively clear atmosphere to one with such high haze opacity would be highly improbable given no change in radiative forcing. However, given the wide uncertainty on our retrieved cloud-top pressure, we note that both Visits 1 and 2 are consistent with an intermediate cloud pressure at \(\sim 10\,\)mbar to \(1\sigma\). We next turn to consider an alternative explanation that does not require an atmosphere. #### 4.4.2 Unocculted Starspot Scenario We next investigate the alternative explanation of unocculted starspots shaping the observed transmission spectra, adopting the same retrieval configuration as Moran & Stevenson et al. (2023). This retrieval model assumes no planetary atmosphere, with unocculted stellar heterogeneities producing any wavelength-dependent features in the transmission spectrum (see Rackham et al., 2023, for a review of the impact of starspots on transmission spectra). Four parameters define this model (priors in brackets): the heterogeneity temperature, \(T_{\rm het}\) (\(\mathcal{U}(2300\,\mathrm{K},\,1.2\,T_{*,\mathrm{eff}})\)), the heterogeneity covering fraction, \(f_{\rm het}\) (\(\mathcal{U}(0.0,\,0.6)\)), the photospheric temperature, \(T_{\rm phot}\) (\(\mathcal{N}(T_{*,\mathrm{eff}},\,\sigma_{T_{*,\mathrm{eff}}})\)), and the planetary radius, \(R_{p}\) (\(\mathcal{U}(0.9\,R_{p,\,\mathrm{obs}},\,1.1\,R_{p,\,\mathrm{obs}})\)). The priors are specified in terms of literature properties of the host star: \(T_{*,\mathrm{eff}}=3270\,\mathrm{K}\) and \(\sigma_{T_{*,\mathrm{eff}}}=140\,\mathrm{K}\) (Bonfils et al., 2018). We calculate the impact of the transit light source effect (Rackham et al., 2018) by interpolating PHOENIX models (Husser et al., 2013) using the PyMSG package (Townsend & Lopez, 2023). We verified that a more complex parameterization of stellar contamination, including both faculae and spots, does not improve the fit or alter the retrieved spot properties, so we focus here on results assuming a single-heterogeneity population. Figure 4: **Atmospheric and starspot retrieval results for GJ 1132 b.** Top panels: comparison between the retrieved transmission spectra for the Eureka!
reduction of Visit 1 (left) and Visit 2 (right) adopting two distinct retrieval models: (i) a planetary atmosphere with no unocculted starspots (green contours), and (ii) no atmosphere with unocculted starspots (orange contours). The median retrieved spectrum (solid lines) and \(1\sigma\) and \(2\sigma\) model confidence intervals (dark and light contours) for each scenario are overlaid. Labels indicate the locations of H\({}_{2}\)O, CH\({}_{4}\), and N\({}_{2}\)O absorption bands. Middle panels: posterior histograms for the atmosphere scenario, highlighting the volume mixing ratios of the three molecules tentatively inferred from Visit 1: H\({}_{2}\)O, CH\({}_{4}\), and N\({}_{2}\)O. Bottom panels: posterior histograms for the unocculted starspot scenario, defined by the spot coverage fraction, spot temperature, and the background stellar photosphere temperature. Visit 1 requires either a water-rich atmosphere with trace CH\({}_{4}\) (and possibly N\({}_{2}\)O) or unocculted starspots, but the scenarios cannot be differentiated without observations shortwards of \(3\,\mu\)m. Visit 2 is, however, consistent with no atmosphere and no unocculted starspots. As in the previous section, we perform independent retrievals for the two visits using the Eureka! reduction and without a free offset between the detectors. Our Visit 1 spectrum is well-explained by unocculted starspots covering \(\sim 20\%\) of the stellar surface with a spot temperature \(\sim\)400 K cooler than the photosphere. These starspot properties are consistent with the stellar spectrum fits described in Section 4.2. As shown in Figure 4, the starspot scenario explains the Visit 1 data with a spectral slope shortwards of 4 \(\mu\)m. The posterior distributions in Figure 4 demonstrate that a wide range of spot coverage fractions is consistent with Visit 1 (\(f_{\rm het}=0.23^{+0.21}_{-0.10}\)) -- due to the \(f_{\rm het}\)-\(T_{\rm het}\) degeneracy (e.g. Rathcke et al., 2021, their Figure 10). However, once again our Visit 2 retrieval is consistent with a flat, featureless spectrum. Under the starspot model, this requires either a low spot coverage fraction or a spot temperature similar to the stellar photosphere. However, the retrieved starspot fraction for Visit 2 is formally consistent with Visit 1 within 1\(\sigma\), which agrees with the out-of-transit stellar modeling in Section 4.2 that found a negligible difference between visits. #### 4.4.3 An Atmosphere vs. Starspots At first glance, the available statistical evidence equally supports the atmosphere scenario and the starspot scenario for Visit 1. The Bayesian evidences (\(\ln\mathcal{Z}=490.1\) and 490.3 for the atmosphere vs. starspot models, respectively) and minimum reduced \(\chi^{2}\) (\(\chi^{2}_{\nu}=1.35\) and 1.36 for the atmosphere vs. starspot models, respectively) are indistinguishable.
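For reference, one common prescription for converting such Bayes factors into the "sigma" values quoted throughout (following Trotta 2008, as implemented e.g. by Benneke & Seager 2013) can be sketched as below; the formula inversion is our paraphrase, and the evidence values in the example are taken from the text.

```python
# Sketch of a common Bayes-factor-to-"sigma" conversion: invert the bound
# B = -1/(e * p * ln p) for the p-value, then map p to an equivalent
# two-sided Gaussian significance. Valid for B > 1 (ln_B > 0).
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcinv

def bayes_factor_to_sigma(ln_B):
    """ln_B: log Bayes factor of the feature model over the reference model."""
    B = np.exp(ln_B)
    f = lambda p: B + 1.0 / (np.e * p * np.log(p))   # root where B matches p
    p = brentq(f, 1e-12, 1.0 / np.e)                  # meaningful root: p < 1/e
    return np.sqrt(2.0) * erfcinv(p)                  # two-sided significance

# Example with evidences quoted in the text:
# ln Z = 494.2 (simplified atmosphere model) vs 490.3 (starspot model)
print(f"{bayes_factor_to_sigma(494.2 - 490.3):.1f} sigma")
```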
We therefore ran an additional, simplified atmosphere retrieval focusing only on those properties indicated by the data. Our motivation is to account for the 'Occam penalty' disfavoring models with redundant free parameters (i.e., the non-detected molecules in the atmosphere scenario). We thus consider an H\({}_{2}\)O-dominated atmosphere with trace CH\({}_{4}\) and N\({}_{2}\)O, constituting a 4-parameter model defined by the planetary reference radius, temperature, and the CH\({}_{4}\) and N\({}_{2}\)O abundances (the H\({}_{2}\)O abundance is set by the requirement that the mixing ratios sum to unity). This simplified model obtains an excellent fit to Visit 1 (\(\chi^{2}=63\) with 56 dof; \(\chi^{2}_{\nu}=1.13\); \(\ln\mathcal{Z}=494.2\)), providing tentative evidence favoring an atmosphere over starspots for Visit 1. The lack of observed starspot variability in the out-of-transit stellar spectrum (Figure 2), in mild tension with the low spot fraction required to render Visit 2 flat (Figure 4), may also support the atmosphere interpretation. However, the inferred atmosphere from Visit 1 -- an H\({}_{2}\)O-dominated atmosphere with trace CH\({}_{4}\) and N\({}_{2}\)O -- would be unstable against a runaway greenhouse, as all three molecules are highly effective greenhouse gases and highly susceptible to photolysis (e.g., Rugheimer et al., 2015). Thus, such an atmosphere would rapidly be lost to space (e.g., Goldblatt et al., 2013), requiring ongoing outgassing, a very high (\(>\)5 wt%) initial H\({}_{2}\)O inventory at formation, and a present-day magma ocean (Schaefer et al., 2016) to replenish the H\({}_{2}\)O to the levels suggested by our Visit 1 atmospheric retrieval. Moreover, an H\({}_{2}\)O-dominated atmosphere would be expected to also include ample oxidized carbon species, such as CO\({}_{2}\) or CO, given outgassed abundances suggested from previous studies (Sossi et al., 2023; Tian & Heng, 2023). While we do not see evidence for CO\({}_{2}\), our NIRSpec G395H data cannot constrain the presence or absence of CO. The co-presence of \(\sim\)percent levels of CH\({}_{4}\) with CO in an H\({}_{2}\)O-dominated atmosphere would require finely tuned carbon abundances and oxygen fugacities of the planetary interior (Tian & Heng, 2023). Given the disagreement between our two visits, we suggest it is premature to claim a clear preference for an atmosphere or starspots. Nevertheless, our _JWST_ observations do rule out the H\({}_{2}\)-dominated atmosphere with HCN previously suggested by Swain et al. (2021) (see Appendix C). One path to a resolution lies in the significant predicted deviations between the atmosphere and starspot scenarios at shorter wavelengths (see Figure 4), which could be probed with future observations. ### Potential For Planetary Variability The impact of atmospheric variability on transmission spectra of tidally locked terrestrial planets has been explored by numerous teams (e.g. May et al., 2021; Song & Yang, 2021; Cohen et al., 2022; Rotman et al., 2023), with the general conclusion being that such variability is below the detectability limit of _JWST_ instruments. This is because, while the cloud cover of any one region of the terminator can be highly variable, we are observing limb-averaged spectra that simultaneously probe cloudy and cloud-free regions. 
With the difference between GJ 1132 b's two spectra of order \(\sim\)100 ppm at the 4.5 \(\mu\)m "feature" (see Figure 1), any planetary variability would be required to be at least an order of magnitude larger than predicted by current GCMs of temperate, tidally locked terrestrial planets in the above works. While GJ 1132 b is hotter than the models in the above works, it remains physically unlikely that such variability is the cause of the observed spectral differences. This is because at the equilibrium temperature of GJ 1132 b the planetary dayside is more likely to be cloud-free. Further, such large-scale variability in transmission has yet to be detected on even the most favorable targets, hot Jupiters (e.g. Kilpatrick et al., 2020). ## 5 Conclusions We presented two transit observations of the super-Earth GJ 1132 b with _JWST_ NIRSpec G395H that yield distinctly different transmission spectra. Barring other possibilities, the differences between the visits may be explained by random noise fluctuations that took the unfortunate shape of spectral features in one visit. Without a third observation of GJ 1132 b at these wavelengths, it is impossible to determine which of the two visits most accurately reflects the true nature of the planet. Our conflicting observations demonstrate the potential risk of claiming atmospheric detections for rocky exoplanets based on a single _JWST_ observation. Should Visit 1 represent the "truth" for GJ 1132 b, our retrievals exhibit a slight preference for an H\({}_{2}\)O-dominated atmosphere with trace CH\({}_{4}\) and N\({}_{2}\)O (\(\chi^{2}_{\nu}=1.13\)) compared to contamination from unocculted starspots (\(\chi^{2}_{\nu}=1.36\)). These two scenarios would produce significantly different transmission spectra at shorter wavelengths (see Figure 4). While _HST_ WFC3 and ground-based optical data exist for GJ 1132 b, their lack of wavelength overlap with our NIRSpec G395H observations results in too much freedom when accounting for a possible inter-instrument transit depth offset, resulting in either scenario being allowed (see Appendix C). However, our Visit 1 observations do rule out an H\({}_{2}\)-dominated atmosphere containing HCN, as previously suggested by the _HST_ WFC3 analysis of Swain et al. (2021). See Appendix C for further comparison of our new _JWST_ observations to the existing _HST_ WFC3 and ground-based data. Instrument systematics could provide an alternative explanation for our divergent transmission spectra. Allowing for a possible offset between the \(\mathrm{NRS1}\) and \(\mathrm{NRS2}\) detectors would eliminate the statistical significance of the bluewards slope in the Visit 1 data (see Section 4.1 and Appendix B), which drives our inference of an atmosphere or unocculted starspots. Larger detector offsets were seen between \(\mathrm{NRS1}\) and \(\mathrm{NRS2}\) in Moran & Stevenson et al. (2023), but were manually corrected based on differences between the visits and not explored in their equivalent Gaussian feature tests. However, we note that we do not see evidence during our data reduction process for the need for a similar superbias correction, and find equivalent spectra regardless of whether such a correction is applied. Future observations of GJ 1132 b with NIRSpec G395M would cover the same wavelengths -- without the detector gap -- to determine the true shape of GJ 1132 b's transmission spectrum at these wavelengths. 
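To illustrate how an inter-detector offset can masquerade as a spectral slope, the toy simulation below injects a constant offset onto the bluest points of an intrinsically flat spectrum and recovers a formally "significant" linear trend. All numbers (the noise level, the detector split, and the 100 ppm offset size) are illustrative assumptions, not fits to our data:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(2.9, 5.1, 60)              # wavelength grid, microns
nrs1 = wl < 3.8                             # rough NRS1/NRS2 split near the gap (assumption)
sigma = 40.0                                # per-point uncertainty, ppm (assumption)
depth = 2500.0 + rng.normal(0.0, sigma, wl.size)   # intrinsically flat spectrum, ppm
depth[nrs1] += 100.0                        # hypothetical +100 ppm NRS1 offset

slope, _ = np.polyfit(wl, depth, 1)         # fit a linear "spectral slope"
X = np.vander(wl, 2)
slope_err = np.sqrt((sigma**2 * np.linalg.inv(X.T @ X))[0, 0])
print(slope / slope_err)                    # a spuriously "significant" slope
```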
With the growing evidence for detector offsets within NIRSpec G395H data, it may be preferable to observe planets with small predicted spectral features using NIRSpec G395M instead, especially if the star is dim enough to allow it, to ensure that detector offsets are not mistaken for atmospheric features. Additional future observations at wavelengths below 3 \(\mu\)m, but still overlapping with the NIRSpec G395H wavelength range, are crucial to breaking the degeneracy between the atmospheric and unocculted starspot explanations. This atmosphere-starspot degeneracy has now been seen in multiple NIRSpec G395H observations of small planets orbiting M dwarfs (see also Moran & Stevenson et al., 2023). For example, NIRISS SOSS observations combined with our existing NIRSpec G395H data, or new NIRSpec G395M data (avoiding any potential detector offsets), will be crucial to determine (1) if GJ 1132 b's spectral features are real and repeatable, and (2) if these features are best described by a planetary atmosphere or stellar contamination. While the quest continues for an unambiguous atmospheric detection on a rocky planet, our results for GJ 1132 b provide an important reminder of the necessity of repeat observations to confirm the reliability of potential detections. This work is based in part on observations made with the NASA/ESA/CSA _JWST_. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Support for _JWST_ program #1981 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. We thank the anonymous referee for their constructive and timely feedback and fruitful discussions that improved our study. R.J.M. is supported by NASA through the NASA Hubble Fellowship grant HST-HF2-51513.001, also awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Work by S.P. is supported by NASA under award number 80GSFC21M0002. J.K. acknowledges financial support from Imperial College London through an Imperial College Research Fellowship grant. This material is based upon work performed as part of NASA's CHAMPs team, supported by the National Aeronautics and Space Administration (NASA) under Grant No. 80NSSC21K0905 issued through the Interdisciplinary Consortia for Astrobiology Research (ICAR) program. The authors thank Raissa Estrela and the Excalibur team at JPL for useful discussions on the HST model presented in Swain et al. (2021). _Facilities:_ JWST (NIRSpec). The _JWST_ data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via DOI: 10.17909/0njr-8110.
2305.05814
Isospectral Local Hamiltonians for Perturbative PT-symmetric Hamiltonians
A new method to work out the Hermitian correspondence of a PT-symmetric quantum mechanical Hamiltonian is proposed. In contrast to the conventional method, the new method ends with a local Hamiltonian of the form p^2/2+m^2x^2/2+v(x) without any higher-derivative terms. This method is demonstrated in the perturbative regime. Possible extensions to multi-variable quantum mechanics and quantum field theories are discussed.
Yi-Da Li, Qing Wang
2023-05-09T23:53:54Z
http://arxiv.org/abs/2305.05814v1
# Isospectral Local Hamiltonians for Perturbative \(\mathcal{PT}\)-symmetric Hamiltonians ###### Abstract A new method to work out the Hermitian correspondence of a \(\mathcal{PT}\)-symmetric quantum mechanical Hamiltonian is proposed. In contrast to the conventional method, the new method ends with a local Hamiltonian of the form \(\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+v(x)\) without any higher-derivative terms. This method is demonstrated in the perturbative regime. Possible extensions to multi-variable quantum mechanics and quantum field theories are discussed. ## I Introduction The discovery of real-spectra \(\mathcal{PT}\)-symmetric Hamiltonians[1] has inspired a wealth of research beyond conventional Hermitian quantum theories[2; 3]. Originally, in [1] it was found that Hamiltonians of the form \(H=p^{2}+m^{2}-(ix)^{N}\) (\(N\geq 2\)) have real spectra. Later, the general framework to describe a \(\mathcal{PT}\)-symmetric quantum theory was established[4; 5; 6; 7]. A nontrivial metric operator \(\eta=e^{-Q}\) satisfying \(\eta H\eta^{-1}=H^{\dagger}\) is necessary[5] for the unitary evolution generated by a non-Hermitian \(\mathcal{PT}\)-symmetric Hamiltonian \(H\), which differs from Hermitian quantum mechanics. With the help of this metric operator, a real-spectra \(\mathcal{PT}\)-symmetric Hamiltonian \(H\) can be recast to an isospectral Hermitian Hamiltonian \(h=e^{-Q/2}He^{Q/2}\) equipped with the ordinary Dirac inner product. A remarkable example is the isospectral Hermitian Hamiltonian for \(H=p^{2}-gx^{4}\), as described in [8; 9]. The stability of the \(-x^{4}\) potential is essential to guarantee the stability of the Higgs vacuum[3]. Moreover, a generic method[10] has been developed to calculate the metric operator for a perturbative \(\mathcal{PT}\)-symmetric Hamiltonian of the form \(H=H_{0}+\epsilon H_{1}\), where \(H_{0}\) is Hermitian and \(H_{1}\) is anti-Hermitian. In this case, \(Q\) has the form \(Q=\epsilon Q_{1}+\epsilon^{3}Q_{3}+\cdots\) and each term can be determined perturbatively as follows[3] \[[H_{0},Q_{1}]=-2H_{1},\ [H_{0},Q_{3}]=-\frac{1}{6}[[H_{1},Q_{1}],Q_{1}],\ \cdots \tag{1}\] The isospectral Hermitian Hamiltonian \(h\) acquired from this procedure is in general nonlocal in the sense of containing terms of arbitrarily high order in the momentum \(p\), which renders the physical meaning of \(h\) rather obscure. However, there is considerable freedom in generating \(h\), as demonstrated in [11]. In this paper we give an explicit method to calculate the local version of \(h\) for perturbative \(\mathcal{PT}\)-symmetric Hamiltonians whose free parts are non-degenerate. In contrast to the nonlocal \(h\) from the above conventional method, we believe a local form has a transparent physical meaning and will bring inspiration to the study of \(\mathcal{PT}\)-symmetric theories. Here we summarize the main procedures of our new method and the structure of this paper. In Sec. II, we start from a single-variable Hamiltonian \(H_{V}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+V(x,p)\), where \(V(x,p)=\sum_{n=1}^{\infty}g^{n}V_{n}(x,p)\) is the sum of various polynomial functions \(V_{n}(x,p)\) of \(x\) and \(p\) with coupling constant \(g\). We assume \(H_{V}\) respects unbroken \(\mathcal{PT}\) symmetry. 
Then we show that a similarity transformation of \(H_{V}\) leads to a manifestly diagonal Hermitian Hamiltonian \(H_{N}=m(N+\frac{1}{2})+F(N)\), where \(F(N)=\sum_{n=1}^{\infty}g^{n}f_{n}(N)\) is the sum of various polynomial functions \(f_{n}(N)\) of \(N\), with \(N=a^{\dagger}a\) and \(a=\sqrt{\frac{m}{2}}x+i\sqrt{\frac{1}{2m}}p\) the standard annihilation operator1. In Sec. III, we transform \(H_{N}\) to \(h_{v}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+v(x)\), where \(v(x)=\sum_{n=1}^{\infty}g^{n}v_{n}(x)\) is the sum of various polynomial functions \(v_{n}(x)\) of \(x\) only. The transformation from \(H_{V}\) to \(H_{N}\) is a typical diagonalization procedure. The key point of the transformation from \(H_{N}\) to \(h_{v}\) is the existence of a one-to-one correspondence between \(n\)-th order polynomials in \(N\) and in \(x^{2}\). In Sec. IV, we calculate \(h_{v}\) in the \(ix^{3}\) model as an example. When generalizing to multi-variable Hamiltonians, the one-to-one map exists only in the case where the free part of \(H_{V}\) is non-degenerate; this is discussed in Sec. V together with the generalization to quantum field theories. We conclude in Sec. VI. Footnote 1: \(\hbar=1\) is assumed. ## II Diagonalization of a Hamiltonian with the \(D\)-operation Consider a single-variable Hamiltonian with one2 real coupling constant \(g\) Footnote 2: Generalization to multi-coupling Hamiltonians is straightforward. \[H_{V}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+\sum_{n=1}^{\infty}g^{n}V_{n}(x,p). \tag{2}\] As stated in Sec. I, \(V_{n}(x,p)\) is a polynomial function of \(x\) and \(p\) respecting \(\mathcal{PT}\) symmetry. Creation and annihilation operators can be defined as usual \[a^{\dagger}=\sqrt{\frac{m}{2}}x-i\sqrt{\frac{1}{2m}}p,\;a=\sqrt{\frac{m}{2}}x+i \sqrt{\frac{1}{2m}}p. \tag{3}\] In the Fock space defined by \(a^{\dagger}\) and \(a\), diagonal operators are of the form \(\sum_{n=0}^{\infty}c_{n}N^{n}\) because of the commutation relation \([a,a^{\dagger}]=1\). We define a linear operation \(D()\) on any operator \(\mathcal{O}\) to take out the diagonal part of \(\mathcal{O}\), such that \(D(\mathcal{O})=\sum_{n=0}^{\infty}c_{n}^{\mathcal{O}}N^{n}\). For example, \[\begin{split}& D(1)=1,\;D(x)=D(p)=0,\\ & D(x^{2})=\frac{1}{m^{2}}D(p^{2})=\frac{1}{2m}(2N+1),\cdots\end{split} \tag{4}\] A diagonal operator \(\mathcal{O}\) satisfies \(\mathcal{O}=D(\mathcal{O})\). If we want to diagonalize \(H_{V}\) with a similarity transformation \(e^{-R}\), it is enough to satisfy the condition \[e^{-R}H_{V}e^{R}=D(e^{-R}H_{V}e^{R}). \tag{5}\] Assuming \(H_{V}\) can be treated in the perturbative regime, \(R\) can be written as a perturbation series \(R=\sum_{n=1}^{\infty}g^{n}R_{n}\). 
Taking out \(n\)-th order terms on both sides of (5), we have \[\begin{split}[H_{0},R_{n}]=& D([H_{0},R_{n}])+D(V_{n})-V_{n}\\ &+D\left(\sum_{j=2}^{n}\sum_{\begin{subarray}{c}\{k_{1},\cdots, k_{j}\}\\ k_{1}+\cdots+k_{j}=n\end{subarray}}\frac{[[H_{0},R_{k_{1}}],\cdots,R_{k_{j}}]}{j!}+ \sum_{\ell=1}^{n-1}\sum_{j=1}^{n-\ell}\sum_{\begin{subarray}{c}\{k_{1},\cdots,k_{j}\}\\ k_{1}+\cdots+k_{j}=n-\ell\end{subarray}}\frac{[[V_{\ell},R_{k_{1}}],\cdots,R_{k _{j}}]}{j!}\right)\\ &-\left(\sum_{j=2}^{n}\sum_{\begin{subarray}{c}\{k_{1},\cdots, k_{j}\}\\ k_{1}+\cdots+k_{j}=n\end{subarray}}\frac{[[H_{0},R_{k_{1}}],\cdots,R_{k_{j}}]}{j!}+ \sum_{\ell=1}^{n-1}\sum_{j=1}^{n-\ell}\sum_{\begin{subarray}{c}\{k_{1},\cdots,k_{j}\}\\ k_{1}+\cdots+k_{j}=n-\ell\end{subarray}}\frac{[[V_{\ell},R_{k_{1}}],\cdots,R_{k _{j}}]}{j!}\right),\end{split} \tag{6}\] where \(H_{0}\equiv\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}=m\left(N+\frac{1}{2}\right)\). Because \(H_{0}\) is diagonal, \([H_{0},R_{n}]\) has vanishing diagonal components and \(D([H_{0},R_{n}])=0\). \([H_{0},R_{n}]\) is thus determined completely by lower-order \(R_{k}\)s. It is obvious that \(D(\mathcal{O})-\mathcal{O}\) has vanishing diagonal components, such that it is of the form \(\sum_{k,\ell(k\neq\ell)}c_{k\ell}^{\mathcal{O}}a^{\dagger k}a^{\ell}\). We also have the relation \([H_{0},a^{\dagger k}a^{\ell}/(m(k-\ell))+\alpha_{k\ell}(N)]=a^{\dagger k}a^{\ell}\), where \(\alpha_{k\ell}(N)\) is an arbitrary function of \(N\), so that all \(R_{n}\) can be solved iteratively from (6). \(H_{N}\equiv e^{-R}H_{V}e^{R}\) is thus of the form \(H_{N}=m(N+\frac{1}{2})+\sum_{n=1}^{\infty}g^{n}f_{n}(N)\) as stated in Sec. I, where \(f_{n}(N)\) is given by \[f_{n}(N)=D\left(V_{n}+\sum_{j=2}^{n}\sum_{\begin{subarray}{c}\{k_{1},\cdots,k_ {j}\}\\ k_{1}+\cdots+k_{j}=n\end{subarray}}\frac{[[H_{0},R_{k_{1}}],\cdots,R_{k_{j}}]}{j!}+\sum_{\ell=1}^{n-1}\sum_{j=1}^{n-\ell}\sum_{\begin{subarray}{c}\{k_{1},\cdots,k_{j}\}\\ k_{1}+\cdots+k_{j}=n-\ell\end{subarray}}\frac{[[V_{\ell},R_{k_{1}}],\cdots,R_{k _{j}}]}{j!}\right). \tag{7}\] ## III The local potential from a diagonal Hamiltonian The diagonalization of \(H_{V}\) makes use of the \(D\)-operation, and one may think that \(H_{V}\) can be recovered from the diagonal \(H_{N}\) by some \(D^{-1}\)-operation. However, the \(D\)-operation is not bijective, as shown by (4), such that \(D^{-1}\) does not exist. The non-existence of \(D^{-1}\) indicates that there are many different Hamiltonians which are similar to the same diagonal \(H_{N}\). As we are going to show, there exists a local Hermitian \(h_{v}\) similar to \(H_{N}\), serving as the Hermitian correspondence of \(H_{V}\). To invert the diagonalization procedure, we make use of the fact that \(D(x^{2n})\) is a polynomial function of \(N\), written as \[D(x^{2n})=\sum_{k=0}^{n}X_{nk}N^{k}, \tag{8}\] where \(X_{nn}\neq 0\). Then we can define a linear operation \(L()\) on any operator \(\mathcal{O}\) as follows \[L(\mathcal{O})=L(D(\mathcal{O})),\;L(1)=1,\;L(N^{n})=\frac{1}{X_{nn}}\left(x^{2n}- \sum_{k=0}^{n-1}X_{nk}L(N^{k})\right)\;(n\geq 1), \tag{9}\] and \(L(N^{n})\) can be solved iteratively, resulting in a \(2n\)-th order polynomial function of \(x\). The requirement that \(h_{v}\equiv e^{-K}H_{N}e^{K}\) be local is simply \[e^{-K}H_{N}e^{K}-H_{0}=L\left(e^{-K}H_{N}e^{K}-H_{0}\right). \tag{10}\] Assume \(K\) has a perturbative expansion \(K=\sum_{n=1}^{\infty}g^{n}K_{n}\). 
Taking out \(n\)-th order terms on both sides of (10), we have \[\begin{split}[H_{0},K_{n}]=& L([H_{0},K_{n}])+L(f_{n}(N))-f_{n}(N)\\ &+L\left(\sum_{j=2}^{n}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1}+ \cdots+k_{j}=n}\frac{[[H_{0},K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}+\sum_{\ell=1}^{ n-1}\sum_{j=1}^{n-\ell}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1}+\cdots+k_{j}=n- \ell}\frac{[[f_{\ell}(N),K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}\right)\\ &-\left(\sum_{j=2}^{n}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1}+ \cdots+k_{j}=n}\frac{[[H_{0},K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}+\sum_{\ell=1}^{ n-1}\sum_{j=1}^{n-\ell}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1}+\cdots+k_{j}=n- \ell}\frac{[[f_{\ell}(N),K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}\right).\end{split} \tag{11}\] Because \([H_{0},K_{n}]\) has vanishing diagonal components, we have \(L([H_{0},K_{n}])=0\) by using \(D([H_{0},K_{n}])=0\) and (9). \([H_{0},K_{n}]\) is thus determined completely by lower-order \(K_{k}\)s. From (8) and (9) it is obvious that \(D(L(\mathcal{O}))=D(\mathcal{O})\), which is to say \(L(\mathcal{O})-\mathcal{O}\) has vanishing diagonal components, for any operator \(\mathcal{O}\). \(K_{n}\) can thus be solved iteratively for the same reason as the \(R_{n}\) in Sec. II. From (9), \(h_{v}\) is finally written in the form \(h_{v}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+\sum_{n=1}^{\infty}g^{n}v_{n}(x)\) where \(v_{n}(x)\) is given by \[v_{n}(x)=L\left(f_{n}(N)+\sum_{j=2}^{n}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1 }+\cdots+k_{j}=n}\frac{[[H_{0},K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}+\sum_{\ell=1} ^{n-1}\sum_{j=1}^{n-\ell}\sum_{\{k_{1},\cdots,k_{j}\}\atop k_{1}+\cdots+k_{j} =n-\ell}\frac{[[f_{\ell}(N),K_{k_{1}}],\cdots,K_{k_{j}}]}{j!}\right). \tag{12}\] ## IV \(ix^{3}\) as an example The \(ix^{3}\) model is a popular toy model for studying \(\mathcal{PT}\)-symmetric theories[2; 3; 10; 12]. However, a local form of the isospectral Hermitian Hamiltonian has not yet been given. Here we calculate the \(h_{v}\) for \(H_{V}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+igx^{3}\) up to \(\mathcal{O}(g^{3})\) and show that \(h_{v}\) is indeed local. Higher-order calculation is systematic, as shown by (6), (7), (9), (11) and (12), but rather tedious. Higher-order terms can be calculated whenever needed and will not be presented in this paper. The various quantities entailed in the calculation of \(h_{v}\) for \(H_{V}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+igx^{3}\) are as follows, up to \(\mathcal{O}(g^{3})\); we set all homogeneous terms to zero when solving for \(R_{n}\) from (6). \[R_{1}= \frac{-i}{m(2m)^{3/2}}\left(\frac{a^{\dagger 3}}{3}+3a^{\dagger}+3a^{ \dagger 2}a-3a^{\dagger}a^{2}-3a-\frac{a^{3}}{3}\right),\] \[R_{2}= \frac{1}{m(2m^{4})}\left(\frac{3}{2}a^{\dagger 4}-12a^{\dagger 2}-6a^{ \dagger 3}a+6a^{\dagger}a^{3}+12a^{2}-\frac{3}{2}a^{4}\right),\] \[f_{1}(N)= 0,\ f_{2}(N)=\frac{1}{8m^{4}}(30N^{2}+30N+11), \tag{13}\] \[L(N)= mx^{2}-\frac{1}{2},\ L(N^{2})=\frac{2}{3}m^{2}x^{4}-mx^{2},\] \[v_{1}(x)= 0,\ v_{2}(x)=\frac{5}{2m^{2}}x^{4}-\frac{1}{2m^{4}}.\] The expression for \(h_{v}\) is thus \[h_{v}=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+\frac{5g^{2}}{2m^{2}}x^{4}-\frac{ g^{2}}{2m^{4}}+\mathcal{O}(g^{3}). \tag{14}\] A typical result of \(h\) using the conventional method proposed in [10] is[12] \[h=\frac{1}{2}p^{2}+\frac{1}{2}m^{2}x^{2}+\frac{3g^{2}}{2m^{4}}\left(\left\{x^{ 2},p^{2}\right\}+m^{2}x^{2}+\frac{2}{3}\right)+\mathcal{O}(g^{3}), \tag{15}\] where the appearance of \(\left\{x^{2},p^{2}\right\}=x^{2}p^{2}+p^{2}x^{2}\) makes the physical interpretation of \(h\) rather complicated. 
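The isospectrality claimed in (14) can be verified numerically by diagonalizing both \(H_{V}\) and \(h_{v}\) in a truncated harmonic-oscillator basis. The following is a minimal sketch (the coupling and truncation size are illustrative choices); the low-lying eigenvalues should agree to within the neglected \(\mathcal{O}(g^{3})\) corrections:

```python
import numpy as np

m, g, M = 1.0, 0.05, 200                       # mass, coupling, basis truncation
a = np.diag(np.sqrt(np.arange(1, M)), 1)       # annihilation operator in the Fock basis
x = (a + a.T) / np.sqrt(2 * m)
p = 1j * np.sqrt(m / 2) * (a.T - a)

H = p @ p / 2 + m**2 * (x @ x) / 2 + 1j * g * x @ x @ x   # H_V = p^2/2 + m^2 x^2/2 + i g x^3
hv = (p @ p / 2 + m**2 * (x @ x) / 2
      + 5 * g**2 / (2 * m**2) * x @ x @ x @ x
      - g**2 / (2 * m**4) * np.eye(M))                    # local h_v from (14)

E_H = np.sort(np.linalg.eigvals(H).real)[:6]   # real spectrum for unbroken PT symmetry
E_h = np.linalg.eigvalsh(hv)[:6]
print(np.abs(E_H - E_h))                       # small, of order g^3 or below
```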
## V Generalization to multi-variable quantum mechanics and quantum field theories Consider a multi-variable Hamiltonian with coupling constant \(g\) \[H_{V}=\sum_{i}\left(\frac{1}{2}p_{i}^{2}+\frac{1}{2}m_{i}^{2}x_{i}^{2}\right)+ \sum_{n=1}^{\infty}g^{n}V_{n}(\{x_{j}\},\{p_{k}\}). \tag{16}\] If there is no degeneracy in the free part \(H_{0}=\sum_{i}\left(\frac{1}{2}p_{i}^{2}+\frac{1}{2}m_{i}^{2}x_{i}^{2}\right)\), which is to say that all integer linear combinations \(\sum_{i}n_{i}m_{i}\) (\(n_{i}\in\mathbb{Z},\ \exists n_{j}\neq 0\)) of the \(m_{i}\) are nonzero, there is no obstacle in calculating \(h_{v}\) from \(H_{V}\). First, the \(D\)-operation is generalized trivially, resulting in functions of \(N_{i}=a_{i}^{\dagger}a_{i}\), and \(D(\mathcal{O})-\mathcal{O}\) is a linear combination of \(\prod_{i,j}a_{i}^{\dagger n_{i}}a_{j}^{\ell_{j}}\ \left(\sum_{i,j}\left(n_{i}m_{i}-\ell_{j}m_{j}\right)\neq 0\right)\) for any operator \(\mathcal{O}\). Second, \(R_{n}\) is guaranteed to have solutions by the explicit commutation relation \(\left[H_{0},\left(\prod_{i,j}a_{i}^{\dagger n_{i}}a_{j}^{\ell_{j}}\right)/ \left(\sum_{i,j}\left(n_{i}m_{i}-\ell_{j}m_{j}\right)\right)+\alpha_{\{n_{i}, \ell_{j}\}}(\{N_{k}\})\right]=\prod_{i,j}a_{i}^{\dagger n_{i}}a_{j}^{\ell_{j}}\). Next, the \(L\)-operation is also generalized trivially, resulting in functions of the \(x_{i}\), and \(K_{n}\) is soluble in the same way as \(R_{n}\). Finally, we get a local \(h_{v}=\sum_{i}\left(\frac{1}{2}p_{i}^{2}+\frac{1}{2}m_{i}^{2}x_{i}^{2}\right) +\sum_{n=1}^{\infty}g^{n}v_{n}(x_{i})\) as the isospectral Hermitian Hamiltonian of \(H_{V}\). However, if degeneracy does occur in \(H_{0}\), \(D(\mathcal{O})-\mathcal{O}\) has terms of the form \(\prod_{i,j}a_{i}^{\dagger n_{i}}a_{j}^{\ell_{j}}\) where \(\sum_{i,j}\left(n_{i}m_{i}-\ell_{j}m_{j}\right)=0\). Consequently, \(R_{n}\) has no solution and the whole procedure breaks down. A quantum field theory is multi-variable, of course. However, Lorentz symmetry requires that all relativistic quantum field theories have the same spectra as free theories. Therefore, any perturbatively well-defined relativistic quantum field theory is equivalent to its corresponding free theory up to a similarity transformation, which is constructed explicitly in textbooks such as [13]. A \(\mathcal{PT}\)-symmetric relativistic quantum field theory is thus isospectral to any local Hermitian theory having the same mass, and conditions beyond spectral equivalence are needed to isolate a meaningful Hermitian partner for a \(\mathcal{PT}\)-symmetric relativistic quantum field theory. ## VI Summary and outlook In this paper we propose a new method to calculate isospectral Hermitian Hamiltonians of \(\mathcal{PT}\)-symmetric Hamiltonians, and local expressions are acquired for those whose free parts are non-degenerate. In summary, we diagonalize a quantum mechanical Hamiltonian and transform the diagonalized one into a Hermitian Hamiltonian with a local potential, making use of a correspondence between \(n\)-th order polynomials in \(N\) and in \(x^{2}\). However, this correspondence, which is denoted as the \(L\)-operation, is not unique. There are many polynomials that lead to the same result as \(x^{2n}\) under the \(D\)-operation, because \(D(x^{2k+1})=0\) is satisfied for any non-negative integer \(k\). Therefore, various definitions of \(L(N^{n})\) can differ by arbitrary combinations of odd powers \(x^{2k+1}\), thus resulting in different \(h_{v}\)'s which differ from each other by such odd terms, too. 
This nonuniqueness reflects the spectral equivalence of different potentials and disappears once we specify the parity property of \(h_{v}\). Our method is incapable of dealing with theories degenerate in their free parts, as discussed in Sec. V. However, the conventional method is also invalid in this case. For example, consider a \(\mathcal{PT}\)-symmetric Hamiltonian \(H=\frac{1}{2}p_{1}^{2}+\frac{1}{2}p_{2}^{2}+\frac{1}{2}m^{2}x_{1}^{2}+ \frac{1}{2}(2m)^{2}x_{2}^{2}+igx_{1}^{2}x_{2}\); the first-order equation needed to calculate the metric operator \(\exp\left(\sum_{n=0}^{\infty}g^{2n+1}Q_{2n+1}\right)\) is \([H_{0},Q_{1}]=-2ix_{1}^{2}x_{2}\), which has no solution because \(\langle 2,0|[H_{0},Q_{1}]|0,1\rangle=0\) is not consistent with \(-2i\langle 2,0|x_{1}^{2}x_{2}|0,1\rangle=-2i/(2m)^{3/2}\), where \(|2,0\rangle\) and \(|0,1\rangle\) are basis states in the Fock space of \(H_{0}=\frac{1}{2}p_{1}^{2}+\frac{1}{2}p_{2}^{2}+\frac{1}{2}m^{2}x_{1}^{2}+ \frac{1}{2}(2m)^{2}x_{2}^{2}\). We hope more powerful methods can be developed to handle degeneracy problems. Although degeneracy also occurs in quantum field theories, Lorentz symmetry makes all quantum field theories with the same physical mass equivalent to each other. While the conventional method picks out a Hermitian \(h\) for a \(\mathcal{PT}\)-symmetric Hamiltonian \(H\) through its explicit calculation procedure, we point out that there is no special choice of \(h\) if we consider only the spectrum of a \(\mathcal{PT}\)-symmetric Hamiltonian \(H\), and further constraints must be added to select a meaningful Hermitian \(h\). We hope to extract more physical information from \(\mathcal{PT}\)-symmetric quantum field theories and thus be able to construct a special Hermitian Hamiltonian \(h\) for a \(\mathcal{PT}\)-symmetric Hamiltonian \(H\) which carries the same physical information as \(H\). ###### Acknowledgements. \({}^{\dagger}\) Corresponding author: [email protected]
2308.00990
Contact formalism for dissipative mechanical systems on Lie algebroids
In this paper, we introduce a geometric description of contact Lagrangian and Hamiltonian systems on Lie algebroids in the framework of contact geometry, using the theory of prolongations. We discuss the relation between Lagrangian and Hamiltonian settings through a convenient notion of Legendre transformation. We also discuss the Hamilton-Jacobi problem in this framework and introduce the notion of a Legendrian Lie subalgebroid of a contact Lie algebroid.
Alexandre Anahory Simoes, Leonardo Colombo, Manuel de Leon, Modesto Salgado, Silvia Souto
2023-08-02T07:48:30Z
http://arxiv.org/abs/2308.00990v1
# Contact formalism for dissipative mechanical systems ###### Abstract In this paper, we introduce a geometric description of contact Lagrangian and Hamiltonian systems on Lie algebroids in the framework of contact geometry, using the theory of prolongations. We discuss the relation between Lagrangian and Hamiltonian settings through a convenient notion of Legendre transformation. We also discuss the Hamilton-Jacobi problem in this framework and introduce the notion of a Legendrian Lie subalgebroid of a contact Lie algebroid. **Keywords:** Contact geometry, dissipative systems, Lie algebroids, Herglotz equations. **MSC 2020 codes:** 37J55, 53D10, 37C79, 37J37, 70H03, 70H05, 70H20 ## 1 Introduction The study of contact Hamiltonian systems has been experiencing enormous interest in recent years, due to their applications in fields such as thermodynamics, cosmology and neuroscience, to name but a few [2, 13, 36, 52, 53] (see also [10, 11, 15, 17, 20, 21, 23, 24, 25, 26, 29, 42] and the references therein). Their key property lies in the fact that they model dissipative systems, as opposed to symplectic Hamiltonian systems, which serve as conservative models. Although familiar in thermodynamics (the geometric model is a contact manifold and equilibrium states are interpreted as Legendrian submanifolds), contact systems were mostly used in their Hamiltonian formalism. However, the recovery of Herglotz's variational principle [37] (a generalisation of Hamilton's principle) has allowed the development of a Lagrangian formalism, which corresponds to what in physics are called action-dependent Lagrangians. This interest has led to the extension of Lagrangian and Hamiltonian contact formalisms to other geometric contexts, such as Lie algebroids. This extension, already known in the case of symplectic Lagrangian and Hamiltonian systems, is not a mere mathematical formalism, but proves to be very useful for treating, for example, systems with symmetries, where the reduced system is no longer defined on tangent or cotangent bundles but on a quotient of them. Indeed, the Lie algebroid context is a unifying concept (see [3, 16, 19, 30, 31, 40, 45, 48, 50, 51, 54]); the goal is to develop the program proposed by A. Weinstein in the early 1990s [58]. There are two ways to extend these contact formalisms to algebroids. One is to extend the canonical Jacobi structure on \(T^{*}Q\times\mathbb{R}\) to the case of \(E^{*}\times\mathbb{R}\), where \(E^{*}\) is the dual vector bundle of an algebroid \(E\) over the configuration space \(Q\). We have developed this approach in [4]. The advantage of this method is its simplicity, and the disadvantage is that we do not recover a contact structure, but a Jacobi structure (we also do not get a direct Lagrangian formulation). The second method is to use the notion of prolongation of a Lie algebroid [30, 43, 46, 47, 49]. This has allowed us to define contact Lie algebroids, using the differential naturally associated to them, and therefore also the concept of a Legendrian Lie subalgebroid. We obtain the Euler-Lagrange and Hamilton equations, and relate them by a Legendre transformation. As in the case of usual contact Hamiltonian systems, associated to a Hamiltonian function we have an evolution section in addition to the Hamiltonian section. As an application of the results, we obtain the Hamilton-Jacobi equation for both the Hamiltonian section and the evolution section. The organization of the paper is as follows. In section 2, we recall some basic elements from contact geometry. 
In section 3, we recall some basic facts about Lie algebroids and the differential geometric aspects associated to them (see [44] for instance). In this section, we also describe a particular example of a Lie algebroid, called the _prolongation of a Lie algebroid over a fibration_. This Lie algebroid will be necessary for further developments. In sections 4 and 5, the contact formalism is extended to the setting of Lie algebroids; indeed, section 4 describes the Lagrangian approach and section 5 describes the Hamiltonian approach. These formalisms are developed in an analogous way to the standard contact Lagrangian and Hamiltonian formalisms. We finish this section by defining the Legendre transformation in the context of Lie algebroids and establishing the equivalence between the Lagrangian and Hamiltonian formalisms when the Lagrangian function is hyperregular. Several examples are also studied in this section. In section 6, we introduce the notion of a Legendrian Lie subalgebroid of a contact Lie algebroid. The Hamilton-Jacobi theory [1] is an alternative formulation of classical mechanics, equivalent to other formulations such as Hamiltonian mechanics, and Lagrangian mechanics for regular systems. For contact systems, it was introduced in [28, 33], and it has been extended to many other settings [12, 18, 34, 56]; this theory is also closely related to the integrability of Hamiltonian systems [27, 55]. We therefore think it is relevant to extend the theory to the Lie algebroid setting; this is done in section 7, where it is reinterpreted in terms of Legendrian subalgebroids. In this paper, we do not deal explicitly with reduction issues (see for instance [14, 41]), which will be discussed in a future paper. All manifolds and maps are \(C^{\infty}\) and summation over crossed repeated indices is understood. ## 2 Contact geometry In this section we review the geometric structures necessary to describe the contact formalism of dissipative mechanical systems. For more details, see [24, 35] (see also [10, 32]). For general studies of contact geometry in the Riemannian setting, see [6, 7, 8]. **Definition 2.1**.: _Consider a smooth manifold \(M\) of odd dimension \(2n+1\). A differential form \(\eta\in\Omega^{1}(M)\) such that \(\eta\wedge(\mathrm{d}\eta)^{n}\) is a volume form in \(M\) is a **contact form**. In this case, \((M,\eta)\) is said to be a **contact manifold**._ **Remark 2.2**.: There is a more general definition of a contact structure on a connected oriented manifold \(M\), which can be seen as an equivalence class of 1-forms satisfying \(\eta\wedge(\mathrm{d}\eta)^{n}\neq 0\) everywhere on \(M\), where two 1-forms \(\eta\), \(\eta^{\prime}\) are equivalent if there exists a nowhere vanishing function \(f\) such that \(\eta^{\prime}=f\eta\) (see Section 2.1 in [9]). \(\diamond\) Notice that the condition \(\eta\wedge(\mathrm{d}\eta)^{n}\neq 0\) implies that the contact form \(\eta\) induces a decomposition of the tangent bundle \(TM\) in the form \(TM=\ker\eta\oplus\ker\mathrm{d}\eta\equiv D^{\mathrm{C}}\oplus D^{\mathcal{R}}\). **Proposition 2.3**.: _Given a contact manifold \((M,\eta)\), there exists a unique vector field \(\mathcal{R}\in\mathfrak{X}(M)\), called the **Reeb vector field**, such that_ \[i_{\mathcal{R}}\mathrm{d}\eta=0\,,\qquad i_{\mathcal{R}}\eta=1\,. \tag{1}\] The Reeb vector field \(\mathcal{R}\) generates the distribution \(D^{\mathcal{R}}\), called the **Reeb distribution**. 
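Conditions (1) determine \(\mathcal{R}\) pointwise through a linear system that can be solved directly in coordinates. As a minimal numerical sketch (anticipating the canonical 1-form \(\eta=\mathrm{d}s-p\,\mathrm{d}q\) of Example 2.5 below, with \(\mathrm{d}\eta\) and \(\eta\) encoded as a matrix and a covector at a point):

```python
import numpy as np

p0 = 0.7                                  # evaluate at a point with p = 0.7
Omega = np.array([[0., 1., 0.],           # d(eta) = dq ^ dp in the basis (dq, dp, ds)
                  [-1., 0., 0.],
                  [0., 0., 0.]])
eta = np.array([-p0, 0., 1.])             # eta = ds - p dq

# Reeb conditions (1): i_R d(eta) = 0 and i_R eta = 1, a linear system for R
M = np.vstack([Omega.T, eta])
b = np.array([0., 0., 0., 1.])
R, *_ = np.linalg.lstsq(M, b, rcond=None)
print(R)                                  # [0. 0. 1.], i.e. R = d/ds
```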
**Theorem 2.4** (Darboux theorem for contact manifolds).: _Consider a contact manifold \((M,\eta)\) of dimension \(2n+1\). Then, around every point \(p\in M\) there exists a local chart \((U,q^{i},p_{i},s)\), \(i=1,\ldots,n\), such that_ \[\eta\big{|}_{U}=\mathrm{d}s-p_{i}\mathrm{d}q^{i}\,.\] _These coordinates are called **Darboux**, **natural** or **canonical coordinates** of the contact manifold \((M,\eta)\)._ Notice that Darboux coordinates are a particular case of adapted coordinates and hence, in Darboux coordinates, the Reeb vector field is \[\mathcal{R}|_{U}=\frac{\partial}{\partial s}\,.\] **Example 2.5** (Canonical contact structure).: Let \(Q\) be a smooth manifold of dimension \(n\). Then, the product manifold \(T^{*}Q\times\mathbb{R}\) has a canonical contact structure given by the 1-form \(\eta=\mathrm{d}s-\theta\), where \(s\) is the canonical coordinate of \(\mathbb{R}\) and \(\theta\) is the pull-back of the Liouville 1-form \(\theta_{\circ}\in\Omega^{1}(T^{*}Q)\) by the projection \(T^{*}Q\times\mathbb{R}\to T^{*}Q\). Taking coordinates \((q^{i})\) on \(Q\) and natural coordinates \((q^{i},p_{i})\) on \(T^{*}Q\), the local expression of the contact 1-form is \[\eta=\mathrm{d}s-p_{i}\mathrm{d}q^{i}\,. \tag{2}\] We also have that \(\mathrm{d}\eta=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\) and hence, the Reeb vector field is \(\mathcal{R}=\partial/\partial s\). Given a contact manifold \((M,\eta)\), we have the \(\mathrm{C}^{\infty}(M)\)-module isomorphism \[\flat\colon\quad\mathfrak{X}(M) \longrightarrow \Omega^{1}(M)\] \[X \longmapsto i_{X}\mathrm{d}\eta+(i_{X}\eta)\,\eta\] **Remark 2.6**.: Notice that with this isomorphism in mind, we can define the Reeb vector field in an alternative way as \(\mathcal{R}=\flat^{-1}(\eta)\). \(\diamond\) ### Contact Hamiltonian systems This section reviews the concept of a contact Hamiltonian system and gives two different characterizations of the contact Hamiltonian vector field. **Definition 2.7**.: _Given a contact manifold \((M,\eta)\), for every \(H\in\mathrm{C}^{\infty}(M)\), we define its contact Hamiltonian vector field (or just Hamiltonian vector field) as the unique vector field \(X_{H}\) satisfying_ \[\flat(X_{H})=\mathrm{d}H-(\mathcal{R}(H)+H)\eta. \tag{3}\] In Darboux coordinates, this is written as follows \[X_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left( \frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial s}\right) \frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}} -H\right)\frac{\partial}{\partial s}\,.\] An integral curve \(\gamma(t)=(q^{i}(t),p_{i}(t),s(t))\) of this vector field \(X_{H}\) satisfies the contact Hamilton equations \[\dot{q}^{i}=\frac{\partial H}{\partial p_{i}}\,,\quad\dot{p}_{i}=-\left(\frac {\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial s}\right)\,, \quad\dot{s}=p_{i}\frac{\partial H}{\partial p_{i}}-H\,. \tag{4}\] These equations are a generalization of the conservative Hamilton equations. We recover this particular case when \(\mathcal{R}(H)=0\). That is, when \(H\) does not depend on \(s\). The following proposition gives us two equivalent ways of writing equations (2.7): **Proposition 2.8**.: _Let \(H:M\to\mathbb{R}\) be a Hamiltonian function. The following statements are equivalent:_ 1. \(X_{H}\) _is the Hamiltonian vector field of_ \(H\)_,_ 2. 
\(X_{H}\) _satisfies_ \[i_{X_{H}}\mathrm{d}\eta=\mathrm{d}H-\mathcal{R}(H)\eta,\qquad i_{X_{H}}\eta=-H.\] **Definition 2.9**.: _A contact Hamiltonian system is a triple \((M,\eta,H)\), where \((M,\eta)\) is a contact manifold and \(H:M\to\mathbb{R}\) is a smooth real function on \(M\) that we will refer to as the Hamiltonian function._ The contact Hamiltonian vector fields model the dynamics of dissipative mechanical systems. As opposed to the case of symplectic Hamiltonian systems, the evolution does not preserve the energy since \[X_{H}(H)=-\mathcal{R}(H)H\,,\] which expresses the dissipation of the Hamiltonian function. ### Contact Lagrangian systems Let \(Q\) be a manifold with dimension \(n\) and coordinates \((q^{i})\) and consider the product manifold \(TQ\times\mathbb{R}\) with natural coordinates \((q^{i},v^{i},s)\). The **vertical endomorphism**\(\mathcal{J}\colon T(TQ\times\mathbb{R})\to T(TQ\times\mathbb{R})\) and the **Liouville vector field**\(\Delta\in\mathfrak{X}(TQ\times\mathbb{R})\) are the natural extensions of the vertical endomorphism and the Liouville vector field on \(TQ\) to \(TQ\times\mathbb{R}\) (see [32] for more details). The local expressions of these objects in Darboux coordinates are \[\mathcal{J}=\frac{\partial}{\partial v^{i}}\otimes\mathrm{d}q^{i}\;,\quad \Delta=v^{i}\frac{\partial}{\partial v^{i}}\,.\] **Definition 2.10**.: _Consider a path \(c_{s}\colon I\subset\mathbb{R}\to Q\times\mathbb{R}\), where \(c_{s}(t)=(q^{i}(t),s(t))\). The **prolongation** of \(c_{s}\) to \(TQ\times\mathbb{R}\) is the path_ \[c_{s}^{\prime}=(\dot{c},s)\colon I\subset\mathbb{R}\to TQ\times\mathbb{R}\,, \quad c_{s}^{\prime}(t)=(q^{i}(t),\dot{q}^{i}(t),s(t))\,.\] _The path \(c_{s}^{\prime}(t)\) is said to be **holonomic**._ **Definition 2.11**.: _A vector field \(\Gamma\in\mathfrak{X}(TQ\times\mathbb{R})\) is said to satisfy the **second-order condition** or to be a sode if all its integral curves are holonomic._ The following proposition gives an alternative characterization of sodes using the canonical structures defined above: **Proposition 2.12**.: _A vector field \(\Gamma\in\mathfrak{X}(TQ\times\mathbb{R})\) is a sode if, and only if, \(\mathcal{J}\circ\Gamma=\Delta\)._ The local expression of a sode is \(\Gamma=v^{i}\frac{\partial}{\partial q^{i}}+f^{i}\frac{\partial}{\partial v^{ i}}+g\frac{\partial}{\partial s}\,.\) Hence, in coordinates, a sode defines a system of differential equations of the form \[\frac{\mathrm{d}^{2}q^{i}}{\mathrm{d}t^{2}}=f^{i}(q^{i},\dot{q}^{i},s)\,,\quad\frac {\mathrm{d}s}{\mathrm{d}t}=g(q^{i},\dot{q}^{i},s)\,.\] **Definition 2.13**.: _Let \(L\colon TQ\times\mathbb{R}\to\mathbb{R}\) be a Lagrangian function._ * \(L\) _is said to be regular if its Hessian matrix with respect to the velocities_ \[(W_{ij}=\frac{\partial^{2}L}{\partial v^{j}\partial v^{i}})\] _is nonsingular._ * _The associated_ **Lagrangian energy** _is_ \(E_{L}=\Delta(L)-L\in\mathrm{C}^{\infty}(TQ\times\mathbb{R})\)_._ * _The_ **Cartan forms** _associated to_ \(L\) _are_ \[\theta_{L}=\mathrm{d}L\circ\mathcal{J}\in\Omega^{1}(TQ\times\mathbb{R})\,\quad\omega_{L}=- \mathrm{d}\theta_{L}\in\Omega^{2}(TQ\times\mathbb{R})\,.\] * _The_ \(1\)_-form_ \[\eta_{L}=\mathrm{d}s-\theta_{L}\in\Omega^{1}(TQ\times\mathbb{R})\,,\] _is a_ **contact form** _on_ \(TQ\times\mathbb{R}\) _if, and only if,_ \(L\) _is regular. 
In this case, the triple_ \((TQ\times\mathbb{R},\eta_{L},L)\) _is called a_ **contact Lagrangian system**_._ In natural coordinates \((q^{i},v^{i},s)\) on \(TQ\times\mathbb{R}\), the contact Lagrangian form \(\eta_{L}\) is \[\eta_{L}=\mathrm{d}s-\frac{\partial L}{\partial v^{i}}\mathrm{d}q^{i}\,,\] and hence \(\mathrm{d}\eta_{L}=\omega_{L}\) is given by \[\mathrm{d}\eta_{L}=-\frac{\partial^{2}L}{\partial s\partial v^{i}}\mathrm{d}s \wedge\mathrm{d}q^{i}-\frac{\partial^{2}L}{\partial q^{j}\partial v^{i}} \mathrm{d}q^{j}\wedge\mathrm{d}q^{i}-\frac{\partial^{2}L}{\partial v^{j} \partial v^{i}}\mathrm{d}v^{j}\wedge\mathrm{d}q^{i}\,.\] Every contact Lagrangian system \((TQ\times\mathbb{R},\eta_{L},L)\) has an associated contact Hamiltonian system \((TQ\times\mathbb{R},\eta_{L},E_{L})\). From (2.3), we have that the Reeb vector field \(\mathcal{R}_{L}\in\mathfrak{X}(TQ\times\mathbb{R})\) for this contact Hamiltonian system is given by the conditions \[i_{\mathcal{R}_{L}}\mathrm{d}\eta_{L}=0\,,\quad i_{\mathcal{R}_{L}}\eta_{L}=1\,.\] Its local expression in natural coordinates \((q^{i},v^{i},s)\) is \[\mathcal{R}_{L}=\frac{\partial}{\partial s}-W^{ji}\frac{\partial^{2}L}{ \partial s\partial v^{j}}\frac{\partial}{\partial v^{i}}\,,\] where \((W^{ij})\) is the inverse of the Hessian matrix of the Lagrangian \((W_{ij})\), that is, \(W^{ij}W_{jk}=\delta^{i}_{k}\). **Definition 2.14**.: _Consider a contact Lagrangian system \((TQ\times\mathbb{R},\eta_{L},L)\)._ _The **contact Lagrangian equations** for a vector field \(X\in\mathfrak{X}(TQ\times\mathbb{R})\) are_ \[i_{X}\mathrm{d}\eta_{L}=\mathrm{d}E_{L}-\mathcal{R}_{L}(E_{L})\eta_{L}\,, \quad i_{X}\eta_{L}=-E_{L}\,. \tag{5}\] _The vector field \(X_{L}\in\mathfrak{X}(TQ\times\mathbb{R})\) solution to these equations is a sode, and it is called the **contact Lagrangian vector field** (it is a contact Hamiltonian vector field for the function \(E_{L}\))._ Let us observe that if \(\gamma(t)=(q^{i}(t),v^{i}(t),s(t))\) is an integral curve of \(X_{L}\), from (2.14) we obtain \[v^{i}(t) =\dot{q}^{i}(t)\,,\] \[\frac{\partial^{2}L}{\partial v^{j}\partial v^{i}}\dot{v}^{j}+ \frac{\partial^{2}L}{\partial q^{j}\partial v^{i}}v^{j}+\frac{\partial^{2}L}{ \partial s\partial v^{i}}\dot{s}-\frac{\partial L}{\partial q^{i}}=\frac{d}{ dt}\left(\frac{\partial L}{\partial v^{i}}\right)-\frac{\partial L}{\partial q^{i}} =\frac{\partial L}{\partial s}\frac{\partial L}{\partial v^{i}}\,,\] \[\dot{s} =L\,,\] which coincide with the generalized Euler-Lagrange equations stated in [37]. Observe that \(\gamma\) is holonomic, that is, \(\gamma(t)=c^{\prime}_{s}(t)=(q^{i}(t),\dot{q}^{i}(t),s(t))\). ## 3 Lie algebroids In this section, we present some basic facts about Lie algebroids, including features of the associated differential calculus and results on Lie algebroid morphisms that will be necessary for the rest of the paper. For further information on groupoids and Lie algebroids, and their roles in differential geometry, see [38, 43, 44]. ### Generalities on Lie algebroids Let \(E\) be a vector bundle of rank \(m\) over a manifold \(Q\) of dimension \(n\), and let \(\tau:E\to Q\) be the vector bundle projection. Denote by \(Sec(E)\) the \(C^{\infty}(Q)\)-module of sections of \(\tau\). 
A _Lie algebroid structure_\(([\![\cdot,\cdot]\!]_{E},\rho)\) on \(E\) is a Lie bracket \([\![\cdot,\cdot]\!]_{E}\) on the space \(Sec(E)\) together with an _anchor map_\(\rho:E\to TQ\) and its, identically denoted, induced \(C^{\infty}(Q)\)-module homomorphism \(\rho:Sec(E)\to\mathfrak{X}(Q)\), such that the _compatibility condition_ \[[\![\sigma_{1},f\sigma_{2}]\!]_{E}=f[\![\sigma_{1},\sigma_{2}]\!]_{E}+(\rho( \sigma_{1})f)\sigma_{2}\,,\] holds for any smooth functions \(f\) on \(Q\) and sections \(\,\sigma_{1},\sigma_{2}\) of \(E\) (here \(\rho(\sigma_{1})\) is the vector field on \(Q\) given by \(\rho(\sigma_{1})(q)=\rho(\sigma_{1}(q))\)). The triple \((E,[\![\cdot,\cdot]\!]_{E},\rho)\) is called a _Lie algebroid over \(Q\)_. From the compatibility condition and the Jacobi identity, it follows that \(\rho:Sec(E)\to\mathfrak{X}(Q)\) is a homomorphism between the Lie algebras \((Sec(E),[\![\cdot,\cdot]\!]_{E})\) and \((\mathfrak{X}(Q),[\cdot,\cdot])\). Throughout this paper, the role played by a Lie algebroid is the same as that of the tangent bundle \(TQ\). In this way, one regards an element \(e\) of \(E\) as a generalized velocity, and the actual velocity \(v\) is obtained when we apply the anchor map to \(e\), i.e. \(v=\rho(e)\). Let \((q^{i})\) be local coordinates on a neighborhood \(U\) of \(Q\), \(i=1,\ldots,n\), and \(\{e_{\alpha}\}\) be a local basis of sections of \(\tau\), \(\alpha=1,\ldots,m\). Given an element \(a_{q}\in E\) such that \(\tau(a_{q})=q\), we can write \(a_{q}=y^{\alpha}(a_{q})e_{\alpha}(q)\in E_{q}\), i.e. each section \(\sigma\) is given locally by \(\sigma\big{|}_{U}=y^{\alpha}e_{\alpha}\) and the coordinates of \(a_{q}\) are \((q^{i}(q),y^{\alpha}(a_{q}))\). For _the anchor map_\(\rho:E\to TQ\) and its, identically denoted, induced \(C^{\infty}(Q)\)-module homomorphism \(\rho:Sec(E)\to\mathfrak{X}(Q)\,,\ [\rho(\sigma)]\,(q)=\rho(\sigma(q))\), we have \[[\rho(e_{\alpha})](q)=\rho(e_{\alpha}(q))=\rho^{i}_{\alpha}(q)\frac{\partial }{\partial q^{i}}\big{|}_{q}\,. \tag{6}\] A Lie algebroid structure on \(Q\) is locally determined by a set of local _structure functions_\(\rho^{i}_{\alpha}\), \(\mathcal{C}^{\gamma}_{\alpha\beta}:Q\to\mathbb{R}\) on \(Q\), defined by \[\rho(e_{\alpha})=\rho^{i}_{\alpha}\frac{\partial}{\partial q^{i}},\quad[\![e _{\alpha},e_{\beta}]\!]_{E}=\mathcal{C}^{\gamma}_{\alpha\,\beta}e_{\gamma}\,, \tag{7}\] which satisfy the relations \[\sum_{cyclic(\alpha,\beta,\gamma)}\left(\rho^{i}_{\alpha}\frac{\partial \mathcal{C}^{\nu}_{\beta\gamma}}{\partial q^{i}}+\mathcal{C}^{\nu}_{\alpha \mu}\mathcal{C}^{\mu}_{\beta\gamma}\right)=0\,\quad\rho^{j}_{\alpha}\frac{ \partial\rho^{i}_{\beta}}{\partial q^{j}}-\rho^{j}_{\beta}\frac{\partial\rho^ {i}_{\alpha}}{\partial q^{j}}=\rho^{i}_{\gamma}\mathcal{C}^{\gamma}_{\alpha \beta}\,. \tag{8}\] These relations, which are a consequence of the compatibility condition and Jacobi's identity, are usually called _the structure equations_ of the Lie algebroid \(E\). **Definition 3.1**.: _A curve \(\widetilde{c}\colon I\subseteq\mathbb{R}\to Q\) is called an integral curve of a section \(\xi\) of \(\tau:E\to Q\) if \(\widetilde{c}(t)\) is an integral curve of the vector field \(\rho(\xi)\), that is,_ \[\rho(\xi)(\widetilde{c}(t))=\widetilde{c}_{*}(t)\left(\frac{\mathrm{d}}{ \mathrm{d}t}\Big{|}_{t}\right). 
\tag{9}\] If \(\widetilde{c}\) is written locally as \(\widetilde{c}(t)=(q^{i}(t))\) and \(\xi\) as \(\xi=y^{\alpha}e_{\alpha}\), then we deduce that (3.1) is written in local coordinates as \[\left.\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\right|_{t}=\rho_{\alpha}^{i}( \widetilde{c}(t))y^{\alpha}(\widetilde{c}(t))\;.\] A Lie algebroid structure on \(E\) allows us to define _the exterior differential of \(E\)_, \(\mathrm{d}^{E}:Sec(\bigwedge^{k}E^{*})\to Sec(\bigwedge^{k+1}E^{*})\), as follows: \[\mathrm{d}^{E}\mu\;(\sigma_{1},\ldots,\sigma_{k+1}) = \sum_{i=1}^{k+1}(-1)^{i+1}\rho(\sigma_{i})\mu(\sigma_{1},\ldots, \widehat{\sigma_{i}},\ldots,\sigma_{k+1})\] \[+ \sum_{i<j}(-1)^{i+j}\mu([\sigma_{i},\sigma_{j}]_{E},\sigma_{1}, \ldots,\widehat{\sigma_{i}},\ldots,\widehat{\sigma_{j}},\ldots,\sigma_{k+1})\;,\] for \(\mu\in Sec(\bigwedge^{k}E^{*})\) and \(\sigma_{1},\ldots,\sigma_{k+1}\in Sec(E)\). It follows that \(\mathrm{d}^{E}\) is a cohomology operator, that is, \((\mathrm{d}^{E})^{2}=0\). In particular, if \(f:Q\to\mathbb{R}\) is a smooth real function then \(\mathrm{d}^{E}f\in E^{*}\) is given by \(\mathrm{d}^{E}f(\sigma)=\rho(\sigma)f\), for \(\sigma\in Sec(E)\), so we have that \[\mathrm{d}^{E}f=\rho_{\alpha}^{i}\,\frac{\partial f}{\partial q^{i}}\,e^{ \alpha}, \tag{10}\] where \(\{e^{\alpha}\}\) is the dual basis of \(\{e_{\alpha}\}\). Locally, the exterior differential is determined by \[\mathrm{d}^{E}q^{i}=\rho_{\alpha}^{i}e^{\alpha}\quad\mbox{and}\quad\mathrm{d }^{E}e^{\gamma}=-\frac{1}{2}\mathcal{C}_{\alpha\beta}^{\gamma}e^{\alpha}\wedge e ^{\beta}\,. \tag{11}\] Indeed, from (3.1) we deduce \(\mathrm{d}^{E}q^{i}(e_{\alpha})=\rho(e_{\alpha})(q^{i})=\rho_{\alpha}^{j}\frac {\partial}{\partial q^{j}}(q^{i})=\rho_{\alpha}^{i}\). Then \(\mathrm{d}^{E}q^{i}=\rho_{\alpha}^{i}\,e^{\alpha}\). Similarly, \(\mathrm{d}^{E}e^{\gamma}(e_{\alpha},e_{\beta})=\rho(e_{\alpha})(e^{\gamma}(e _{\beta}))-\rho(e_{\beta})(e^{\gamma}(e_{\alpha}))-e^{\gamma}([e_{\alpha},e_{ \beta}]_{E})=-\mathcal{C}_{\alpha\beta}^{\gamma}\). The usual Cartan calculus extends to the case of Lie algebroids: for every section \(\sigma\) of \(E\) we have a derivation \(\imath_{\sigma}\) (contraction) of degree \(-1\) and a derivation \(\mathcal{L}_{\sigma}=\imath_{\sigma}\circ\mathrm{d}^{E}+\mathrm{d}^{E}\circ\imath_{\sigma}\) (the Lie derivative) of degree \(0\); for more details, see [43, 44]. Let \((E,[\![\cdot,\cdot]\!]_{E},\rho)\) and \((E^{\prime},[\![\cdot,\cdot]\!]_{E^{\prime}},\rho^{\prime})\) be two Lie algebroids over \(Q\) and \(Q^{\prime}\) respectively; then a morphism of vector bundles \((F,f)\) of \(E\) on \(E^{\prime}\) is said to be a _Lie algebroid morphism_ if \[\mathrm{d}^{E}((F,f)^{*}\sigma^{\prime})=(F,f)^{*}(\mathrm{d}^{E^{\prime}} \sigma^{\prime})\,,\quad\mbox{ for all }\sigma^{\prime}\in Sec(\bigwedge^{k}(E^{\prime})^{*})\mbox{ and for all }k. \tag{12}\] Here \((F,f)^{*}\sigma^{\prime}\) is the section of the vector bundle \(\bigwedge^{k}E^{*}\to Q\) defined by \[((F,f)^{*}\sigma^{\prime})_{q}(a_{1},\ldots,a_{k})=\sigma^{\prime}_{f(q)}(F(a _{1}),\ldots,F(a_{k}))\,, \tag{13}\] for \(q\in Q\) and \(a_{1},\ldots,a_{k}\in E_{q}\). In particular, if \(Q=Q^{\prime}\) and \(f=id_{Q}:Q\to Q\) then the pair \((F,f)\) is a Lie algebroid morphism if, and only if, \[[\![F\circ\sigma_{1},F\circ\sigma_{2}]\!]_{E^{\prime}}=F[\![\sigma_{1},\sigma_ {2}]\!]_{E},\quad\rho^{\prime}(F\circ\sigma)=\rho(\sigma),\] for \(\sigma,\sigma_{1},\sigma_{2}\in Sec(E)\). Finally, we review the notion of a Lie subalgebroid. 
**Definition 3.2**.: _Let \((E,\llbracket\cdot,\cdot\rrbracket_{E},\rho)\) and \((F,\llbracket\cdot,\cdot\rrbracket_{F},\rho^{\prime})\) be two Lie algebroids over the manifolds \(Q\) and \(N\), respectively. A Lie subalgebroid is a morphism of Lie algebroids \(j:F\to E\), \(i:N\to Q\) such that the pair \((j,i)\) is a monomorphism of vector bundles and \(i\) is an injective immersion (see [38])._ ### Examples of Lie algebroids and Lie subalgebroids **Example 3.3**.: **(Tangent bundle)** The standard example of a Lie algebroid is the tangent bundle of a manifold \(Q\). In this case, the space of sections is just the set of vector fields on \(Q\) and the Lie bracket of sections is induced by the standard Lie bracket of vector fields on \(Q\). The anchor map is the identity. Let \(N\) be a submanifold of \(Q\); then \(TN\) is a Lie subalgebroid of \(TQ\). Now, let \(\mathcal{D}\) be a completely integrable distribution on a manifold \(Q\). \(\mathcal{D}\) equipped with the bracket of vector fields is a Lie algebroid over \(Q\), since \(\tau_{TQ}\mid_{\mathcal{D}}:\mathcal{D}\to Q\) is a vector bundle. The anchor map is the inclusion \(i_{\mathcal{D}}:\mathcal{D}\to TQ\) (\(i_{\mathcal{D}}\) is a Lie algebroid monomorphism). Hence, \(\mathcal{D}\) is a Lie subalgebroid of the Lie algebroid \(\tau_{TQ}:TQ\to Q\). Likewise, if \(N\) is an integral manifold of \(\mathcal{D}\), then \(\mathcal{D}|_{N}\) is a Lie subalgebroid of \(\mathcal{D}\). **Example 3.4**.: **(Lie algebra)** Let \(\mathfrak{g}\) be a _finite dimensional real Lie algebra_ and let \(Q=\{q\}\) consist of a single point. The vector bundle \(\tau_{\mathfrak{g}}:\mathfrak{g}\to Q\) is a Lie algebroid. The sections of this bundle can be identified with the elements of \(\mathfrak{g}\), and therefore we can take as the Lie bracket the Lie algebra structure of \(\mathfrak{g}\), denoted by \([\cdot,\cdot]_{\mathfrak{g}}\). Since \(TQ=\{0\}\), one may take the anchor map \(\rho\equiv 0\). Moreover, if \(\mathfrak{h}\) is a Lie subalgebra of \(\mathfrak{g}\) and we consider the Lie algebroids induced by \(\mathfrak{g}\) and \(\mathfrak{h}\) over a point, then \(\mathfrak{h}\) is a Lie subalgebroid of \(\mathfrak{g}\). **Example 3.5**.: **(Action Lie Algebroid)** Let \(\phi:Q\times G\to Q\) be an action of \(G\) on the manifold \(Q\), where \(G\) is a Lie group. _The vector bundle \(\tau_{Q\times\mathfrak{g}}:Q\times\mathfrak{g}\to Q\) is a Lie algebroid over \(Q\)._ The anti-homomorphism between the Lie algebras \(\mathfrak{g}\) and \(\mathfrak{X}(Q)\) induced by the action is \(\Phi:\mathfrak{g}\to\mathfrak{X}(Q)\), \(\xi\mapsto\xi_{Q}\), where \(\xi_{Q}\) is the infinitesimal generator of the action for \(\xi\in\mathfrak{g}\). The anchor map \(\rho:Q\times\mathfrak{g}\to TQ\) is defined by \(\rho(q,\xi)=-\xi_{Q}(q)\), and the Lie bracket of sections is given by the Lie algebra structure on \(Sec(\tau_{Q\times\mathfrak{g}})\) as \[\llbracket\hat{\xi},\hat{\eta}\rrbracket_{Q\times\mathfrak{g}}(q)=(q,[\xi, \eta])=\widehat{[\xi,\eta]}(q),\] for \(q\in Q\), where \(\hat{\xi}(q)=(q,\xi)\), \(\hat{\eta}(q)=(q,\eta)\) for \(\xi,\eta\in\mathfrak{g}\). The triple \((Q\times\mathfrak{g},\llbracket\cdot,\cdot\rrbracket_{Q\times\mathfrak{g}},\rho)\) is called an _Action Lie algebroid_. 
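Example 3.5 can be checked symbolically. In the sketch below, for the rotation action of \(SO(3)\) on \(\mathbb{R}^{3}\) with the sign convention \(\xi_{Q}(q)=\xi\times q\), the anchor \(\rho(\xi)=-\xi_{Q}\) is verified to be a Lie algebra homomorphism (a toy verification, not part of the construction itself):

```python
import sympy as sp

xs = sp.symbols('x y z')
q = sp.Matrix(xs)

def xi_Q(k):
    # infinitesimal generator of the rotation action: xi_Q(q) = e_k x q
    e = sp.Matrix([1 if i == k else 0 for i in range(3)])
    return e.cross(q)

def bracket(X, Y):
    # Lie bracket of vector fields on R^3: [X,Y]^i = X^j d_j Y^i - Y^j d_j X^i
    return Y.jacobian(q) * X - X.jacobian(q) * Y

rho = [-xi_Q(k) for k in range(3)]   # anchor map rho(e_k) = -xi_Q, as in Example 3.5

# since [e_1, e_2]_{so(3)} = e_3, the homomorphism property reads
# [rho(e_1), rho(e_2)] = rho(e_3)
print(sp.simplify(bracket(rho[0], rho[1]) - rho[2]))   # Matrix([[0], [0], [0]])
```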
Let \(N\) be a submanifold of \(Q\) and \(\mathfrak{h}\) be a Lie subalgebra of \(\mathfrak{g}\) such that the infinitesimal generators of the elements of \(\mathfrak{h}\) are tangent to \(N\); that is, the application \[\mathfrak{h} \to\mathfrak{X}(N)\] \[\xi \mapsto\xi_{N}\] is well defined. Thus, the action Lie algebroid \(N\times\mathfrak{h}\to N\) is a Lie subalgebroid of \(Q\times\mathfrak{g}\to Q\). **Example 3.6**.: **(Atiyah (gauge) algebroid)** Let \(G\) be a Lie group and assume that \(G\) acts freely and properly on \(Q\) and we denote by \(\pi:Q\to Q/G\) the associated principal bundle. The tangent lift of the action gives a free and proper action of \(G\) on \(TQ\). Thus, we can consider the fibration \(\tau:TQ/G\to Q/G\) given by \(\tau([v_{q}])=\pi(q)\). It can be proved that \(\tau\) is a vector bundle whose fiber over a point \(\pi(q)\in Q/G\) is isomorphic \(T_{q}Q\). The sections of \(\tau:TQ/G\to Q/G\) may be identified with the vector fields on \(Q\) which are invariant by the action \(\phi:G\times Q\to Q\), that is, \[Sec(TQ/G)=\{X\in\mathfrak{X}(Q)\mid X\text{ is $G$-invariant}\}.\] Since all \(G\)-invariant vector fields are \(\pi\)-projectable and the standard Lie bracket on vector fields is closed with respect to \(G\)-invariant vector fields, we can define a Lie algebroid structure on \(\widehat{TQ}:=TQ/G\to\widehat{Q}:=Q/G\), where the anchor map \(\rho:\widehat{TQ}\to T(\widehat{Q})\) is given by \(\rho([v_{q}])=T_{q}\pi(v_{q})\) Additionally, let \(N\) be a \(G\)-invariant submanifold of \(Q\) and \(\mathcal{D}_{N}\) be a \(G-\)invariant integrable distribution over \(N.\) We may consider the vector bundle \(\widehat{\mathcal{D}_{N}}=\mathcal{D}_{N}/G\to N/G=\widehat{N}\) and endow it with a Lie algebroid structure. The sections of \(\widehat{\mathcal{D}_{N}}\) are \[Sec(\widehat{\mathcal{D}}_{N})=\{X\in\mathfrak{X}(N)\mid X\text{ is $G$-invariant and }X(q)\in\mathcal{D}_{N}(q),\forall q\in N\}.\] The standard bracket of vector fields on \(N\) induces a Lie algebra structure on \(Sec(\widehat{\mathcal{D}}_{N}).\) The anchor map is the canonical inclusion of \(\widehat{\mathcal{D}}_{N}\) on \(T\widehat{N}\) and \(\widehat{\mathcal{D}}_{N}\) is a Lie subalgebroid of \(TQ/G\to Q/G.\) ### The prolongation of a Lie algebroid over a fibration. In this subsection we recall a particular kind of Lie algebroid that will be used later (see [38], for more details). If \((E,[\![\cdot,\cdot]\!]_{E},\rho)\) is a Lie algebroid of rank \(m\) over a smooth manifold \(Q\) of dimension \(n\), and \(\pi:P\to Q\) is a fibration, then \[\widetilde{\tau}_{P}\colon\mathcal{T}^{E}P=\bigcup_{p\in P}\mathcal{T}_{p}^{E }P\to P,\] where \[\mathcal{T}_{p}^{E}P=\{(a_{\pi(p)},v_{p})\in E\times TP\,/\,\rho(a_{\pi(p)})= T\pi(v_{p})\}\] is a Lie algebroid called _the prolongation of the Lie algebroid \(E\) over \(\pi:P\to Q\)_, where \(T\pi:TP\to TQ\) denotes the tangent map to \(\pi\). The anchor map of this Lie algebroid is \[\begin{array}{rcl}\rho^{\pi}\colon&\mathcal{T}^{E}P\equiv E\times_{TQ}TP& \rightarrow&TP\\ &(a_{\pi(p)},v_{p})&\mapsto&\rho^{\pi}(a_{\pi(p)},v_{p})=v_{p}\,.\end{array}\] The Lie bracket structure on the space of sections of \(\mathcal{T}^{E}P\) will be given shortly. In this paper we consider two particular prolongations, one over \(P=E\times\mathbb{R}\to Q\) and the other over \(P=E^{*}\times\mathbb{R}\to Q\). 
The following diagram collect the different projections defined from \(\mathcal{T}^{E}P\), that will be used throughout the chapter where \[\tau_{1}(a_{\pi(p)},v_{p})=a_{\pi(p)}\,,\qquad\rho^{\pi}(a_{\pi(p)},v_{p})=v_{p} \,,\qquad\widetilde{\tau}_{P}(a_{\pi(p)},v_{p})=p,\] being \(a_{\pi(p)}\in E,\ v_{p}\in T_{p}P\) and \(p\in P\). Now we will describe some objects related to \(\mathcal{T}^{E}P\). If \((q^{i},u^{\ell}\,)\) are local coordinates on \(P\) and \((q^{i},y^{\alpha})\) are local coordinates on \(E\) adapted to the local basis of section \(\{e_{\alpha}\}\) of \(\tau:E\to Q\), then the induced local coordinate system \((q^{i},z^{\alpha},u^{\ell},\dot{u}^{\ell})\) on \(\mathcal{T}^{E}P\), \(i=1,\ldots,n\), \(\alpha=1,\ldots,m\), \(\ell=1,\ldots,n^{\prime}\), is \[q^{i}(a_{\pi(p)},v_{p}) = q^{i}(\pi(p))\;,\qquad u^{\ell}(a_{\pi(p)},v_{p}) = u^{\ell}(p)\;,\] \[z^{\alpha}(a_{\pi(p)},v_{p}) = y^{\alpha}(a_{\pi(p)})\;,\qquad\dot{u}^{\ell}(a_{\pi(p)},v_{p}) = v_{p}(u^{\ell})\;.\] where \[a_{\pi(p)}=y^{\alpha}(a_{\pi(p)})e_{\alpha}(\pi(p))\,,\qquad v_{p}=v^{i}\frac {\partial}{\partial q^{i}}\Big{|}_{p}+\dot{u}^{\ell}\frac{\partial}{\partial u ^{\ell}}\Big{|}_{p}\,,\] and since \(\rho(a_{\pi(p)})=T_{p}\pi(v_{p})\,,\) from (3.1) we have \[v^{i}=y^{\alpha}(a_{\pi(p)})\rho^{i}_{\alpha}(\pi(p))\;, \tag{14}\] where \(\rho^{i}_{\alpha}\) is the local expression of the anchor map \(\rho:E\to TQ\). A local basis of sections of \(\widetilde{\tau}_{P}\colon\mathcal{T}^{E}P\to P\) is given by the family \(\mathcal{X}_{\alpha},\mathcal{V}_{\ell}\colon P\to\mathcal{T}^{E}P\), where \[\mathcal{X}_{\alpha}(p)=\left(e_{\alpha}(\pi(p)),\rho^{i}_{\alpha}(\pi(p)) \frac{\partial}{\partial q^{i}}\Big{|}_{p}\right)\,,\qquad\mathcal{V}_{\ell}( p)=\left(0_{\pi(p)},\frac{\partial}{\partial u^{\ell}}\Big{|}_{p}\right). \tag{15}\] From now we will denote by \(Sec(\mathcal{T}^{E}P)\) the set of sections of the projection \(\widetilde{\tau}_{P}:\mathcal{T}^{E}P\to P\). Locally, if a section \(Z\in Sec(\mathcal{T}^{E}P)\) writes as \(Z=Z^{\alpha}\mathcal{X}_{\alpha}+V^{\ell}\mathcal{V}_{\ell}\), then the expression of the associated vector field is \[\rho^{\pi}(Z)=\rho^{i}_{\alpha}Z^{\alpha}\frac{\partial}{\partial q^{i}}+V^{ \ell}\frac{\partial}{\partial u^{\ell}}\in\mathfrak{X}(P)\,.\] Thus, one can observe that the map \(\rho^{\pi}\) induces a \(\mathcal{C}^{\infty}(P)\)-modules homomorphism \[\rho^{\pi}\colon Sec(\mathcal{T}^{E}P)\to\mathfrak{X}(P).\] The Lie bracket structure on \(Sec(\mathcal{T}^{E}P)\) can be defined from its value on the elements of the local basis \(\{\mathcal{X}_{\alpha},\,\mathcal{V}_{\ell}\}\), which it is characterized by the relations \[[\![\mathcal{X}_{\alpha},\mathcal{X}_{\beta}]\!]^{\pi}=\mathcal{C}^{\gamma}_{ \alpha\beta}\mathcal{X}_{\gamma}\,,\ \ [\![\mathcal{X}_{\alpha},\mathcal{V}_{\ell}]\!]^{\pi}=0\,,\ \ [\![ \mathcal{V}_{\ell},\mathcal{V}_{\varphi}]\!]^{\pi}=0\,, \tag{16}\] where \(\mathcal{C}^{\gamma}_{\alpha\beta}\) are the structure functions associated with the Lie bracket of sections of \(E\to Q\). 
The exterior differential \[\mathrm{d}^{\mathcal{T}^{EP}}\colon\mathit{Sec}(\bigwedge^{l}(\mathcal{T}^{EP}P) ^{\,*})\to\mathit{Sec}(\bigwedge^{l+1}(\mathcal{T}^{EP}P)^{\,*})\] is therefore determined by \[\mathrm{d}^{\mathcal{T}^{EP}}q^{i} = \rho_{\alpha}^{i}\mathcal{X}^{\alpha}\,,\qquad\qquad\qquad\mathrm{ d}^{\mathcal{T}^{EP}}u^{\ell}, = \mathcal{V}^{\ell}\,, \tag{17}\] \[\mathrm{d}^{\mathcal{T}^{EP}}\mathcal{X}^{\gamma} = -\frac{1}{2}\mathcal{C}^{\gamma}_{\alpha\beta}\mathcal{X}^{\alpha }\wedge\mathcal{X}^{\beta}\;,\qquad\qquad\mathrm{d}^{\mathcal{T}^{EP}} \mathcal{V}^{\ell} = 0\,,\] where \(\{\mathcal{X}^{\alpha},\mathcal{V}^{\ell}\}\) is the dual basis of \(\{\mathcal{X}_{\alpha},\mathcal{V}_{\ell}\}\). **Example 3.7**.: In the case of \(E=TQ\), the prolongation \(\mathcal{T}^{TQ}TQ\) of this Lie algebroid over the projection \(\tau_{Q}:TQ\to Q\) may be identify with \(TTQ\) with the standard Lie algebroid structure over \(TQ\). **Example 3.8**.: Let \(\mathfrak{g}\) be a real Lie algebra of finite dimension. \(\mathfrak{g}\) is a Lie algebroid over a single point \(Q=\{q\}\). We will describe the prolongation \(\mathcal{T}^{\mathfrak{g}}\mathfrak{g}\) of the Lie algebroid \(\tau_{\mathfrak{g}}:\mathfrak{g}\to Q=\{q\}\) over the proper fibration \(\tau_{\mathfrak{g}}:\mathfrak{g}\to Q=\{q\}\). We have the identification \[\mathcal{T}^{\mathfrak{g}}\mathfrak{g}=\{(\xi_{1},v_{\xi_{2}}) \in\mathfrak{g}\times T\mathfrak{g}\} \equiv \mathfrak{g}\times(\mathfrak{g}\times\mathfrak{g})\] \[(\xi_{1},v_{\xi_{2}}) \equiv (\xi_{1},\xi_{2},\xi_{3}),\] where \(v_{\xi_{2}}\simeq(\xi_{2},\xi_{3})\). The vector bundle projection \(\tilde{\tau}_{\mathfrak{g}}:\mathcal{T}^{\mathfrak{g}}\mathfrak{g}\equiv 3 \mathfrak{g}\rightarrow\mathfrak{g}\) is given by \(\tilde{\tau}_{\mathfrak{g}}(\xi_{1},\xi_{2},\xi_{3})=\xi_{1}\), and the anchor map is \(\rho^{\tau}:\mathfrak{g}\times(\mathfrak{g}\times\mathfrak{g})\to T \mathfrak{g}\), \(\rho^{\tau}(\xi_{1},\xi_{2},\xi_{3})=(\xi_{2},\xi_{3})\in T_{\xi_{2}} \mathfrak{g}\). Let \(\{e_{A}\}\) be a basis of the Lie algebra \(\mathfrak{g}\) and \(y^{A}\) the induced local coordinates on \(\mathfrak{g}\), that is, \(\xi=y^{A}e_{A}\). Also this basis induces a basis of sections of \(\tilde{\tau}_{\mathfrak{g}}:\mathcal{T}^{\mathfrak{g}}\mathfrak{g}\equiv 3 \mathfrak{g}\rightarrow\mathfrak{g}\) as \[\mathcal{X}_{A}(\xi)=(\xi,e_{A},0),\qquad\mathcal{V}_{A}(\xi)=\left(\xi,0, \frac{\partial}{\partial y^{A}}\Big{|}_{\xi}\right),\] see (3.3), since the anchor map of \(\mathfrak{g}\) is the zero constant function. The Lie bracket structure on \(\mathit{Sec}(\mathcal{T}^{\mathfrak{g}}\mathfrak{g})\) is characterized by the relations (3.3), that is, \[[\![\mathcal{X}_{A},\mathcal{X}_{B}]\!]^{\tau}=\mathcal{X}_{[e_{A},e_{B}]}, \quad[\![\mathcal{X}_{A},\mathcal{V}_{B}]\!]^{\tau}=0,\quad[\![\mathcal{V}_{A },\mathcal{V}_{B}]\!]^{\tau}=0.\] **Example 3.9**.: Consider a Lie algebra \(\mathfrak{g}\) acting on a manifold \(Q\). Thus, we have a Lie algebra homomorphism \(\mathfrak{g}\rightarrow\mathfrak{X}(Q)\) mapping every element \(\xi\) of \(\mathfrak{g}\) to the associated fundamental vector field \(\xi_{Q}\) on \(Q\). 
We consider the Lie algebroid \(\tau_{Q\times\mathfrak{g}}:Q\times\mathfrak{g}\to Q\) with anchor map \[\rho:(q,\xi)\in Q\times\mathfrak{g}\longmapsto\rho(q,\xi)=-\xi_{Q}(q)\in TQ.\] Identifying \[T(Q\times\mathfrak{g})=TQ\times T\mathfrak{g}=TQ\times 2\mathfrak{g}\] (where \(2\mathfrak{g}=\mathfrak{g}\times\mathfrak{g}\)), an element of the prolongation of the Lie algebroid \(\tau_{Q\times\mathfrak{g}}:Q\times\mathfrak{g}\to Q\) \[\mathcal{T}^{Q\times\mathfrak{g}}(Q\times\mathfrak{g})=(Q\times\mathfrak{g}) \times_{TQ}T(Q\times\mathfrak{g})=(Q\times\mathfrak{g})\times_{TQ}(TQ\times \mathfrak{g}\times\mathfrak{g})\] over the bundle projection \(\tau_{Q\times\mathfrak{g}}\) is of the form \[((q,\xi),(v_{q},\eta,\tilde{\eta})),\] where \(q\in Q,\)\(v_{q}\in T_{q}Q\) and \((\xi,\eta,\tilde{\eta})\in 3\mathfrak{g},\) together with the condition \(T\tau_{Q\times\mathfrak{g}}(v_{q},\eta,\tilde{\eta}))=\rho(q,\xi)\) which implies that \(v_{q}=-\xi_{Q}(q).\) Therefore, we can identify \(\mathcal{T}^{Q\times\mathfrak{g}}(Q\times\mathfrak{g})\) with the vector bundle \(\tilde{\tau}_{Q\times\mathfrak{g}}:(Q\times\mathfrak{g})\times(\mathfrak{g} \times\mathfrak{g})\to Q\times\mathfrak{g}\) as follows \[\mathcal{T}^{Q\times\mathfrak{g}}(Q\times\mathfrak{g}) \equiv (Q\times\mathfrak{g})\times(\mathfrak{g}\times\mathfrak{g})\] \[((q,\xi),(v_{q},\eta,\tilde{\eta})) \equiv (q,\xi,\eta,\tilde{\eta})\,.\] Under this identification, the anchor map is given by \[\rho^{\tau} : (Q\times\mathfrak{g})\times(\mathfrak{g}\times\mathfrak{g}) \longrightarrow TQ\times\mathfrak{g}\times\mathfrak{g}\] \[(q,\xi,\eta,\tilde{\eta}) \longmapsto \rho^{\tau}(q,\xi,\eta,\tilde{\eta})=(-\xi_{Q}(q),\eta,\tilde{ \eta})\,.\] Given a basis \(\{e_{A}\}\) of \(\mathfrak{g},\) the basis \(\{\mathcal{X}_{A},\mathcal{V}_{A}\}\) of sections of \(\mathcal{T}^{Q\times\mathfrak{g}}(Q\times\mathfrak{g})\to Q\times \mathfrak{g}\) is given by \[\mathcal{X}_{A}(q,\xi)=(q,\xi,e_{A},0),\quad\mathcal{V}_{A}(q,\xi)=(q,\xi,0,e_ {A}).\] Finally, the Lie bracket structure on \(Sec(\mathcal{T}^{Q\times\mathfrak{g}}(Q\times\mathfrak{g}))\) is characterized by \[[\![\mathcal{X}_{A},\mathcal{X}_{B}]\!]^{\tau}=\mathcal{X}_{[e_{A},e_{B}]}, \quad[\![\mathcal{X}_{A},\mathcal{V}_{B}]\!]^{\tau}=0,\quad[\![\mathcal{V}_{A},\mathcal{V}_{B}]\!]^{\tau}=0.\] **Example 3.10**.: Let us describe the \(E\)-tangent bundle to \(E\) in the case of \(E\) being an Atiyah algebroid induced by a trivial principal \(G-\)bundle \(\pi:G\times Q\to Q.\) In such case, by left trivialization we get the Atiyah algebroid, the vector bundle \[\tau_{\mathfrak{g}\times TQ}:\mathfrak{g}\times TQ\to Q.\] For \(X\in\mathfrak{X}(Q)\) and \(\xi\in\mathfrak{g},\) we may consider sections \(X^{\xi}:Q\to\mathfrak{g}\times TQ\) of the Atiyah algebroid given by \[X^{\xi}(q)=(\xi,X(q))\text{ for }q\in Q.\] Moreover, the anchor map \(\rho:\mathfrak{g}\times TQ\to TQ\) is defined by \(\rho(X^{\xi}(q))=X(q)\) and the Lie bracket of sections is given by \([\![X^{\xi},Y^{\xi}]\!]_{\mathfrak{g}\times TQ}=([X,Y]_{TQ},[\xi,\eta]_{ \mathfrak{g}}).\) Identifying \[T(\mathfrak{g}\times TQ)=T\mathfrak{g}\times TTQ=\mathfrak{g}\times\mathfrak{ g}\times TTQ\,,\] an element of the prolongation of the Lie algebroid \(\tau_{\mathfrak{g}\times TQ}:\mathfrak{g}\times TQ\to Q\) \[\mathcal{T}^{\mathfrak{g}\times TQ}(\mathfrak{g}\times TQ)=(\mathfrak{g}\times TQ )\times_{TQ}T(\mathfrak{g}\times TQ)=(\mathfrak{g}\times TQ)\times_{TQ}( \mathfrak{g}\times\mathfrak{g}\times TTQ)\] over the bundle projection 
\(\tau_{\mathfrak{g}\times TQ}\) is of the form \[((\xi,v_{q}),(\eta,\tilde{\eta},X_{u_{q}})),\] together with the condition \(T\tau_{\mathfrak{g}\times TQ}(\eta,\tilde{\eta},X_{u_{q}}))=\rho(\xi,v_{q}),\) which implies that \(u_{q}=v_{q}.\) Thus, we may identify \(\mathcal{T}^{\mathfrak{g}\times TQ}(\mathfrak{g}\times TQ)\) with the vector bundle \(\tilde{\tau}_{\mathfrak{g}\times TQ}:\mathfrak{g}\times 2\mathfrak{g}\times TTQ \to\mathfrak{g}\times TQ\) as follows \[\mathcal{T}^{\mathfrak{g}\times TQ}(\mathfrak{g}\times TQ) \equiv \mathfrak{g}\times 2\mathfrak{g}\times TTQ\] \[((\xi,v_{q}),(\eta,\tilde{\eta},X_{v_{q}})) \equiv (\xi,(\eta,\tilde{\eta}),X_{v_{q}}),\] whose vector bundle projection is \(\tilde{\tau}_{\mathfrak{g}\times TQ}(\xi,((\eta,\tilde{\eta}),X_{v_{q}}))=(\xi,v_ {q})\). Under this identification, the anchor map is given by \[\begin{array}{rcl}\rho^{\tau}&:&\mathfrak{g}\times 2\mathfrak{g}\times TTQ &\longrightarrow&\mathfrak{g}\times\mathfrak{g}\times TTQ\\ &&(\xi,((\eta,\tilde{\eta}),X))&\longmapsto&((\eta,\tilde{\eta}),X)\,.\end{array}\] If \((\eta,\tilde{\eta})\in 2\mathfrak{g}\) and \(X\in\mathfrak{X}(TQ)\), then one may consider the section \(((\eta,\tilde{\eta}),X):\mathfrak{g}\times TQ\to\mathcal{T}^{\mathfrak{g} \times TQ}(\mathfrak{g}\times TQ)\) given by \[((\eta,\tilde{\eta}),X)(\xi,v_{q})=(\xi,((\eta,\tilde{\eta}),X(v_{q}))),\ \mbox{for}\ (\xi,v_{q})\in \mathfrak{g}\times T_{q}Q.\] Moreover, the Lie bracket of these sections is given by \[[\![((\eta,\tilde{\eta}),X),((\xi,\tilde{\xi}),Y)]^{\tau}=(([\eta,\xi]_{ \mathfrak{g}},0),[X,Y]_{TQ}).\] ## 4 Contact Lagrangian formalism on Lie algebroids In this section, the contact Lagrangian formalism is extended to the general setting of Lie algebroids. First, we will introduce some geometric ingredients which are necessary to develop the contact Lagrangian formalism on Lie algebroids. **Definition 4.1**.: _A Lie algebroid \((E,[\![\cdot,\cdot]\!]_{E},\rho)\) of rank \(2k+1\) over a manifold \(M\) of dimension \(n\) is said to be contact if it admits a \(1\)-section \(\eta\) of the vector bundle \(\Lambda^{1}E^{*}\to M\) such that_ \[\eta\wedge(\mathrm{d}^{E}\eta)^{k}\neq 0\mbox{ everywhere on }M,\] _where \(\mathrm{d}^{E}:Sec(\bigwedge^{l}E^{*})\to Sec(\bigwedge^{l+1}E^{*})\) is the exterior differential of \(E\). We say that \(\eta\) defines a contact structure on \(E\)._ The above definition is equivalent to say that the fibres \((E_{x},\eta_{x})\) have a contact structure, and therefore they have odd dimension \(2k+1\). We also notice that \((\mathrm{d}^{E}\eta)_{|_{ker\,\eta}}\) is a non-degenerate \(2\)-section. **Proposition 4.2**.: _Given a contact Lie algebroid \((E,\eta)\), there exists a unique section \(\mathcal{R}\in Sec(E)\), called the Reeb section, such that_ \[i_{\mathcal{R}}\,\mathrm{d}^{E}\eta=0\,,\qquad i_{\mathcal{R}}\,\eta=1\,. \tag{18}\] The standard contact Lagrangian formalism is developed on the bundle \(TQ\times\mathbb{R}\). Since we are thinking of a Lie algebroid \(\tau:E\to Q\) as a substitute of the tangent bundle, it is natural to consider the projection map \(\pi\colon E\times\mathbb{R}\to Q\) given by \(\pi(a_{q},s)=q\). 
If \((q^{i},y^{\alpha})\) are local coordinates on \(\tau^{-1}(U)\subseteq E\), where \(U\) is an open subset of \(Q\), adapted to the local basis of section \(\{e_{\alpha}\}\), then the induced local coordinates \((q^{i},y^{\alpha},s)\) on \(\pi^{-1}(U)\subseteq E\times\mathbb{R}\) are given by \[q^{i}(a_{q},s)=q^{i}(q),\quad y^{\alpha}(a_{q},s)=y^{\alpha}(a_{q}),\quad s(a _{q},s)=s,\] where the projection \(\pi\colon E\times\mathbb{R}\to Q\) is locally given by \(\pi(q^{i},y^{\alpha},s)=(q^{i})\). ### The contact Lagrangian prolongation Let us consider the prolongation of a Lie algebroid \(E\) over the fibration \(\pi\colon E\times\mathbb{R}\to Q\). \(\mathcal{T}^{E}(E\times\mathbb{R})\) is the vector bundle defined by \[\mathcal{T}^{E}(E\times\mathbb{R})=\{(a_{q},v_{(b_{q},s)})\in E_{q}\times T_{(b_ {q},s)}(E\times\mathbb{R})/\ \rho(a_{q})=T\pi(b_{q},s)(v_{(b_{q},s)})\}\,.\] Therefore by (3.3), we know that \((a_{q},v_{(b_{q},s)})\in\mathcal{T}^{E}(E\times\mathbb{R})\) if, and only if, \[v_{(b_{q},s)}=y^{\alpha}(a_{q})\,\rho^{i}_{\alpha}(q)\frac{\partial}{\partial q ^{i}}+\dot{y}^{\alpha}\frac{\partial}{\partial y^{\alpha}}+\delta\frac{ \partial}{\partial s},\] being \((q^{i},y^{\alpha},s,z^{\alpha},\dot{y}^{\alpha},\dot{s})\), \(i=1,\ldots,n\), \(\alpha=1,\ldots,m\), the induced local coordinates on \(\mathcal{T}^{E}(E\times\mathbb{R})\), where \[q^{i}(a_{q},v_{(b_{q},s)}) = q^{i}(q)\;,\qquad z^{\alpha}(a_{q},v_{(b_{q},s)}) = y^{\alpha}(b_{q})\;,\] \[y^{\alpha}(a_{q},v_{(b_{q},s)}) = y^{\alpha}(a_{q})\;,\qquad\dot{y}^{\alpha}(a_{q},v_{(b_{q},s)}) = v_{(b_{q},s)}(y^{\alpha})\;,\] \[s(a_{q},v_{(b_{q},s)}) = s\;,\qquad\qquad\dot{s}(a_{q},v_{(b_{q},s)}) = v_{(b_{q},s)}(s)\;.\] From Section 3.3, we deduce the following properties of \(\mathcal{T}^{E}(E\times\mathbb{R})\). 1. The vector bundle \(\mathcal{T}^{E}(E\times\mathbb{R})\) with projection \(\widetilde{\tau}_{E\times\mathbb{R}}\colon\mathcal{T}^{E}(E\times\mathbb{R}) \to E\times\mathbb{R}\) given by \(\widetilde{\tau}_{E\times\mathbb{R}}(a_{q},v_{(b_{q},s)})=(b_{q},s)\) has a Lie algebroid structure \((\llbracket\cdot,\cdot\rrbracket^{\pi},\rho^{\pi})\), where the anchor map \(\rho^{\pi}\colon\mathcal{T}^{E}(E\times\mathbb{R})\to T(E\times\mathbb{R})\) given by \(\rho^{\pi}(a_{q},v_{(b_{q},s)})=v_{(b_{q},s)}\) is the canonical projection on the second factor. We refer to this Lie algebroid as the _contact Lagrangian prolongation_. The following diagram shows the different projections defined from \(\mathcal{T}^{E}(E\times\mathbb{R})\) \[\mathcal{T}^{E}(E\times\mathbb{R})\equiv\raisebox{-2.0pt}{\includegraphics[scale=0. 5]{figures/2.eps}}\times_{TQ}T(E\times\mathbb{R})\raisebox{-2.0pt}{\includegraphics[scale=0. 
5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ 
\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}} \raisebox{-2.0pt}{\includegraphics[scale=0.5]{figures/2.eps}}\raisebox{-2.0pt}{ \includegraphics[scale=0. 4. The Lie bracket of two sections of \(\widetilde{\tau}_{E\times\mathbb{R}}\) is characterized by (see (3.3)): \[\begin{array}{rclrcl}[\![\mathcal{X}_{\alpha},\mathcal{X}_{\beta}]\!]^{\pi}&=& \mathcal{C}_{\alpha\beta}^{\gamma}\mathcal{X}_{\gamma}\;,&[\![\mathcal{X}_{ \alpha},\mathcal{V}_{\beta}]\!]^{\pi}&=&0\;,&[\![\mathcal{X}_{\alpha},\mathcal{ V}_{s}]\!]^{\pi}&=&0\;,\\ [\![\mathcal{V}_{\alpha},\mathcal{V}_{\beta}]\!]^{\pi}&=&0\;,&[\![\mathcal{V}_{ \alpha},\mathcal{V}_{s}]\!]^{\pi}&=&0\;,&[\![\mathcal{V}_{s},\mathcal{V}_{s}]\!] ^{\pi}&=&0\;.\end{array}\] (21) 5. If \(\{\mathcal{X}^{\alpha},\mathcal{V}^{\alpha},\mathcal{V}^{s}\}\) is the dual basis of \(\{\mathcal{X}_{\alpha},\mathcal{V}_{\alpha},\mathcal{V}_{s}\}\), then the exterior differential is given locally, see (3.3), by \[\begin{array}{rclrcl}\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}q^{i}= \rho_{\alpha}^{i}\mathcal{X}^{\alpha}\,,&\mathrm{d}^{\mathcal{T}^{E}(E\times \mathbb{R})}y^{\alpha}=\mathcal{V}^{\alpha}\,,&\mathrm{d}^{\mathcal{T}^{E}(E \times\mathbb{R})}s=\mathcal{V}^{s}\\ \mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}f=\rho_{\alpha}^{i}\frac{ \partial f}{\partial q^{i}}\mathcal{X}^{\alpha}+\frac{\partial f}{\partial y^{ \alpha}}\mathcal{V}^{\alpha}+\frac{\partial f}{\partial s}\mathcal{V}^{s}\,, \quad\text{for all }f\in\mathcal{C}^{\infty}(E\times\mathbb{R})\\ \mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\mathcal{X}^{\gamma}=-\frac{1}{ 2}\mathcal{C}_{\alpha\beta}^{\gamma}\mathcal{X}^{\alpha}\wedge\mathcal{X}^{ \beta}\quad,\quad\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\mathcal{V}^{ \alpha}=0,\quad\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\mathcal{V}^{s}= 0\,.\end{array}\] (22) From now on we are going to set the notation \(\mathrm{d}=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\). **Remark 4.3**.: Note that in the particular case \(E=TQ\), the Lie algebroid \(\mathcal{T}^{E}(E\times\mathbb{R})\) reduces to \(T(TQ\times\mathbb{R})\). \(\diamond\) ### Liouville sections and vertical endomorphisms One can define on \(\mathcal{T}^{E}(E\times\mathbb{R})\) two families of canonical objects: _Liouville section_ and _vertical endomorphism_; which correspond to the _Liouville vector field_ and _canonical tensor field_ on \(TQ\times\mathbb{R}\) of Section 2.2. \(\diamond\) The vertical liftingWe consider the projection on the first factor \(\tau_{1}:\mathcal{T}^{E}(E\times\mathbb{R})\to E\), \(\tau_{1}(a_{q},v_{(b_{q},s)})=a_{q}\). An element \((a_{q},v_{(b_{q},s)})\) of \(\mathcal{T}^{E}(E\times\mathbb{R})\) is said to be vertical if \(\tau_{1}(a_{q},v_{(b_{q},s)})=0_{q}\in E\). Thus, the vertical elements are of the form \((0_{q},v_{(b_{q},s)})\). In particular, the tangent vector \(v_{(b_{q},s)}\in T_{(b_{q},s)}(E\times\mathbb{R})\) is \(\pi\)-vertical, since by (4.1) we have \(\rho(a_{q})=T\pi(v_{(b_{q},s)}))\in T_{q}Q\) and \(a_{q}=0_{q}\). 
In a local coordinate system \((q^{i},y^{\alpha},s)\) on \(E\times\mathbb{R}\), if \((a_{q},v_{(b_{q},s)})\in\mathcal{T}^{E}(E\times\mathbb{R})\) is vertical, then \(a_{q}=0_{q}\) and \[v_{(b_{q},s)}=\dot{y}^{\alpha}\frac{\partial}{\partial y^{\alpha}}\Big{|}_{(b_ {q},s)}+\dot{s}\frac{\partial}{\partial s}\Big{|}_{(b_{q},s)}\in T_{(b_{q},s)} (E\times\mathbb{R})\,.\] **Definition 4.4**.: _The vertical lifting is defined as the mapping_ \[\begin{array}{rclrcl}\Upsilon^{\mathbf{V}}:E\times_{Q}(E\times\mathbb{R})& \longrightarrow&\mathcal{T}^{E}(E\times\mathbb{R})\\ (a_{q},(b_{q},s))&\longmapsto&\Upsilon^{\mathbf{V}}(a_{q},(b_{q},s))=\Big{(} 0_{q},(a_{q})_{(b_{q},s)}^{V}\Big{)}\end{array}\] _where \((a_{q})_{(b_{q},s)}^{V}\in T_{(b_{q},s)}(E\times\mathbb{R})\) is given by_ \[(a_{q})_{(b_{q},s)}^{V}f=\frac{d}{dt}\Big{|}_{t=0}f(b_{q}+ta_{q})\,,\] _for an arbitrary function \(f\in\mathcal{C}^{\infty}(E\times\mathbb{R})\)._ The local expression of \((a_{q})^{V}_{(b_{q},s)}\) is \[(a_{q})^{V}_{(b_{q},s)}=y^{\alpha}(a_{q})\frac{\partial}{\partial y^{\alpha}} \Big{|}_{(b_{q},s)}\in T_{(b_{q},s)}(E\times\mathbb{R})\,,\] and therefore \((a_{q})^{V}_{(b_{q},s)}\in T_{(b_{q},s)}(E\times\mathbb{R})\) is \(\pi\)-vertical, since \(\pi(q^{i},y^{\alpha},s)=(q^{i})\). ### The vertical endomorphism The vertical endomorphism \(S\) on \(\mathcal{T}^{E}(E\times\mathbb{R})\) is the map defined by \[S: \mathcal{T}^{E}(E\times\mathbb{R}) \longrightarrow \mathcal{T}^{E}(E\times\mathbb{R})\] \[(a_{q},v_{(b_{q},s)}) \longmapsto S(a_{q},v_{(b_{q},s)})=\Upsilon^{\mathbf{V}}(a_{q},(b_{q},s))\,,\] which locally writes \[S(a_{q},v_{(b_{q},s)})=\left(0_{q},y^{\alpha}(a_{q})\frac{\partial}{\partial y ^{\alpha}}\Big{|}_{(b_{q},s)}\right)=y^{\alpha}(a_{q})\mathcal{V}_{\alpha}(b_ {q},s).\] Now, from (2) we have \[S(\mathcal{X}_{\alpha}(b_{q},s))=\mathcal{V}_{\alpha}(b_{q},s),\qquad S( \mathcal{V}_{\alpha}(b_{q},s))=0,\qquad S(\mathcal{V}_{s}(b_{q},s))=0,\] and then \(S\) has the local expression \[S=\mathcal{V}_{\alpha}\otimes\mathcal{X}^{\alpha}\;. \tag{23}\] **Remark 4.5**.: The endomorphism \(S\) defined above will allow us to introduce the concept of _Lagrangian section_ when we develop the contact Lagrangian formalism on Lie algebroids. Moreover, this mapping will give a characterization of certain sections of \(\mathcal{T}^{E}(E\times\mathbb{R})\) which we consider later. \(\diamond\) The Liouville section The Liouville section \(\Delta\) is the section of \(\widetilde{\tau}_{E\times\mathbb{R}}:\mathcal{T}^{E}(E\times\mathbb{R})\to E \times\mathbb{R}\) given by \(\Delta(b_{q},s)=\Upsilon^{\mathbf{V}}(b_{q},(b_{q},s))\). Locally \[\Delta(b_{q},s)=\left(0_{q},y^{\alpha}(b_{q})\frac{\partial}{\partial y^{ \alpha}}\Big{|}_{(b_{q},s)}\right)=y^{\alpha}(b_{q})\left(0_{q},\frac{\partial }{\partial y^{\alpha}}\Big{|}_{(b_{q},s)}\right)=y^{\alpha}(b_{q})\mathcal{V} _{\alpha}(b_{q},s),\] and thus \(\Delta\) has the local expression \[\Delta=y^{\alpha}\mathcal{V}_{\alpha}\;. \tag{24}\] In the standard contact Lagrangian formalism, the Liouville vector field \(\Delta\) allows us to define the energy function. Analogously as we will see below, the energy function can be defined in the Lie algebroid setting using the Liouville section \(\Delta\). ### Second order differential equations (sode's). As we saw in Section 2.2, in the standard contact Lagrangian formalism one obtains the solutions of the Herglotz equations as integral curves of certain second-order differential equation (sode) on \(TQ\times\mathbb{R}\). 
Now we introduce the analogous object on Lie algebroids. **Definition 4.6**.: _A second order differential equation (sode) \(\Gamma\) is a section of \(\widetilde{\tau}_{E\times\mathbb{R}}\) which satisfies the equation \(S(\Gamma)=\Delta\)._ The local expression of a sode is \(\Gamma=y^{\alpha}\mathcal{X}_{\alpha}+f^{\alpha}\mathcal{V}_{\alpha}+g \mathcal{V}_{s}\), where \(f^{\alpha},g\in\mathcal{C}^{\infty}(E\times\mathbb{R})\), and the associated vector field \(\rho^{\pi}(\Gamma)\in\mathfrak{X}(E\times\mathbb{R})\) is given by \[\rho^{\pi}(\Gamma)=\rho^{i}_{\alpha}y^{\alpha}\frac{\partial}{\partial q^{i}}+ f^{\alpha}\frac{\partial}{\partial y^{\alpha}}+g\frac{\partial}{\partial s}\,. \tag{25}\] Suppose that the curve \(\widetilde{c}\colon I\subseteq\mathbb{R}\to E\times\mathbb{R}\) is an integral curve of a sode\(\Gamma\) (that is, it satisfies Equation (3.1)). If \(\widetilde{c}\) is written locally as \(\widetilde{c}(t)=(q^{i}(t),y^{\alpha}(t),s(t))\), then from (4.4) we deduce that (3.1) is locally equivalent to the identities \[\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\Big{|}_{t}=\rho^{i}_{\alpha}(q^{i}(t))y^ {\alpha}(t)\;,\quad\frac{\mathrm{d}y^{\alpha}}{\mathrm{d}t}\Big{|}_{t}=f^{ \alpha}(\widetilde{c}(t))\;,\quad\frac{\mathrm{d}s}{\mathrm{d}t}\Big{|}_{t}=g (\widetilde{c}(t)).\] ### Lagrangian formalism In the remainder of this section, we will develop an intrinsic geometric framework, which allows us to write the Herglotz equations associated with a Lagrangian function \(L\colon E\times\mathbb{R}\to\mathbb{R}\) on a Lie algebroid. We first introduce some geometric elements associated with \(L\). Let us consider \[(\mathcal{T}^{E}(E\times\mathbb{R}))_{(b_{q},s)}=\{(a_{q},v_{(b_{q},s)})\in \mathcal{T}^{E}(E\times\mathbb{R})\,/\,a_{q}\in E,\rho(a_{q})=T\pi(b_{q},s)(v_ {(b_{q},s)})\}\] the fibre of \(\mathcal{T}^{E}(E\times\mathbb{R})\to E\times\mathbb{R}\) over the point \((b_{q},s)\). Poincare-Cartan and contact sectionsThe _Poincare-Cartan \(1\)-section_\(\Theta_{L}:E\times\mathbb{R}\to(\mathcal{T}^{E}(E\times\mathbb{R}))^{*}\), where \[\Theta_{L}(b_{q},s)\colon(\mathcal{T}^{E}(E\times\mathbb{R}))_{(b_{q},s)}\to \mathbb{R}\] is the linear map defined by \[\Theta_{L}(b_{q},s)(a_{q},v_{(b_{q},s)})=\mathrm{d}L(b_{q},s)(S_{(b_{q},s)}(a _{q},v_{(b_{q},s)}))=[\rho^{\pi}(S_{(b_{q},s)}(a_{q},v_{(b_{q},s)}))]L, \tag{26}\] since the last identity follows from (3), (5) and (4.3). 
One can define the following \(1\)-form \(\eta_{L}\) associated with \(L\) as follows \[\eta_{L}=\mathcal{V}^{s}-\Theta_{L},\] then, its differential \(\mathrm{d}\eta_{L}:E\times\mathbb{R}\to\Lambda^{2}(\mathcal{T}^{E}(E\times \mathbb{R}))^{*}\), is given by \[\mathrm{d}\eta_{L}=\mathrm{d}(\mathcal{V}^{s}-\Theta_{L})=\mathrm{d}\mathcal{ V}^{s}-\mathrm{d}\Theta_{L}=-\mathrm{d}\Theta_{L}\,.\] From (3), (4.3) and (4.5), we deduce the local expressions of \(\Theta_{L}\) and \(\eta_{L}\) \[\Theta_{L}=\frac{\partial L}{\partial y^{\alpha}}\mathcal{X}^{\alpha}\;,\qquad \eta_{L}=\mathcal{V}^{s}-\frac{\partial L}{\partial y^{\alpha}}\mathcal{X}^{ \alpha}, \tag{27}\] and from the local expressions (3), (4), (5) and (4.5), we obtain \[\mathrm{d}\eta_{L}=\left(\rho^{i}_{\beta}\frac{\partial^{2}L}{\partial q^{i} \partial y^{\alpha}}+\frac{1}{2}\mathcal{C}^{\gamma}_{\alpha\beta}\frac{ \partial L}{\partial y^{\gamma}}\right)\mathcal{X}^{\alpha}\wedge\mathcal{X}^{ \beta}+\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}}\mathcal{X}^{ \alpha}\wedge\mathcal{V}^{\beta}+\frac{\partial^{2}L}{\partial s\partial y^{ \alpha}}\mathcal{X}^{\alpha}\wedge\mathcal{V}^{s}\,. \tag{28}\] **Definition 4.7**.: _We say that the Lagrangian function \(L\) is regular if the matrix \(\left(\frac{\partial^{2}L}{\partial y^{\alpha}\partial y^{\beta}}\right)\) is non-singular._ **Remark 4.8**.: If the Lagrangian function \(L\) is regular, then from (4.5) and (4.5) we deduce that \(\eta_{L}\) defines a contact structure in the sense of Definition 4.1, since \[\eta_{L}\wedge(\mathrm{d}\eta_{L})^{m}=\det\left(\frac{\partial^{2}L}{\partial y ^{\alpha}\partial y^{\beta}}\right)\mathcal{X}^{1}\wedge\cdots\wedge\mathcal{ X}^{m}\wedge\mathcal{V}^{1}\wedge\cdots\wedge\mathcal{V}^{m}\wedge\mathcal{V}^{s},\] and the Reeb section \(\mathcal{R}_{L}\) associated to \(L\), characterized by the two conditions (4.2), is locally given by \[\mathcal{R}_{L}=\mathcal{V}_{s}-\frac{\partial^{2}L}{\partial s\partial y^{ \beta}}\,\left(\frac{\partial^{2}L}{\partial y^{\alpha}\partial y^{\beta}} \right)^{-1}\mathcal{V}_{\alpha}.\] \(\diamond\) **Energy function** The _energy function_\(E_{L}:E\times\mathbb{R}\to\mathbb{R}\) associated to the Lagrangian \(L\) is \[E_{L}=\rho^{\pi}(\Delta)L-L\,. \tag{29}\] From (3) and (4.3) one deduces its local expression \[E_{L}=y^{\alpha}\frac{\partial L}{\partial y^{\alpha}}-L\;. \tag{30}\] ### Herglotz equations **Theorem 4.9**.: _Given a regular Lagrangian \(L\colon E\times\mathbb{R}\to\mathbb{R}\), since \(\eta_{L}\) is a contact section, there exists a unique section \(\Gamma_{L}:E\times\mathbb{R}\to\mathcal{T}^{E}(E\times\mathbb{R})\) of \(\widetilde{\tau}_{E\times\mathbb{R}}\), called the Lagrangian section, satisfying_ \[\imath_{\Gamma_{L}}\eta_{L}=-E_{L}\,,\qquad\imath_{\Gamma_{L}}\mathrm{d}\eta_ {L}=\mathrm{d}E_{L}+\rho^{\pi}(\mathcal{R}_{L})(E_{L})\,\eta_{L}\,. \tag{31}\] _Moreover,_ 1. \(\Gamma_{L}\) _is a_ sode_._ 2. 
_If_ \(\widetilde{c}:I\subset\mathbb{R}\to E\times\mathbb{R}\,,\ \widetilde{c}(t)=(q^{i}(t),y^{\alpha}(t),s(t))\) _is an integral curve of_ \(\Gamma_{L}\)_, then_ \(\widetilde{c}\) _is a solution of the following system of differential equations_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial y ^{\alpha}}\Big{|}_{\widetilde{c}(t)}\right) = \rho_{\alpha}^{i}(q^{i}(t))\,\frac{\partial L}{\partial q^{i}} \Big{|}_{\widetilde{c}(t)}-y^{\beta}(t)\,\mathcal{C}_{\alpha\beta}^{\gamma}\, \frac{\partial L}{\partial y^{\gamma}}\Big{|}_{\widetilde{c}(t)}+\frac{ \partial L}{\partial y^{\alpha}}\Big{|}_{\widetilde{c}(t)}\frac{\partial L}{ \partial s}\Big{|}_{\widetilde{c}(t)},\] \[\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\Big{|}_{t} = y^{\alpha}(t)\,\rho_{\alpha}^{i}\;,\qquad\frac{\mathrm{d}s}{ \mathrm{d}t}\Big{|}_{t}=L(\widetilde{c}(t)),\] _for_ \(i=1,\ldots,n\) _and_ \(\alpha=1,\ldots,m\)_, where_ \(\rho_{\alpha}^{i}\) _and_ \(\mathcal{C}_{\alpha\beta}^{\gamma}\) _are the structure functions of the Lie algebroid_ \(E\) _with respect to the coordinates_ \((q^{i})\) _and the local basis_ \(\{e_{\alpha}\}\)_._ _These equations are the_ **Herglotz equations** _on Lie algebroids._ Proof.: As \(\Gamma_{L}\in Sec(\mathcal{T}^{E}(E\times\mathbb{R}))\) can be locally written as \[\Gamma_{L}=A^{\alpha}\mathcal{X}_{\alpha}+B^{\alpha}\mathcal{V}_{\alpha}+C \mathcal{V}_{s}, \tag{32}\] for some functions \(A^{\alpha},B^{\alpha},C\in\mathcal{C}^{\infty}(E\times\mathbb{R})\). Now, from (4.3) and (4.6) we obtain \[\eta_{L}(\Gamma_{L})=\mathcal{V}^{s}(\Gamma_{L})-\frac{\partial L}{\partial y ^{\alpha}}\mathcal{X}^{\alpha}(\Gamma_{L})=C-\frac{\partial L}{\partial y^{ \alpha}}A^{\alpha}=-E_{L}=L-y^{\alpha}\frac{\partial L}{\partial y^{\alpha}}.\] On the other hand, from the local expression (4.5), a straightforward computation in local coordinates shows that \[\begin{array}{ll}\mbox{\rm\rm\scriptsize tr}_{L}\mbox{\rm\scriptsize d} \eta_{L}=&A^{\alpha}\,\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}\, \mathcal{V}^{s}+A^{\alpha}\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{ \alpha}}\mathcal{V}^{\beta}\\ &-\left(C\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}+A^{\beta} \left(\rho^{i}_{\beta}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\alpha}} -\rho^{i}_{\alpha}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\beta}}+ \mathcal{C}^{\gamma}_{\alpha\beta}\frac{\partial L}{\partial y^{\gamma}} \right)+B^{\beta}\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}} \right)\mathcal{X}^{\alpha}\;,\end{array}\] and from the local expressions (5) and (4.5), we obtain \[\begin{array}{ll}\mbox{\rm\scriptsize d}E_{L}+\frac{\partial L}{\partial s }\eta_{L}=\left(y^{\alpha}\frac{\partial^{2}L}{\partial s\partial y^{\alpha}} +\frac{\partial L}{\partial s}\right)\mathcal{V}^{s}+y^{\alpha}\frac{\partial^ {2}L}{\partial y^{\alpha}\partial y^{\beta}}\mathcal{V}^{\beta}+\left[\rho^{i }_{\alpha}\left(y^{\beta}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\beta}} -\frac{\partial L}{\partial q^{i}}\right)-\frac{\partial L}{\partial y^{\alpha }}\frac{\partial L}{\partial s}\right]\mathcal{X}^{\alpha}\,.\end{array}\] Whence it follows that \(\Gamma_{L}:E\times\mathbb{R}\to\mathcal{T}^{E}(E\times\mathbb{R})\) is a solution of the system (4.9) if, and only if, \[\begin{array}{ll}A^{\alpha}\,\frac{\partial^{2}L}{\partial s \partial y^{\alpha}}&=&y^{\alpha}\frac{\partial^{2}L}{\partial s \partial y^{\alpha}}+\frac{\partial L}{\partial s}\;,\\ &A^{\alpha}\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}}&=&y^{ 
\alpha}\frac{\partial^{2}L}{\partial y^{\alpha}\partial y^{\beta}}\;,\\ &-\left[C\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}+A^{\beta} \left(\rho^{i}_{\beta}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\alpha}} -\rho^{i}_{\alpha}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\beta}}+ \mathcal{C}^{\gamma}_{\alpha\beta}\frac{\partial L}{\partial y^{\gamma}} \right)+B^{\beta}\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}} \right]\\ &=&\rho^{i}_{\alpha}\left(y^{\beta}\frac{\partial^{2}L}{\partial q ^{i}\partial y^{\beta}}-\frac{\partial L}{\partial q^{i}}\right)-\frac{ \partial L}{\partial y^{\alpha}}\frac{\partial L}{\partial s}.\end{array} \tag{33}\] Since \(L\) is regular, from the second identity of (4.6), we obtain \[A^{\alpha}=y^{\alpha}\;,\quad\alpha=1,\ldots,m\,.\] Therefore \(\Gamma_{L}\) is a sode, and from \[C-\frac{\partial L}{\partial y^{\alpha}}A^{\alpha}=L-y^{\alpha}\frac{\partial L }{\partial y^{\alpha}}\] we conclude \(C=L\). Now, from the last identity on (4.6) we obtain \[L\,\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}+y^{\beta}\left(\rho^{i} _{\beta}\,\frac{\partial^{2}L}{\partial q^{i}\partial y^{\alpha}}+\mathcal{C}^{ \gamma}_{\alpha\beta}\,\frac{\partial L}{\partial y^{\gamma}}\right)+B^{\beta} \,\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}}=\rho^{i}_{\alpha} \,\frac{\partial L}{\partial q^{i}}+\frac{\partial L}{\partial y^{\alpha}}\, \frac{\partial L}{\partial s}.\] In summary, if a section \(\Gamma_{L}\) is a solution of (4.9), then \(\Gamma_{L}\) is a sode in \(\mathcal{T}^{E}(E\times\mathbb{R})\) and it can be written locally as follows: \(\Gamma_{L}=y^{\alpha}\,\mathcal{X}_{\alpha}+B^{\alpha}\,\mathcal{V}_{\alpha}+L \,\mathcal{V}_{s}\,,\) for some functions \(B^{\alpha}\in\mathcal{C}^{\infty}(E\times\mathbb{R})\) satisfying \[y^{\beta}\,\rho^{i}_{\beta}\,\frac{\partial^{2}L}{\partial q^{i}\partial y^{ \alpha}}+B^{\beta}\,\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha} }+L\,\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}=\rho^{i}_{\alpha}\, \frac{\partial L}{\partial q^{i}}-y^{\beta}\,\mathcal{C}^{\gamma}_{\alpha\beta} \,\frac{\partial L}{\partial y^{\gamma}}+\frac{\partial L}{\partial y^{\alpha}} \frac{\partial L}{\partial s}\,. \tag{34}\] Now, let \(\widetilde{c}:I\subset\mathbb{R}\to E\times\mathbb{R}\), \(\widetilde{c}(t)=(q^{i}(t),y^{\alpha}(t),s(t))\) be an integral curve of the sode \(\Gamma_{L}\), that is, an integral curve of the vector field \(\rho^{\pi}(\Gamma_{L})\), say \[\rho^{\pi}(\Gamma_{L})(\widetilde{c}(t))=\widetilde{c}_{*}(t)\left(\frac{d}{ dt}\Big{|}_{t}\right).\] From (4.4) we deduce that (2.2) is locally equivalent to the identities \[\left.\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\right|_{t}=\rho^{i}_{\alpha}(q^{i}(t ))y^{\alpha}(t)\;,\quad\left.\frac{\mathrm{d}y^{\alpha}}{\mathrm{d}t}\right|_ {t}=B^{\alpha}(\widetilde{c}(t))\;.\quad\left.\frac{\mathrm{d}s}{\mathrm{d}t} \right|_{t}=L(\widetilde{c}(t))\;. 
\tag{35}\] If we restrict equations (4.6) to the image of \(\widetilde{c}(t)\) and consider the above identities (4.6), we obtain \[\left.\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\,\frac{\partial^{2}L}{\partial q^{ i}\partial y^{\alpha}}+\frac{\mathrm{d}y^{\beta}}{\mathrm{d}t}\,\frac{ \partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}}+\frac{\mathrm{d}s}{ \mathrm{d}t}\,\frac{\partial^{2}L}{\partial s\partial y^{\alpha}}=\rho^{i}_{ \alpha}\,\frac{\partial L}{\partial q^{i}}-y^{\beta}\,\mathcal{C}^{\gamma}_{ \alpha\beta}\frac{\partial L}{\partial y^{\gamma}}+\frac{\partial L}{\partial y ^{\alpha}}\,\frac{\partial L}{\partial s},\right.\] or equivalently \[\left.\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{ \partial y^{\alpha}}\Big{|}_{\widetilde{c}(t)}\right) = \rho^{i}_{\alpha}\,\frac{\partial L}{\partial q^{i}}\Big{|}_{ \widetilde{c}(t)}-y^{\beta}(t)\,\mathcal{C}^{\gamma}_{\alpha\beta}\,\frac{ \partial L}{\partial y^{\gamma}}\Big{|}_{\widetilde{c}(t)}+\frac{\partial L}{ \partial y^{\alpha}}\Big{|}_{\widetilde{c}(t)}\frac{\partial L}{\partial s} \Big{|}_{\widetilde{c}(t)},\right.\] \[\left.\frac{\mathrm{d}q^{i}}{\mathrm{d}t}\right|_{t} = y^{\alpha}(t)\,\rho^{i}_{\alpha}\;,\quad\left.\frac{\mathrm{d}s} {\mathrm{d}t}\right|_{t}=L(\widetilde{c}(t)),\] which are the Herglotz equations on Lie algebroids. **Remark 4.10**.: If \(E\) is the standard Lie algebroid \(TQ\), then \(\Theta_{L}\) and \(\eta_{L}\) are the usual Poincare-Cartan 1-form and the contact 1-form respectively, associated with the Lagrangian function \(L\colon TQ\times\mathbb{R}\to\mathbb{R}\) considered in Section 2.2. The equations of motion are the Herglotz equations given in Section 2.2. \(\diamond\) **Example 4.11**.: If \(E=\mathfrak{g}\) is the Lie algebra of a Lie group \(G\) projecting over \(Q=\{0\}\), let us consider coordinates \((y^{A})\) on \(\mathfrak{g}\) associated with the Lie algebra basis \(\{e_{A}\}\). Then we obtain Euler-Poincare-Herglotz equations (see [5]) \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial y^{A}}+C ^{D}_{AB}y^{B}\frac{\partial L}{\partial y^{D}}=\frac{\partial L}{\partial s} \frac{\partial L}{\partial y^{A}},\] \[\frac{\mathrm{d}s}{\mathrm{d}t}=L(y^{A},s),\] for the Lagrangian \(L:\mathfrak{g}\times\mathbb{R}\to\mathbb{R}\) and structure constants \(C^{D}_{AB}\). **Example 4.12**.: Let \(\mathcal{A}:TQ\to\mathfrak{g}\) be a principal connection in the principal bundle \(\pi:Q\to Q/G\) and \(\mathcal{B}:TQ\oplus TQ\to\mathfrak{g}\) be the curvature of \(\mathcal{A}\). We will use coordinates \((q^{i},q^{A})\) on a suitable open subset \(\pi^{-1}(U)\) (containing \(U\times\{e\}\), where \(e\) is the identity of \(G\)) such that \((q^{i})\) are coordinates on \(U\), and \((q^{A})\) are coordinates on the fibre \(G\), where \(i=1,\ldots,n-d=\dim Q-\dim G\), \(A=1,\ldots,d=\dim G\). Then, the local expression of the projection \(\pi:Q\to Q/G\) is \(\pi(q^{i},q^{A})=(q^{i})\). Suppose that \(\{e_{A}\}\) is a basis of \(\mathfrak{g}\), and denote by \(\{\widehat{e_{A}}\}\) the fundamental vector fields on \(Q\) given by \[\widehat{e_{A}}(q,g)=(ad_{g}e_{A})_{Q}(q,g),\] where \(ad_{g}:\mathfrak{g}\to\mathfrak{g}\) is the adjoint action. 
If \[\mathcal{A}\left(\frac{\partial}{\partial q^{i}}\Big{|}_{(q,e)}\right)=\mathcal{ A}_{i}^{A}(q)\,e_{A},\quad\mathcal{B}\left(\frac{\partial}{\partial q^{i}} \Big{|}_{(q,e)},\frac{\partial}{\partial q^{j}}\Big{|}_{(q,e)}\right)=\mathcal{ B}_{ij}^{A}\left(q\right)e_{A},\] for \(i,j=1,\ldots,n-d\) and \(q\in U,\) then the horizontal lift of the vector field \(\frac{\partial}{\partial q^{i}}\) is the vector field on \(\pi^{-1}(U)\simeq U\times G\) given by \[e_{i}=\left(\frac{\partial}{\partial q^{i}}\right)^{h}=\frac{\partial}{ \partial q^{i}}-\mathcal{A}_{i}^{A}\,\widehat{e_{A}}.\] Therefore, the vector fields \(e_{i},\)\(\widehat{e_{A}}\) on \(U\times G\) are \(G\)-invariant under the action of \(G\) over \(Q\) and define a local basis \(\{e_{i},\widehat{e_{A}}\}\) on \(Sec(TQ/G)\) which induces local coordinates \((q^{i},\dot{q}^{i},v^{A})\) on \(TQ/G\). Then, we obtain the Lagrange-Poincare-Herglotz equations (see [4]) for \(L:\widehat{TQ}\times\mathbb{R}\to\mathbb{R}\) given by \[\frac{\partial L}{\partial q^{j}}-\frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{\partial L}{\partial\dot{q}^{j}}\right) =\frac{\partial L}{\partial v^{A}}\left(\mathcal{B}_{ij}^{A}\dot{ q}^{i}+c_{DB}^{A}\mathcal{A}_{j}^{B}v^{B}\right)-\frac{\partial L}{\partial s }\frac{\partial L}{\partial\dot{q}^{j}}\quad\forall j,\] \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial v ^{B}}\right) =\frac{\partial L}{\partial v^{A}}\left(c_{DB}^{A}v^{D}-c_{DB}^{A} \mathcal{A}_{i}^{D}\dot{q}^{i}\right)+\frac{\partial L}{\partial s}\frac{ \partial L}{\partial v^{B}}\quad\forall B,\] \[\frac{\mathrm{d}s}{\mathrm{d}t} =L,\] being \(\{c_{AB}^{C}\}\) the constant structures of \(\mathfrak{g}\) with respect to the basis \(\{e_{A}\}\) (see [30] for more details). ## 5 Contact Hamiltonian formalism on Lie algebroids In this section, we extend the standard Hamiltonian contact formalism to Lie algebroids. Let \((E,[\![\cdot,\cdot]\!],\rho)\) be a Lie algebroid of rank \(m\) over a manifold \(Q\) of dimension \(n\) and \(\tau\,^{*}:E\,^{*}\to Q\) be the vector bundle projection of the dual bundle \(E^{*}\) of \(E\). ### The contact Hamiltonian prolongation The standard contact Hamiltonian formalism is developed on the bundle \(T^{*}Q\times\mathbb{R}\). For this generalization to Lie algebroids, it is natural to consider the projection map \(\pi\colon E^{*}\times\mathbb{R}\to Q\) given by \(\pi(b_{q}^{*},s)=q\), being now \(P=E^{*}\times\mathbb{R}\) and \((b_{q}^{*},s)\) an element of \(E^{*}\times\mathbb{R}\). Let \((q^{i})\) be local coordinates on a neighborhood \(U\) of \(Q\), \(i=1,\ldots,n\), and \(\{e^{\alpha}\}\) be a local basis of sections of \(\tau^{*}:E^{*}\to Q\), \(\alpha=1,\ldots,m\). Given \(b_{q}^{*}\in E_{q}^{*}\), we can write \(b_{q}^{*}=y_{\alpha}(b_{q}^{*})e^{\alpha}(q)\in E_{q}^{*}\), so the coordinates of \(b_{q}^{*}\in E^{*}\) are \((q^{i}(q),y_{\alpha}(b_{q}^{*}))\) and each section \(\sigma\) is given locally by \(\sigma\big{|}_{U}=y_{\alpha}e^{\alpha}\). 
Then the local coordinates on \(\pi^{-1}(U)\subseteq E^{*}\times\mathbb{R}\) are given by \[q^{i}(b_{q}^{*},s)=q^{i}(q),\quad y_{\alpha}(b_{q}^{*},s)=y_{\alpha}(b_{q}^{* }),\quad s(b_{q}^{*},s)=s.\] Consider now the prolongation of \(E\) over the fibration \(\pi\colon E^{*}\times\mathbb{R}\to Q\) \[\mathcal{T}^{E}(E^{*}\times\mathbb{R})=\left\{(a_{q},v_{(b_{q}^{*},s)})\in E \times T(E^{*}\times\mathbb{R})/\ \rho(a_{q})=T\pi(v_{(b_{q}^{*},s)})\right\}.\] By (3.3), we know that \((a_{q},v_{(b^{*}_{q},s)})\in\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) if, and only if, \[v_{(b^{*}_{q},s)}=y^{\alpha}(a_{q})\,\rho^{i}_{\alpha}(q)\frac{\partial}{ \partial q^{i}}+\dot{y}_{\alpha}\frac{\partial}{\partial y_{\alpha}}+\dot{s} \frac{\partial}{\partial s},\] being \((q^{i},y^{\alpha},s,z_{\alpha},\dot{y}_{\alpha},\dot{s})\) the induced local coordinates on \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), where \[q^{i}(a_{q},v_{(b^{*}_{q},s)}) = q^{i}(q)\;,\qquad z_{\alpha}(a_{q},v_{(b^{*}_{q},s)}) = y_{\alpha}(b^{*}_{q})\;,\] \[y^{\alpha}(a_{q},v_{(b^{*}_{q},s)}) = y^{\alpha}(a_{q})\;,\qquad\dot{y}_{\alpha}(a_{q},v_{(b^{*}_{q},s) }) = v_{(b^{*}_{q},s)}(y_{\alpha})\;,\] \[s(a_{q},v_{(b^{*}_{q},s)}) = s\;,\qquad\qquad\dot{s}(a_{q},v_{(b^{*}_{q},s)}) = v_{(b^{*}_{q},s)}(s)\;.\] From Section 3.3, we deduce the following properties of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). 1. The vector bundle \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) with projection \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}\colon\mathcal{T}^{E}(E^{*}\times \mathbb{R})\to E^{*}\times\mathbb{R}\) given by \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}(a_{q},v_{(b^{*}_{q},s)})=(b^{*}_{q},s)\) has a Lie algebroid structure \(([\![\cdot,\cdot]\!]^{*\pi},\rho^{*\pi})\), where the anchor map \(\rho^{*\pi}\colon\mathcal{T}^{E}(E^{*}\times\mathbb{R})\to T(E^{*}\times \mathbb{R})\) given by \(\rho^{*\pi}((a_{q},v_{(b^{*}_{q},s)}))=v_{(b^{*}_{q},s)}\) is the canonical projection on the second factor. We refer to this Lie algebroid as the _contact Hamiltonian prolongation_. The following diagram shows the different projections defined from \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) where \[\tau_{1}(a_{q},v_{(b^{*}_{q},s)}) = a_{q}\;\;\;,\;\;\;\rho^{*\pi}(a_{q},v_{(b^{*}_{q},s)}) = v_{(b^{*}_{q},s)}\;\;\;,\;\;\;\widetilde{\tau}_{E^{*}\times\mathbb{R}}(a _{q},v_{(b^{*}_{q},s)}) = (b^{*}_{q},s).\] 2. The set \(\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\alpha},\widetilde{ \mathcal{V}}_{s}\colon E^{*}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times \mathbb{R})\) given by \[\widetilde{\mathcal{X}}_{\alpha}(b^{*}_{q},s)=\left(e_{\alpha}(q),\rho^{i}_{ \alpha}(q)\frac{\partial}{\partial q^{i}}\Big{|}_{(b^{*}_{q},s)}\right)\,,\; \widetilde{\mathcal{V}}_{\alpha}(b^{*}_{q},s)=\left(0_{q},\frac{\partial}{ \partial y_{\alpha}}\Big{|}_{(b^{*}_{q},s)}\right)\,,\;\widetilde{\mathcal{V}} _{s}(b^{*}_{q},s)=\left(0_{q},\frac{\partial}{\partial s}\Big{|}_{(b^{*}_{q},s )}\right)\] is a local basis of \(Sec(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))\), the set of sections of \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}\) (see (3.3)). 3. The anchor map \(\rho^{*\pi}\colon\mathcal{T}^{E}(E^{*}\times\mathbb{R})\to T(E^{*}\times \mathbb{R})\) allows us to associate a vector field with each section \(\xi\colon E^{*}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) of \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}\). 
Locally, if \(\xi\) is given by \[\xi=\xi_{2}^{\alpha}\widetilde{\mathcal{X}}_{\alpha}+\xi_{1}^{\alpha} \widetilde{\mathcal{V}}_{\alpha}+\xi_{0}\widetilde{\mathcal{V}}_{s}\in Sec( \mathcal{T}^{E}(E^{*}\times\mathbb{R})),\] then the associate vector field is \[\rho^{*\pi}(\xi)=\rho^{i}_{\alpha}\xi_{2}^{\alpha}\frac{\partial}{\partial q ^{i}}+\xi_{1}^{\alpha}\frac{\partial}{\partial y_{\alpha}}+\xi_{0}\frac{ \partial}{\partial s}\in\mathfrak{X}(E^{*}\times\mathbb{R})\,.\] 4. The Lie bracket of two sections of \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}\) is characterized by the relations (see (3.3)), \[[\![\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{X}}_{ \beta}]^{*\pi} = \mathcal{C}^{\gamma}_{\alpha\beta}\widetilde{\mathcal{X}}_{\gamma}\;, \qquad[\![\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\beta}]^{* \pi} = 0\;,\qquad[\![\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{ \beta}]^{*\pi} = 0\;,\] \[\![\widetilde{\mathcal{V}}_{\alpha},\widetilde{\mathcal{V}}_{ \beta}]^{*\pi} = 0\;,\qquad[\![\widetilde{\mathcal{V}}_{\alpha},\widetilde{\mathcal{V}}_{ \beta}]^{*\pi} = 0\,.\] 5. If \(\{\widetilde{\mathcal{X}}^{\alpha},\widetilde{\mathcal{V}}^{\alpha},\widetilde{ \mathcal{V}}^{s}\}\) is the dual basis of \(\{\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\alpha},\widetilde {\mathcal{V}}_{s}\}\), then the exterior differential is given by \[\begin{split}&\mathrm{d}\mathcal{T}^{E(E^{*}\times\mathbb{R})}f= \rho_{\alpha}^{i}\frac{\partial f}{\partial q^{i}}\widetilde{\mathcal{X}}^{ \alpha}+\frac{\partial f}{\partial y_{\alpha}}\widetilde{\mathcal{V}}^{\alpha }+\frac{\partial f}{\partial s}\widetilde{\mathcal{V}}^{s}\,,\quad\text{ for all }\ f\in\mathcal{C}^{\infty}(E^{*}\times\mathbb{R})\\ &\mathrm{d}\mathcal{T}^{E(E^{*}\times\mathbb{R})}\widetilde{ \mathcal{X}}^{\gamma}=-\frac{1}{2}\mathcal{C}_{\alpha\beta}^{\gamma} \widetilde{\mathcal{X}}^{\alpha}\wedge\widetilde{\mathcal{X}}^{\beta}\quad, \quad\mathrm{d}\mathcal{T}^{E(E^{*}\times\mathbb{R})}\widetilde{\mathcal{V}}^ {\gamma}=0\quad,\quad\mathrm{d}\mathcal{T}^{E(E^{*}\times\mathbb{R})} \widetilde{\mathcal{V}}^{s}=0\,.\end{split}\] (37) From now on we are going to set the notation \(\mathrm{d}=\mathrm{d}\mathcal{T}^{E(E^{*}\times\mathbb{R})}\). **Remark 5.1**.: Note that in the particular case \(E=TQ\), the manifold \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) reduces to \(T(T^{*}Q\times\mathbb{R})\). \(\diamond\) ### Hamiltonian formalism The Liouville 1-section \(\Theta:E^{*}\times\mathbb{R}\to(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))^{*}\) is defined by \[\Theta_{(b_{q}^{*},s)}(a_{q},v_{(b_{q}^{*},s)})=b_{q}^{*}(a_{q})\;, \tag{38}\] for each \(a_{q}\in E\), \((b_{q}^{*},s)\in E^{*}\times\mathbb{R}\) and \(v_{(b_{q}^{*},s)}\in T_{(b_{q}^{*},s)}(E^{*}\times\mathbb{R})\). Now, we define the following 1-section \(\eta\) on \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}))^{*}\) as \[\eta=\widetilde{\mathcal{V}}^{s}-\Theta, \tag{39}\] and its differential \(\mathrm{d}\eta:E^{*}\times\mathbb{R}\to\Lambda^{2}(\mathcal{T}^{E}(E^{*} \times\mathbb{R}))^{*}\) satisfies \(\mathrm{d}\eta=-\mathrm{d}\Theta\). 
From (2) we deduce that the local expressions of \(\Theta\) and \(\eta\) are \[\Theta=y_{\alpha}\widetilde{\mathcal{X}}^{\alpha}\,,\qquad\eta=\widetilde{ \mathcal{V}}^{s}-y_{\alpha}\widetilde{\mathcal{X}}^{\alpha}, \tag{40}\] and from the local expressions (5) and (5.2), we obtain \[\mathrm{d}\eta=\frac{1}{2}\,\mathcal{C}_{\alpha\beta}^{\gamma}\,y_{\gamma}\, \widetilde{\mathcal{X}}^{\alpha}\wedge\widetilde{\mathcal{X}}^{\beta}+ \widetilde{\mathcal{X}}^{\gamma}\wedge\widetilde{\mathcal{V}}^{\gamma}\,. \tag{41}\] **Remark 5.2**.: From (5.2) and (5.2) we deduce that \(\eta\) defines a contact structure of the Lie algebroid \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}),[\![\cdot,\cdot]\!]^{*\pi},\rho^{*\pi})\) in the sense of Definition 4.1. Moreover, the Reeb section \(\mathcal{R}\) for this contact Lie algebroid, characterized by (4.2), is locally given by \(\mathcal{R}=\widetilde{\mathcal{V}}_{s}\). \(\diamond\) **Remark 5.3**.: When \(E=TQ\) and \(\rho=id_{TQ}\), \(\eta\) is the canonical contact structure (2.5). \(\diamond\) #### The contact Hamilton equations. **Theorem 5.4**.: _Let \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\) be a Hamiltonian function. Then, since \(\eta\) is a contact section of \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}),[\![\cdot,\cdot]\!]^{*\pi},\rho^{*\pi})\), there exists a unique section \(\xi_{H}:E^{*}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) of \(\widetilde{\tau}_{E^{*}\times\mathbb{R}}\), called the Hamiltonian section, satisfying_ \[{}^{*}\!_{\xi_{H}}\eta=-H\quad,\quad{}^{*}\!_{\xi_{H}}\mathrm{d}\eta=\mathrm{d}H -\rho^{*\pi}(\mathcal{R})(H)\eta\,. \tag{42}\] _Moreover, if \(\widetilde{c}:\mathbb{R}\to E^{*}\times\mathbb{R}\), \(\widetilde{c}(t)=(c^{i}(t),c_{\alpha}(t),c_{s}(t))\) is an integral curve of \(\xi_{H}\), then \(\widetilde{c}\) is a solution of the following system of differential equations_ \[\begin{split}\frac{\mathrm{d}c^{i}}{\mathrm{d}t}\Big{|}_{t}& =\ \rho_{\alpha}^{i}\,\frac{\partial H}{\partial y_{\alpha}}\Big{|}_{ \widetilde{c}(t)}\;,\\ \frac{\mathrm{d}c_{\alpha}}{\mathrm{d}t}\Big{|}_{t}& =\ -\Big{(}\rho_{\alpha}^{i}\,\frac{\partial H}{\partial q^{i}}\Big{|}_{ \widetilde{c}(t)}+\mathcal{C}_{\alpha\beta}^{\gamma}\,c_{\gamma}\,\frac{ \partial H}{\partial y_{\beta}}\Big{|}_{\widetilde{c}(t)}+c_{\alpha}\,\frac{ \partial H}{\partial s}\Big{|}_{\widetilde{c}(t)}\Big{)}\,,\\ \frac{\mathrm{d}c_{s}}{\mathrm{d}t}\Big{|}_{t}& =\ c_{\alpha}\,\frac{\partial H}{\partial y_{\alpha}}\Big{|}_{ \widetilde{c}(t)}-H(\widetilde{c}(t))\,.\end{split} \tag{43}\] _These equations are called the_ **contact Hamilton equations** _on Lie algebroids._ Proof.: Proceeding in the same way as in the Lagrangian case (see Theorem 4.9), we obtain from (5), (5.2), (5.2) and (5.4), the local expression of \(\xi_{H}\) \[\xi_{H}=\frac{\partial H}{\partial y_{\alpha}}\widetilde{\mathcal{X}}_{\alpha} -\left(\rho_{\alpha}^{i}\,\frac{\partial H}{\partial q^{i}}+\mathcal{C}_{ \alpha\beta}^{\gamma}\,y_{\gamma}\,\frac{\partial H}{\partial y_{\beta}}+y_{ \alpha}\,\frac{\partial H}{\partial s}\right)\widetilde{\mathcal{V}}_{\alpha}+ \left(y_{\alpha}\,\frac{\partial H}{\partial y_{\alpha}}-H\right)\widetilde{ \mathcal{V}}_{s}. \tag{44}\] Then, an integral curve \(\widetilde{c}(t)\) of \(\xi_{H}\), that is, an integral curve of the vector field \(\rho^{*\pi}(\xi_{H})\), is a solution of (5.4). **Remark 5.5**.: In the particular case \(E=TQ\) and \(\rho=id_{TQ}\), equations (5.4) are the standard contact Hamilton equations (2.1). 
\(\diamond\)
In addition to the Hamiltonian section \(\xi_{H}\) associated to a Hamiltonian function \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\), there is another relevant section, called the _evolution section_ \(\mathcal{E}_{v_{H}}\in Sec(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))\), defined by \[\mathcal{E}_{v_{H}}=\xi_{H}+H\mathcal{R},\] so that in local coordinates it reads \[\mathcal{E}_{v_{H}}=\frac{\partial H}{\partial y_{\alpha}}\widetilde{\mathcal{X}}_{\alpha}-\left(\rho_{\alpha}^{i}\,\frac{\partial H}{\partial q^{i}}+\mathcal{C}_{\alpha\beta}^{\gamma}\,y_{\gamma}\,\frac{\partial H}{\partial y_{\beta}}+y_{\alpha}\,\frac{\partial H}{\partial s}\right)\widetilde{\mathcal{V}}_{\alpha}+y_{\alpha}\,\frac{\partial H}{\partial y_{\alpha}}\widetilde{\mathcal{V}}_{s}. \tag{45}\] Then, the integral curves of \(\mathcal{E}_{v_{H}}\) satisfy \[\frac{\mathrm{d}q^{i}}{\mathrm{d}t}=\rho_{\alpha}^{i}\,\frac{\partial H}{\partial y_{\alpha}}\,,\qquad\frac{\mathrm{d}y_{\alpha}}{\mathrm{d}t}=-\left(\rho_{\alpha}^{i}\,\frac{\partial H}{\partial q^{i}}+\mathcal{C}_{\alpha\beta}^{\gamma}\,y_{\gamma}\,\frac{\partial H}{\partial y_{\beta}}+y_{\alpha}\,\frac{\partial H}{\partial s}\right)\,,\qquad\frac{\mathrm{d}s}{\mathrm{d}t}=y_{\alpha}\,\frac{\partial H}{\partial y_{\alpha}}\,.\]
**Remark 5.6**.: Recently, we introduced a Jacobi structure on \(E^{*}\times\mathbb{R}\) in order to deduce the contact equations of motion on Lie algebroids (see [4]). In that framework, no contact structure on Lie algebroids is used, only the Jacobi structure. Let \(X_{H}\in\mathfrak{X}(E^{*}\times\mathbb{R})\) be the Hamiltonian vector field obtained from the Hamiltonian function \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\) using the above-mentioned Jacobi structure. Comparing the local expressions of the integral curves of both mechanical systems, we deduce that \[\rho^{*\pi}\left(\xi_{H}\right)=X_{H}.\] \(\diamond\)
**Example 5.7**.: When \(E=TQ\) is equipped with the usual Lie bracket of vector fields and the anchor map is the identity, the Jacobi structure is the canonical one on \(T^{*}Q\times\mathbb{R}\).
In that case, we recover the contact Hamiltonian equations (2.1) \[\frac{\mathrm{d}q^{i}}{\mathrm{d}t}=\frac{\partial H}{\partial p_{i}},\qquad\frac{\mathrm{d}p_{i}}{\mathrm{d}t}=-\frac{\partial H}{\partial q^{i}}-p_{i}\frac{\partial H}{\partial s},\qquad\frac{\mathrm{d}s}{\mathrm{d}t}=p_{i}\frac{\partial H}{\partial p_{i}}-H.\]
**Example 5.8**.: When \(E\) is a Lie algebra, say \(E=\mathfrak{g}\), considering coordinates \((p_{A},s)\) adapted to a dual basis \(\{e^{A}\}\) of the Lie algebra, we find that the Hamiltonian vector field on \(\mathfrak{g}^{*}\times\mathbb{R}\) is just \[X_{H}=-\left(C_{AB}^{D}p_{D}\frac{\partial H}{\partial p_{B}}+p_{A}\frac{\partial H}{\partial s}\right)\frac{\partial}{\partial p_{A}}+\left(p_{A}\frac{\partial H}{\partial p_{A}}-H(p_{A},s)\right)\frac{\partial}{\partial s},\] which gives rise to the Lie-Poisson-Jacobi equations (see [5]) \[\frac{\mathrm{d}p_{A}}{\mathrm{d}t}=-C_{AB}^{D}p_{D}\frac{\partial H}{\partial p_{B}}-p_{A}\frac{\partial H}{\partial s},\qquad\frac{\mathrm{d}s}{\mathrm{d}t}=p_{A}\frac{\partial H}{\partial p_{A}}-H(p_{A},s).\]
**Example 5.9**.: Given a Hamiltonian function \(H:T^{*}Q/G\times\mathbb{R}\to\mathbb{R}\) associated with the Atiyah algebroid \(TQ/G\to Q/G\), let \(\{e_{i},\widehat{e_{A}}\}\) be the local basis of \(G\)-invariant vector fields on \(Q\) given in Example 4.12, and \((q^{i},\dot{q}^{i},v^{A})\) be the corresponding local fibred coordinates on \(TQ/G\). Then, denote by \((q^{i},p_{i},\bar{p}_{A})\) the (dual) coordinates on \(T^{*}Q/G\) and \((q^{i},p_{i},\bar{p}_{A},s)\) the corresponding coordinates on \(T^{*}Q/G\times\mathbb{R}\). In these coordinates, the contact Hamiltonian equations are given by the Hamilton-Poincaré-Herglotz equations (see [4]) \[\frac{\mathrm{d}q^{i}}{\mathrm{d}t}=\frac{\partial H}{\partial p_{i}},\qquad\frac{\mathrm{d}p_{i}}{\mathrm{d}t}=-\frac{\partial H}{\partial q^{i}}+\mathcal{B}^{A}_{ij}\bar{p}_{A}\frac{\partial H}{\partial p_{j}}-c^{C}_{AB}\mathcal{A}^{B}_{i}\bar{p}_{C}\frac{\partial H}{\partial\bar{p}_{A}}-p_{i}\frac{\partial H}{\partial s},\] \[\frac{\mathrm{d}\bar{p}_{A}}{\mathrm{d}t}=c^{C}_{AB}\mathcal{A}^{B}_{i}\bar{p}_{C}\frac{\partial H}{\partial p_{i}}-c^{C}_{AB}\bar{p}_{C}\frac{\partial H}{\partial\bar{p}_{B}}-\bar{p}_{A}\frac{\partial H}{\partial s},\qquad\frac{\mathrm{d}s}{\mathrm{d}t}=p_{i}\frac{\partial H}{\partial p_{i}}+\bar{p}_{A}\frac{\partial H}{\partial\bar{p}_{A}}-H.\]
### The Legendre transformation and the equivalence between the Lagrangian and Hamiltonian formalisms
Let \(L:E\times\mathbb{R}\to\mathbb{R}\) be a Lagrangian function. We introduce the _Legendre transformation_ associated to \(L\) as the map defined by \[\begin{split}Leg_{L}:&E\times\mathbb{R}\longrightarrow E^{*}\times\mathbb{R}\\ &(a_{q},s)\longmapsto Leg_{L}(a_{q},s)=(\mu_{q}(a_{q},s),s),\end{split}\] where \[\mu_{q}(a_{q},s):E_{q}\to\mathbb{R}\,,\qquad\mu_{q}(a_{q},s)(b_{q})=\frac{d}{dt}\Big|_{t=0}L(a_{q}+tb_{q},s)\,,\] for \(b_{q}\in E_{q}\). The map \(Leg_{L}\) is well defined, and its local expression in fibred coordinates \((q^{i},y^{\alpha},s)\) on \(E\times\mathbb{R}\) and \((q^{i},y_{\alpha},s)\) on \(E^{*}\times\mathbb{R}\) is \[Leg_{L}(q^{i},y^{\alpha},s)=\left(q^{i},\frac{\partial L}{\partial y^{\alpha}},s\right). \tag{46}\] From this local expression it is easy to prove that the Lagrangian \(L\) is regular if, and only if, \(Leg_{L}\) is a local diffeomorphism.
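To make the standard case of Example 5.7 and the Legendre transformation concrete, consider \(Q=\mathbb{R}\) and the Herglotz Lagrangian \(L(q,\dot{q},s)=\frac{1}{2}\dot{q}^{2}-\frac{1}{2}q^{2}-\gamma s\): by (46), \(p=\partial L/\partial\dot{q}=\dot{q}\), and the associated contact Hamiltonian is \(H(q,p,s)=\frac{1}{2}p^{2}+\frac{1}{2}q^{2}+\gamma s\), for which the contact Hamilton equations (2.1) reduce to the damped harmonic oscillator \(\ddot{q}+\gamma\dot{q}+q=0\). The following minimal Python sketch integrates them numerically; the Lagrangian and all numerical values are illustrative assumptions made for this example, not taken from the text above.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 0.2  # illustrative friction coefficient

def contact_hamilton(t, y):
    # Standard contact Hamilton equations (2.1) for H = H(q, p, s):
    #   dq/dt = dH/dp,  dp/dt = -dH/dq - p*dH/ds,  ds/dt = p*dH/dp - H
    q, p, s = y
    H = 0.5 * p**2 + 0.5 * q**2 + gamma * s
    return [p, -q - gamma * p, p * p - H]

sol = solve_ivp(contact_hamilton, (0.0, 40.0), [1.0, 0.0, 0.0], max_step=0.01)
q_end = sol.y[0, -1]
print(f"q(40) = {q_end:.4f}")  # exponentially damped, envelope ~ exp(-gamma*t/2)
```

Eliminating \(p\) from the first two equations indeed gives \(\ddot{q}=-q-\gamma\dot{q}\); the additional variable \(s\) is what encodes the dissipation in Hamiltonian form.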
The Legendre map induces a mapping \(\mathcal{T}^{E}Leg_{L}:\mathcal{T}^{E}(E\times\mathbb{R})\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) defined by \[\mathcal{T}^{E}Leg_{L}(a_{q},v_{(b_{q},s)})=\left(a_{q},(Leg_{L})_{*}(b_{q},s)(v_{(b_{q},s)})\right), \tag{47}\] where \(a_{q}\in E_{q},\;(b_{q},s)\in E\times\mathbb{R}\). Using (46) and (47), we deduce that the local expression of \(\mathcal{T}^{E}Leg_{L}\) in the coordinates of \(\mathcal{T}^{E}(E\times\mathbb{R})\) and \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) (see Sections 4.1 and 5.1) is \[\mathcal{T}^{E}Leg_{L}(q^{i},y^{\alpha},s,z^{\alpha},\dot{y}^{\alpha},\dot{s})=\left(q^{i},\frac{\partial L}{\partial y^{\alpha}},s,z^{\alpha},\rho^{i}_{\beta}\,z^{\beta}\frac{\partial^{2}L}{\partial q^{i}\partial y^{\alpha}}+\dot{y}^{\beta}\frac{\partial^{2}L}{\partial y^{\beta}\partial y^{\alpha}}+\dot{s}\frac{\partial^{2}L}{\partial s\partial y^{\alpha}},\dot{s}\right)\,. \tag{48}\]
**Theorem 5.10**.: _Let \(L\colon E\times\mathbb{R}\to\mathbb{R}\) be a regular Lagrangian. The pair \((\mathcal{T}^{E}Leg_{L},Leg_{L})\) is a morphism between the Lie algebroids \((\mathcal{T}^{E}(E\times\mathbb{R}),[\![\cdot,\cdot]\!]^{\pi},\rho^{\pi})\) and \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}),[\![\cdot,\cdot]\!]^{*\pi},\rho^{*\pi})\)_
\[\begin{CD}\mathcal{T}^{E}(E\times\mathbb{R})@>{\mathcal{T}^{E}Leg_{L}}>{}>\mathcal{T}^{E}(E^{*}\times\mathbb{R})\\ @V{\widetilde{\tau}_{E\times\mathbb{R}}}V{}V@V{}V{\widetilde{\tau}_{E^{*}\times\mathbb{R}}}V\\ E\times\mathbb{R}@>{Leg_{L}}>{}>E^{*}\times\mathbb{R}\end{CD}\]
_Moreover, if \(\eta_{L}\) (respectively, \(\eta\)) is the Lagrangian contact section associated to \(L\) (respectively, the contact section on \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}))^{*}\)), then_ \[(\mathcal{T}^{E}Leg_{L},Leg_{L})^{*}\eta=\eta_{L}\,,\qquad(\mathcal{T}^{E}Leg_{L},Leg_{L})^{*}\left(\mathrm{d}^{\mathcal{T}^{E}(E^{*}\times\mathbb{R})}\eta\right)=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\eta_{L}. \tag{49}\]
Proof.: First, we have to prove that the pair \((\mathcal{T}^{E}Leg_{L},Leg_{L})\) satisfies the condition (3.1) to be a Lie algebroid morphism. Let \((q^{i})\) be local coordinates on \(Q\), \(\{e_{\alpha}\}\) a local basis of sections of \(E\), and \(\{\mathcal{X}_{\alpha},\mathcal{V}_{\alpha},\mathcal{V}_{s}\}\) and \(\{\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\alpha},\widetilde{\mathcal{V}}_{s}\}\) the corresponding local bases of sections of \(\mathcal{T}^{E}(E\times\mathbb{R})\) and \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), respectively.
Then, using (46) and (48), we deduce that \[\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{X}}^{\alpha}=\mathcal{X}^{\alpha}\,,\quad\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{V}}^{\alpha}=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\left(\frac{\partial L}{\partial y^{\alpha}}\right)\,,\quad\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{V}}^{s}=\mathcal{V}^{s}.\] Thus, we conclude \[\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\left(\mathrm{d}^{\mathcal{T}^{E}(E^{*}\times\mathbb{R})}f^{\prime}\right)=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\left(f^{\prime}\circ Leg_{L}\right), \tag{50}\]
\[\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\left(\mathrm{d}^{\mathcal{T}^{E}(E^{*}\times\mathbb{R})}\widetilde{\mathcal{X}}^{\alpha}\right)=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\left(\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{X}}^{\alpha}\right),\]
\[\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\left(\mathrm{d}^{\mathcal{T}^{E}(E^{*}\times\mathbb{R})}\widetilde{\mathcal{V}}^{\alpha}\right)=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\left(\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{V}}^{\alpha}\right),\]
\[\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\left(\mathrm{d}^{\mathcal{T}^{E}(E^{*}\times\mathbb{R})}\widetilde{\mathcal{V}}^{s}\right)=\mathrm{d}^{\mathcal{T}^{E}(E\times\mathbb{R})}\left(\left(\mathcal{T}^{E}Leg_{L},Leg_{L}\right)^{*}\widetilde{\mathcal{V}}^{s}\right),\]
for all \(f^{\prime}\in\mathrm{C}^{\infty}(E^{*}\times\mathbb{R})\) and for all \(\alpha\), which proves that the pair \((\mathcal{T}^{E}Leg_{L},Leg_{L})\) is a Lie algebroid morphism. Finally, using the local expressions (4.5) of \(\eta_{L}\) and (40) of \(\eta\), and taking into account the above results, we deduce (49).
Assume now that \(L\) is hyperregular, that is, \(Leg_{L}\) is a global diffeomorphism. From (48) and Theorem 5.10, we deduce that the pair \((\mathcal{T}^{E}Leg_{L},Leg_{L})\) is a Lie algebroid isomorphism. Moreover, we may consider the Hamiltonian function \(H:E^{*}\times\mathbb{R}\rightarrow\mathbb{R}\) defined by \[H=E_{L}\circ Leg_{L}^{-1},\] where \(E_{L}:E\times\mathbb{R}\rightarrow\mathbb{R}\) is the Lagrangian energy associated to \(L\) given by (4.5). The Hamiltonian section \(\xi_{H}\in Sec(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))\) is characterized by the conditions (42) and the Lagrangian section \(\Gamma_{L}\in Sec(\mathcal{T}^{E}(E\times\mathbb{R}))\) is characterized by (4.9). Therefore, we have the following.
**Theorem 5.11**.: _If the Lagrangian \(L\) is hyperregular, then the Lagrangian section \(\Gamma_{L}\) associated to \(L\) and the Hamiltonian section \(\xi_{H}\) are \((\mathcal{T}^{E}Leg_{L},Leg_{L})\)-related, that is,_ \[\xi_{H}\circ Leg_{L}=\mathcal{T}^{E}Leg_{L}\circ\Gamma_{L}.
\tag{51}\]
_Moreover, if \(\widetilde{c}:I\subset\mathbb{R}\to E\times\mathbb{R}\) is a solution of the Herglotz equations associated to \(L\), then \(\sigma=Leg_{L}\circ\widetilde{c}:I\subset\mathbb{R}\to E^{*}\times\mathbb{R}\) is a solution of the Hamilton equations associated to \(H\) and, conversely, if \(\sigma:I\subset\mathbb{R}\to E^{*}\times\mathbb{R}\) is a solution of the Hamilton equations, then \(\widetilde{c}=Leg_{L}^{-1}\circ\sigma\) is a solution of the Herglotz equations._
Proof.: Let \(\xi_{H}=\mathcal{T}^{E}Leg_{L}\circ\Gamma_{L}\circ Leg_{L}^{-1}\) be the Hamiltonian section solution to (42). Then, from (4.9), (48) and (49), and since \[\mathcal{T}^{E}Leg_{L}(\mathcal{R}_{L})=\mathcal{R},\] we obtain that (51) holds. Now, using (51) and Theorem 5.10, we deduce the second part.
**Remark 5.12**.: When \(E=TQ\), the Legendre transformation defined above coincides with the Legendre map of the standard contact formalism, and Theorem 5.11 gives the equivalence between the standard contact Lagrangian and Hamiltonian formalisms, see [22, 35]. \(\diamond\)
## 6 Legendrian Lie subalgebroids in contact Lie algebroids
Given a contact manifold \((M,\eta)\), an interesting class of submanifolds is given by the so-called Legendrian submanifolds, and several known results help to understand the dynamics of a contact system as a Legendrian submanifold (see, for example, [28]). This concept is the natural extension of that of a Lagrangian submanifold, which has been extensively used in symplectic geometry [57] and later generalized to Poisson and Jacobi manifolds [34, 39]. In order to extend this concept to Lie algebroids, we introduce the notion of a Legendrian Lie subalgebroid of a contact Lie algebroid, and we give a characterization that will allow us to relate these objects with the solutions of the Hamilton-Jacobi equation.
**Definition 6.1**.: _Let \((E,[\![\cdot,\cdot]\!]_{E},\rho)\) be a contact Lie algebroid of rank \(2k+1\) over a manifold \(M\) with contact section \(\eta\), and \(j:F\to E\), \(i:N\to M\) be a Lie subalgebroid (see Definition 3.2). Then, the Lie subalgebroid is said to be Legendrian if, for every \(x\in N\),_
1. \(\dim F_{x}=k\),
2. \(\eta(i(x))\big|_{j(F_{x})}=0\).
Let \((E,[\![\cdot,\cdot]\!],\rho)\) be a Lie algebroid of rank \(m\) over a manifold \(Q\) of dimension \(n\). Then, the prolongation \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) of \(E\) over \(\pi:E^{*}\times\mathbb{R}\to Q\) is a contact Lie algebroid (see Theorem 5.2). Moreover, if \(q\) is a point of \(Q\) and \(E^{*}_{q}\times\mathbb{R}\) is the fibre of \(E^{*}\times\mathbb{R}\) over the point \(q\), we denote by \[j_{q}:TE^{*}_{q}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\,,\qquad i_{q}:E^{*}_{q}\times\mathbb{R}\to E^{*}\times\mathbb{R}\] the maps given by \[j_{q}(v_{b^{*}_{q}},s)=(0_{q},v_{(b^{*}_{q},s)})\,,\qquad i_{q}(b^{*}_{q},s)=(b^{*}_{q},s),\] for \((v_{b^{*}_{q}},s)\in TE^{*}_{q}\times\mathbb{R}\) and \((b^{*}_{q},s)\in E^{*}_{q}\times\mathbb{R}\), where \(0_{q}:Q\to E\) is the zero section. On the other hand, if \(\gamma\) is a section of \(\pi:E^{*}\times\mathbb{R}\to Q\), we will denote by \(F_{\gamma}\) the vector bundle over \(\gamma(Q)\) given by \[F_{\gamma}=\{(a,T\gamma(\rho(a)))\in E\times T(E^{*}\times\mathbb{R})\mid a\in E\}, \tag{52}\] and by \(j_{\gamma}:F_{\gamma}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) and \(i_{\gamma}:\gamma(Q)\to E^{*}\times\mathbb{R}\) the canonical inclusions.
Note that the vector bundles \(E\) and \(F_{\gamma}\) have the same rank \(m\), so that the pair \([(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma]\) is an isomorphism between these vector bundles, where the map \((\mathrm{Id}_{E},T\gamma\circ\rho)\) is given by \[(\mathrm{Id}_{E},T\gamma\circ\rho)(a)=(a,T\gamma(\rho(a)))\,,\qquad a\in E.\] Thus, \(F_{\gamma}\) is a Lie algebroid over \(\gamma(Q)\).
**Definition 6.2**.: _Consider a function \(f:Q\to\mathbb{R}\). We denote by \(j^{1}f:Q\to E^{*}\times\mathbb{R}\), \(j^{1}f=(\mathrm{d}^{E}f,f)\), the \(1\)-jet of \(f\), given in local coordinates by_ \[j^{1}f(q^{i})=\left(q^{i},\rho_{\alpha}^{i}\frac{\partial f}{\partial q^{i}},f(q^{i})\right).\]
Then, we have the following.
**Proposition 6.3**.: _Let \((E,\llbracket\cdot,\cdot\rrbracket,\rho)\) be a Lie algebroid of rank \(m\) over a manifold \(Q\) of dimension \(n\), and \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) the prolongation of \(E\) over \(\pi:E^{*}\times\mathbb{R}\to Q\)._
1. _If \(q\in Q\), then \(j_{q}:TE_{q}^{*}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) and \(i_{q}:E_{q}^{*}\times\mathbb{R}\to E^{*}\times\mathbb{R}\) is a Legendrian Lie subalgebroid of the contact Lie algebroid \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\)._
2. _If \(\gamma\in Sec(E^{*}\times\mathbb{R})\), then \(j_{\gamma}:F_{\gamma}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) and \(i_{\gamma}:\gamma(Q)\to E^{*}\times\mathbb{R}\) is a Legendrian Lie subalgebroid of the contact Lie algebroid \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) if, and only if, \(\gamma\) is locally the \(1\)-jet of a function on \(Q\)._
Proof.: 1. We can see that the rank of the vector bundle \(TE_{q}^{*}\times\mathbb{R}\to E_{q}^{*}\times\mathbb{R}\) is \(m\). Consider local coordinates \((q^{i})\) on \(Q\), \(\{e_{\alpha}\}\) a local basis of sections of \(E\), \((q^{i},y_{\alpha},s)\) the corresponding local coordinates on \(E^{*}\times\mathbb{R}\), and \(\{\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\alpha},\widetilde{\mathcal{V}}_{s}\}\) the corresponding local basis of sections of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). From (2), it follows that \[(j_{q},i_{q})^{*}(\widetilde{\mathcal{X}}^{\alpha})=0\,,\qquad(j_{q},i_{q})^{*}(\widetilde{\mathcal{V}}^{\alpha})=\mathrm{d}^{TE_{q}^{*}\times\mathbb{R}}(y_{\alpha}\circ i_{q})\,,\qquad(j_{q},i_{q})^{*}(\widetilde{\mathcal{V}}^{s})=0\,,\] and \[j_{q}\left(\frac{\partial}{\partial y_{\alpha}}\Big|_{b_{q}^{*}},s\right)=\widetilde{\mathcal{V}}_{\alpha}(b_{q}^{*},s)\,,\qquad b_{q}^{*}\in E_{q}^{*}. \tag{53}\] Using (5), this implies that \(j_{q}:TE_{q}^{*}\times\mathbb{R}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), \(i_{q}:E_{q}^{*}\times\mathbb{R}\to E^{*}\times\mathbb{R}\) is a morphism of Lie algebroids. Thus, since \(j_{q}\) is injective and \(i_{q}\) is an injective immersion, we deduce that \((j_{q},i_{q})\) is a Lie subalgebroid of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). Finally, from (40) and the relations above, we conclude that \[\eta\left(i_{q}(b_{q}^{*},s)\right)\Big|_{j_{q}\left(T_{b_{q}^{*}}(E_{q}^{*})\times\mathbb{R}\right)}=0.\]
2. If \(\gamma\) is a section of \(E^{*}\times\mathbb{R}\), then the Lie algebroids \(E\to Q\) and \(F_{\gamma}\to\gamma(Q)\) are isomorphic under \([(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma]\). Note that \((\mathrm{Id}_{E},T\gamma\circ\rho)\) is injective and \(\gamma\) is an injective immersion.
On the other hand, from Proposition 7.1, we have \[[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\eta=\mathrm{d}^{E}\gamma_{s}-\gamma_{0},\] where \(\gamma=(\gamma_{0},\gamma_{s})\), \(\gamma_{0}:Q\to E^{*}\) and \(\gamma_{s}:Q\to\mathbb{R}\). Therefore, the Lie subalgebroid \((j_{\gamma},i_{\gamma})\) is Legendrian if, and only if, \(\gamma\) is locally the \(1\)-jet of a function on \(Q\), namely \(\gamma=j^{1}\gamma_{s}\).
**Remark 6.4**.: When \(E\) is the standard Lie algebroid \(TQ\), a section \(\gamma:Q\to T^{*}Q\times\mathbb{R}\) is a Legendrian submanifold of \((T^{*}Q\times\mathbb{R},\eta_{Q})\) if, and only if, \(\gamma\) is locally the \(1\)-jet of a function on \(Q\) in the usual sense (see Proposition 3 in [28]). \(\diamond\)
## 7 The Hamilton-Jacobi equations
The Hamilton-Jacobi problem consists in finding a function \(S:Q\to\mathbb{R}\) (called the generating function) that is a solution to the equation \(H(q^{i},\frac{\partial S}{\partial q^{i}})=E\), for some \(E\in\mathbb{R}\), which is called the Hamilton-Jacobi equation for \(H\). Of course, one can easily see that the above equation can be written as \(\mathrm{d}(H\circ\mathrm{d}S)=0\), which opens the possibility to consider general \(1\)-forms instead of just differentials of a function. Given a Hamiltonian function \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\), in this section we provide the ingredients necessary to study the Hamilton-Jacobi problem for a contact Hamiltonian section and for the corresponding evolution section.
Let \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) be the prolongation of the Lie algebroid \((E,\llbracket\cdot,\cdot\rrbracket,\rho)\) over \(\pi:E^{*}\times\mathbb{R}\to Q\) and consider the morphism \(((\mathrm{Id}_{E},T\gamma\circ\rho),\gamma)\) between the vector bundles \(E\) and \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) defined by \((\mathrm{Id}_{E},T\gamma\circ\rho)(a_{q})=(a_{q},(T_{q}\gamma)\rho(a_{q}))\), for \(a_{q}\in E_{q}\) and \(q\in Q\).
**Proposition 7.1**.: _Let \(\eta\) be the contact \(1\)-section of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) defined in (39). If \(\gamma\) is a section of \(E^{*}\times\mathbb{R}\to Q\), then the pair_ \[[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]\] _is a morphism between the Lie algebroids \((E,\llbracket\cdot,\cdot\rrbracket,\rho)\) and \((\mathcal{T}^{E}(E^{*}\times\mathbb{R}),\llbracket\cdot,\cdot\rrbracket^{*\pi},\rho^{*\pi})\). Moreover,_ \[[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\eta=\mathrm{d}^{E}\gamma_{s}-\gamma_{0},\] _where \(\gamma=(\gamma_{0},\gamma_{s})\), \(\gamma_{0}:Q\to E^{*}\) and \(\gamma_{s}:Q\to\mathbb{R}\)._
Proof.: Consider local coordinates \((q^{i})\) on \(Q\) and \(\{e_{\alpha}\}\) a local basis of sections of \(E\). Let \(\{\widetilde{\mathcal{X}}_{\alpha},\widetilde{\mathcal{V}}_{\alpha},\widetilde{\mathcal{V}}_{s}\}\) be a local basis of sections of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\) and \(\{\widetilde{\mathcal{X}}^{\alpha},\widetilde{\mathcal{V}}^{\alpha},\widetilde{\mathcal{V}}^{s}\}\) be the dual basis. Suppose \(\gamma\) is locally written as \(\gamma(q^{i})=(q^{i},\gamma_{\alpha}(q^{i}),\gamma_{s}(q^{i}))\); then, using (2), it follows that \[(\mathrm{Id}_{E},T\gamma\circ\rho)\circ e_{\alpha}=\left(\widetilde{\mathcal{X}}_{\alpha}+\rho_{\alpha}^{i}\frac{\partial\gamma_{\beta}}{\partial q^{i}}\widetilde{\mathcal{V}}_{\beta}+\rho_{\alpha}^{i}\frac{\partial\gamma_{s}}{\partial q^{i}}\widetilde{\mathcal{V}}_{s}\right)\circ\gamma,\] for \(\alpha=1,\ldots,m\).
Thus, from (3.1) we have \[[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\widetilde{\mathcal{X}}^{\alpha}=e^{\alpha}\,,\qquad[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\widetilde{\mathcal{V}}^{\alpha}=\mathrm{d}^{E}\gamma_{\alpha}\,,\qquad[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\widetilde{\mathcal{V}}^{s}=\mathrm{d}^{E}\gamma_{s}. \tag{54}\] Therefore, from (3.1) and (54) we obtain that the pair \([\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]\) is a morphism between the Lie algebroids \(E\to Q\) and \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\to E^{*}\times\mathbb{R}\). Now, if \(q\) is a point of \(Q\) and \(a_{q}\in E_{q}\), then, using (38), we have that \[(\,[(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,]^{*}\,\Theta)\,(q)(a_{q})=\Theta(\gamma(q))(a_{q},(T_{q}\gamma)(\rho(a_{q})))=\gamma_{0}(q)(a_{q}). \tag{55}\] Finally, since \(\eta=\widetilde{\mathcal{V}}^{s}-\Theta\), from (54) and (55) we conclude that \[((\mathrm{Id}_{E},T\gamma\circ\rho),\gamma)^{*}\eta=\mathrm{d}^{E}\gamma_{s}-\gamma_{0}.\]
**Corollary 7.2**.: _If \(\gamma\in Sec(E^{*}\times\mathbb{R})\) is the \(1\)-jet of a function on \(Q\), then_ \[\left[\,(\mathrm{Id}_{E},T\gamma\circ\rho),\gamma\,\right]^{*}\eta=0.\]
### The Hamilton-Jacobi equations for the Hamiltonian section
Let \((E,\llbracket\cdot,\cdot\rrbracket,\rho)\) be a Lie algebroid over a manifold \(Q\) and \((\llbracket\cdot,\cdot\rrbracket^{*\pi},\rho^{*\pi})\) be the Lie algebroid structure on \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). Let \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\) be a Hamiltonian function and \(\xi_{H}\in Sec(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))\) be the corresponding Hamiltonian section. Consider a section \(\gamma\) of \(\pi:E^{*}\times\mathbb{R}\to Q\) and assume that, in local coordinates, it reads \[\gamma(q^{i})=(q^{i},\gamma_{\alpha}(q^{i}),\gamma_{s}(q^{i})).\] The Hamilton-Jacobi problem consists in finding a function \(\gamma_{s}:Q\to\mathbb{R}\) such that \[H\left(q^{i},\rho_{\alpha}^{i}\frac{\partial\gamma_{s}}{\partial q^{i}},\gamma_{s}(q^{i})\right)=E,\] for some \(E\in\mathbb{R}\). Denote by \(\xi_{H}^{\gamma}\in Sec(E)\) the section defined by \[\xi_{H}^{\gamma}=pr_{1}\circ\xi_{H}\circ\gamma,\] as shown in the following diagram \[\begin{CD}E^{*}\times\mathbb{R}@>{\xi_{H}}>{}>\mathcal{T}^{E}(E^{*}\times\mathbb{R})\\ @A{\gamma}A{}A@A{}A{(\mathrm{Id}_{E},T\gamma\circ\rho)}A\\ Q@>{\xi_{H}^{\gamma}}>{}>E\end{CD}\] This diagram does not necessarily commute. Indeed, \(\xi_{H}\) and \(\xi_{H}^{\gamma}\) are not necessarily \(\gamma\)-related, that is, \[\xi_{H}\circ\gamma=(\mathrm{Id}_{E},T\gamma\circ\rho)\circ\xi_{H}^{\gamma} \tag{56}\] does not necessarily hold.
We can compute \(\xi_{H}\circ\gamma\) in local coordinates and, from (44), obtain \[\begin{array}{ll}\xi_{H}\circ\gamma=&\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)\widetilde{\mathcal{X}}_{\alpha}\circ\gamma-\left[\rho_{\alpha}^{i}\left(\frac{\partial H}{\partial q^{i}}\circ\gamma\right)+\mathcal{C}_{\alpha\beta}^{\eta}\gamma_{\eta}\left(\frac{\partial H}{\partial y_{\beta}}\circ\gamma\right)+\gamma_{\alpha}\left(\frac{\partial H}{\partial s}\circ\gamma\right)\right]\widetilde{\mathcal{V}}_{\alpha}\circ\gamma\\ &+\left[\gamma_{\alpha}\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)-H\circ\gamma\right]\widetilde{\mathcal{V}}_{s}\circ\gamma.\end{array} \tag{57}\] On the other hand, from the definition of \(\xi_{H}^{\gamma}\) and (57) we deduce that the local expression of \(\xi_{H}^{\gamma}\) is given by \[\xi_{H}^{\gamma}=\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)e_{\alpha},\] and therefore, from (3.1), we have \[(\mathrm{Id}_{E},T\gamma\circ\rho)\circ\xi_{H}^{\gamma}=\left(\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)e_{\alpha},\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)\rho_{\alpha}^{i}\left(\frac{\partial}{\partial q^{i}}+\frac{\partial\gamma_{\beta}}{\partial q^{i}}\frac{\partial}{\partial y_{\beta}}+\frac{\partial\gamma_{s}}{\partial q^{i}}\frac{\partial}{\partial s}\right)\right).\] Thus, equation (56) holds if, and only if, the following relations are satisfied \[-\left[\rho_{\alpha}^{i}\left(\frac{\partial H}{\partial q^{i}}\circ\gamma\right)+\mathcal{C}_{\alpha\beta}^{\eta}\gamma_{\eta}\left(\frac{\partial H}{\partial y_{\beta}}\circ\gamma\right)+\gamma_{\alpha}\left(\frac{\partial H}{\partial s}\circ\gamma\right)\right]=\rho_{\beta}^{i}\left(\frac{\partial H}{\partial y_{\beta}}\circ\gamma\right)\frac{\partial\gamma_{\alpha}}{\partial q^{i}}\,, \tag{58}\] \[\gamma_{\alpha}\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)-H\circ\gamma=\rho_{\alpha}^{i}\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)\frac{\partial\gamma_{s}}{\partial q^{i}}\,. \tag{59}\] Assume now that \(\gamma\) is locally the 1-jet of a function, namely \(\gamma=j^{1}\gamma_{s}\), that is, \[\gamma(q^{i})=\left(q^{i},\rho_{\alpha}^{i}\frac{\partial\gamma_{s}}{\partial q^{i}},\gamma_{s}(q^{i})\right).\] Then, carrying out this substitution for \(\gamma_{\alpha}\), and recalling that the structure functions of the Lie algebroid \(E\) satisfy the relations (3.1), we see that equations (58) and (59) transform into \[\mathrm{d}^{E}(H\circ\gamma)=0\,, \tag{60}\] \[H\circ\gamma=0. \tag{61}\] Hence, taking into account the second part of Proposition 6.3, we have proved the following result.
**Theorem 7.3**.: _Let \(\gamma\in Sec(E^{*}\times\mathbb{R})\) be such that \(j_{\gamma}:F_{\gamma}\to\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), \(i_{\gamma}:\gamma(Q)\to E^{*}\times\mathbb{R}\) is a Legendrian Lie subalgebroid of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), where \(F_{\gamma}\) is the vector bundle over \(\gamma(Q)\) given by (52). Then, \(\xi_{H}\) and \(\xi_{H}^{\gamma}\) are \(\gamma\)-related if, and only if, (60) and (61) hold._
Equations (60) and (61) are referred to interchangeably as a Hamilton-Jacobi equation with respect to the contact structure on \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). A section \(\gamma\) fulfilling the assumption of the theorem and the Hamilton-Jacobi equation will be called a _solution_ of the Hamilton-Jacobi problem for \(H\).
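As a quick illustration in the standard case \(E=TQ\) (so \(\rho^{i}_{\alpha}=\delta^{i}_{\alpha}\) and the structure functions vanish), equation (61) becomes the classical contact Hamilton-Jacobi equation \(H(q,\partial S/\partial q,S)=0\) for \(\gamma=j^{1}S\). The following sympy sketch verifies a hand-computed solution for the illustrative Hamiltonian \(H(q,p,s)=\frac{1}{2}p^{2}+\lambda s\) on \(Q=\mathbb{R}\); both the Hamiltonian and the candidate \(S\) are assumptions made for this example, not taken from the text.

```python
import sympy as sp

q, lam, c = sp.symbols('q lambda c', real=True)

# Candidate generating function S, so that gamma = j^1 S = (q, S'(q), S(q)).
S = -sp.Rational(1, 2) * lam * (q - c)**2

# Equation (61) for E = TQ: H(q, dS/dq, S) = 0, with H(q, p, s) = p**2/2 + lam*s.
H_on_gamma = sp.Rational(1, 2) * sp.diff(S, q)**2 + lam * S

print(sp.simplify(H_on_gamma))              # 0: equation (61) holds
print(sp.simplify(sp.diff(H_on_gamma, q)))  # 0: equation (60) follows automatically
```

The second check illustrates that (61) forces (60), since \(H\circ\gamma=0\) implies \(\mathrm{d}^{E}(H\circ\gamma)=0\).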
### The Hamilton-Jacobi equations for the evolution section
Let \(\mathcal{E}_{v_{H}}\in Sec(\mathcal{T}^{E}(E^{*}\times\mathbb{R}))\) be the evolution section associated to a Hamiltonian function \(H:E^{*}\times\mathbb{R}\to\mathbb{R}\), given in local coordinates by (45). Assume that \(\gamma\) is a section of \(\pi:E^{*}\times\mathbb{R}\to Q\) such that, in local coordinates, it reads \[\gamma(q^{i})=(q^{i},\gamma_{\alpha}(q^{i}),\gamma_{s}(q^{i})).\] Denote by \(\mathcal{E}_{v_{H}}^{\gamma}\in Sec(E)\) the section defined by \[\mathcal{E}_{v_{H}}^{\gamma}=pr_{1}\circ\mathcal{E}_{v_{H}}\circ\gamma.\] A direct computation shows that \(\mathcal{E}_{v_{H}}\) and \(\mathcal{E}_{v_{H}}^{\gamma}\) are \(\gamma\)-related, that is, \[\mathcal{E}_{v_{H}}\circ\gamma=(\mathrm{Id}_{E},T\gamma\circ\rho)\circ\mathcal{E}_{v_{H}}^{\gamma},\] if, and only if, \[-\left[\rho_{\alpha}^{i}\left(\frac{\partial H}{\partial q^{i}}\circ\gamma\right)+\mathcal{C}_{\alpha\beta}^{\eta}\gamma_{\eta}\left(\frac{\partial H}{\partial y_{\beta}}\circ\gamma\right)+\gamma_{\alpha}\left(\frac{\partial H}{\partial s}\circ\gamma\right)\right]=\rho_{\beta}^{i}\left(\frac{\partial H}{\partial y_{\beta}}\circ\gamma\right)\frac{\partial\gamma_{\alpha}}{\partial q^{i}}\,, \tag{62}\] \[\gamma_{\alpha}\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)=\rho_{\alpha}^{i}\left(\frac{\partial H}{\partial y_{\alpha}}\circ\gamma\right)\frac{\partial\gamma_{s}}{\partial q^{i}}\,. \tag{63}\] If we assume now that \(\gamma=j^{1}\gamma_{s}\), then \[\gamma_{\alpha}=\rho_{\alpha}^{i}\frac{\partial\gamma_{s}}{\partial q^{i}},\] and so (63) is fulfilled, and (62) becomes \[\mathrm{d}^{E}(H\circ\gamma)=0. \tag{64}\] Therefore, by Proposition 6.3, we have the following.
**Theorem 7.4**.: _Let \(\gamma\in Sec(E^{*}\times\mathbb{R})\) be such that \(j_{\gamma}:F_{\gamma}\rightarrow\mathcal{T}^{E}(E^{*}\times\mathbb{R})\), \(i_{\gamma}:\gamma(Q)\to E^{*}\times\mathbb{R}\) is a Legendrian Lie subalgebroid of \(\mathcal{T}^{E}(E^{*}\times\mathbb{R})\). Then, \(\mathcal{E}_{v_{H}}\) and \(\mathcal{E}_{v_{H}}^{\gamma}\) are \(\gamma\)-related if, and only if, (64) holds._
Equation (64) is referred to as a _Hamilton-Jacobi equation for the evolution section_.
**Remark 7.5**.: When \(E\) is the standard Lie algebroid \(TQ\), the above theorem generalizes Theorem 5 in [28]. \(\diamond\)
### Acknowledgments
Modesto Salgado and Silvia Souto acknowledge financial support from the Ministerio de Ciencia, Innovación y Universidades (Spain), project PID2021-125515NB-C21. A. Anahory Simoes, L. Colombo and M. de Leon acknowledge financial support from the Spanish Ministry of Science and Innovation, under grant PID2019-106715GB-C21 and the Severo Ochoa Programme for Centres of Excellence in R&D (CEX2019-000904-S).
2303.15989
Continuous time crystal in an electron-nuclear spin system: stability and melting of periodic auto-oscillations
Crystals spontaneously break the continuous translation symmetry in space, despite the invariance of the underlying energy function. This has triggered suggestions of time crystals analogously lifting translational invariance in time. Originally suggested for closed thermodynamic systems in equilibrium, no-go theorems prevent the existence of time crystals. Proposals for open systems out of equilibrium led to the observation of discrete time crystals subject to external periodic driving to which they respond with a sub-harmonic response. A continuous time crystal is an autonomous system that develops periodic auto-oscillations when exposed to a continuous, time-independent driving, as recently demonstrated for the density in an atomic Bose-Einstein condensate with a crystal lifetime of a few ms. Here we demonstrate an ultra-robust continuous time crystal in the nonlinear electron-nuclear spin system of a tailored semiconductor with a coherence time exceeding hours. Varying the experimental parameters reveals huge stability ranges of this time crystal, but allows one also to enter chaotic regimes, where aperiodic behavior appears corresponding to melting of the crystal. This novel phase of matter opens the possibility to study systems with nonlinear interactions in an unprecedented way.
A. Greilich, N. E. Kopteva, A. N. Kamenskii, P. S. Sokolov, V. L. Korenev, M. Bayer
2023-03-28T14:06:08Z
http://arxiv.org/abs/2303.15989v1
# Continuous time crystal in an electron-nuclear spin system: stability and melting of periodic auto-oscillations
###### Abstract
**Crystals spontaneously break the continuous translation symmetry in space, despite the invariance of the underlying energy function. This has triggered suggestions of time crystals analogously lifting translational invariance in time. Originally suggested for closed thermodynamic systems in equilibrium [1; 2], no-go theorems prevent the existence of time crystals [3; 4; 5]. Proposals for open systems out of equilibrium led to the observation of discrete time crystals [6; 7] subject to external periodic driving to which they respond with a sub-harmonic response [8; 9; 10]. A continuous time crystal is an autonomous system that develops periodic auto-oscillations when exposed to a continuous, time-independent driving [8; 9; 10], as recently demonstrated for the density in an atomic Bose-Einstein condensate with a crystal lifetime of a few ms [11]. Here we demonstrate an ultra-robust continuous time crystal in the nonlinear electron-nuclear spin system of a tailored semiconductor with a coherence time exceeding hours. Varying the experimental parameters reveals huge stability ranges of this time crystal, but allows one also to enter chaotic regimes, where aperiodic behavior appears corresponding to melting of the crystal. This novel phase of matter opens the possibility to study systems with nonlinear interactions in an unprecedented way.**
The spins of electrons and nuclei in semiconductors form a system showing strongly intertwined, highly nonlinear dynamics. Several decades ago, two fundamental problems that have remained unresolved so far were formulated for this system. The first one is the spontaneous phase transition to a magnetically ordered, antiferromagnetic nuclear state canceling the fluctuations in the nuclear spin ensemble. This state is expected to develop by deep cooling of the nuclei to temperatures on the order of \(10^{-7}\) K [12; 13; 14; 15; 16; 17]. Recently, advanced cooling protocols allowed achieving record nuclear spin temperatures of \(5\times 10^{-7}\) K in GaAs [18], which, however, was not yet sufficient for the phase transition. The second problem is the existence of a strange attractor in the chaotic polarization oscillations of the autonomous electron-nuclear spin system (ENSS) [19]. Depending on the number of accessible oscillation frequencies, the ENSS can have different types of attractors in phase space. A periodic auto-oscillation of one frequency is called a limit cycle, while an auto-oscillation with incommensurate frequencies occurs on a two-dimensional torus. Indications of these trajectories were found by Kalevich _et al._ [20]. Further theoretical work also predicted the existence of homoclinic trajectories [21] that do not form closed cycles returning to their start positions - a precursor of chaotic motion. Up to now, this chaotic regime has not been observed in semiconductors because of the unrealistic parameters required, including the Overhauser effective magnetic field of polarized nuclei. Here, we use a semiconductor with an ENSS tailored to develop pronounced auto-oscillations under continuous optical pumping with circularly polarized laser light. The ENSS with deliberately reduced symmetry is an In\({}_{0.03}\)Ga\({}_{0.97}\)As epilayer doped with Si donors.
The observed, strictly periodic auto-oscillations are unique signatures of a continuous time crystal (CTC) with an exceptionally long-lasting coherence exceeding hours, limited only by the measurement time - in other words, a near-perfect clocking order of the "time atoms". The robustness of this first solid-state CTC is tested by linear and nonlinear analysis tools and confirmed across order-of-magnitude wide ranges of control parameters (laser power, sample temperature, and magnetic field), in which limit cycles in phase space are found for the ENSS evolution. On the other hand, these parameters allow one to vary the CTC period. Further, we could enter for the first time external parameter ranges of chaotic behavior, evidenced by a fractional correlation dimension [22], a positive maximum Lyapunov exponent, and a \(K\)-parameter value close to unity in the \(0-1\) chaos test [23; 24]. The chaotic auto-oscillations then violate the ideal periodicity in time and can be interpreted as CTC "melting".
## Results
### Optical properties of the epilayer
In our studies, the sample was tailored for a reduction of the crystal symmetry from cubic by introducing locally well-defined lattice distortions that impact the ENSS through the nuclear level splitting. The result of these considerations was an In\({}_{0.03}\)Ga\({}_{0.97}\)As epilayer doped with Si donors, whose optical properties are shown in Fig. 1a. Localization of electrons at the Si donors leads to an enhanced hyperfine interaction with the nuclei compared to free electrons. At the temperature of \(T=6\) K, the dominant spectral line in the photoluminescence (PL) spectrum is associated with the recombination of free excitons, while the emission at its low-energy flank arises from donor-bound exciton recombination; see the black curve. Due to the incorporated indium atoms, the reduced band gap of the epilayer at \(E_{g}\)(InGaAs) = 1.461 eV is located in the transparency range of the GaAs substrate with \(E_{g}\)(GaAs) = 1.519 eV. Spin polarization of the Si-bound electrons is generated by non-resonant circularly polarized excitation, using a continuous-wave (CW) pump diode laser emitting at \(E_{\rm pu}=1.579\) eV photon energy. The pump-induced spin polarization is monitored through the Faraday rotation (FR) of the linear polarization of the probe beam emitted by a tunable CW laser. The red curve in Fig. 1a shows the spectral dependence of the FR signal recorded by tuning the probe energy. For all further experiments, we fix the energy of the probe laser at \(E_{\rm pr}=1.454\) eV, corresponding to a local FR signal maximum, see the arrow in Fig. 1a. This choice is thus simultaneously optimized for low laser absorption and large FR. More details are given in the Methods. The best way to assess the interaction between the electron and nuclear spins is to monitor the electron spin polarization, first measured as a function of a tilted magnetic field \(B_{\rm ext}\), see the sketch at the bottom of Fig. 1b indicating the field orientation in combination with its components normal and parallel to the sample plane. Tilting the magnetic field is favorable for building up nuclear spin polarization and measuring electron spin dynamics: The initial electron spin orientation is set by the helicity of the circularly polarized pump laser, which orients the spins along the beam direction (\(z\)-axis) [25]. Subsequently, the electron spins precess about the transverse component \(B_{\rm x}\) of the magnetic field and lose their initial orientation.
The pump-induced electron polarization can be transferred to the nuclear spins by flip-flop processes [19]. The transfer efficiency increases when a magnetic field is applied along the electron spins, i.e. by the \(B_{\rm z}\) component. However, when the pump light helicity is modulated sufficiently fast between right and left circular polarization, the available time for nuclear polarization build-up is too short; this is the case when the modulation frequency is higher than the rate of nuclear polarization by the electron spins. Then the observed signal has the Lorentzian shape called the Hanle curve [26], with a width inversely proportional to the electron spin lifetime, see the blue curve centered at zero magnetic field in Fig. 1b. Here, the half width at half maximum of the peak is 0.1 mT, which corresponds to the electron spin lifetime \(T_{s}=200\) ns, using the electron \(g\)-factor \(-0.568\) [27]. In contrast, if the excitation is done with fixed circular polarization, the electron spins polarize the nuclear spin system, which produces an effective magnetic field, the Overhauser field, acting back on the electron spins and changing the FR response to the external magnetic field. In a tilted magnetic field, the curve becomes strongly asymmetric relative to zero field and demonstrates a FR maximum shifted to finite \(B_{x}\) by the Overhauser field strength. To clarify this expected behavior, the black curve in Fig. 1b shows a spline fit to the original data (red curve) with the superimposed oscillations removed. Here, one nicely sees the side peak at about \(B_{x}=-1\) mT. Surprisingly, at the chosen slow field sweep rate of \(\Delta B=10\,\mu\)T/s, strong polarization oscillations additionally emerge at negative fields. The appearance of these oscillations depends on the field sweep rate; for rates higher than the chosen one, the curve develops into the black curve (not shown here).
Figure 1: **Optical properties of the Si-doped In\({}_{0.03}\)Ga\({}_{0.97}\)As epilayer.** **a**, Photoluminescence spectrum at \(T=6\) K, excited by a diode laser at \(E_{\rm pu}=1.579\) eV photon energy (black curve). The red curve gives the Faraday rotation spectrum, measured by scanning a probe laser. For electron spin polarization, the same pump laser was used as for recording the photoluminescence. The inset shows a sketch of the GaAs lattice distortion by a large In atom replacing a Ga atom. **b**, Faraday rotation measured at the probe energy \(E_{\rm pr}=1.454\) eV while scanning the tilted magnetic field, using pump excitation with helicity modulation at \(f_{\rm m}=75\) kHz frequency (blue curve) and with fixed helicity (red curve). The field sweep rate is \(\Delta B=10\,\mu\)T/s, the tilt angle between the sample plane and the magnetic field is \(\alpha=10^{\circ}\) (see sketch at the bottom). The black curve shows the FR nominally expected for circular excitation, to highlight the additional maximum at the finite magnetic field strength of about \(-1\) mT due to nuclear polarization. This curve is obtained from the red curve by removing the oscillations. The shape of the curves is independent of the field scan direction.
### Periodic auto-oscillations
The observation central to this manuscript is the auto-oscillations of the ENSS: when keeping the experimental parameters fixed in the right ranges (see below), the temporal evolution of the FR signal is strictly periodic without any notable decay, as demonstrated in Fig.
2a across the whole time range in a geometry with \(B_{x}=-1\,\mathrm{mT}\) and \(B_{z}=0.176\,\mathrm{mT}\) (\(\alpha=10^{\circ}\)): Amplitude and frequency of the oscillations remain constant over the 40-minute measurement time; only the background varies slightly. Figure 2b gives a close-up of the FR signal with an M-shape repeating with a period of about \(6.9\,\mathrm{s}\). To analyze the signal, we calculate its fast Fourier transform (FFT), shown in Fig. 2e. The spectrum consists of a distinct set of equidistantly spaced spikes, corresponding to the precession frequency of \(0.145\,\mathrm{Hz}\) and its higher harmonics. The spikes in the frequency comb are remarkably narrow with a width of \(0.4\,\mathrm{mHz}\), as expected from the decay-free FR trace, limited only by the signal accumulation time of 40 minutes. The inset in Fig. 2e shows the autocorrelation (AC) function of the trace, i.e. the correlation of the signal with a delayed copy of itself. Also here, the AC amplitude does not decay with increasing delay, corroborating the non-random periodicity of the signal. Auto-oscillations occur in dissipative, nonlinear systems such as the ENSS when a continuous source of incoming energy - in our case applied highly non-resonantly - compensates for the energy losses. The time evolution is then determined by the intrinsic properties of the system and is not affected from the outside. It may be periodic, chaotic, or even unpredictable. We find clear indications for periodicity in our case, allowing a reliable claim of CTC behavior. As we observed no indication of any oscillation decay for 40 minutes, we can safely conclude a CTC lifetime of at least a few hours. The FFT of the oscillations can be understood as a structural analysis of the CTC, characteristic of the crystal unit cell. We have performed additional crucial tests established in the chaos theory of nonlinear systems.
Figure 2: **Periodic auto-oscillations of CTC.** **a**, Oscillations of the electron spin polarization in time monitored in FR, applying a tilted magnetic field with components \(B_{x}=-1\,\mathrm{mT}\) and \(B_{z}=0.176\,\mathrm{mT}\), corresponding to \(\alpha=10^{\circ}\). Pump and probe photon energies: \(E_{\mathrm{pu}}=1.579\,\mathrm{eV}\), \(E_{\mathrm{pr}}=1.454\,\mathrm{eV}\); pump and probe powers: \(P_{\mathrm{pu}}=0.3\,\mathrm{mW}\), \(P_{\mathrm{pr}}=1\,\mathrm{mW}\). **b**, Initial 30 seconds of the temporal data range from panel **a**, zooming into the oscillation details. The set of red segments of equal length between the FR maxima highlights the CTC periodicity. **c**, Calculated electron spin polarization, using the parameters \(\alpha=10^{\circ}\), \(B_{x}=-1\,\mathrm{mT}\), \(a_{\mathrm{N}}=20\,\mathrm{mT}\), \(b_{\mathrm{N}}=21\,\mathrm{mT}\), and \(T_{\mathrm{N}}=0.5\,\mathrm{s}\) (see the Interpretation section for details). **d**, Top and front view of the three-dimensional plot of the spin polarization cycle, with the coordinates successively delayed by \(\tau\delta t=5\delta t\), where the measurement time step is \(\delta t=89\,\mathrm{ms}\). Such a choice of delayed coordinates allows representing the time series in three dimensions. The black points mark the data. **e**, Fast Fourier transform and autocorrelation function as a function of delay time (inset), calculated for the signal from panel **a**.
Application
of these tests becomes possible only because of the long lifetime of the CTC. One such time-series analysis test is the Chaos Decision Tree Algorithm described in Ref. [24]. The analysis gives the parameter \(K\) in the \(0-1\) chaos test, which approaches zero for periodic systems, while it converges to unity for chaotic systems. For the data presented in Fig. 2, \(K=0.3536\), which confirms periodicity. Additional supporting measures are the correlation dimension (\(D_{2}\)) [22] and the maximal Lyapunov exponent (LE) [28]. In our case, they are calculated using the TISEAN software package [29]. If the correlation dimension is an integer, the time series describes a periodic signal, and the maximal Lyapunov exponent should be less than or equal to zero. In our case, the periodicity is supported by the correlation dimension \(D_{2}=1.1\pm 0.1\) and the non-positive maximal Lyapunov exponent (see also Extended Data Fig. 6 for the full analysis). A useful way to visualize the time evolution of a nonlinear system is to use delayed coordinates. A spin polarization vector is constructed such that the coordinates are chosen with a particular time delay \(\tau\) relative to each other. The coordinates for each phase space point are calculated as \(S(x,y,z)=(S(t),S(t-\tau),S(t-2\tau))\), with \(S(t)\) being the spin polarization at time \(t\). Figure 2d shows two views of this spin polarization vector using the time step of \(\tau\delta t=5\delta t\), with \(\delta t=89\,\)ms. Clearly, the phase space trace is a limit cycle, as required for the periodic oscillations of a CTC. Next, we check the stability of the CTC.
### Stability and melting of CTC
Here we test the robustness of the periodic auto-oscillations with respect to variations of several experimental parameters. The Extended Data on the auto-oscillations are shown in Figs. 1-4; here we present the analysis of these data, applying the \(0-1\) chaos test to obtain the parameter \(K\) (see above). The \(K\)-value dependencies on the photon energy of the probe, the pump and probe laser powers, the tilt and strength of the magnetic field, and the temperature of the sample are summarized in Fig. 3a-d, where the black dots indicate robust parameter settings for a CTC, while the red dots give parameter settings with \(K>0.99\), which are therefore candidates for chaotic behavior, threatening the ideal CTC behavior. Figure 3a shows the dependence of \(K\) on the pump and the probe power for a fixed magnetic field orientation relative to the sample plane given by \(\alpha=7^{\circ}\). The probe power does not change the FR spectra but only influences the FR signal strength. In contrast, the pump, responsible for the nuclear spin polarization, is a decisive factor.
Figure 3: **Dependencies of the \(K\)-parameter and CTC period on the experimental conditions.** The top (bottom) axis of the upper (lower) figure is also relevant for the other figure in the column. **a**, The green diamonds show the dependence of the \(K\)-parameter on the probe power at \(P_{\rm pu}=0.3\,\)mW (lower axis), and the blue circles give the dependence on the pump power at \(P_{\rm pr}=0.05\,\)mW (upper axis). \(B_{x}=-1\,\)mT, \(\alpha=7^{\circ}\), and \(T=6\,\)K. **b**, \(K\) for different \(B_{z}\) at fixed \(B_{x}=-1\,\)mT. The upper axis shows the corresponding angle \(\alpha\). \(P_{\rm pu}=0.3\,\)mW and \(P_{\rm pr}=1\,\)mW, \(T=6\,\)K.
**c**, \(K\) dependence on the strength of the magnetic field at fixed \(\alpha=7^{\circ}\). The magnetic field's \(B_{z}\) and \(B_{x}\) components are plotted on the lower and upper axes, respectively. \(P_{\rm pu}=0.3\,\)mW and \(P_{\rm pr}=0.05\,\)mW, \(T=6\,\)K. **d**, \(K\) dependence on temperature. \(B_{x}=1\,\)mT at \(\alpha=180^{\circ}\). \(P_{\rm pu}=0.3\,\)mW and \(P_{\rm pr}=1\,\)mW. In all cases, the red-colored circles mark the signal with \(K>0.99\). **e**, The green diamonds show the dependence of the CTC period on the probe power at \(P_{\rm pu}=0.3\,\)mW (lower axis), and the blue circles give the dependence on the pump power at \(P_{\rm pr}=0.05\,\)mW (upper axis). **f**, CTC period for different \(B_{z}\) at fixed \(B_{x}=-1\,\)mT. **g**, CTC period dependence on the strength of the magnetic field at fixed \(\alpha=7^{\circ}\). **h**, CTC period dependence on temperature.
At low pump power (0.05 mW), the efficiency of nuclear spin polarization is low, requiring about 10 minutes for saturation, see Ref. [27]. While \(K=0.9926\) in this case, the signal strength is comparable to the noise level and therefore has to be considered with caution. At higher pump power (\(2\,\mathrm{mW}\)), the signal amplitude decays due to pump-induced acceleration of the electron spin relaxation. For optimal polarization conditions, we choose \(0.3\,\mathrm{mW}\) pump power. Figure 3b shows the \(K\)-dependence on the magnetic field angle \(\alpha\), keeping \(B_{x}=-1\,\)mT fixed. At \(\alpha=0^{\circ}\), the parameter \(K=0.9921\), and for all other angles, \(K\) is below 0.9, proving robust periodicity of the auto-oscillations. Variation of the strength of the magnetic field at fixed \(\alpha=7^{\circ}\) does not give an indication for chaos, see Fig. 3c. Finally, the temperature dependence of \(K\) at \(B_{x}=1\,\)mT demonstrates an increase of \(K\) up to \(K=0.9938\) at \(T=17\,\)K, while below, we find CTC behavior. The condition for chaotic behavior at \(17\,\)K, however, also has to be considered with care, as we are working here close to the edge of stability of the auto-oscillations. Even small temperature variations of about \(\sim 0.1\,\)K can lead to changes in the periodicity of the signal, see the Extended Data in Fig. 5a. Summarizing all recorded data, we find that the auto-oscillations remain reliably in the strictly periodic regime, corresponding to ideal CTC behavior, across wide parameter ranges. Crossing the frontiers of these stability ranges, we expect, however, chaotic behavior. We focus on the most prominent example of chaos, with \(\alpha=0^{\circ}\) (Voigt geometry). Figure 4a shows a 90-second segment of the corresponding time series of auto-oscillations, taken at \(T=6\,\)K for \(B_{x}=-1\,\)mT. The signal still consists of pronounced peaks with side wings at earlier delays, at first sight occurring periodically with a separation of about \(12\,\mathrm{s}\). In between these features, the signal no longer vanishes but rises slowly. On closer inspection, however, the periodicity is disturbed, as can be clearly seen from the comparison with the set of red, \(12\,\mathrm{s}\) long time segments in Fig. 4a. The FFT spectrum of this time series is presented in Fig. 4b and, besides the broadened peaks, shows noisy, asymmetric wings around these peaks, which have no similarity with each other and indicate a reduction of periodicity: while the oscillations are still close to periodic (see Fig. 4a), on closer inspection their period varies randomly with time.
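For readers who want to reproduce this kind of classification, the \(0-1\) chaos test can be sketched in a few lines. The snippet below implements the correlation (modified mean-square-displacement) variant of the Gottwald-Melbourne test behind the \(K\) values quoted here; it illustrates the principle only and is not the authors' Chaos Decision Tree pipeline [24] - the frequency window, the cutoff \(n\ll N\), and the test signal are common but assumed choices.

```python
import numpy as np

def zero_one_test(phi, n_c=100, seed=0):
    """Correlation version of the Gottwald-Melbourne 0-1 test.
    Returns K: close to 0 for a periodic series, close to 1 for a chaotic one."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    N = len(phi)
    ncut = N // 10                        # mean-square displacement uses n << N
    j = np.arange(1, N + 1)
    n = np.arange(1, ncut)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))   # translation variables (p_n, q_n)
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k])**2 + (q[k:] - q[:-k])**2)
                      for k in n])
        # modified MSD: subtract the known oscillatory term
        D = M - np.mean(phi)**2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(n, D)[0, 1])
    return np.median(Ks)

# A strictly periodic signal gives K near 0, e.g. an 0.145 Hz oscillation
# sampled with the 89 ms time step used for Fig. 2d:
t = np.arange(5000) * 0.089
print(zero_one_test(np.sin(2 * np.pi * 0.145 * t)))
```

The same routine applied to the delay-embedded FR trace (coordinates \(S(t)\), \(S(t-\tau)\), \(S(t-2\tau)\) as above) is what distinguishes the limit cycle from the chaotic attractor.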
This random variation of the period leads to the slow decay of the autocorrelation function, see the inset of Fig. 4b. Applying the time series tests, we determine the non-integer correlation dimension to be \(D_{2}=1.3\pm 0.1\) and the positive maximum Lyapunov exponent to be \(0.12\pm 0.01\), corroborating the assumption of chaotic behavior. The time evolution of these auto-oscillations in phase space using delayed coordinates is shown in Fig. 4c. Compared to the periodic case, it has a nontrivial topology with a tendency towards a decreased phase volume, which is completely filled by the data points within the limiting values. The nonlinear time series analysis for all identified cases of chaotic auto-oscillations is presented in the Extended Data, Fig. 6. They all have a non-integer correlation dimension and a positive maximal Lyapunov exponent. These clear signatures of becoming chaotic and the related deviations from periodicity are a result of the melting of the CTC.
Figure 4: **Chaotic auto-oscillations: melting of the CTC.** **a**, Oscillations of the electron spin polarization in the FR signal, measured in a transverse magnetic field with \(B_{x}=-1\,\mathrm{mT}\), \(B_{z}=0\,\mathrm{mT}\) (\(\alpha=0^{\circ}\)). \(P_{\mathrm{pu}}=0.3\,\mathrm{mW}\) and \(P_{\mathrm{pr}}=1\,\mathrm{mW}\). The set of red time segments with equal length underlines the aperiodic behavior of the main peaks. **b**, Fast Fourier transform and autocorrelation function vs. delay time (inset), calculated for the signal from panel **a**. **c**, Top and front view of the phase portrait of the spin polarization auto-oscillations. \(\tau=62\) time steps, corresponding to \(5.5\,\mathrm{s}\).
### Interpretation of auto-oscillations
To understand the physics behind the observations, we consider the model developed by M. I. D'yakonov _et al._ [30], which describes periodic auto-oscillations and provides an elegant interpretation of our results. The circularly polarized optical excitation orients electron spins, which subsequently polarize the nuclear spin system via the hyperfine interaction [19]. The Overhauser field of the polarized nuclear spins \(\mathbf{B}_{\mathrm{N}}\) is, in general, not parallel to the average electron spin \(\mathbf{S}\), so that an electron spin precesses about \(\mathbf{B}_{\mathrm{N}}\), causing a variation of \(\mathbf{S}\). Thus, in the strongly coupled nonlinear system of electron and nuclear spins, the electron spins are responsible for the production of the Overhauser field and, vice versa, depend on its magnitude and direction. The ENSS becomes autonomous when all external parameters do not depend on time. The dissipation of angular momentum and energy in the spin system is compensated by the absorption of circularly polarized light. Then, under certain conditions, the dynamic regime of auto-oscillations may appear, which can be captured as follows: since the electron spin lifetime (\(T_{\mathrm{s}}\)) is much shorter than the longitudinal nuclear spin relaxation time (\(T_{\mathrm{N}}\)), the electron spin (\(\mathbf{S}\)) is described by the solution of the stationary Bloch equation in the sum of the external magnetic field (\(\mathbf{B}_{\mathrm{ext}}\)) and the Overhauser field (\(\mathbf{B}_{\mathrm{N}}\)): \[\mathbf{S}=\mathbf{S}_{0}+\frac{\mu_{\mathrm{B}}gT_{\mathrm{s}}}{\hbar}(\mathbf{B}_{\mathrm{ext}}+\mathbf{B}_{\mathrm{N}})\times\mathbf{S}.
\tag{1}\] Here, \(\mathbf{S}_{0}\) is the average electron spin polarization in the absence of the magnetic field, \(\mu_{\mathrm{B}}\) is the Bohr magneton, and \(g\) is the electron \(g\)-factor. \(\hbar/\mu_{\mathrm{B}}gT_{\mathrm{s}}\) is the half-width at half-maximum of the electron Hanle curve that is not influenced by dynamic nuclear polarization. The precession of the electron spin polarization about the total magnetic field changes the Overhauser field in time according to [19; 30]: \[\frac{d\mathbf{B}_{\mathrm{N}}}{dt}=-\frac{1}{T_{\mathrm{N}}}\left(\mathbf{B }_{\mathrm{N}}-\hat{a}\mathbf{S}\right), \tag{2}\] where \(\hat{a}\) is the second-rank tensor describing the process of dynamic nuclear polarization. The model suggests that \(\hat{a}\mathbf{S}\) is a linear function of \(\mathbf{S}\). In the case of pure GaAs, where all lattice nuclei are located in a tetrahedral surrounding, \(\hat{a}\) gives the contribution to the Overhauser field \(B_{\mathrm{N}}^{0}=b_{\mathrm{N}}(\mathbf{S}\mathbf{B}_{\mathrm{ext}}) \mathbf{B}_{\mathrm{ext}}/|\mathbf{B}_{\mathrm{ext}}|^{2}\). Here \(b_{\mathrm{N}}\) is the parameter of the hyperfine interaction between electrons and nuclei. In this case, the values of \(\mathbf{S}\) and \(B_{\mathrm{N}}^{0}\) are constant, and no auto-oscillations occur. This situation changes when the crystal symmetry is reduced. For \(\mathrm{In}_{0.03}\mathrm{Ga}_{0.97}\mathrm{As}\), the carefully adjusted incorporation of indium replacing Ga atoms causes significant non-uniform crystal deformations (see the sketch in Fig. 1a), despite the low In-content of 3% only. The deformation affects not only the nearest neighbors of an In-atom but also more distant atoms. Accordingly, distortion magnitude and direction are highly inhomogeneous since the stress gradient decays radially with increasing distance from an inserted indium atom. The local strain leads to a quadrupole splitting of the nuclear spin levels for all nuclei with spin larger than 1/2. Due to the strong deformation, the spin of the \(i\)-th nucleus is oriented along the main local axis \(\mathbf{h}_{i}\) of the tensor describing the quadrupole interaction rather than along the external magnetic field. The contribution of these nuclei to the total Overhauser field is \(\mathbf{B}_{\mathrm{Q}}=\sum_{i}a_{i}(\mathbf{S}\mathbf{h}_{i})\mathbf{h}_{i}\), where the summation is carried out over all quadrupole perturbed nuclei within the electron localization volume around a donor [31; 32]. For an isotropic distribution of the axes, the field can be written as \(\mathbf{B}_{\mathrm{Q}}=a_{\mathrm{N}}\mathbf{S}\). Therefore, \(\hat{a}\) can be reduced to the simplified form [33]: \[\hat{a}\mathbf{S}=\mathbf{B}_{\mathrm{N}}^{0}+\mathbf{B}_{\mathrm{Q}}=b_{ \mathrm{N}}(\mathbf{S}\mathbf{B}_{\mathrm{ext}})\mathbf{B}_{\mathrm{ext}}/| \mathbf{B}_{\mathrm{ext}}|^{2}+a_{\mathrm{N}}\mathbf{S}, \tag{3}\] which is the sum of the contribution to the Overhauser field from quadrupole-unperturbed nuclei (\(\mathbf{B}_{\mathrm{N}}^{0}\)) and of the contribution to the Overhauser field from quadrupole-perturbed nuclei (\(\mathbf{B}_{\mathrm{Q}}\)). The direction of the contributions and the deformation of the Hanle curve in the presence of \(B_{\mathrm{N}}^{0}\) as well as of \(\mathbf{B}_{\mathrm{Q}}+\mathbf{B}_{\mathrm{N}}^{0}\) are given in the Extended Data, Fig. 8. The model allows us to simulate the periodic auto-oscillations, as shown in Fig. 2c. 
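Equations (1)-(3) are straightforward to integrate numerically. The sketch below solves the stationary Bloch equation (1) as a \(3\times 3\) linear system at each step and propagates the Overhauser field with Eqs. (2) and (3), using the parameter values quoted for Fig. 2c (\(\alpha=10^{\circ}\), \(B_{x}=-1\) mT, \(a_{\mathrm{N}}=20\) mT, \(b_{\mathrm{N}}=21\) mT, \(T_{\mathrm{N}}=0.5\) s) and expressing \(\mu_{\mathrm{B}}gT_{\mathrm{s}}/\hbar\) through the 0.1 mT Hanle half-width given earlier; the magnitude of \(\mathbf{S}_{0}\) is an arbitrary illustrative choice, so this is a sketch of the model, not the authors' simulation code.

```python
import numpy as np
from scipy.integrate import solve_ivp

B_half = 0.1                                   # mT, Hanle HWHM = hbar/(mu_B g T_s)
alpha = np.radians(10)                         # tilt angle of the field
B_ext = np.array([-1.0, 0.0, 1.0 * np.tan(alpha)])   # mT: B_x = -1, B_z = 0.176
a_N, b_N, T_N = 20.0, 21.0, 0.5                # mT, mT, s
S0 = np.array([0.0, 0.0, 0.25])                # pump-induced spin along z (arbitrary)

def electron_spin(B_N):
    """Solve the stationary Bloch equation (1), S = S0 + w x S, for S."""
    w = (B_ext + B_N) / B_half
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])         # K @ S equals w x S
    return np.linalg.solve(np.eye(3) - K, S0)

def rhs(t, B_N):
    """Eq. (2) with the nuclear field of Eq. (3)."""
    S = electron_spin(B_N)
    aS = b_N * (S @ B_ext) * B_ext / (B_ext @ B_ext) + a_N * S
    return -(B_N - aS) / T_N

sol = solve_ivp(rhs, (0.0, 60.0), np.zeros(3), max_step=0.01, dense_output=True)
t = np.linspace(0.0, 60.0, 3000)
S_z = [electron_spin(B)[2] for B in sol.sol(t).T]   # the FR signal tracks S_z
```

Since \(\det(\mathbb{1}-K)=1+|\mathbf{w}|^{2}>0\), the linear solve for \(\mathbf{S}\) is always well posed, which is what makes this adiabatic-elimination scheme for the fast electron spin numerically robust.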
For the calculations, we use the parameters known from the experiment: \(\alpha=10^{\circ}\), \(B_{x}=-1\,\mathrm{mT}\), and \(a_{\mathrm{N}}=20\,\mathrm{mT}\). As a fit parameter, \(b_{\mathrm{N}}=21\,\mathrm{mT}\) is taken to achieve better agreement with the experiment. \(T_{\mathrm{N}}=0.5\,\mathrm{s}\) is determined from the comparison of the periodicity in the experimental and calculated signals. The simulated auto-oscillations reproduce the periodically repeated M-shape, in good agreement with the experimental signal. The phase space portrait is represented by a limit cycle, as required for a CTC; details of the periodicity analysis are given in the Extended Data, Fig. 7. In general, periodic auto-oscillations can be observed in structures that fulfill the following conditions: (1) nuclei with a quadrupole moment are present (nuclear spin \(>1/2\)), (2) the decrease of the local lattice symmetry leads to an isotropic spatial distribution of the strain-induced quadrupole splitting, and (3) the quadrupole-induced effective field \(B_{\mathrm{Q}}\) is comparable in strength to the external magnetic field. A theoretical analysis of the coupled Eqs. (1)-(3) was performed in Refs. [19; 34], where both the scalar and the tensor form of the nuclear fields were considered. It was shown that only limit cycles are realized, as experimentally confirmed for \(\mathrm{Al}_{0.26}\mathrm{Ga}_{0.74}\mathrm{As}\). On the other hand, Bakaleinikov [21] showed the presence of homoclinic trajectories as a precursor for chaos, provided that the explicit form of the tensor does not correspond to a system with high symmetry - a situation that we have achieved by our sample design. It is the crystal deformations in \(\mathrm{In}_{0.03}\mathrm{Ga}_{0.97}\mathrm{As}\) that facilitate the chaotic auto-oscillations. Our experimental results indicate that chaotic auto-oscillations are observed close to the Voigt configuration. In this case, the influence of the hyperfine interaction field of the electrons on the nuclei (the Knight field) becomes important [19; 35]. This leads to the tensor \(\hat{a}=\hat{a}(\mathbf{S})\) becoming a nonlinear function of the electron spin \(\mathbf{S}\). The general analysis of Eqs. (1) and (2) describing the ENSS is complicated and has not yet been carried out in full detail; the importance of the Knight field for the coupled dynamics hints at a further extension of the theoretical studies. Finally, we address the controllability of the CTC period in the studied ENSS. To that end, we consider the influence on the period of the same set of parameters as in the studies of the parameter \(K\). The resulting dependencies are given in Figs. 3e-h, evidencing a wide tuning range of the CTC period. For example, when keeping the magnetic field orientation fixed, an increase of the field strength leads to a drop in the CTC period from about \(45\,\mathrm{s}\) down to almost \(5\,\mathrm{s}\). Also, a change in the sample temperature leads to a similarly strong variation of the period, while the other parameter dependencies are comparatively weak. The observed variations are non-monotonic, showing that measurement of the CTC period can give further detailed insight into the dynamics of nonlinear systems. ## Conclusion The strongly correlated electron-nuclear spin system presented here provides a new dynamical many-body state in the solid state, a spin-based CTC.
The hugely extended robustness and compactness compared to other CTCs allow one to explore nonlinear dynamics and seek applications, for example, in information and metrology technology, as it may be used as a flexible frequency standard that can be controlled via sample design: the In\({}_{0.03}\)Ga\({}_{0.97}\)As system with deliberately introduced symmetry reduction by deformations has facilitated the observation of auto-oscillations of the coupled ENSS across wide ranges of experimental conditions. The CTC operation is threatened by the possibility of chaotic auto-oscillations at the borders of the parameter ranges. We have confirmed the existence of this chaotic regime using analysis routines for nonlinear time series; this is the first such observation for semiconductor spins, made possible by the robust statistical significance of the required chaos tests. Measurement of the dynamic mode parameters shows that a strange attractor is reached in our system, as evidenced by a non-integer correlation dimension and a positive maximum Lyapunov exponent. The description of deterministic chaos in the ENSS still remains a challenge calling for an extended mathematical analysis. For further progress on the experimental side, one may consider active modulation of the external CTC parameters in time, thereby gaining active control of the periodic auto-oscillations.
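As an illustration of the chaos diagnostics invoked above, the following is a minimal sketch (not the analysis pipeline of this work) of the Grassberger-Procaccia estimate of the correlation dimension \(D_{2}\) from a scalar time series. The embedding dimension \(m=3\) is an illustrative assumption; the delay \(\tau=62\) follows the phase portrait of Fig. 4.

```python
# Grassberger-Procaccia correlation-dimension sketch; assumes scipy is available
# and that the user picks a sensible scaling region for the log-log fit.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, m=3, tau=62, radii=None):
    # Delay embedding: X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}).
    n = len(x) - (m - 1) * tau
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = pdist(X)   # all pairwise distances between embedded points
    if radii is None:
        radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
    C = np.array([np.mean(d < r) for r in radii])   # correlation sum C(r)
    mask = C > 0
    # D2 is the slope of log C(r) vs. log r in the scaling region.
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

# Demo on a synthetic signal; a measured chaotic trace would be used in practice.
rng = np.random.default_rng(0)
print(correlation_dimension(rng.standard_normal(3000)))
```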
2301.03992
Vision Transformers Are Good Mask Auto-Labelers
We propose Mask Auto-Labeler (MAL), a high-quality Transformer-based mask auto-labeling framework for instance segmentation using only box annotations. MAL takes box-cropped images as inputs and conditionally generates their mask pseudo-labels. We show that Vision Transformers are good mask auto-labelers. Our method significantly reduces the gap between auto-labeling and human annotation regarding mask quality. Instance segmentation models trained using the MAL-generated masks can nearly match the performance of their fully-supervised counterparts, retaining up to 97.4\% performance of fully supervised models. The best model achieves 44.1\% mAP on COCO instance segmentation (test-dev 2017), outperforming state-of-the-art box-supervised methods by significant margins. Qualitative results indicate that masks produced by MAL are, in some cases, even better than human annotations.
Shiyi Lan, Xitong Yang, Zhiding Yu, Zuxuan Wu, Jose M. Alvarez, Anima Anandkumar
2023-01-10T18:59:00Z
http://arxiv.org/abs/2301.03992v1
# Vision Transformers Are Good Mask Auto-Labelers ###### Abstract We propose Mask Auto-Labeler (MAL), a high-quality Transformer-based mask auto-labeling framework for instance segmentation using only box annotations. MAL takes box-cropped images as inputs and conditionally generates their mask pseudo-labels. We show that Vision Transformers are good mask auto-labelers. Our method significantly reduces the gap between auto-labeling and human annotation regarding mask quality. Instance segmentation models trained using the MAL-generated masks can nearly match the performance of their fully-supervised counterparts, retaining up to 97.4% performance of fully supervised models. The best model achieves 44.1% mAP on COCO instance segmentation (test-dev 2017), outperforming state-of-the-art box-supervised methods by significant margins. Qualitative results indicate that masks produced by MAL are, in some cases, even better than human annotations. ## 1 Introduction Computer vision has seen significant progress over the last decade. Tasks such as instance segmentation have made it possible to localize and segment objects with pixel-level accuracy. However, these tasks rely heavily on expensive human mask annotations. For instance, when creating the COCO dataset, about 55k worker hours were spent on masks, which accounts for about 79% of the total annotation time [1]. Moreover, humans also make mistakes. Human annotations are often misaligned with actual object boundaries. On complicated objects, human annotation quality tends to drop significantly if there is no quality control. Due to the high cost and the difficulty of quality control, some other large-scale detection datasets, such as Open Images [2] and Objects365 [3], only contain partial or even no instance segmentation labels. In light of these limitations, there is an increasing interest in pursuing box-supervised instance segmentation, where the goal is to predict object masks from bounding box supervision directly. Recent box-supervised instance segmentation methods [4, 5, 6, 7, 8] have shown promising performance. The emergence of these methods challenges the long-held belief that mask annotations are needed to train instance segmentation models. However, there is still a non-negligible gap between state-of-the-art approaches and their fully-supervised oracles. **Our contributions:** To address box-supervised instance segmentation, we introduce a two-phase framework consisting of a mask auto-labeling phase and an instance segmentation training phase (see Fig. 2). We propose a Transformer-based mask auto-labeling framework, Mask Auto-Labeler (MAL), that takes Region-of-Interest (RoI) images as inputs and conditionally generates high-quality masks (demonstrated in Fig. 1) within the box. Our contributions can be summarized as follows: * Our two-phase framework presents a versatile design compatible with any instance segmentation architecture. Unlike existing methods, our framework is simple and agnostic to instance segmentation module designs. * We show that Vision Transformers (ViTs) used as image encoders yield surprisingly strong auto-labeling results. We also demonstrate that some specific designs in MAL, such as our attention-based decoder, multiple-instance learning with box expansion, and class-agnostic training, are crucial for strong auto-labeling performance. Thanks to these components, MAL sometimes even surpasses humans in annotation quality.
* Using MAL-generated masks for training, instance segmentation models achieve up to 97.4% of their fully supervised performance on COCO and LVIS. Our result significantly narrows down the gap between box-supervised and fully supervised approaches. We also demonstrate the outstanding open-vocabulary generalization of MAL by labeling novel categories not seen during training. Our method outperforms all the existing state-of-the-art box-supervised instance segmentation methods by large margins. This might be attributed to the good representations of ViTs and their emerging properties, such as meaningful grouping [9], where we observe that the attention to objects might benefit our task significantly (demonstrated in Fig. 6). We also hypothesize that our class-agnostic training design enables MAL to focus on learning general grouping instead of focusing on category information. Our strong results pave the way to removing the need for expensive human annotation for instance segmentation in real-world settings. ## 2 Related work ### Vision Transformers Transformers were initially proposed in natural language processing [10]. Vision Transformers [11] (ViTs) later emerged as highly competitive visual recognition models that use multi-head self-attention (MHSA) instead of convolutions as the basic building block. These models have recently been marked by their competitive performance in many visual recognition tasks [12]. We broadly categorize existing ViTs into two classes: plain ViTs and hierarchical ViTs. **Standard Vision Transformers.** Standard ViTs [11] are the first vision transformers. They have the simplest structures, consisting of a tokenization embedding layer followed by a sequence of MHSA layers. However, global MHSA layers can be heavy and usually face significant optimization issues. To improve their performance, many designs and training recipes have been proposed to train ViTs in data-efficient manners [9, 13, 14, 15, 16, 17, 18, 19]. **Hierarchical Vision Transformers.** Hierarchical Vision Transformers [12, 20, 21, 22] are pyramid-shaped architectures that aim to benefit tasks besides image classification with their multi-scale designs. On top of plain ViTs, these ViTs [20, 21] separate their multi-head self-attention layers into hierarchical stages. Between the stages, there are spatial reduction layers, such as max-pooling layers. These architectures are usually mixed with convolutional layers [23] and often adopt efficient self-attention designs to deal with long sequence lengths. ### Instance segmentation Instance segmentation is a visual recognition task that predicts the bounding boxes and masks of objects. **Fully supervised instance segmentation.** In this setting, both bounding boxes and instance-level masks are provided as the supervision signals. Early works [24, 25, 26, 27] follow a two-stage architecture that generates box proposals or segmentation proposals in the first stage and then produces the final segmentation and classification information in the second stage. Later, instance segmentation models are broadly divided into two categories: some continue the spirit of the two-stage design and extend it to multi-stage architectures [28, 29]. Others simplify the architecture and propose one-stage instance segmentation, e.g., YOLACT [30], SOLO [31, 32], CondInst [33], PolarMask [34, 35]. Recently, DETR and Deformable DETR [36, 37] have shown the great potential of query-based approaches in object detection.
Then, methods like MaxDeepLab [38], MaskFormer [39], PanopticSegFormer [40], Mask2Former [41] and Mask DINO [42] are introduced along this line and have pushed the boundary of instance segmentation. Figure 2: An overview of the two-phase framework of box-supervised instance segmentation. For the first phase, we train Mask Auto-Labeler using box supervision and conditionally generate masks of the cropped regions in training images (top). We then train the instance segmentation models using the generated masks (bottom). On the other hand, instance segmentation also benefits from more powerful backbone designs, such as Swin Transformers [12, 22], ViTDet [43], and ConvNeXt [44]. **Weakly supervised instance segmentation.** There are two main styles of weakly supervised instance segmentation: learning with image-level and box-level labels. The former uses image-level class information to perform instance segmentation [45, 46, 47, 48, 49], while the latter uses box supervision. Hsu et al. [4] leverage the tight-box priors. Later, BoxInst [5] proposes to leverage color smoothness to improve accuracy. Besides that, DiscoBox [7] proposes to leverage both color smoothness and inter-image correspondence for the task. Other follow-ups [6, 8] also leverage tight-box priors and color smoothness priors. ### Deep learning interpretation The interest in a deeper understanding of deep networks has inspired many works to study the interpretation of deep neural networks. For example, Class Activation Map (CAM) [50] and Grad-CAM [51] visualize the emerging localization during image classification training of convolutional neural networks (CNNs). This ability has also inspired much work on weakly-supervised localization and shows deep connections to general weakly-supervised learning, which partly motivates our decoder design in this paper. DINO [9] further shows that meaningful visual grouping emerges during self-supervised learning with ViTs. In addition, FAN [52] shows that such emerging properties in ViTs are linked to their robustness. ## 3 Method Our work differs from previous box-supervised instance segmentation frameworks [4, 5, 6, 7, 8] that simultaneously learn detection and instance segmentation. We leverage a two-phase framework as visualized in Fig. 2, which allows us to have a network focused on generating mask pseudo-labels in phase 1, and another network focused on learning instance segmentation [24, 28, 41, 43] in phase 2. Our proposed auto-labeling framework is used in phase 1 to generate high-quality mask pseudo-labels. We propose this two-phase framework because it brings the following benefits: * We can relax the learning constraints in phase 1 and focus only on mask pseudo-labels. Therefore, in this phase, we can take Region-of-Interest (RoI) images instead of untrimmed images as inputs. This change allows us to use a higher resolution for small objects and a strong training technique mentioned in Sec. 3.1, which helps improve the mask quality. * We can leverage different image encoders and mask decoders in phases 1 and 2 to achieve higher performance. We empirically found that phases 1 and 2 favor different architectures for the image encoders and mask decoders. See the ablation studies in Tab. 3 and 4. * We can use MAL-generated masks to directly train most fully supervised instance segmentation models in phase 2. This makes our approach more flexible than previous architecture-specific box-supervised instance segmentation approaches [4, 5, 6, 7, 8].
As phase 2 follows the previous standard pipelines, which do not need to be re-introduced here, we focus on introducing phase 1 (MAL) in the following subsections. Figure 3: Overview of MAL architecture. We visualize the architecture of Mask Auto-Labeler. Mask Auto-Labeler takes cropped images as inputs. Mask Auto-Labeler consists of two symmetric networks, Task Network and Teacher Network. Each network contains the image encoder \(E\) (or \(E^{t}\)) and the mask decoder \(D\) (or \(D^{t}\)). We use the exponential moving average (EMA) to update the weights of the teacher network. We apply the multiple instance learning (MIL) loss and the conditional random field (CRF) loss. The CRF loss takes the average mask predictions of the teacher network and the task network to make the training more stable and generate refined masks for self-training. ### RoI input generation Most box-supervised instance segmentation approaches [4, 5, 6, 7] are trained using the entire images. However, we find that using RoI images might have more benefits in box-supervised instance segmentation. Moreover, we compare two intuitive sampling strategies of RoI images to obtain foreground and background pixels and explain the better strategy, box expansion, in detail. **Benefits of using RoI inputs.** There are two advantages of using RoI images for inputs. First, using the RoI images as inputs is naturally good for handling small objects because no matter how small the objects are, the RoI images are enlarged to avoid the issues caused by low resolution. Secondly, having RoI inputs allows MAL to focus on learning segmentation and avoid being distracted from learning other complicated tasks, e.g., object detection. **RoI sampling strategy.** The sampling strategy should ensure both positive and negative pixels are included. We present two straightforward sampling strategies: * The first strategy is to use bounding boxes to crop the images for positive inputs. We crop the images using randomly generated boxes containing only background pixels for negative inputs. MAL does not generate good mask pseudo-labels with this cropping strategy. We observe that the networks tend to learn the trivial solution (all pixels are predicted as either foreground or background). * The second is to expand the bounding boxes randomly and include background pixels, where negative bags are chosen from the expanded rows and columns. We visualize how we define positive/negative bags in Fig. 3 and explain the detail in Sec. 3.3. **This detailed design is critical to make MAL work** as it prevents MAL from learning trivial solutions. Without this design, the generated masks tend to fill the entire bounding box. **Box expansion specifics.** Given an untrimmed image \(\mathbf{I}^{u}\in\mathbb{R}^{C\times H^{u}\times W^{u}}\) and the bounding box \(\mathbf{b}=(x_{0},y_{0},x_{1},y_{1})\) indicating the x, y coordinates of the top-left and the bottom-right corners, we randomly expand the bounding box \(\mathbf{b}\) to \(\mathbf{b^{\prime}}=(x_{c}+\beta_{x}(x_{0}-x_{c}),\,y_{c}+\beta_{y}(y_{0}-y_{c}),\,x_{c}+\beta^{\prime}_{x}(x_{1}-x_{c}),\,y_{c}+\beta^{\prime}_{y}(y_{1}-y_{c}))\) to obtain background pixels, where \(x_{c}=(x_{0}+x_{1})/2\), \(y_{c}=(y_{0}+y_{1})/2\). To generate random values of \(\beta_{x},\beta^{\prime}_{x},\beta_{y},\beta^{\prime}_{y}\), we randomly generate \(\theta_{x},\theta_{y}\in[0,\theta]\) for the x- and y-direction, where \(\theta\) is the upper bound of the box expansion rate. Next, we randomly generate \(\beta_{x}\in[0,\theta_{x}]\) and \(\beta_{y}\in[0,\theta_{y}]\). In the end, we assign \(\beta^{\prime}_{x}\) as \(\theta_{x}-\beta_{x}\) and \(\beta^{\prime}_{y}\) as \(\theta_{y}-\beta_{y}\). Finally, we use \(\mathbf{b^{\prime}}\) to crop the image and obtain the trimmed image \(\mathbf{I}^{t}\). We conduct the ablation study for \(\theta\) in Tab. 5. At last, we resize the trimmed image \(\mathbf{I}^{t}\) to the size of \(C\times H^{c}\times W^{c}\) as the input image \(\mathbf{I}^{c}\).
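A minimal sketch of this box-expansion step is given below. It is an interpretation of the expressions above, not the released code; in particular, scaling the half-extents outward by a factor \(1+\beta\) so that the box always grows is our assumption, since the exact form of the scaling factor is not recoverable from the text.

```python
# Random box expansion sketch; assumption: factors (1 + beta) guarantee growth.
import random

def expand_box(box, theta=1.2):
    """box = (x0, y0, x1, y1); returns the randomly expanded box b'."""
    x0, y0, x1, y1 = box
    xc, yc = (x0 + x1) / 2, (y0 + y1) / 2
    theta_x, theta_y = random.uniform(0, theta), random.uniform(0, theta)
    beta_x, beta_y = random.uniform(0, theta_x), random.uniform(0, theta_y)
    beta_xp, beta_yp = theta_x - beta_x, theta_y - beta_y   # beta'_x, beta'_y
    return (xc + (1 + beta_x) * (x0 - xc),    # left edge moved outward
            yc + (1 + beta_y) * (y0 - yc),    # top edge moved outward
            xc + (1 + beta_xp) * (x1 - xc),   # right edge moved outward
            yc + (1 + beta_yp) * (y1 - yc))   # bottom edge moved outward

print(expand_box((10, 20, 50, 80)))
```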
### MAL architecture MAL can be divided into two symmetric networks: the task network and the teacher network. The task network consists of an image encoder, denoted as \(E\), and a mask decoder, denoted as \(D\), demonstrated in Fig. 3. The architecture of the teacher network is identical to the task network. We denote the segmentation outputs of the task network and the teacher network as \(\mathbf{m},\mathbf{m}^{t}\in\{0,1\}^{N}\), respectively. **Image encoder.** We use Standard ViTs [11] as the image encoder and drop the classification head of Standard ViTs. We compare different image encoders in Sec. 4.4. We also tried feature pyramid networks on top of Standard ViTs, e.g., FPN [53], but this causes a performance drop. Similar conclusions were also found in ViTDet [43]. **Mask decoder.** For the mask decoder \(D\), we use a simple attention-based network inspired by YOLACT [30], which includes an instance-aware head \(K\) and a pixel-wise head \(V\), where \(D(E(\mathbf{I}))=K(E(\mathbf{I}))\cdot V(E(\mathbf{I}))\), and "\(\cdot\)" represents the inner-product operator. For the instance-aware head \(K\), we use a max-pooling layer followed by a fully connected layer. The input channel dimension of \(K\) is equivalent to the output channel dimension of \(E\). The output channel dimension of \(K\) is 256. For the pixel-wise head \(V\), we use four sequential convolutional layers, each followed by a ReLU layer. Between the second and the third convolutional layer, we insert a bilinear interpolation layer to increase the feature resolution by 2. The input channel dimension is equivalent to the output channel dimension of \(E\). We use 256 dimensions for the hidden and output channels. We also compare different design choices of mask decoders in Sec. 4.5. **Exponential moving average (EMA) teacher.** Instead of training the teacher network directly, we leverage exponential moving averages (EMA) to update the parameters of the teacher network using the parameters of the task network, similar to MOCO [54]. The goal of using the EMA teacher is to eliminate loss-explosion issues in training, since optimizing Standard Vision Transformers is usually non-trivial [13, 14, 16]. We do not observe any significant performance drop or improvement on DeiT-Small-based MAL after removing the teacher network. However, it makes the training more stable when we use larger-scale image encoders in MAL, e.g., ViT-MAE-Base [13]. Figure 4: (a) The fully connected decoder (b) The fully convolutional decoder (c) The attention-based decoder (used in MAL) (d) The query-based decoder.
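A minimal PyTorch sketch of the attention-based decoder \(D(E(\mathbf{I}))=K(E(\mathbf{I}))\cdot V(E(\mathbf{I}))\) described above is shown below. It is a reading of the text rather than the authors' code; details such as the \(3\times 3\) kernel size and the encoder feature dimension are assumptions.

```python
# Attention-based mask decoder sketch (K: instance-aware head, V: pixel-wise head).
import torch
import torch.nn as nn

class AttentionDecoder(nn.Module):
    def __init__(self, enc_dim, hidden=256):
        super().__init__()
        # K: global max-pool followed by a fully connected layer (output dim 256).
        self.k_fc = nn.Linear(enc_dim, hidden)
        # V: four convs with ReLU; bilinear x2 upsampling between conv 2 and 3.
        self.v = nn.Sequential(
            nn.Conv2d(enc_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )

    def forward(self, feat):                       # feat: (B, C, H, W) from encoder E
        k = self.k_fc(feat.amax(dim=(2, 3)))       # (B, hidden) instance embedding
        v = self.v(feat)                           # (B, hidden, 2H, 2W) pixel embeddings
        return torch.einsum("bc,bchw->bhw", k, v)  # inner product -> mask logits

mask_logits = AttentionDecoder(enc_dim=768)(torch.randn(2, 768, 32, 32))
```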
### Losses We use the multiple instance learning loss \(\mathcal{L}_{\text{mil}}\) and the conditional random field loss \(\mathcal{L}_{\text{crf}}\) as the box-supervised loss: \[\mathcal{L}=\alpha_{\text{mil}}\mathcal{L}_{\text{mil}}+\alpha_{\text{crf}}\mathcal{L}_{\text{crf}} \tag{1}\] **Multiple Instance Learning Loss.** The motivation of the multiple instance learning loss is to exploit the prior given by tight bounding-box annotations. After the task network produces the output \(\mathbf{m}\), we apply the multiple instance learning (MIL) loss on the output mask \(\mathbf{m}\). We demonstrate the process in Fig. 3. We denote \(\mathbf{m}_{i,j}\) as the mask score at the location \(i,j\) in the image \(\mathbf{I}^{c}\). We define each pixel as an instance in the MIL loss. Inspired by BBTP [4], we treat each row or column of pixels as a bag. We determine whether a bag is positive or negative based on whether it passes a ground-truth box. We define the bags as \(\mathbf{B}\), and each bag \(\mathbf{B}_{i}\) contains a row or column of pixels. Additionally, we define the label for each bag as \(\mathbf{g}\), where each label \(\mathbf{g}_{i}\) corresponds to a bag \(\mathbf{B}_{i}\). Therefore, we use max pooling as the reduction function and the dice loss [55]: \[\mathcal{L}_{\text{mil}}=1-\frac{2\sum_{i}\mathbf{g}_{i}\cdot\max\{\mathbf{B}_{i}\}}{\sum_{i}\max\{\mathbf{B}_{i}\}^{2}+\sum_{i}\mathbf{g}_{i}^{2}} \tag{2}\] **Conditional Random Field Loss.** The goal of the CRF loss is to refine the mask prediction by imposing smoothness priors via energy minimization. We then leverage this refined mask as pseudo-labels to self-train the mask prediction in an online-teacher manner. We use the average mask prediction \(\mathbf{m}^{a}=\frac{1}{2}(\mathbf{m}+\mathbf{m}^{t})\) as the mask prediction to be refined for more stable training. Next, we define a random field \(\mathbf{X}=\{\mathbf{X}_{1},...,\mathbf{X}_{N}\}\), where \(N=H^{c}\times W^{c}\) is the size of the cropped image and each \(\mathbf{X}_{i}\) represents the label that corresponds to a pixel in \(\mathbf{I}^{c}\); therefore we have \(\mathbf{X}\in\{0,1\}^{N}\), meaning the background or the foreground. We use \(\mathbf{l}\in\{0,1\}^{N}\) to represent a labeling of \(\mathbf{X}\) minimizing the following CRF energy: \[E(\mathbf{l}|\mathbf{m}^{a},\mathbf{I}^{c})=\mu(\mathbf{X}|\mathbf{m}^{a},\mathbf{I}^{c})+\psi(\mathbf{X}|\mathbf{I}^{c}), \tag{3}\] where \(\mu(\mathbf{X}|\mathbf{m}^{a},\mathbf{I}^{c})\) represents the unary potentials, which are used to align \(\mathbf{X}_{i}\) and \(\mathbf{m}_{i}^{a}\), since we assume that most of the mask predictions are correct. Meanwhile, \(\psi(\mathbf{X}|\mathbf{I}^{c})\) represents the pairwise potential, which sharpens the refined mask. Specifically, we define the pairwise potentials as: \[\psi(\mathbf{X}|\mathbf{I}^{c})=\sum_{\begin{subarray}{c}i\in\{0..N-1\}\\ j\in\mathcal{N}(i)\end{subarray}}\omega\exp\left(\frac{-|\mathbf{I}_{i}^{c}-\mathbf{I}_{j}^{c}|^{2}}{2\zeta^{2}}\right)[\mathbf{X}_{i}\neq\mathbf{X}_{j}], \tag{4}\] where \(\mathcal{N}(i)\) represents the set of 8 immediate neighbors of \(\mathbf{X}_{i}\), as shown in Fig. 3. Then, we use the MeanField algorithm [7, 56] to efficiently approximate the optimal solution, denoted as \(\mathbf{l}=MeanField(\mathbf{I}^{c},\mathbf{m}^{a})\). We attach the derivation and PyTorch code in the supplementary. At last, we apply the dice loss to leverage the refined masks \(\mathbf{l}\) to self-train the models as: \[\mathcal{L}_{\text{crf}}=1-\frac{2\sum_{i}\mathbf{l}_{i}\mathbf{m}_{i}}{\sum_{i}\mathbf{l}_{i}^{2}+\sum_{i}\mathbf{m}_{i}^{2}} \tag{5}\]
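The MIL loss of Eq. (2) admits a compact implementation. The sketch below is an interpretation of the text, not the released code: every row and column of the RoI is treated as a bag, reduced by max-pooling, and a bag is positive iff it crosses the tight ground-truth box. The CRF loss of Eq. (5) has the same dice form, with the bag maxima replaced by the MeanField-refined labels \(\mathbf{l}\).

```python
# MIL dice loss sketch for a single RoI prediction.
import torch

def mil_dice_loss(mask, box):
    """mask: (H, W) predicted foreground probabilities; box: (x0, y0, x1, y1)
    in pixel coordinates of the same (expanded) RoI crop."""
    x0, y0, x1, y1 = box
    H, W = mask.shape
    row_max, col_max = mask.amax(dim=1), mask.amax(dim=0)   # bag scores (max-pool)
    row_lab = torch.zeros(H); row_lab[y0:y1] = 1.0          # rows crossing the box
    col_lab = torch.zeros(W); col_lab[x0:x1] = 1.0          # columns crossing the box
    p = torch.cat([row_max, col_max]); g = torch.cat([row_lab, col_lab])
    # Dice form of Eq. (2); Eq. (5) replaces g by the refined labels l.
    return 1 - 2 * (g * p).sum() / (p.pow(2).sum() + g.pow(2).sum())

loss = mil_dice_loss(torch.rand(64, 64), (10, 12, 40, 50))
```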
## 4 Experiments We evaluate MAL on the COCO dataset [1] and LVIS [57]. The main results on COCO and LVIS are shown in Tab. 1 and 2. The qualitative results are shown in Fig. 1 and Fig. 5. ### Datasets **COCO dataset.** COCO contains 80 semantic categories. We follow the standard partition, which includes train2017 (115K images), val2017 (5K images), and test-dev (20k images). **LVIS dataset.** LVIS contains 1200+ categories and 164K images. We follow the standard partition of training and validation. ### Implementation Details We use 8 NVIDIA Tesla V100s to run the experiments. **Phase 1 (mask auto-labeling).** We use AdamW [58] as the network optimizer and set the two momentums as 0.9, 0.9. We use the cosine annealing scheduler to adjust the learning rate, which is set to \(1.5\cdot 10^{-6}\) per image. The MIL loss weight \(\alpha_{\text{mil}}\), CRF loss weight \(\alpha_{\text{crf}}\), \(\zeta\), and \(\omega\) in the CRF pairwise potentials are set to 4, 0.5, 0.5, 2, respectively. We analyze the sensitivity of the loss weights and CRF hyperparameters in Fig. 8. We use an input resolution of \(512\times 512\) and a batch size of 32 (4 per GPU). For EMA, we use a momentum of 0.996. For the task and teacher networks, we apply random flip data augmentation. On top of that, we apply extra random color jittering, random grey-scale conversion, and random Gaussian blur for the task network. We train MAL for 10 epochs. It takes around 23 hours and 35 hours to train MAL with Standard ViT-Base [11] on the COCO and LVIS datasets, respectively. **Phase 2 (Training instance segmentation models).** We select a couple of high-performance fully supervised instance segmentation models, which are ConvNeXts [44] with Cascade R-CNN [28], Swin Transformers [12] with Mask2Former [41], and ResNets [59] and ResNeXts [60] with SOLOv2 [31]. MAL works extremely well with these architectures, which demonstrates the great power of Mask Auto-Labeler from the perspective of accuracy and generalization. We leverage the codebase in MMDetection [61] for phase 2. Again, we only replace the GT masks with MAL-generated mask pseudo-labels to adjust all these fully supervised models to box-supervised learning. ### Instance segmentation results **Retention Rate.** We argue that mAP alone is not a fair metric for evaluating box-supervised instance segmentation, since performance gains can be achieved by improving box quality unrelated to segmentation quality. The retention rate better reflects the real mask quality, because the fully supervised counterparts also get boosted by better box results. **Results on COCO.** In Table 1, we show that various modern instance segmentation models can achieve up to 94.5% of the performance of their fully supervised oracles when trained with the pseudo-labels. Our best results are 43.3% mAP on COCO test-dev and 44.1% mAP on COCO val, achieved by using MAL (Standard ViT-Base [11] pretrained with MAE) for phase 1 and Mask2Former (Swin-Small) [12, 41] for phase 2. There is no significant retention drop when we use the mask pseudo-labels to train more powerful instance segmentation models. On the contrary, the higher retention rates on COCO are achieved by the heavier instance segmentation models, e.g., Cascade R-CNN with ConvNeXts and Mask2Former with Swin-Small. In contrast, other methods have significantly lower retention rates than MAL. The experimental results quantitatively imply that MAL's mask quality outperforms that of other methods by a large margin. **Results on LVIS.** In Table 2, we also observe that all instance segmentation models work very well with the mask pseudo-labels generated by MAL (Ret. = 93% ~ 98%). We visualize part of the results in Fig. 5.
We also evaluate the open-vocabulary ability of MAL by training MAL on the COCO dataset but generating mask pseudo-labels on LVIS, and then training instance segmentation models using these mask pseudo-labels. ### Image encoder variation To support our claim that Vision Transformers are good auto-labelers, we compare three popular networks as the image encoders of MAL: Standard Vision Transformers [11, 13, 16], Swin Transformers [12], and ConvNeXts [44] in Tab. 4. First, we compare the fully supervised pretrained weights of these three models. We choose the official fully supervised pre-trained weights of ConvNeXts and Swin Transformers. For Standard Vision Transformers, we adopt a popular fully supervised approach, DeiT [16]. We observe that fully supervised Standard Vision Transformers (DeiT) as image encoders of Mask Auto-Labeler are better than Swin Transformers and ConvNeXts, even though the ImageNet-1k performance of Swin Transformers and ConvNeXts is higher than that of DeiT. We argue that the success of Standard Vision Transformers may be attributed to the self-emerging properties of Standard ViTs [9, 11] (visualized in Fig. 6) and the larger receptive field brought by global multi-head self-attention layers. Second, the mask pseudo-labels can be further improved by Masked AutoEncoder (MAE) pretraining [13]. The potential reason might be that MAE pretraining enhances Standard ViTs via learning pixel-level information, which is very important for dense-prediction tasks like segmentation. ### Mask decoder variation We compare four different modern designs of mask decoders: the fully connected decoder [62], the fully convolutional decoder [24, 63], the attention-based decoder [30, 31], and the query-based decoder [41] in Tab. 3. We visualize the different designs of mask decoders in Figure 4. For the fully connected decoder, we use two fully connected layers with a hidden dimension of 2048 and output a confidence value for each pixel. We reshape this output vector into the 2D confidence map. We introduce the attention-based decoder in Sec. 3.2.
For the fully convolutional decoder, we adopt the pixel-wise head \(V\) of the attention-based decoder. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & Labeler Backbone & InstSeg Backbone & InstSeg Model & Sup & (\%)Mask AP\({}_{\text{val}}\) & (\%)Mask AP\({}_{\text{test}}\) & (\%)Ret\({}_{\text{val}}\) & (\%)Ret\({}_{\text{test}}\) \\ \hline Mask R-CNN\({}^{*}\) [24] & - & ResNet-101 & Mask R-CNN & Mask & 38.6 & 38.8 & - & - \\ Mask R-CNN\({}^{*}\) [24] & - & ResNet-101 & Mask R-CNN & Mask & 39.5 & 39.9 & - & - \\ CondInst [33] & - & ResNet-101 & CondInst & Mask & 38.6 & 39.1 & - & - \\ SOLOv2 [31] & - & ResNet-50 & SOLOv2 & Mask & 37.5 & 38.4 & - & - \\ SOLOv2 [31] & - & ResNet-101-DCN & SOLOv2 & Mask & 41.7 & 41.8 & - & - \\ SOLOv2 [31] & - & ResNeXt-101-DCN & SOLOv2 & Mask & 42.4 & 42.7 & - & - \\ Cascade R-CNN [28] & - & ConvNeXt-Small [44] & Cascade R-CNN & Mask & 44.8 & 45.5 & - & - \\ Cascade R-CNN [28] & - & ConvNeXt-Base [44] & Cascade R-CNN & Mask & 45.4 & 46.1 & - & - \\ Mask2Former [41] & - & Swin-Small & Mask2Former & Mask & **46.1** & **47.0** & - & - \\ \hline BBTP [4] & - & ResNet-101 & Mask R-CNN & Box & - & 21.1 & - & 59.1 \\ BoxInst [5] & - & ResNet-101 & CondInst & Box & 33.0 & 33.2 & 85.5 & 84.9 \\ BoxLevelSet [6] & - & ResNet-101-DCN & SOLOv2 & Box & 35.0 & 35.4 & 83.9 & 83.5 \\ DiscoBox [7] & - & ResNet-50 & SOLOv2 & Box & 30.7 & 32.0 & 81.9 & 83.3 \\ DiscoBox [7] & - & ResNet-101-DCN & SOLOv2 & Box & 35.3 & 35.8 & 84.7 & 85.9 \\ DiscoBox [7] & - & ResNeXt-101-DCN & SOLOv2 & Box & 37.3 & 37.9 & 88.0 & 88.8 \\ BoxTeacher [3] & - & Swin-Base & CondInst & Box & - & 40.0 & - & - \\ \hline Mask Auto-Labeler & ViT-MAE-Base [13] & ResNet-50 & SOLOv2 & Box & 35.0 & 35.7 & 93.3 & 93.0 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNet-101-DCN & SOLOv2 & Box & 38.2 & 38.7 & 91.6 & 92.6 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNeXt-101-DCN & SOLOv2 & Box & 38.9 & 39.1 & 91.7 & 91.6 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ConvNeXt-Small [44] & Cascade R-CNN & Box & 42.3 & 43.0 & 94.4 & **94.5** \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ConvNeXt-Base [44] & Cascade R-CNN & Box & 42.9 & 43.3 & **94.5** & 93.9 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & Swin-Small [12] & Mask2Former [41] & Box & **43.3** & **44.1** & 93.9 & 93.8 \\ \hline \hline \end{tabular} \end{table} Table 1: Main results on COCO. Ret means the retention rate \(\frac{\text{box-supervised mask AP}}{\text{fully supervised mask AP}}\). For the query-based decoder, we follow the design in Mask2Former [41]. We spent much effort exploring the query-based decoder on MAL, since it performs extremely well on fully supervised instance segmentation. However, the results are surprisingly unsatisfactory. We suspect the slightly heavier layers might cause optimization issues under the box-supervised losses. Experiments show that box-supervised instance segmentation favors the attention-based decoder. However, state-of-the-art instance segmentation and object detection methods often adopt the fully convolutional decoder [43, 15] or the query-based decoder [41]. Our proposed two-phase framework resolves this dilemma and allows the networks to enjoy the merits of both the attention-based decoder and the non-attention-based decoders. ### Clustering analysis Given the results shown in Tab. 4, we wonder why the Standard ViTs outperform other modern image encoders in auto-labeling. As the comparison of classification ability does not seem to reflect the actual ability of auto-labeling, we instead use clustering ability to evaluate the image encoders, because foreground (FG) / background (BG) segmentation is very similar to a binary clustering problem. Specifically, we extract the feature map output by the last layers of Swin Transformers [12], ConvNeXts [44], and Standard ViTs [11]. Then, we use the GT mask to divide the feature vectors into the FG and BG feature sets. By evaluating the average distance from the FG/BG feature vectors to their clustering centers, we can reveal the ability of the networks to distinguish FG and BG pixels empirically. Formally, we define the feature vector of token \(i\) generated by backbone \(E\) as \(\mathbf{f}_{i}^{E}\). We define the FG/BG clustering centers \(\mathbf{f}_{1}^{\prime}\), \(\mathbf{f}_{0}^{\prime}\) as the means of the FG/BG feature vectors.
Then, we use the following metric as the clustering score: \[S=\frac{1}{N}\sum_{i=1}^{N}\left\|\frac{\mathbf{f}_{i}^{E}}{|\mathbf{f}_{i}^{E}|}-\frac{\mathbf{f}_{\gamma(i)}^{\prime}}{|\mathbf{f}_{\gamma(i)}^{\prime}|}\right\|^{2}, \tag{6}\] where \(\gamma(i)=1\) if pixel \(i\) is FG and \(\gamma(i)=0\) otherwise. We show the clustering evaluation on COCO val2017 in Tab. 6. The results align with our conclusion that Standard Vision Transformers are better at mask auto-labeling. \begin{table} \begin{tabular}{l c c c} \hline \hline Backbone & IN-1k Acc@1 & Mask AP\({}_{\text{val}}\) & Ret\({}_{\text{val}}\) \\ \hline ConvNeXt-Base [44] & **83.8** & 39.6 & 88.4 \\ Swin-Base [12] & 83.5 & 40.2 & 89.7 \\ ViT-DeiT-Small [64] & 79.9 & 40.8 & 91.0 \\ ViT-DeiT-Base [64] & 81.8 & **41.1** & **91.7** \\ \hline ViT-MAE-Base [13] & 83.6 & **42.3** & **94.4** \\ ViT-MAE-Large [13] & 85.9 & **42.3** & **94.4** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of different backbones. All models are pre-trained on ImageNet-1k. ConvNeXt and Swin Transformer outperform DeiT on image classification, but Standard ViT-Small [16] (ViT-DeiT-Small) outperforms ConvNeXt-Base and Swin-Base on mask auto-labeling. Standard ViT-Base (ViT-MAE-Base) and Standard ViT-Large (ViT-MAE-Large) pretrained via MAE achieve the best performance on mask auto-labeling. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Method & Autolabeler Backbone & InstSeg Backbone & InstSeg Model & Training Data & Sup & (\%)Mask AP\({}_{\text{val}}\) & (\%)Ret\({}_{\text{val}}\) \\ \hline Mask R-CNN [24] & - & ResNet-50-DCN & Mask R-CNN [24] & - & Mask & 21.7 & - \\ Mask R-CNN [24] & - & ResNet-101-DCN & Mask R-CNN [24] & - & Mask & 23.6 & - \\ Mask R-CNN [24] & - & ResNeXt-101-32x4-FPN & Mask R-CNN [24] & - & Mask & 25.5 & - \\ Mask R-CNN [24] & - & ResNeXt-101-64x4-FPN & Mask R-CNN [24] & - & Mask & 25.8 & - \\ \hline Mask Auto-Labeler & ViT-MAE-Base [13] & ResNet-50-DCN & Mask R-CNN [24] & LVIS v1 & Box & 20.7 & 95.4 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNet-101-DCN & Mask R-CNN [24] & LVIS v1 & Box & 23.0 & **97.4** \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNeXt-101-32x4-FPN & Mask R-CNN [24] & LVIS v1 & Box & 23.7 & 92.9 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNeXt-101-64x4-FPN & Mask R-CNN [24] & LVIS v1 & Box & **24.5** & 95.0 \\ \hline Mask Auto-Labeler & ViT-MAE-Base [13] & ResNeXt-101-32x4-FPN & Mask R-CNN [24] & COCO & Box & 23.3 & 91.8 \\ Mask Auto-Labeler & ViT-MAE-Base [13] & ResNeXt-101-64x4-FPN & Mask R-CNN [24] & COCO & Box & **24.2** & **93.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Main results on LVIS v1. Training data means the dataset we use for training MAL. We also finetune MAL on COCO and then generate pseudo-labels for LVIS v1. Compared with training on LVIS v1 directly, MAL finetuned on COCO only causes around a 0.35% mAP drop in the final results, which indicates the great open-set potential of MAL. Ret means the retention rate \(\frac{\text{box-supervised mask AP}}{\text{fully supervised mask AP}}\). Figure 5: Qualitative results of mask pseudo-labels generated by Mask Auto-Labeler on LVIS v1. ### MAL masks vs. GT masks We show the apples-to-apples qualitative comparison in Fig. 7 and make the following observations. First, MAL-generated mask pseudo-labels are considerably sharper and more boundary-sticky than human-annotated ones, since humans have difficulty aligning annotations with the true object boundaries. Second, severe occlusion also presents a challenging issue.
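A small numpy sketch of the clustering score of Eq. (6) follows. It is our interpretation, with the squared term read as the squared Euclidean norm of the difference of \(\ell_{2}\)-normalized vectors; the feature extraction from a particular backbone is omitted.

```python
# Clustering-score sketch for Eq. (6); feats would come from a backbone's last layer.
import numpy as np

def clustering_score(feats, is_fg):
    """feats: (N, D) per-token features; is_fg: (N,) boolean FG mask from the GT."""
    centers = np.stack([feats[~is_fg].mean(0), feats[is_fg].mean(0)])  # f'_0, f'_1
    unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    diff = unit(feats) - unit(centers)[is_fg.astype(int)]   # f_i/|f_i| - f'_g/|f'_g|
    return float(np.mean(np.sum(diff ** 2, axis=1)))        # mean squared norm

# Demo with random features; real scores use GT-mask-labelled backbone tokens.
score = clustering_score(np.random.randn(100, 768), np.random.rand(100) > 0.5)
```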
## 5 Conclusion In this work, we propose a novel two-phase framework for box-supervised instance segmentation and a novel Transformer-based architecture, Mask Auto-Labeler (MAL), to generate high-quality mask pseudo-labels in phase 1. We reveal that Standard Vision Transformers are good mask auto-labelers. Moreover, we find that RoI inputs with random box expansion, the attention-based decoder, and class-agnostic training are crucial to strong mask auto-labeling performance. Furthermore, thanks to the two-phase framework design and MAL, we can adjust almost all kinds of fully supervised instance segmentation models to box-supervised learning with little performance drop, which shows the great generalization of MAL. **Limitations.** Although great improvement has been made by our approaches in mask auto-labeling, we still observe many failure cases in occlusion situations, where human annotations are much better than MAL-generated masks. Additionally, we encounter saturation problems when scaling the model from Standard ViT-Base to Standard ViT-Large. We leave these problems to future work. **Broader impacts.** Our proposed Transformer-based mask auto-labeler and the two-phase architecture serve as a standard paradigm for high-quality box-supervised instance segmentation. If follow-up work can find and fix the issues under our proposed paradigm, there is great potential that expensive human-annotated masks will no longer be needed for instance segmentation in the future. \begin{table} \begin{tabular}{l c} \hline \hline Backbone & Score (\(\downarrow\)) \\ \hline ConvNeXt-Base [44] & 0.459 \\ Swin-Base [12] & 0.425 \\ ViT-DeiT-Small [64] & 0.431 \\ ViT-DeiT-Base [64] & 0.398 \\ ViT-MAE-Base [13] & 0.324 \\ ViT-MAE-Large [13] & **0.301** \\ \hline \hline \end{tabular} \end{table} Table 6: Clustering scores for different image encoders. Smaller clustering scores imply a better ability to distinguish foreground and background features. Figure 8: Sensitivity analysis of loss weights and CRF hyperparameters. We use ViT-Base [11] pretrained via MAE [13] as the image encoder for the first phase and SOLOv2 (ResNet-50) for the second phase. The x-axis and y-axis indicate the hyper-parameter values and the (%)mask AP, respectively. \begin{table} \begin{tabular}{c c c} \hline \hline \(\theta\) & Mask AP\({}_{\text{val}}\) & Ret\({}_{\text{val}}\) \\ \hline 0.6 & 41.3 & 92.2 \\ 0.8 & 41.7 & 93.1 \\ 1.0 & 42.2 & 94.2 \\ 1.2 & **42.3** & **94.4** \\ 1.4 & 42.0 & 93.8 \\ 1.6 & 41.8 & 93.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation on the box expansion ratio. We use Standard ViT-Base pretrained via MAE (ViT-MAE-Base) and Cascade R-CNN (ConvNeXt-Small) for phases 1 and 2, respectively. Figure 6: Attention visualization of two RoI images produced by MAL. In each image group, the left-most image is the original image. We visualize the attention maps output by the 4\({}^{\text{th}}\), 8\({}^{\text{th}}\), and 12\({}^{\text{th}}\) MHSA layers of the Standard ViTs in MAL. Figure 7: Side-by-side comparison between MAL-generated pseudo-labels (top) and GT masks (bottom) on COCO val2017. On the left, we observe that MAL-generated pseudo-labels are sharper and more boundary-sticky than GT masks in some cases. On the right, we observe that in highly occluded situations, human-annotated masks are still better.
2310.04764
Characterizations of Monadic Second Order Definable Context-Free Sets of Graphs
We give a characterization of the sets of graphs that are both definable in Counting Monadic Second Order Logic (CMSO) and context-free, i.e., least solutions of Hyperedge-Replacement (HR) grammars introduced by Courcelle and Engelfriet. We prove the equivalence of these sets with: (a) recognizable sets (in the algebra of graphs with HR-operations) of bounded tree-width; we refine this condition further and show equivalence with recognizability in a finitely generated subalgebra of the HR-algebra of graphs; (b) parsable sets, for which there is an MSO-definable transduction from graphs to a set of derivation trees labelled by HR operations, such that the set of graphs is the image of the set of derivation trees under the canonical evaluation of the HR operations; (c) images of recognizable unranked sets of trees under an MSO-definable transduction, whose inverse is also MSO-definable. We rely on a novel connection between two seminal results, a logical characterization of context-free graph languages in terms of tree to graph MSO-definable transductions, by Courcelle and Engelfriet and a proof that an optimal-width tree decomposition of a graph can be built by an MSO-definable transduction, by Bojanczyk and Pilipczuk.
Radu Iosif, Florian Zuleger
2023-10-07T09:53:52Z
http://arxiv.org/abs/2310.04764v4
# Characterizations of Definable Context-Free Graphs ###### Abstract We give a characterization of those sets of graphs that are both _definable_ in Counting Monadic Second Order Logic (\(\mathsf{CMS}\)) and _context-free_, i.e., least solutions of Hyperedge-Replacement (\(\mathsf{HR}\))-grammars introduced by Courcelle and Engelfriet [9]. We give the following equivalent characterizations: (a) a set of graphs is recognizable (in the algebra that consists of all graphs and \(\mathsf{HR}\)-operations) and has bounded tree-width; further, we refine this condition and show equivalence with recognizability in a finite-sort subalgebra of the graph algebra; (b) the set is parsable, i.e., there is an \(\mathsf{MS}\)-definable transduction from graphs to a set of derivation trees labelled by \(\mathsf{HR}\)-operations, such that the set of graphs is the image of this set of trees under the evaluation of the \(\mathsf{HR}\)-operations; (c) the set of graphs is the image of an unranked recognizable set of trees under an \(\mathsf{MS}\)-definable transduction whose inverse is also \(\mathsf{MS}\)-definable. The main goal of this paper is to present the above characterization, of which several directions are already known, in an accessible and unified way. We rely on a novel connection between two seminal results, a logical characterization of context-free graph languages in terms of tree to graph \(\mathsf{MS}\)-definable transductions, by Courcelle and Engelfriet [8], and a proof that an optimal-width tree decomposition of a graph can be built by an \(\mathsf{MS}\)-definable transduction, by Bojanczyk and Pilipczuk [3, 4]. ## 1 Introduction Formal language theory studies the finite representations of infinite sets of objects, such as words, trees, graphs, hypergraphs, etc. There are essentially two kinds of representations, namely (i) _logical_ or descriptive, and (ii) _algebraic_ or operational. The first kind relies on logics, such as First Order (\(\mathsf{FO}\)) or Monadic Second Order Logic (\(\mathsf{MS}\)), to describe properties, such as "every even position has the letter \(a\)" for words, or "every vertex can be colored either red, green or blue, such that adjacent vertices have distinct colors" for graphs. The classes of objects satisfying such logical properties are said to be _definable_. In contrast, algebraic representations rely on operations that build the considered objects, to define notions such as _recognizability_ and _context-freeness_. In a nutshell, recognizable sets are unions of equivalence classes of a finite congruence with respect to the operations in the algebra, whereas context-free sets are least solutions of finite systems of recursive equations written using these operations. The comparison between the expressivity of definable, recognizable and context-free sets is central to formal language theory and a source of well-established results. For instance, \(\mathsf{MS}\)-definability equals recognizability for finite words [5], whereas for ranked trees over finite alphabets \(\mathsf{MS}\)-definability, recognizability and context-freeness coincide [11, 15]. For unranked trees, \(\mathsf{CMS}\)-definability and recognizability coincide [6], where \(\mathsf{CMS}\) is the extension of \(\mathsf{MS}\) with modulo constraints on the cardinality of sets, e.g., a set having an even/odd cardinality. When considering graphs, an important parameter is the _treewidth_, roughly speaking, a positive integer that indicates how close the graph is to a tree.
The notion of treewidth is a cornerstone of algorithmic tractability. For instance, many \(\mathsf{NP}\)-complete graph problems such as Hamiltonicity and \(3\)-Coloring become \(\mathsf{PTIME}\) when restricted to inputs whose treewidth is bounded by a constant, see, e.g., [13, Chapter 11]. Moreover, bounding the treewidth sets the frontier between the decidability and undecidability of monadic second order (\(\mathsf{MS}\)) logical theories. A seminal result of Courcelle [6, Corollary 4.8 (2)] is that \(\mathsf{CMS}\) is decidable over classes of graphs of bounded treewidth, by reduction to the emptiness of tree automata. A dual result of Seese [16] is that each class of graphs with a decidable \(\mathsf{CMS}\) theory necessarily has bounded treewidth. Figure 1 highlights the overall relation between bounded tree-width (\(\mathsf{BTW}\)), context-free (CF), recognizable (\(\mathsf{REC}\)) and \(\mathsf{CMS}\)-definable sets of graphs. Every context-free set of graphs has bounded tree-width [9, Proposition 1.20], but there are sets of graphs of bounded treewidth that are not context-free4. Moreover, every \(\mathsf{CMS}\)-definable set is recognizable [6, Theorem 4.4], but there are recognizable sets that are neither \(\mathsf{CMS}\)-definable nor context-free. This is because there are uncountably many recognizable sets [6, Proposition 2.14] and only countably many \(\mathsf{CMS}\) formulae and finite systems of recursive equations. The equivalence between \(\mathsf{CMS}\)-definability and recognizability has been established for sets of graphs having bounded treewidth [3]. Using the notation of Figure 1, this means that \(\mathsf{BTW}\cap(\mathsf{REC}\setminus\mathsf{CMS})=\emptyset\). Footnote 4: Take for instance the lists encoding of the word language \(\{a^{n}b^{n}c^{n}\mid n\in\mathbb{N}\}\). This paper deals with the class of \(\mathsf{CMS}\)-definable context-free sets, i.e., the area \(\mathsf{CMS}\cap\mathsf{CF}\) depicted using cross-hatching in Figure 1. These sets are important in branches of computing, such as static analysis and program verification, because their universality and inclusion problems are decidable. More precisely, given sets of graphs \(\mathcal{L}_{1},\mathcal{L}_{2}\in\mathsf{CF}\cap\mathsf{CMS}\), one can build \(\mathsf{CMS}\) formulae \(\phi_{1}\) and \(\phi_{2}\) that define \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\), respectively. Then the problem \(\mathcal{L}_{1}\subseteq\mathcal{L}_{2}\) is equivalent to the unsatisfiability of the \(\mathsf{CMS}\) formula \(\phi_{1}\wedge\neg\phi_{2}\). The latter problem is decidable because \(\mathcal{L}_{1}\in\mathsf{BTW}\) and the satisfiability of a \(\mathsf{CMS}\) formula is decidable for graphs of bounded treewidth [6, Corollary 4.8 (2)]. Our contribution is a characterization of the context-free \(\mathsf{CMS}\)-definable graph languages by five equivalent conditions (Theorems 9 and 10). The main goal of this paper is to present this characterization, of which several directions are already known, in an accessible and unified way. In addition, our characterization relies on a novel connection between two seminal results. The first ingredient is a purely logical (i.e., free of low-level algebraic aspects, such as graph operations) characterization of context-free graph languages as images of recognizable ranked sets of trees under \(\mathsf{MS}\)-definable transductions, by Courcelle and Engelfriet [8]. Figure 1: Bounded Treewidth, Context Free, Recognizable and \(\mathsf{CMS}\)-Definable Sets of Graphs
The first ingredient is a purely logical (i.e., free of low-level algebraic aspects, such as graph operations) characterization of context Figure 1: Bounded Treewidth, Context Free, Recognizable and \(\mathsf{CMS}\)-Definable Sets of Graphs free graph languages as images of recognizable ranked sets of trees under \(\mathsf{MS}\)-definable transductions, by Courcelle and Engelfriet [8]. The second ingredient is a construction of tree decompositions of optimal width for graphs, by means of \(\mathsf{MS}\)-definable transductions, by Bojanczyk and Pilipczuk [3, 4]. The first equivalence is between context-free \(\mathsf{CMS}\)-definable and bounded tree-width recognizable sets which means that the simply-hatched area \(\mathsf{CMS}\cap(\mathsf{BTW}\setminus\mathsf{CF})\) from Figure 1 is empty. We note that this equivalence is a direct consequence of the results from [3]. This result uses the notion of recognizability for graphs introduced by Courcelle [6], which generalizes the standard recognizability by finite automata for words [5] and trees [11], using _locally finite_ algebras, i.e., multi-sorted algebras that are finite for each sort, but not finite overall. Second, we show that, for bounded tree-width sets of graphs, recognizability in a locally finite algebra is equivalent to recognizability in a finite algebra i.e., bounded tree-width sets of graphs can be recognized by finite automata. This is typically not the case for sets of graphs of unbounded tree-width, such as infinite sets of grids, that can only be recognized by locally finite algebras and not by finite ones. We note that the equivalent definitions of recognizability for sets of graphs of bounded tree-width was already established in [10], but our development presents a new proof for this result. Third, we prove the equivalence of bounded treewidth recognizable sets with _parsable_ sets, for which a derivation tree that describes the construction of the graph by grammar rules, can be recovered using an \(\mathsf{MS}\)-definable transduction. This proves an open conjecture of Courcelle [7, Conjecture 3]. Here we use in an essential way the result of Bojanczyk and Pilipczuk [3, 4], which provides an \(\mathsf{MS}\)-definable transduction yielding an optimal tree decomposition of a graph. We relate this transduction to parsable sets of graphs by showing that a tree decompositions can be translated into a derivation tree (labeled with algebraic operations) via an \(\mathsf{MS}\)-definable transduction (Lemma 11). Finally, we extend the characterization of context-free sets of graphs, as images of recognizable ranked sets of trees under \(\mathsf{MS}\)-definable transductions, by Courcelle and Engelfriet [8], from ranked to unranked trees (Corollary 2). This generalization is required in order to apply the result of [8] to parsable sets of graphs, whose parse trees stem from the tree decompositions provided by [3, 4], thus forming unranked sets of trees. This proves the last equivalence, namely that parsable sets of graphs are images of unranked recognizable sets of trees under \(\mathsf{MS}\)-definable transductions whose inverse contain an \(\mathsf{MS}\)-definable transduction with the same domain. The idea of characterizing the recognizable sets of graphs of bounded tree-width in terms of a pair of \(\mathsf{MS}\)-definable transductions has also been developed concurrently (in more generality) in [2]. 
Unfortunately, each of the five equivalent conditions that characterize \(\mathsf{CMS}\)-definable context-free sets of graphs is undecidable. However, several classes of graphs, such as those defined by regular graph grammars or series-parallel graphs, are known to be parsable [7]. We notice that pairs \((F,G)\) of \(\mathsf{MS}\)-transductions, such that \(G\) and \(F^{-1}\) have the same domain and \(G\subseteq F^{-1}\), are partially closed under composition, thus providing a practical method of deriving new parsable sets from known ones. We conclude with a discussion on recognizability for graphs and show that a set of graphs is recognizable by a locally finite algebra if and only if it is recognizable in an infinite sequence of finite algebras, such that each algebra is an extension of its predecessor in the sequence. In the light of this result, the equivalence between locally finite and finite recognizability for bounded treewidth sets of graphs occurs as a cut-off in this infinite sequence. ## 2 Definitions The set of natural numbers is denoted by \(\mathbb{N}\). Given \(i,j\in\mathbb{N}\), we write \([i,j]\) for the set \(\{i,i+1,\ldots,j\}\), assumed to be empty if \(i>j\). The cardinality of a finite set \(A\) is denoted by \(\operatorname{card}(A)\). By writing \(A\subseteq_{\mathit{fin}}B\) we mean that \(A\) is a finite subset of \(B\). For a set \(A\), we denote by \(\operatorname{pow}(A)\) its powerset, \(A^{0}\stackrel{{\mathit{def}}}{{=}}\{\varepsilon\}\), \(A^{i+1}\stackrel{{\mathit{def}}}{{=}}A^{i}\times A\), for all \(i\geq 0\), \(A^{*}\stackrel{{\mathit{def}}}{{=}}\bigcup_{i\geq 0}A^{i}\) and \(A^{+}\stackrel{{\mathit{def}}}{{=}}\bigcup_{i\geq 1}A^{i}\), where \(\times\) is the Cartesian product and \(\varepsilon\) denotes the empty sequence. Intuitively, \(A^{*}\) (resp. \(A^{+}\)) denotes the set of possibly empty (resp. nonempty) sequences of elements from \(A\). The length of a sequence \(\sigma\in A^{*}\) is denoted as \(\operatorname{len}(\sigma)\) and \(\sigma_{i}\) denotes its \(i\)-th element, for all \(\sigma\in A^{+}\) and \(i\in[1,\operatorname{len}(\sigma)]\). A multiset with elements \(a_{1},a_{2},\ldots\) is denoted as \(\llbracket a_{1},a_{2},\ldots\rrbracket\). For a relation \(R\subseteq A\times B\), we denote by \(\operatorname{dom}(R)\) and \(\operatorname{img}(R)\) the sets consisting of the first and second components of the pairs in \(R\), respectively. We write \(R^{-1}\) for the inverse relation and \(R(L)\) for the image of \(L\) via \(R\). Sometimes we write \(R(a)\) instead of \(R(\{a\})\), for an element \(a\in A\). The _domain-restriction_ \(R|_{C}\) restricts the relation \(R\) to the pairs with first element in \(C\). A bijective function \(f\) is an _\(A\)-permutation_ if \(\{a\in\operatorname{dom}(f)\mid f(a)\neq a\}\subseteq A\subseteq\operatorname{dom}(f)\). It is a _finite permutation_ if it is an \(A\)-permutation, for some finite set \(A\). ### Algebras and Recognizability Let \(\Sigma\) be a set of _sorts_, ranged over by \(\sigma,\sigma_{1},\sigma_{2}\), etc. A _signature_ \(\mathcal{F}\) is a set of function symbols \(f\) of _arity_ \(\#f\geq 0\). A function symbol \(f\) is a _constant_, _unary_ or _binary_ if \(\#f=0,1\) or \(2\), respectively. Let \(\mathcal{V}\) be a set of variables, ranged over by \(x,y,x_{1},x_{2}\), etc. An _\(\mathcal{F}\)-term_ is a variable or \(f(t_{1},\ldots,t_{\#f})\), where \(f\in\mathcal{F}\) is a function symbol and \(t_{1},\ldots,t_{\#f}\) are terms. A term is _ground_ if it contains no variables.
An _\(\mathcal{F}\)-algebra_ \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\) consists of a _universe_ \(\mathcal{A}^{\sigma}\) for each sort \(\sigma\in\Sigma\) and a function \(f^{\mathfrak{A}}:\mathcal{A}^{\#f}\rightarrow\mathcal{A}\) for each function symbol \(f\in\mathcal{F}\), where \(\mathcal{A}\stackrel{{\mathit{def}}}{{=}}\bigcup_{\sigma\in\Sigma}\mathcal{A}^{\sigma}\) denotes the union of all universes of \(\mathfrak{A}\). We say that the function \(f^{\mathfrak{A}}\) is the _interpretation_ of the function symbol \(f\in\mathcal{F}\) in \(\mathfrak{A}\). The sort of an element \(a\in\mathcal{A}\) is denoted \(\operatorname{\mathtt{sort}}(a)\). A _homomorphism_ between \(\mathcal{F}\)-algebras \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{A}})\) and \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{B}})\) is a function \(h:\mathcal{A}\rightarrow\mathcal{B}\) such that (1) \(h(\mathcal{A}^{\sigma})\subseteq\mathcal{B}^{\sigma}\), for all sorts \(\sigma\in\Sigma\), and (2) \(h(f^{\mathfrak{A}}(a_{1},\ldots,a_{\#f}))=f^{\mathfrak{B}}(h(a_{1}),\ldots,h(a_{\#f}))\), for all function symbols \(f\in\mathcal{F}\) and all elements \(a_{1},\ldots,a_{\#f}\in\mathcal{A}\). An algebra \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\) is _locally finite_ if \(\mathcal{A}^{\sigma}\) is finite, for each \(\sigma\in\Sigma\). We define recognizability by taking inverse images of homomorphisms that map into locally finite algebras:

Definition 1: Let \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{A}})\) be an algebra. A set \(\mathcal{L}\subseteq\mathcal{A}\) is _recognizable_ in \(\mathfrak{A}\) iff there exists a locally finite algebra \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{B}})\) and a homomorphism \(h\) between \(\mathfrak{A}\) and \(\mathfrak{B}\) such that \(\mathcal{L}=h^{-1}(\mathcal{C})\), for a set \(\mathcal{C}\subseteq\mathcal{B}\).

Note that recognizable sets are closed under taking inverse images of homomorphisms, i.e., any set \(g^{-1}(\mathcal{L})\) is recognizable if \(\mathcal{L}\) is recognizable and \(g\) is a homomorphism. An \(\mathcal{F}\)-term \(t\) with variables \(x_{1},\ldots,x_{n}\) is viewed as a function symbol of arity \(n\), that defines the _derived operation_ \(t^{\mathfrak{A}}:\mathcal{A}^{n}\rightarrow\mathcal{A}\), inductively on the structure of \(t\):

* \(t^{\mathfrak{A}}(a_{1},\ldots,a_{n})\stackrel{{\mathit{def}}}{{=}}a_{i}\), if \(t=x_{i}\),
* \(t^{\mathfrak{A}}(a_{1},\ldots,a_{n})\stackrel{{\mathit{def}}}{{=}}f^{\mathfrak{A}}(t_{1}^{\mathfrak{A}}(a_{1},\ldots,a_{n}),\ldots,t_{\#f}^{\mathfrak{A}}(a_{1},\ldots,a_{n}))\), if \(t=f(t_{1},\ldots,t_{\#f})\).

We also lift the evaluation of terms to set variables in the expected way and define \(t^{\mathfrak{A}}:\operatorname{pow}(\mathcal{A})^{n}\to\operatorname{pow}(\mathcal{A})\) by setting \(t^{\mathfrak{A}}(A_{1},\ldots,A_{n})\stackrel{{\mathit{def}}}{{=}}\{t^{\mathfrak{A}}(a_{1},\ldots,a_{n})\mid a_{i}\in A_{i}\}\). A set \(\mathcal{D}\) of \(\mathcal{F}\)-terms defines a _derived algebra_ of \(\mathfrak{A}\) that interprets each function symbol \(t\in\mathcal{D}\) as the derived operation \(t^{\mathfrak{A}}\), i.e., \(\mathfrak{D}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{t^{\mathfrak{A}}\}_{t\in\mathcal{D}})\).
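To make Definition 1 concrete, the following Python sketch is a toy illustration of ours (the encoding and all identifiers are invented for this purpose, not part of the formal development): it recognizes the set of even-length words over \(\{a,b\}\) via a homomorphism from the one-sorted algebra of words, with constants and concatenation, into the finite algebra \(\mathbb{Z}/2\mathbb{Z}\).

```python
# A toy instance of Definition 1 (our own illustration): the algebra of words
# over {a, b}, with constants "a", "b", "" and binary concatenation, mapped by
# the homomorphism h (length parity) into the finite algebra Z/2Z.
import itertools

def h(word):
    """The homomorphism: a word is mapped to its length modulo 2."""
    return len(word) % 2

# homomorphism condition (2): h(u . v) = h(u) + h(v) mod 2, checked on samples
for u, v in itertools.product(["", "a", "ab", "aba"], repeat=2):
    assert h(u + v) == (h(u) + h(v)) % 2

C = {0}                                 # the accepting subset C of the algebra

def recognized(word):                   # membership in L = h^{-1}(C)
    return h(word) in C

assert recognized("abab") and not recognized("aba")
```

The finite algebra only needs to remember length parity; this loss of information is exactly what makes the membership test effective.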
We have the following relation between recognizability in an algebra and in any of its derived algebras:

Lemma 1: _Let \(\mathfrak{D}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{t^{\mathfrak{A}}\}_{t\in\mathcal{D}})\) be a derived algebra of \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\). Then, any set \(\mathcal{L}\subseteq\mathcal{A}\) is recognizable in \(\mathfrak{D}\) if it is recognizable in \(\mathfrak{A}\)._

Proof: For any locally finite \(\mathcal{F}\)-algebra \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{B}})\), any homomorphism between \(\mathfrak{A}\) and \(\mathfrak{B}\) is also a homomorphism between \(\mathfrak{D}\) and the derived algebra \(\mathfrak{D}^{\prime}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma},\{t^{\mathfrak{B}}\}_{t\in\mathcal{D}})\). Then the homomorphism \(h\) and the set \(\mathcal{C}\subseteq\mathcal{B}\) that witness the recognizability of \(\mathcal{L}\) in \(\mathfrak{A}\) also witness the recognizability of \(\mathcal{L}\) in \(\mathfrak{D}\).

Note that the converse does not hold: for instance, consider the algebra of words over the alphabet \(\{a,b\}\), with signature consisting of the constants \(a\) and \(b\), the empty word \(\epsilon\) and concatenation. A derived algebra is obtained by taking the empty word and the derived operation \(x\mapsto axb\). Then the set of words \(\{a^{n}b^{n}\mid n\in\mathbb{N}\}\) is recognizable in the derived algebra but not in the original one.

A _subalgebra_ of \(\mathfrak{A}\) is an \(\mathcal{F}^{\prime}\)-algebra \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma^{\prime}},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}^{\prime}})\), with \(\Sigma^{\prime}\subseteq\Sigma\), \(\mathcal{F}^{\prime}\subseteq\mathcal{F}\) and \(\mathcal{B}^{\sigma}\subseteq\mathcal{A}^{\sigma}\), for each \(\sigma\in\Sigma^{\prime}\), such that \(\mathcal{B}\) is closed under the operations of \(\mathcal{F}^{\prime}\), i.e., \(f^{\mathfrak{A}}(b_{1},\ldots,b_{\#f})\in\mathcal{B}\), for all \((b_{1},\ldots,b_{\#f})\in\mathcal{B}^{\#f}\), and \(f^{\mathfrak{B}}\) is the restriction of \(f^{\mathfrak{A}}\) to elements from \(\mathcal{B}\), for each \(f\in\mathcal{F}^{\prime}\). The _representable subalgebra_ \(\mathfrak{A}_{\operatorname{rep}}=(\{\mathcal{A}^{\sigma}_{\operatorname{rep}}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}_{\operatorname{rep}}}\}_{f\in\mathcal{F}})\) is the subalgebra of \(\mathfrak{A}\) whose universes are \(\mathcal{A}^{\sigma}_{\operatorname{rep}}\stackrel{{\mathit{def}}}{{=}}\{t^{\mathfrak{A}}\mid t\mbox{ is a ground $\mathcal{F}$-term}\}\cap\mathcal{A}^{\sigma}\) and whose functions are restricted to the values of ground terms, also called the _representable elements_ of \(\mathfrak{A}\).

Lemma 2: _Let \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma^{\prime}},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}^{\prime}})\) be a subalgebra of \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\). Then, any set \(\mathcal{L}\subseteq\mathcal{B}\) is recognizable in \(\mathfrak{B}\) if it is recognizable in \(\mathfrak{A}\)._

Proof: Let \(\mathfrak{D}=(\{\mathcal{D}^{\sigma}\}_{\sigma\in\Sigma},\mathcal{F}^{\mathfrak{D}})\) be a locally finite \(\mathcal{F}\)-algebra and \(h:\mathcal{A}\to\mathcal{D}\) be a homomorphism, such that \(\mathcal{L}=h^{-1}(\mathcal{C})\), for some \(\mathcal{C}\subseteq\mathcal{D}\).
Let \(\mathfrak{E}\) be the subalgebra of \(\mathfrak{D}\) obtained by restricting \(\mathfrak{D}\) to the sorts \(\Sigma^{\prime}\) and the signature \(\mathcal{F}^{\prime}\). Then \(h^{\prime}\stackrel{{\mathit{def}}}{{=}}h|_{\mathcal{B}}\) is a homomorphism between the algebras \(\mathfrak{B}\) and \(\mathfrak{E}\) and \(\mathcal{L}=h^{-1}(\mathcal{C})\cap\mathcal{B}={h^{\prime}}^{-1}(\mathcal{C})\), which witnesses the recognizability of \(\mathcal{L}\) in \(\mathfrak{B}\).

### Graphs

Let \(\mathbb{S}\) be a countably infinite set of _source labels_ and \(\mathbb{A}\) be an alphabet of _edge labels_, disjoint from \(\mathbb{S}\), ranged over by \(a,b\), etc. Each edge label \(a\in\mathbb{A}\) has an associated _arity_ \(\#a\geq 1\), i.e., we do not consider edge labels of arity zero. The sets \(\mathbb{S}\) and \(\mathbb{A}\) are considered fixed in the rest of this paper. For any finite set \(\tau\subseteq\mathbb{S}\), a _concrete graph of sort_ \(\tau\) is a tuple \(G=\langle V_{G},E_{G},L_{G},\upsilon_{G},\xi_{G}\rangle\), where:

* \(V_{G}\) is a finite set of _vertices_,
* \(E_{G}\) is a finite set of _edges_, disjoint from \(V_{G}\),
* \(L_{G}:E_{G}\to\mathbb{A}\) is a mapping that defines the labels of the edges,
* \(\upsilon_{G}:E_{G}\to V_{G}^{+}\) is a mapping that associates with each edge a nonempty sequence of vertices attached to the edge, such that \(\#(L_{G}(e))=\operatorname{len}(\upsilon_{G}(e))\), for each \(e\in E_{G}\),
* \(\xi_{G}:\tau\to V_{G}\) is a _one-to-one_ mapping designating the _sources_ of \(G\), where \(\xi_{G}(s)\) is called the \(s\)-source of \(G\); in particular, a vertex cannot be both an \(s\)- and an \(s^{\prime}\)-source, for \(s\neq s^{\prime}\).

For instance, the leftmost concrete graph in Figure 2 (a) has four vertices, of which three are sources labeled \(s_{1}\), \(s_{2}\) and \(s_{3}\), and three edges labeled \(a\), \(b\) and \(c\). The \(a\)-labeled edge is attached to three vertices, whereas the \(b\)- and \(c\)-labeled edges are binary. The middle concrete graph is of sort \(\{s_{1},s_{2},s_{4}\}\) and the rightmost one of sort \(\{s_{1},s_{2},s_{3},s_{4}\}\). Isomorphism of concrete graphs is defined as usual (see, e.g., [6, §2]). A _graph of sort_ \(\tau\) is the set (i.e., equivalence class) of isomorphic concrete graphs of sort \(\tau\). We denote by \(\mathcal{G}\) the set of graphs and by \(\mathcal{G}^{\tau}\stackrel{{\mathit{def}}}{{=}}\{G\in\mathcal{G}\mid\operatorname{\mathsf{sort}}(G)\subseteq\tau\}\) the set of graphs of sort included in or equal to \(\tau\).

We describe next the _hyperedge replacement_ (HR) operations on graphs [9, §2.3]. We fix the set of sorts \(\Sigma\) to be the finite subsets of \(\mathbb{S}\). The signature \(\mathcal{F}_{\text{HR}}\) consists of the following function symbols:

1. the constants \(\mathbf{0}_{\tau}\), for all finite sets \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
2. the constants \(\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\), for all \(a\in\mathbb{A}\) and \(s_{1},\ldots,s_{\#a}\in\mathbb{S}\),
3. the unary function symbols \(\mathsf{restrict}_{\tau}\), for all finite sets \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
4. the unary function symbols \(\mathsf{rename}_{\alpha}\), for all finite permutations \(\alpha:\mathbb{S}\to\mathbb{S}\),
5. the binary function symbol \(\|\).

The _hyperedge replacement graph algebra_ \(\mathfrak{G}\) interprets the symbols in \(\mathcal{F}_{\text{HR}}\) as follows:
1. **graphs with sources only**: the graph \(\mathbf{0}_{\tau}^{\mathfrak{G}}\), for \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), consists of one \(s\)-source for each \(s\in\tau\) and no edges.
2. **graph with a single edge**: the graph \(\mathbf{a}_{(s_{1},\ldots,s_{\#a})}^{\mathfrak{G}}\), for \(a\in\mathbb{A}\) and \(s_{1},\ldots,s_{\#a}\in\mathbb{S}\), consists of an \(s_{i}\)-source, for each \(i\in[1,\#a]\), and a single edge labeled with \(a\), attached to the \(s_{1}\)-, ..., \(s_{\#a}\)-sources, in this order.
3. **restriction**: the unary function \(\mathsf{restrict}_{\tau}^{\mathfrak{G}}\), for a set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), takes as input any graph of sort \(\tau^{\prime}\) and returns the graph of sort \(\tau\cap\tau^{\prime}\) obtained by removing the source labels in \(\tau^{\prime}\setminus\tau\) from \(G\). Formally, let \(G\) be a concrete graph of sort \(\tau^{\prime}\) and define the concrete graph \(G^{\prime}=\langle V_{G},E_{G},L_{G},\upsilon_{G},\xi_{G}|_{\tau\cap\tau^{\prime}}\rangle\). The function \(\lambda G\,.\,G^{\prime}\) is isomorphism-preserving and \(\mathsf{restrict}_{\tau}^{\mathfrak{G}}\) is defined as its lifting from concrete graphs to graphs.
4. **rename**: the unary function \(\mathsf{rename}_{\alpha}^{\mathfrak{G}}\), for a finite permutation \(\alpha:\mathbb{S}\to\mathbb{S}\), takes as input a graph of sort \(\tau\) and returns the graph of sort \(\alpha^{-1}(\tau)\) obtained by renaming its sources according to \(\alpha\). Formally, let \(G\) be a concrete graph and define the concrete graph \(G^{\prime}=\langle V_{G},E_{G},L_{G},\upsilon_{G},\xi_{G}\circ\alpha\rangle\). The function \(\lambda G\,.\,G^{\prime}\) is isomorphism-preserving and \(\mathsf{rename}_{\alpha}^{\mathfrak{G}}\) is its lifting from concrete graphs to graphs.
5. **composition**: the binary function \(\|^{\mathfrak{G}}\) takes the disjoint union of two graphs and fuses the vertices labeled by the same source label in both. Formally, let \(G_{i}\) be disjoint concrete graphs of sort \(\tau_{i}\), for \(i=1,2\), i.e., \(V_{G_{1}}\cap V_{G_{2}}=\emptyset\) and \(E_{G_{1}}\cap E_{G_{2}}=\emptyset\). Let \(\sim\subseteq(V_{G_{1}}\cup V_{G_{2}})^{2}\) be the least equivalence relation such that \(u_{1}\sim u_{2}\) if \(u_{i}=\xi_{G_{i}}(s)\), for \(i=1,2\) and \(s\in\tau_{1}\cap\tau_{2}\). We denote by \([u]_{\sim}\) the \(\sim\)-equivalence class of \(u\in V_{G_{1}}\cup V_{G_{2}}\). Then the concrete graph \(G_{12}\) is defined by setting:
   * \(V_{G_{12}}\stackrel{{\mathit{def}}}{{=}}\{[u]_{\sim}\mid u\in V_{G_{1}}\cup V_{G_{2}}\}\),
   * \(E_{G_{12}}\stackrel{{\mathit{def}}}{{=}}E_{G_{1}}\cup E_{G_{2}}\) and \(L_{G_{12}}\stackrel{{\mathit{def}}}{{=}}L_{G_{1}}\cup L_{G_{2}}\),
   * \(\upsilon_{G_{12}}(e)\stackrel{{\mathit{def}}}{{=}}\langle[u_{1}]_{\sim},\ldots,[u_{k}]_{\sim}\rangle\) for every \(e\in E_{G_{i}}\), where \(\upsilon_{G_{i}}(e)=\langle u_{1},\ldots,u_{k}\rangle\),
   * \(\xi_{G_{12}}(s)\stackrel{{\mathit{def}}}{{=}}[\xi_{G_{i}}(s)]_{\sim}\) if \(s\in\tau_{i}\), for \(i=1,2\).

   The function \(\lambda G_{1}\lambda G_{2}\,.\,G_{12}\) preserves isomorphism and the composition operation \(\|^{\mathfrak{G}}\) is defined as its lifting from concrete graphs to graphs. Note that the composition operation is both commutative and associative.

Figure 2: Composition (a), Restriction (b) and Renaming (c) of Graphs

Figure 2 (a) shows the result of the composition of two graphs, whereas (b) and (c) show the result of applying restriction and renaming to this composition, respectively.
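The following Python sketch is an illustrative encoding of ours (it operates on concrete graphs with integer vertices and ignores the passage to isomorphism classes) of the constants and the three operations above.

```python
# A minimal sketch of concrete graphs with sources and the HR operations
# (our own encoding, for illustration only): vertices are integers, an edge
# is a pair (label, tuple of attached vertices), sources map labels to vertices.
from itertools import count

_fresh = count()                                   # global supply of fresh vertices

class Graph:
    def __init__(self, edges, sources, vertices=()):
        self.sources = dict(sources)               # source label -> vertex (one-to-one)
        self.edges = list(edges)                   # [(a, (v1, ..., vk)), ...]
        self.vertices = (set(vertices) | set(self.sources.values())
                         | {v for _, vs in self.edges for v in vs})

def zero(tau):                                     # the graph 0_tau: sources only
    return Graph([], {s: next(_fresh) for s in tau})

def edge(a, labels):                               # the graph a_(s1, ..., sk)
    xs = {s: next(_fresh) for s in labels}         # assumes pairwise distinct labels
    return Graph([(a, tuple(xs[s] for s in labels))], xs)

def compose(g1, g2):                               # g1 || g2
    ren = {v: next(_fresh) for v in g2.vertices}   # make g2 disjoint from g1, then
    for s, v in g2.sources.items():                # fuse equally-labeled sources
        if s in g1.sources:                        # (xi is one-to-one, so each fused
            ren[v] = g1.sources[s]                 # class has at most two elements)
    sources = {s: ren[v] for s, v in g2.sources.items()}
    sources.update(g1.sources)                     # fused sources coincide anyway
    edges = g1.edges + [(a, tuple(ren[v] for v in vs)) for a, vs in g2.edges]
    return Graph(edges, sources, g1.vertices | {ren[v] for v in g2.vertices})

def restrict(g, tau):                              # forget source labels outside tau
    return Graph(g.edges, {s: v for s, v in g.sources.items() if s in tau},
                 g.vertices)

def rename(g, alpha):                              # new s-source is old alpha(s)-source
    inv = {y: x for x, y in alpha.items()}         # alpha^{-1} on its finite support
    return Graph(g.edges, {inv.get(s, s): v for s, v in g.sources.items()},
                 g.vertices)

# example: fuse two binary edges at their shared s2-source
g = compose(edge("b", ("s1", "s2")), edge("c", ("s2", "s3")))
assert len(g.vertices) == 3 and len(g.edges) == 2
h = rename(restrict(g, {"s1", "s2"}), {"s1": "s2", "s2": "s1"})
assert set(h.sources) == {"s1", "s2"}              # sources swapped, s3 forgotten
```

Because fusion only ever identifies the two equally-labeled sources, the general quotient by \(\sim\) degenerates here to a simple renaming, which keeps the sketch short.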
For every finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) of source labels, we define \(\mathfrak{G}^{\tau}\) to be the subalgebra of the graph algebra \(\mathfrak{G}\), obtained by restricting its universe to the graphs \(\mathcal{G}^{\tau}\) and by restricting the signature \(\mathcal{F}_{\text{HR}}\) to the signature \(\mathcal{F}_{\text{HR}}^{\tau}\) consisting of:

1. the constants \(\mathbf{0}_{\tau^{\prime}}\), for \(\tau^{\prime}\subseteq\tau\),
2. the constants \(\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\), for \(a\in\mathbb{A}\) and \(s_{1},\ldots,s_{\#a}\in\tau\),
3. the function symbols \(\mathsf{restrict}_{\tau^{\prime}}\), for \(\tau^{\prime}\subseteq\tau\),
4. the function symbols \(\mathsf{rename}_{\alpha}\), for all \(\tau\)-permutations \(\alpha\), and
5. the binary function symbol \(\|\).

We denote by \(\mathcal{G}^{\tau}_{\text{rep}}\) the set of representable elements of \(\mathfrak{G}^{\tau}\), i.e., the graphs \(t^{\mathfrak{G}}\in\mathcal{G}^{\tau}\), where \(t\) is a ground \(\mathcal{F}^{\tau}_{\text{HR}}\)-term. Then, \(\mathfrak{G}^{\tau}_{\text{rep}}\) denotes the subalgebra of \(\mathfrak{G}^{\tau}\) restricted to its representable elements. We note that, while all elements of the graph algebra \(\mathfrak{G}\) are representable, i.e., \(\mathfrak{G}=\mathfrak{G}_{\text{rep}}\), each algebra \(\mathfrak{G}^{\tau}_{\text{rep}}\) is a proper subalgebra of \(\mathfrak{G}^{\tau}\).

### Trees

We introduce trees via a derived subalgebra of graphs over an alphabet \(\mathbb{B}\) of edge labels of arities one and two, ranged over by \(c\) and \(b\), where \(\#c=1\) and \(\#b=2\), respectively. Let \(\mathfrak{r},l\in\mathbb{S}\) be source labels, where \(\mathfrak{r}\) denotes the root of the tree and \(l\) is an auxiliary label. Let \(G\) be a graph of sort \(\{\mathfrak{r}\}\) and \(b\in\mathbb{B}\) be a binary edge label. Then, \(\mathsf{append}^{\mathfrak{G}}_{b}(G)\) denotes the graph \(G\) to which we add a fresh vertex that becomes the root of the extended graph and that is connected to the original root by an edge labeled by \(b\). Formally, we define \(\mathsf{append}^{\mathfrak{G}}_{b}(G)\stackrel{{\mathit{def}}}{{=}}\mathsf{rename}^{\mathfrak{G}}_{\mathfrak{r}\leftrightarrow l}(\mathsf{restrict}^{\mathfrak{G}}_{\{l\}}(\mathbf{b}^{\mathfrak{G}}_{(l,\mathfrak{r})}\parallel G))\), where \(\mathfrak{r}\leftrightarrow l\) is the \(\{\mathfrak{r},l\}\)-permutation that switches \(\mathfrak{r}\) with \(l\). For instance, Figure 3 depicts the appending (a) and composition (b) operations on trees.

Definition 2: Let \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B})\) be the signature consisting of the composition \(\|\), the constants \(\mathbf{c}\) and the unary function symbols \(\mathsf{append}_{b}\), for all edge labels \(c,b\in\mathbb{B}\), where \(\#c=1\) and \(\#b=2\). The _tree algebra_ \(\mathfrak{T}(\mathbb{B})\) has one sort \(\{\mathfrak{r}\}\) and interprets the function symbols \(\|\), \(\mathbf{c}\) and \(\mathsf{append}_{b}\) as \(\|^{\mathfrak{G}}\), \(\mathbf{c}^{\mathfrak{G}}_{(\mathfrak{r})}\) and \(\mathsf{append}^{\mathfrak{G}}_{b}\), for all \(c,b\in\mathbb{B}\), respectively. The universe of \(\mathfrak{T}(\mathbb{B})\) is the set of _trees_ \(\mathcal{T}(\mathbb{B})\), consisting of the values of the ground \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B})\)-terms in \(\mathfrak{G}\).

This definition of trees is more general than the classical ranked and ordered trees considered in the literature on tree automata [11, 17]. We introduce some standard terminology for trees.
The vertices of a tree are called _nodes_. For an edge \(e\in E_{T}\) with \(\#(L_{T}(e))=2\), we say that \(\upsilon_{T}(e)_{1}\) is the _parent_ of \(\upsilon_{T}(e)_{2}\) and \(\upsilon_{T}(e)_{2}\) is a _child_ of \(\upsilon_{T}(e)_{1}\). A node with no children is called a _leaf_. Since trees are representable elements of \(\mathfrak{T}(\mathbb{B})\), any leaf is attached to some unary-labeled edge. Moreover, the trees from \(\mathcal{T}(\mathbb{B})\) are _unordered_, i.e., the children of each node in a tree can be represented by a set, instead of a sequence, as is the case in the standard literature. A _descendant_ of a node \(v\in V_{T}\) is either \(v\) or a descendant of a child of \(v\). The _height_ of a tree is the maximal distance from the root to a leaf. The _rank_ of a node is the number of its children. A set of trees \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B})\) is _ranked_ if there is some \(k\geq 0\) such that the rank of each node in each tree \(T\in\mathcal{K}\) is at most \(k\), and _unranked_ otherwise.

Proposition 1: _Every tree of height \(n\geq 0\) is the value in \(\mathfrak{T}(\mathbb{B})\) of an \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B})\)-term of the form \(t=\big{(}\parallel_{i\in I}\mathbf{c}_{i}\big{)}\parallel\big{(}\parallel_{j\in J}\mathsf{append}_{b_{j}}(t_{j})\big{)}\), where \(I=\emptyset\) only if \(J\neq\emptyset\) and the \(t_{j}^{\mathfrak{T}(\mathbb{B})}\) are trees of height strictly less than \(n\), for all \(j\in J\). Moreover, all \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B})\)-terms that represent the same tree are equal up to the commutativity and associativity of composition._

Proof: By induction on the height \(n\geq 0\) of \(T\). For the base case \(n=0\), the tree consists of a single root node attached to one or more edges labeled with unary symbols \(c_{i}\in\mathbb{B}\), for \(i\in I\). Then there exists a term \(\parallel_{i\in I}\mathbf{c}_{i}\) that represents \(T\) and this term is unique modulo the commutativity and associativity of composition. For the induction step \(n\geq 1\), the root of \(T\) has children that are roots of subtrees \(T_{j}\) of height strictly less than \(n\), for \(j\in J\), where \(J\) is a nonempty set. By the inductive hypothesis, there exist terms \(t_{j}\) such that \(t_{j}^{\mathfrak{T}(\mathbb{B})}=T_{j}\), for all \(j\in J\). Moreover, these terms are unique modulo commutativity and associativity of the composition. Let \(c_{i}\in\mathbb{B}\), \(i\in I\), be the unary labels of the root of \(T\) and \(b_{j}\in\mathbb{B}\), \(j\in J\), be the binary labels of the edges to which the root of \(T\) is attached. We consider the term \(t\stackrel{{\mathit{def}}}{{=}}\left(\parallel_{i\in I}\mathbf{c}_{i}\right)\parallel\big{(}\parallel_{j\in J}\mathsf{append}_{b_{j}}(t_{j})\big{)}\). Then, \(t^{\mathfrak{T}(\mathbb{B})}=T\) and, moreover, any other term \(u\) such that \(u^{\mathfrak{T}(\mathbb{B})}=T\) differs from \(t\) only by a permutation of the \(c_{i}\) and the \(t_{j}\).
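The uniqueness modulo associativity-commutativity stated by Proposition 1 can be made operational: the following Python sketch (our own encoding, for illustration only) serializes an unordered tree into a canonical term by sorting the \(\parallel\)-composed factors at each node.

```python
# A sketch of Proposition 1 (our own encoding): a tree is represented as a pair
# (list of unary labels at the root, list of (binary label, subtree)); its
# canonical F_tree-term is obtained by sorting the ||-composed factors, which
# normalizes terms modulo commutativity and associativity of composition.

def canonical_term(tree):
    unary, children = tree
    factors = sorted(unary)                                  # the constants c_i
    factors += sorted(f"append_{b}({canonical_term(t)})"     # the append_{b_j}(t_j)
                      for b, t in children)
    return " || ".join(factors)

t1 = (["c"], [("b", (["c"], [])), ("b", (["d"], []))])
t2 = (["c"], [("b", (["d"], [])), ("b", (["c"], []))])       # children permuted
assert canonical_term(t1) == canonical_term(t2)              # same tree, same term
```

Two trees are equal exactly when their canonical terms coincide, which is the computational content of the "moreover" part of Proposition 1.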
### Reduced Trees

Graphs are represented by terms over the signature \(\mathcal{F}_{\mathsf{HR}}\). These terms can be viewed as trees whose nodes are labeled with function symbols from \(\mathcal{F}_{\mathsf{HR}}\), where the children of each node form an ordered sequence of length equal to the arity of the node label. However, the same graph can be represented by several non-isomorphic terms, due to the commutative and associative interpretation of the composition \(\parallel\) in the graph algebra \(\mathfrak{G}\). In order to have a unique representation of graphs as trees, Courcelle introduced _reduced trees_ [7, §3]. These are trees with edges labeled by symbols from \(\mathcal{F}_{\mathsf{HR}}\), obtained by merging all adjacent nodes labeled by \(\parallel\) in an \(\mathcal{F}_{\mathsf{HR}}\)-term. For self-containment reasons, we define reduced trees using our notation.

Figure 3: Append (a) and Composition (b) of Trees

In the rest of this paper, we fix the alphabet \(\mathbb{B}_{\mathsf{parse}}\) of tree edge labels and the signature \(\mathcal{F}_{\mathsf{parse}}\) to be the following sets, respectively:

1. the unary labels \(\underline{\mathbf{0}}_{\tau}\in\mathbb{B}_{\mathsf{parse}}\) and constants \(\mathbf{0}_{\tau}\in\mathcal{F}_{\mathsf{parse}}\), for all finite sets \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
2. the unary labels \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\in\mathbb{B}_{\mathsf{parse}}\) and constants \(\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\in\mathcal{F}_{\mathsf{parse}}\), for all \(a\in\mathbb{A}\) and \(s_{1},\ldots,s_{\#a}\in\mathbb{S}\),
3. the binary labels \(\underline{\mathsf{restrict}}_{\tau}\in\mathbb{B}_{\mathsf{parse}}\) and unary function symbols \(\mathsf{restrict}_{\tau}\in\mathcal{F}_{\mathsf{parse}}\), for all finite sets \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
4. the binary labels \(\underline{\mathsf{rename}}_{\alpha}\in\mathbb{B}_{\mathsf{parse}}\) and unary function symbols \(\mathsf{rename}_{\alpha}\in\mathcal{F}_{\mathsf{parse}}\), for all finite permutations \(\alpha:\mathbb{S}\to\mathbb{S}\),
5. the binary function symbol \(\|\in\mathcal{F}_{\mathsf{parse}}\).

The reduced trees are the elements of the \(\mathcal{F}_{\mathsf{parse}}\)-algebra \(\mathfrak{R}\), with universe \(\mathcal{T}(\mathbb{B}_{\mathsf{parse}})\), that interprets each constant \(\mathbf{c}\in\mathcal{F}_{\mathsf{parse}}\) as the tree \(\underline{\mathbf{c}}^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\), each unary function symbol \(f\in\mathcal{F}_{\mathsf{parse}}\) as \(\mathsf{append}_{\underline{f}}^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\), and \(\|\) as \(\|^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\). The function \(\mathbf{val}:\mathcal{T}(\mathbb{B}_{\mathsf{parse}})\to\mathcal{G}\) maps each reduced tree to the graph it represents; it is defined inductively, for each ground \(\mathcal{F}_{\mathsf{parse}}\)-term \(t\):

* if \(t=\mathbf{0}_{\tau}\) then \(\mathbf{val}(t^{\mathfrak{R}})\stackrel{{\mathit{def}}}{{=}}\mathbf{0}_{\tau}^{\mathfrak{G}}\),
* if \(t=\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\) then \(\mathbf{val}(t^{\mathfrak{R}})\stackrel{{\mathit{def}}}{{=}}\mathbf{a}_{(s_{1},\ldots,s_{\#a})}^{\mathfrak{G}}\),
* if \(t=\mathsf{restrict}_{\tau}(u)\) then \(\mathbf{val}(t^{\mathfrak{R}})\stackrel{{\mathit{def}}}{{=}}\mathsf{restrict}_{\tau}^{\mathfrak{G}}(\mathbf{val}(u^{\mathfrak{R}}))\),
* if \(t=\mathsf{rename}_{\alpha}(u)\) then \(\mathbf{val}(t^{\mathfrak{R}})\stackrel{{\mathit{def}}}{{=}}\mathsf{rename}_{\alpha}^{\mathfrak{G}}(\mathbf{val}(u^{\mathfrak{R}}))\),
* if \(t=t_{1}\parallel t_{2}\) then \(\mathbf{val}(t^{\mathfrak{R}})\stackrel{{\mathit{def}}}{{=}}\mathbf{val}(t_{1}^{\mathfrak{R}})\parallel^{\mathfrak{G}}\mathbf{val}(t_{2}^{\mathfrak{R}})\).

We note that \(\mathbf{val}\) is indeed a function, because the interpretation \(\parallel^{\mathfrak{G}}\) of the composition operation is associative and commutative, hence its value on a given tree does not depend on the particular order of the subterms composed via \(\parallel\) in the \(\mathcal{F}_{\mathsf{HR}}\)-term whose value that tree is. Moreover, because \(\mathfrak{R}\) is a representable algebra (since \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\) is representable) and each tree is uniquely represented by a term (Proposition 1), this function is unique. Henceforth, we refer to \(\mathbf{val}\) as the _canonical evaluation_. Figure 4 (right) shows the result of applying the canonical evaluation to a tree whose edges are labeled with symbols from \(\mathbb{B}_{\mathsf{parse}}\).
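As a sanity check on the inductive shape of \(\mathbf{val}\), the following Python sketch (our own encoding; it computes only the sort of \(\mathbf{val}(T)\), not the graph itself) propagates source labels bottom-up through a parse tree.

```python
# A minimal sketch of the inductive structure of val (our own encoding): a parse
# tree is (unary labels at the root, list of (binary label, subtree)); unary
# labels are ("zero", tau) or ("edge", a, (s1, ..., sk)); binary labels are
# ("restrict", tau) or ("rename", alpha), with alpha a permutation as a dict.
# Only the sort of val(T), i.e., its set of source labels, is computed here.

def sort_of_val(tree):
    unary, children = tree
    sort = set()
    for lab in unary:                          # constants contribute their sources
        sort |= set(lab[1]) if lab[0] == "zero" else set(lab[2])
    for b, subtree in children:                # one unary operation per child edge
        s = sort_of_val(subtree)
        if b[0] == "restrict":
            s &= set(b[1])                     # restrict_tau: intersect with tau
        else:
            inv = {y: x for x, y in b[1].items()}
            s = {inv.get(x, x) for x in s}     # rename_alpha: apply alpha^{-1}
        sort |= s                              # composition: union of the sorts
    return sort

t = ([("edge", "a", ("s1", "s2"))],
     [(("restrict", ("s1",)), ([("edge", "b", ("s1", "s3"))], []))])
assert sort_of_val(t) == {"s1", "s2"}
```

Each node of the parse tree contributes one composition of its constants with one unary operation per child, mirroring the term shape of Proposition 1.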
We prove below that recognizability of a set of graphs in a subalgebra \(\mathfrak{G}^{\tau}\), for a finite set \(\tau\) of source labels, is preserved under inverse canonical evaluations5:

Footnote 5: This is not a consequence of the closure of recognizable sets under inverse homomorphisms, because \(\mathbf{val}\) is not a homomorphism between \(\mathfrak{R}\) and \(\mathfrak{G}\). In fact, no such homomorphism exists, because \(\mathfrak{R}\) has one sort \(\{\mathfrak{r}\}\), whereas \(\mathfrak{G}\) has infinitely many sorts, i.e., the finite subsets of \(\mathbb{S}\).

Lemma 4: _Let \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) be a finite set of source labels. Then \(\mathbf{val}^{-1}(\mathcal{L})\) is recognizable in \(\mathfrak{R}\) if \(\mathcal{L}\) is recognizable in \(\mathfrak{G}^{\tau}\)._

Proof: Let \(\mathfrak{B}=(\{\mathcal{B}^{\tau^{\prime}}\}_{\tau^{\prime}\subseteq\tau},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}_{\mathsf{HR}}})\) be a locally finite algebra and \(h:\mathcal{G}^{\tau}\to\mathcal{B}\) be a homomorphism between \(\mathfrak{G}^{\tau}\) and \(\mathfrak{B}\), such that \(\mathcal{L}=h^{-1}(\mathcal{C})\), for a set \(\mathcal{C}\subseteq\mathcal{B}\). Let \(\mathfrak{B}^{\prime}=(\{\mathcal{B}\},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}_{\mathsf{HR}}})\) be the algebra with the single sort \(\{\mathfrak{r}\}\) and finite \(\mathfrak{r}\)-universe consisting of the union of all \(\mathcal{B}^{\tau^{\prime}}\), for \(\tau^{\prime}\subseteq\tau\). Then, \(h\circ\mathbf{val}\) is a homomorphism between \(\mathfrak{R}\) and \(\mathfrak{B}^{\prime}\) and, moreover, \(\mathbf{val}^{-1}(\mathcal{L})=(h\circ\mathbf{val})^{-1}(\mathcal{C})\).

### Graph Grammars

We define context-free sets of graphs using graph grammars. Let \(\mathbb{U}\) be a set of _nonterminals_, ranged over by \(U,V\), etc. A _graph grammar_ \(\Gamma\) is a finite set of _rules_ of the form \(U\to t\), where \(U\) is a nonterminal and \(t\) is an \(\mathcal{F}_{\mathsf{HR}}\)-term with variables from \(\mathbb{U}\). A _solution_ of \(\Gamma\) is a mapping \(\mathcal{S}:\mathbb{U}\to\operatorname{pow}(\mathcal{G})\) such that \(t^{\mathcal{S}}\subseteq\mathcal{S}(U)\) for each rule \(U\to t\in\Gamma\), where \(t^{\mathcal{S}}\) denotes the evaluation of the graph term with regard to the sets \(\mathcal{S}(V)\), for each variable \(V\in\mathbb{U}\). Since the evaluation of terms with set variables is monotonic with regard to set containment, a least solution exists and is unique. We denote by \(\mathcal{L}_{U}(\Gamma)\) the component corresponding to \(U\) within the least solution of \(\Gamma\). A set of graphs \(\mathcal{L}\) is _context-free_ iff \(\mathcal{L}=\mathcal{L}_{U}(\Gamma)\), for some \(U\in\mathbb{U}\) and some graph grammar \(\Gamma\). An important example is:

Proposition 2: _The set of graphs \(\mathcal{G}^{\tau}_{\mathsf{rep}}\) is context-free, for any \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\)._

Figure 4: Canonical Evaluation of Parse Trees

Proof: We fix some \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\). We consider the grammar \(\Gamma\) with a single nonterminal \(U\) and rules \(U\to\mathbf{0}_{\tau^{\prime}}\), for every set \(\tau^{\prime}\subseteq\tau\), \(U\to\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\), for every \(a\in\mathbb{A}\) and \(s_{1},\ldots,s_{\#a}\in\tau\), \(U\to\mathsf{restrict}_{\tau^{\prime}}(U)\), for every \(\tau^{\prime}\subseteq\tau\), \(U\to\mathsf{rename}_{\alpha}(U)\), for every \(\tau\)-permutation \(\alpha\), and \(U\to U\parallel U\). As all rules of \(\Gamma\) only use \(\mathcal{F}_{\mathsf{HR}}^{\tau}\)-operations, we clearly have \(\mathcal{L}_{U}(\Gamma)\subseteq\mathcal{G}_{\mathsf{rep}}^{\tau}\). On the other hand, every graph from \(\mathcal{G}_{\mathsf{rep}}^{\tau}\) is the value of a ground \(\mathcal{F}_{\mathsf{HR}}^{\tau}\)-term, which can be derived by the rules of this grammar. Hence, \(\mathcal{L}_{U}(\Gamma)\supseteq\mathcal{G}_{\mathsf{rep}}^{\tau}\).

An important property of context-free sets of graphs is that they are images of recognizable sets of trees under the canonical evaluation.
We introduce _regular tree grammars_ as a special case of graph grammars, whose variables are split into two sets \(\mathcal{U}\) and \(\mathcal{V}\), ranged over by \(U,U_{1},U_{2}\) and \(V,V_{1},V_{2}\), respectively, having rules of one of the following forms:

* \(V\to\underline{\mathbf{0}}_{\tau}\), for some finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
* \(V\to\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\), for some \(a\in\mathbb{A}\) and some \(s_{1},\ldots,s_{\#a}\in\mathbb{S}\),
* \(V\to\mathsf{append}_{\underline{\mathsf{restrict}}_{\tau}}(U)\), for some finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\),
* \(V\to\mathsf{append}_{\underline{\mathsf{rename}}_{\alpha}}(U)\), for some finite permutation \(\alpha:\mathbb{S}\to\mathbb{S}\),
* \(U\to V_{1}\parallel\ldots\parallel V_{n}\), for some integer \(n\geq 2\).

Notice the alternation between the \(\mathcal{U}\) and \(\mathcal{V}\) variables, that occur on opposite sides of the rules. This particular form of the rules ensures that each (context-free) set of trees in the least solution of the grammar is also recognizable (Lemma 5). It is easy to see that a grammar with unrestricted rules may produce non-recognizable sets. For instance, the grammar \(V\to V\parallel\mathbf{c}_{(s)}\parallel\mathbf{d}_{(s)},\;V\to\mathbf{0}_{\{s\}}\) produces the set of graphs with one vertex and equal numbers of \(c\)- and \(d\)-labeled edges, which is provably not recognizable.

Lemma 5: _Let \(\Gamma\) be a regular tree grammar. Then, for each nonterminal \(W\) that occurs in \(\Gamma\), the set \(\mathcal{L}_{W}(\Gamma)\) is recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\)._

Proof: Let \(\mathcal{U}\) and \(\mathcal{V}\) be the partition of the nonterminals of \(\Gamma\), according to the definition of regular tree grammars. Let \(m\stackrel{{\mathit{def}}}{{=}}\max\{k\mid U\to V_{1}\parallel\ldots\parallel V_{k}\in\Gamma\}\). We define the locally finite algebra \(\mathfrak{B}=(\{\mathcal{B}\},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})})\), where \(\mathcal{B}\) is the set of multisets \(\llbracket\mathcal{V}_{1},\ldots,\mathcal{V}_{k}\rrbracket\) with \(k\leq m\) and \(\mathcal{V}_{i}\subseteq\mathcal{V}\), for all \(i\in[1,k]\). The function symbols in \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})\) are interpreted in \(\mathfrak{B}\) as follows:

* \(\mathbf{c}^{\mathfrak{B}}\stackrel{{\mathit{def}}}{{=}}\llbracket\{V\in\mathcal{V}\mid V\to\mathbf{c}\in\Gamma\}\rrbracket\), for all constants \(\mathbf{c}\in\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})\),
* \(\mathsf{append}_{b}^{\mathfrak{B}}(\llbracket\mathcal{V}_{1},\ldots,\mathcal{V}_{k}\rrbracket)\stackrel{{\mathit{def}}}{{=}}\llbracket\{V\in\mathcal{V}\mid V\to\mathsf{append}_{b}(U),\;U\to V_{1}\parallel\ldots\parallel V_{k}\in\Gamma\mbox{ with }V_{i}\in\mathcal{V}_{i}\}\rrbracket\), and
* \(\llbracket\mathcal{V}_{1},\ldots,\mathcal{V}_{k}\rrbracket\parallel^{\mathfrak{B}}\llbracket\mathcal{V}^{\prime}_{1},\ldots,\mathcal{V}^{\prime}_{\ell}\rrbracket\stackrel{{\mathit{def}}}{{=}}\left\{\begin{array}{ll}\llbracket\mathcal{V}_{1},\ldots,\mathcal{V}_{k},\mathcal{V}^{\prime}_{1},\ldots,\mathcal{V}^{\prime}_{\ell}\rrbracket,&\mbox{if }k+\ell\leq m,\\ \llbracket\emptyset\rrbracket,&\mbox{otherwise.}\end{array}\right.\)

Let \(h:\mathcal{T}(\mathbb{B}_{\mathsf{parse}})\to\mathcal{B}\) be the unique homomorphism between \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\) and \(\mathfrak{B}\), defined inductively on the structure of \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})\)-terms.
Given \(W\in\mathcal{U}\cup\mathcal{V}\), the set \(\mathcal{C}_{W}\subseteq\mathcal{B}\) is defined as follows:

\[\mathcal{C}_{W}\stackrel{{\mathit{def}}}{{=}}\left\{\begin{array}{ll}\{\llbracket\mathcal{V}_{1},\ldots,\mathcal{V}_{k}\rrbracket\in\mathcal{B}\mid W\to V_{1}\parallel\ldots\parallel V_{k}\in\Gamma\mbox{ with }V_{i}\in\mathcal{V}_{i}\},&\mbox{if }W\in\mathcal{U},\\ \{\llbracket\mathcal{W}\rrbracket\in\mathcal{B}\mid W\in\mathcal{W}\},&\mbox{if }W\in\mathcal{V}.\end{array}\right.\]

We are left with proving that \(\mathcal{L}_{W}(\Gamma)=h^{-1}(\mathcal{C}_{W})\), which is a routine check.

Lemma 6: _Let \(\Gamma\) be a graph grammar and \(U\) be a nonterminal that occurs in \(\Gamma\). Then, there exists a ranked set of trees \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B}_{\mathsf{parse}})\) recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\), such that \(\mathcal{L}_{U}(\Gamma)=\mathbf{val}(\mathcal{K})\)._

Proof: Assume w.l.o.g. that the rules in \(\Gamma\) have the form \(U\to\mathbf{c}\), \(U\to b(U)\) and \(U\to U_{1}\parallel\ldots\parallel U_{k}\), for some constant \(\mathbf{c}\in\{\mathbf{0}_{\tau}\mid\tau\subseteq_{\mathit{fin}}\mathbb{S}\}\cup\{\mathbf{a}_{(s_{1},\ldots,s_{\#a})}\mid a\in\mathbb{A},\ s_{1},\ldots,s_{\#a}\in\mathbb{S}\}\) and unary symbol \(b\in\{\mathsf{restrict}_{\tau}\mid\tau\subseteq_{\mathit{fin}}\mathbb{S}\}\cup\{\mathsf{rename}_{\alpha}\mid\alpha:\mathbb{S}\rightarrow\mathbb{S}\mbox{ finite permutation}\}\). It is manifest that every graph grammar can be put into this form by introducing additional nonterminals and rules. Let \(U_{1},\ldots,U_{n}\) be the nonterminals of \(\Gamma\). We define the regular tree grammar \(\Gamma^{T}\) with the nonterminals \(U_{1},\ldots,U_{n},V_{1},\ldots,V_{n}\) and the following rules:

* \(V_{i}\rightarrow\underline{\mathbf{c}}\), for each rule \(U_{i}\to\mathbf{c}\in\Gamma\),
* \(V_{i}\rightarrow\mathsf{append}_{\underline{b}}(U_{j})\), for each rule \(U_{i}\to b(U_{j})\in\Gamma\),
* \(U_{i}\to V_{i_{1}}\parallel\ldots\parallel V_{i_{k}}\), for each rule \(U_{i}\to U_{i_{1}}\parallel\ldots\parallel U_{i_{k}}\in\Gamma\).

It is easy to see that each set \(\mathcal{L}_{U_{i}}(\Gamma^{T})\) or \(\mathcal{L}_{V_{i}}(\Gamma^{T})\) is ranked, because of the alternation of the \(U_{i}\) with the \(V_{j}\) nonterminals between the left- and right-hand sides of the rules in \(\Gamma^{T}\). By Lemma 5, the sets \(\mathcal{L}_{U_{i}}(\Gamma^{T})\) and \(\mathcal{L}_{V_{i}}(\Gamma^{T})\) are recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\). Hence, it is sufficient to prove that \(\mathcal{L}_{U_{i}}(\Gamma)=\mathbf{val}(\mathcal{L}_{U_{i}}(\Gamma^{T})\cup\mathcal{L}_{V_{i}}(\Gamma^{T}))\), for all \(i\in[1,n]\).

"\(\subseteq\)" Let \(G\in\mathcal{L}_{U_{i}}(\Gamma)\) be a graph. Because \(\mathcal{L}_{U_{i}}(\Gamma)\) is a component of the least solution of \(\Gamma\), the graph \(G\) is the value of a ground \(\mathcal{F}_{\mathsf{HR}}\)-term \(t\), i.e., \(G=t^{\mathfrak{G}}\), by a standard least fixpoint argument. The ground term \(t\) is produced by a complete derivation of \(\Gamma\), that is mirrored by a complete derivation of \(\Gamma^{T}\). The result of the \(\Gamma^{T}\)-derivation is a ground \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})\)-term \(u\). Let \(T\stackrel{{\mathit{def}}}{{=}}u^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\) be the tree that is the value of \(u\) in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\). It is straightforward to show that \(G=\mathbf{val}(T)\), by induction on the structure of \(t\).
Moreover, we have \(T\in\mathcal{L}_{U_{i}}(\Gamma^{T})\cup\mathcal{L}_{V_{i}}(\Gamma^{T})\), because the start variable of the \(\Gamma^{T}\)-derivation is either \(U_{i}\) or \(V_{i}\), depending on the type of the first rule applied in the \(\Gamma\)-derivation.

"\(\supseteq\)" Let \(T\in\mathcal{L}_{U_{i}}(\Gamma^{T})\) be a tree (the case \(T\in\mathcal{L}_{V_{i}}(\Gamma^{T})\) is similar and left to the reader). Then, there exists a complete derivation of \(\Gamma^{T}\) that produces a ground \(\mathcal{F}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}})\)-term \(t\), such that \(T=t^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\). By the definition of \(\Gamma^{T}\), this derivation corresponds to a complete derivation of \(\Gamma\), that produces an \(\mathcal{F}_{\mathsf{HR}}\)-term \(u\), which differs from \(t\) only by the change of each constant \(\underline{\mathbf{c}}\) into \(\mathbf{c}\) and each unary function symbol \(\mathsf{append}_{\underline{b}}\) into \(b\). Then, we have \(G\stackrel{{\mathit{def}}}{{=}}u^{\mathfrak{G}}=\mathbf{val}(T)\in\mathcal{L}_{U_{i}}(\Gamma)\).

We further state the following important closure property of context-free sets of graphs, also known as the Filtering Theorem (see Theorem 4.53 in [9]):

Theorem 1: _Let \(\mathcal{L}\) be a context-free set of graphs and let \(\mathcal{K}\) be a recognizable set of graphs. Then, \(\mathcal{L}\cap\mathcal{K}\) is context-free. Moreover, the grammar for \(\mathcal{L}\cap\mathcal{K}\) uses the same sources \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) as the grammar for \(\mathcal{L}\)._

Proof: Because \(\mathcal{L}\) is context-free, there is a graph grammar \(\Gamma\) over some set of nonterminals \(\mathbb{U}\) such that \(\mathcal{L}=\mathcal{L}_{V}(\Gamma)\), for some \(V\in\mathbb{U}\). Let \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) be the set of all sources that appear in the rules of \(\Gamma\). Because \(\mathcal{K}\) is recognizable, there is a homomorphism \(h\) between \(\mathfrak{G}\) and some locally finite algebra \(\mathfrak{B}\) such that \(\mathcal{K}=h^{-1}(\mathcal{C})\), for some set \(\mathcal{C}\subseteq\mathcal{B}\). We now consider the finite subalgebra \(\mathfrak{B}^{\prime}\) of \(\mathfrak{B}\) obtained by restricting \(\mathfrak{B}\) to the sorts \(\tau^{\prime}\subseteq\tau\). We then define the set of nonterminals \(\mathbb{U}^{\prime}\stackrel{{\mathit{def}}}{{=}}\mathbb{U}\times\mathcal{B}^{\prime}\) as the product of the nonterminals \(\mathbb{U}\) and the universe of \(\mathfrak{B}^{\prime}\). We finally define a grammar \(\Gamma^{\prime}\) that contains a rule \((U,b)\to t^{\prime}\), where \(t^{\prime}\) is obtained from \(t\) by replacing its variables \(U_{1},\ldots,U_{n}\) with \((U_{1},b_{1}),\ldots,(U_{n},b_{n})\), whenever \(U\to t\) is a rule of \(\Gamma\) and \(t^{\mathfrak{B}}(b_{1},\ldots,b_{n})=b\). It is then easy to verify that \(\mathcal{L}\cap\mathcal{K}=\bigcup_{b\in\mathcal{C}}\mathcal{L}_{(V,b)}(\Gamma^{\prime})\). We then add a fresh nonterminal \(W\) and the rules \(W\rightarrow(V,b)\), for all \(b\in\mathcal{C}\), to the grammar \(\Gamma^{\prime}\), which gives the desired grammar for \(\mathcal{L}\cap\mathcal{K}\). Moreover, we observe that the obtained grammar uses the same set of sources \(\tau\) as \(\Gamma\).
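The product construction in the proof above is effective because the reachable values in \(\mathfrak{B}^{\prime}\) can be computed by a Kleene iteration. The Python sketch below is a toy instance of ours (with \(\mathfrak{B}^{\prime}=\mathbb{Z}/2\mathbb{Z}\) tracking the parity of \(c\)-labeled edges; the grammar and all names are invented for illustration).

```python
# A toy instance (our own) of the fixpoint computation behind the Filtering
# Theorem: for each nonterminal we iterate the rules over the finite algebra
# until the sets of reachable values stabilize; the pairs (nonterminal, value)
# are then the nonterminals of the product grammar Gamma'.

# toy rules, interpreted in B = Z/2Z (parity of c-labeled edges):
#   U -> 0_{s}          (no c-edge)
#   U -> U || c_(s)     (one more c-edge)
#   U -> U || U         (parities add up)
rules = [
    ("U", lambda env: {0}),
    ("U", lambda env: {(b + 1) % 2 for b in env["U"]}),
    ("U", lambda env: {(b1 + b2) % 2 for b1 in env["U"] for b2 in env["U"]}),
]

env = {"U": set()}                 # Kleene iteration starts from the empty sets
changed = True
while changed:                     # terminates, since the algebra B is finite
    changed = False
    for lhs, rhs in rules:
        new = rhs(env) - env[lhs]
        if new:
            env[lhs] |= new
            changed = True

assert env["U"] == {0, 1}          # both parities reachable; the grammar filtered
                                   # by "evenly many c-edges" keeps the pair (U, 0)
```

Termination is guaranteed because the iteration is monotone over the finite powerset of the universe of \(\mathfrak{B}^{\prime}\).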
### Tree Decompositions

We define the tree decomposition and treewidth of a graph, using trees over an alphabet of edge labels that consists of a single binary element, denoted as \(\mathsf{parent}\).

Definition 3: A _tree decomposition_ of a concrete graph \(G\) is a pair \((T,\beta)\), where \(T\in\mathcal{T}(\mathsf{parent})\) is a tree and \(\beta:V_{T}\to\operatorname{pow}(V_{G})\) maps each node of \(T\) into a set of vertices of \(G\), called a _bag_, such that:

1. for each edge \(e\in E_{G}\) there exists a node \(n\in V_{T}\), such that \(\upsilon_{G}(e)_{i}\in\beta(n)\), for all \(1\leq i\leq\#L_{G}(e)\), and
2. for each vertex \(v\in V_{G}\), the set of nodes \(B_{T}(v)\stackrel{{\mathit{def}}}{{=}}\{n\in V_{T}\mid v\in\beta(n)\}\) is nonempty and connected in \(T\).

The _width_ of a tree decomposition \((T,\beta)\) is \(\max_{n\in V_{T}}\operatorname{card}(\beta(n))-1\) and the _treewidth_ of a graph \(G\), denoted \(\operatorname{twd}(G)\), is the minimal width of a tree decomposition of \(G\). A set of graphs has _bounded treewidth_ if the treewidth of its elements is bounded by a constant. Assuming basic acquaintance with the notion of grid and the fact that an \(n\times n\) square grid has treewidth \(n\) [1], one notices that an \(n\times n\) square grid with no sources belongs to \(\mathcal{G}^{\tau}\) but not to \(\mathcal{G}^{\tau}_{\mathsf{rep}}\), for any \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) such that \(\operatorname{card}(\tau)<n\).

## 3 Counting Monadic Second Order Logic (\(\mathsf{CMS}\))

A _relational signature_ \(\mathbb{R}\) is a finite set of _relation symbols_ ranged over by \(r,r_{1},r_{2}\), of arity \(\#r\geq 0\). An _\(\mathbb{R}\)-structure_ is a pair \(\mathsf{S}=(U,\sigma)\), where \(U\) is a finite set called the _universe_ and \(\sigma:\mathbb{R}\to\operatorname{pow}(U^{+})\) maps each relation symbol \(r\) into a subset of \(U^{\#r}\). The set of \(\mathbb{R}\)-structures is denoted by \(\mathfrak{S}(\mathbb{R})\). The _Counting Monadic Second Order Logic_ (\(\mathsf{CMS}\)) is the set of formulae written using a set \(\mathcal{V}=\{x,y,\ldots\}\) of _first-order variables_, a set \(\mathcal{X}=\{X,Y,\ldots\}\) of _second-order variables_ and relation symbols \(r\in\mathbb{R}\), according to the following syntax:

\[\psi:=x=y\mid r(x_{1},\ldots,x_{\#r})\mid X(x)\mid\operatorname{card}_{q,p}(X)\mid\neg\psi\mid\psi\wedge\psi\mid\exists x\,.\;\psi\mid\exists X\,.\;\psi\]

where \(p,q\in\mathbb{N}\) are constants such that \(p\in[0,q-1]\). By \(\mathsf{MS}\) we denote the subset of \(\mathsf{CMS}\) consisting of the formulae that do not contain atomic subformulae of the form \(\operatorname{card}_{q,p}(X)\), also called _cardinality constraints_. A variable is _free_ in a formula \(\phi\) if it has an occurrence that is not in the scope of a quantifier. A _sentence_ is a formula with no free variables. The semantics of \(\mathsf{CMS}\) is given by a satisfaction relation \((U,\sigma)\models^{\mathsf{s}}\psi\), where the store \(\mathsf{s}:\mathcal{V}\cup\mathcal{X}\to U\cup\operatorname{pow}(U)\) maps each variable \(x\in\mathcal{V}\) to an element of the universe and each variable \(X\in\mathcal{X}\) to a finite subset of \(U\). The satisfaction relation is defined inductively on the structure of formulae: \((U,\sigma)\models^{\mathsf{s}}r(x_{1},\ldots,x_{k})\) iff \(\langle\mathsf{s}(x_{1}),\ldots,\mathsf{s}(x_{k})\rangle\in\sigma(r)\), \((U,\sigma)\models^{\mathsf{s}}X(x)\) iff \(\mathsf{s}(x)\in\mathsf{s}(X)\) and \((U,\sigma)\models^{\mathsf{s}}\operatorname{card}_{q,p}(X)\) iff \(\operatorname{card}(\mathsf{s}(X))=kq+p\), for some \(k\in\mathbb{N}\). The semantics of equality, negation, conjunction and quantification is standard and omitted for brevity. If \(\phi\) is a sentence, the satisfaction relation does not depend on the store and we write \((U,\sigma)\models\phi\) instead of \((U,\sigma)\models^{\mathsf{s}}\phi\).
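Since the structures considered here are finite, the \(\mathsf{CMS}\) semantics can be checked by brute force. The Python sketch below (our own illustration; the example structure and relation names are invented) enumerates all interpretations of a second-order variable and tests a cardinality constraint.

```python
# A brute-force sketch of the CMS semantics (our own encoding): second-order
# quantifiers range over all subsets of the finite universe, and card_{q,p}(X)
# holds iff card(X) = k*q + p for some natural k.
from itertools import chain, combinations

def subsets(universe):
    xs = sorted(universe)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def card_qp(X, q, p):
    return len(X) % q == p

# example sentence: "exists X . card_{2,0}(X) and forall x . r(x) -> X(x)"
U = {1, 2, 3, 4}
sigma_r = {(1,), (3,)}                        # interpretation of a unary symbol r
holds = any(card_qp(X, 2, 0) and all(v in X for (v,) in sigma_r)
            for X in map(set, subsets(U)))
assert holds                                  # e.g. the store s(X) = {1, 3} works
```

The exponential enumeration is, of course, only a semantic reading; the results below never evaluate formulae this way.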
Two structures are _isomorphic_ iff they differ only by a renaming of their elements (a formal definition is given in [12, §A3]). It is known that the satisfaction relation of \(\mathsf{CMS}\) does not distinguish between isomorphic structures.

### Graphs as Structures

In order to describe sets of graphs using \(\mathsf{CMS}\), we need to encode graphs as structures. To this end, we consider the alphabet \(\mathbb{A}\) of edge labels to be finite. Both the alphabet \(\mathbb{A}\) of edge labels and the infinite set \(\mathbb{S}\) of source labels are considered to be fixed in the rest of this paper. Given a finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) of source labels, we define the relational signature \(\mathbb{R}^{\tau}_{\mathsf{graph}}\stackrel{{\mathit{def}}}{{=}}\{r_{a}\mid a\in\mathbb{A}\}\cup\{r_{s}\mid s\in\tau\}\), whose relation symbols have the arities \(\#r_{a}\stackrel{{\mathit{def}}}{{=}}\#a+1\), for all \(a\in\mathbb{A}\), and \(\#r_{s}\stackrel{{\mathit{def}}}{{=}}1\), for all \(s\in\tau\). Note that \(\mathbb{R}^{\tau}_{\mathsf{graph}}\) is finite because both \(\mathbb{A}\) and \(\tau\) are finite. The _encoding_ of a concrete graph \(G=\langle V_{G},E_{G},L_{G},\upsilon_{G},\xi_{G}\rangle\in\mathcal{G}^{\tau}\) is the \(\mathbb{R}^{\tau}_{\mathsf{graph}}\)-structure \(\|G\|=(V_{G}\cup E_{G},\sigma_{G})\), where:

* \(\sigma_{G}(r_{a})\stackrel{{\mathit{def}}}{{=}}\{(e,v_{1},\ldots,v_{m})\mid e\in E_{G},\ L_{G}(e)=a,\ \upsilon_{G}(e)=(v_{1},\ldots,v_{m})\}\), for all \(a\in\mathbb{A}\),
* \(\sigma_{G}(r_{s})\stackrel{{\mathit{def}}}{{=}}\{\xi_{G}(s)\}\), for all \(s\in\tau\).

Since a \(\mathsf{CMS}\) sentence \(\phi\) cannot distinguish between isomorphic structures, any set \(\{\mathsf{S}\mid\mathsf{S}\models\phi\}\) is a union of equivalence classes of isomorphism, hence defines a set of graphs. A set of graphs \(\mathcal{L}\subseteq\mathcal{G}^{\tau}\) is _\((\mathsf{C})\mathsf{MS}\)-definable_ if there exists a \((\mathsf{C})\mathsf{MS}\) sentence \(\phi\) over the relational signature \(\mathbb{R}^{\tau}_{\mathsf{graph}}\) such that \(\|\mathcal{L}\|=\{\mathsf{S}\mid\mathsf{S}\models\phi\}\). Note that a set of graphs that is not included in \(\mathcal{G}^{\tau}\), for any finite \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), is not definable, because a \(\mathsf{CMS}\) formula can only speak of finitely many relation symbols. We recall the following result of Courcelle [6, Theorem 4.4]:

Theorem 3.1 ([6]): _Any \(\mathsf{CMS}\)-definable set of graphs is recognizable in \(\mathfrak{G}\)._

### Trees as Structures

Since trees are graphs, the encoding of trees as structures is no different from that of graphs. As before, we consider a finite set \(\mathbb{B}\) of edge labels and define the relational signature \(\mathbb{R}_{\mathsf{tree}}(\mathbb{B})\stackrel{{\mathit{def}}}{{=}}\{r_{b}\mid b\in\mathbb{B}\}\cup\{r_{\mathfrak{r}}\}\), where \(\mathfrak{r}\) is the singleton source label associated with the root. For ranked sets of trees, \(\mathsf{MS}\)-definability equals recognizability:

Theorem 3.2 ([17]): _A ranked set of trees \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B})\) is recognizable in \(\mathfrak{T}(\mathbb{B})\) iff \(\mathcal{K}\) is \(\mathsf{MS}\)-definable, for any finite alphabet \(\mathbb{B}\)._

This result was later extended to unranked sets of trees, for which a strictly more expressive logic is required.
It was established in [6, Proposition 6.2] that \(\mathsf{CMS}\) is strictly more expressive than \(\mathsf{MS}\).

Theorem 3.3 ([6]): _A set of trees \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B})\) is recognizable in \(\mathfrak{T}(\mathbb{B})\) iff it is \(\mathsf{CMS}\)-definable, for any finite alphabet \(\mathbb{B}\)._

As a consequence, we obtain the equivalence of the recognizability of a set of trees in the graph and tree algebras, which is a result of independent interest:

Corollary 1: _A set of trees is recognizable in \(\mathfrak{T}(\mathbb{B})\) iff it is recognizable in \(\mathfrak{G}\), for any finite alphabet \(\mathbb{B}\subseteq\mathbb{A}\)._

Proof: "\(\Rightarrow\)" By Theorem 3.3, any set of trees is recognizable in \(\mathfrak{T}(\mathbb{B})\) only if it is \(\mathsf{CMS}\)-definable. By Theorem 3.1, any \(\mathsf{CMS}\)-definable set of graphs is recognizable in \(\mathfrak{G}\). "\(\Leftarrow\)" By Lemmas 1 and 2, because \(\mathfrak{T}(\mathbb{B})\) is a derived subalgebra of \(\mathfrak{G}\).

### Tree Decompositions as Structures

We shall also use structures to encode triples \((G,T,\beta)\), where \(G\in\mathcal{G}^{\tau}\) is a concrete graph and \((T,\beta)\) is a tree decomposition of \(G\) (we recall that \(T\) is a tree without unary labels and with the single binary edge label \(\mathsf{parent}\)). To this end, we use the relational signature \(\mathbb{R}^{\tau}_{\mathsf{decomp}}\stackrel{{\mathit{def}}}{{=}}\mathbb{R}^{\tau}_{\mathsf{graph}}\cup\{\mathsf{node},\mathsf{parent},\mathsf{bag}\}\), where \(\mathsf{node}\) is a unary relation symbol and \(\mathsf{parent},\mathsf{bag}\) are binary relation symbols. We encode the triple by the \(\mathbb{R}^{\tau}_{\mathsf{decomp}}\)-structure \(\|(G,T,\beta)\|=(V_{G}\cup E_{G}\cup V_{T},\sigma)\), where \(G\) is encoded as in §3.1 and the tree decomposition \((T,\beta)\) is encoded using the additional relation symbols, interpreted as \(\sigma(\mathsf{node})\stackrel{{\mathit{def}}}{{=}}V_{T}\), \(\sigma(\mathsf{parent})\stackrel{{\mathit{def}}}{{=}}\{(v,w)\in V_{T}\times V_{T}\mid\mbox{there is a parent-labeled edge }e\in E_{T}\mbox{ with }\upsilon_{T}(e)=(v,w)\}\) and \(\sigma(\mathsf{bag})\stackrel{{\mathit{def}}}{{=}}\{(v,n)\in V_{G}\times V_{T}\mid v\in\beta(n)\}\).

## 4 Definable Transductions

Given relational signatures \(\mathbb{R}\) and \(\mathbb{R}^{\prime}\), an _\((\mathbb{R},\mathbb{R}^{\prime})\)-transduction_ is a relation \(\delta\subseteq\mathfrak{S}(\mathbb{R})\times\mathfrak{S}(\mathbb{R}^{\prime})\). \(\delta\) is _isomorphism-preserving_ if, for each pair \((\mathsf{S}_{1},\mathsf{S}_{2})\in\delta\) and each \(\mathbb{R}\)-structure \(\mathsf{S}^{\prime}_{1}\) isomorphic to \(\mathsf{S}_{1}\), there exists an \(\mathbb{R}^{\prime}\)-structure \(\mathsf{S}^{\prime}_{2}\) isomorphic to \(\mathsf{S}_{2}\), such that \((\mathsf{S}^{\prime}_{1},\mathsf{S}^{\prime}_{2})\in\delta\). We consider only isomorphism-preserving transductions in this paper. A transduction first makes \(k\geq 1\) copies of the original structure, called _layers_, then defines the new structure within the \(k\)-times disjoint union of these copies. Let \(\mathbb{R}\otimes k\stackrel{{\mathit{def}}}{{=}}\{(r,i_{1},\ldots,i_{\#r})\mid r\in\mathbb{R},\ i_{1},\ldots,i_{\#r}\in[1,k]\}\) be a relational signature, where \(\#(r,i_{1},\ldots,i_{\#r})\stackrel{{\mathit{def}}}{{=}}\#r\).
That is, a relation symbol \((r,i_{1},\ldots,i_{\#r})\) is interpreted over the layers \(i_{1},\ldots,i_{\#r}\) in the \(k\)-times disjoint union of the original structure. The outcome of the transduction depends on the valuation of zero or more _parameters_ \(X_{1},\ldots,X_{n}\in\mathcal{X}\) as subsets of the universe of the input structure. In the following, we consider \((\mathbb{R},\mathbb{R}^{\prime})\)-transductions defined by _transduction schemes_, i.e., finite tuples of formulae \(\Theta=\langle\varphi,\{\psi_{i}\}_{i\in[1,k]},\{\theta_{\mathsf{s}}\}_{\mathsf{s}\in\mathbb{R}^{\prime}\otimes k}\rangle\), where:

* \(\varphi\) has free variables \(X_{1},\ldots,X_{n}\) and defines the domain of the transduction,
* each \(\psi_{i}\), for \(i\in[1,k]\), has free variables \(x_{1},X_{1},\ldots,X_{n}\) and defines the universe of the \(i\)-th layer of the result,
* each \(\theta_{(r^{\prime},i_{1},\ldots,i_{\#r^{\prime}})}\), for \((r^{\prime},i_{1},\ldots,i_{\#r^{\prime}})\in\mathbb{R}^{\prime}\otimes k\), has free variables \(x_{1},\ldots,x_{\#r^{\prime}},X_{1},\ldots,X_{n}\) and defines the interpretation of \((r^{\prime},i_{1},\ldots,i_{\#r^{\prime}})\) in the result.

For a structure \((\mathsf{U},\sigma)\in\mathfrak{S}(\mathbb{R})\) and a store \(\mathsf{s}:\mathcal{V}\cup\mathcal{X}\to\mathsf{U}\cup\operatorname{pow}(\mathsf{U})\), such that \((\mathsf{U},\sigma)\models^{\mathsf{s}}\varphi\), the structure \(\operatorname{def}^{\mathsf{s}}_{\Theta}(\mathsf{U},\sigma)=(\mathsf{U}^{\prime},\sigma^{\prime})\in\mathfrak{S}(\mathbb{R}^{\prime})\) is defined as follows:

* \(\mathsf{U}^{\prime}\stackrel{{\mathit{def}}}{{=}}\{(u,i)\in\mathsf{U}\times[1,k]\mid(\mathsf{U},\sigma)\models^{\mathsf{s}[x_{1}\leftarrow u]}\psi_{i}\}\),
* \(\sigma^{\prime}(r^{\prime},i_{1},\ldots,i_{\#r^{\prime}})\stackrel{{\mathit{def}}}{{=}}\{\langle(u_{1},i_{1}),\ldots,(u_{\#r^{\prime}},i_{\#r^{\prime}})\rangle\in(\mathsf{U}^{\prime})^{\#r^{\prime}}\mid(\mathsf{U},\sigma)\models^{\mathsf{s}[x_{1}\leftarrow u_{1},\ldots,x_{\#r^{\prime}}\leftarrow u_{\#r^{\prime}}]}\theta_{(r^{\prime},i_{1},\ldots,i_{\#r^{\prime}})}\}\).

If every formula from \(\Theta\) belongs to \((\mathsf{C})\mathsf{MS}\), then \(\Theta\) is a _\((\mathsf{C})\mathsf{MS}\)-transduction scheme_. It is known that all \((\mathsf{C})\mathsf{MS}\)-transduction schemes define isomorphism-preserving transductions. Let \(\operatorname{def}_{\Theta}\subseteq\mathfrak{S}(\mathbb{R})\times\mathfrak{S}(\mathbb{R}^{\prime})\) be the relation that associates with a structure \((\mathsf{U},\sigma)\) the set of structures \(\operatorname{def}^{\mathsf{s}}_{\Theta}(\mathsf{U},\sigma)\), one for each interpretation of the parameters \(X_{1},\ldots,X_{n}\) by the store \(\mathsf{s}\). A transduction \(\delta\) is _\((\mathsf{C})\mathsf{MS}\)-definable_ if there exists a \((\mathsf{C})\mathsf{MS}\)-transduction scheme \(\Theta\), such that \(\delta=\operatorname{def}_{\Theta}\).
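A direct operational reading of \(\operatorname{def}^{\mathsf{s}}_{\Theta}\) is given by the Python sketch below (our own encoding, for a parameter-less scheme; all names are invented for illustration): formulae are modeled as Boolean predicates over the input structure, and the output structure is assembled layer by layer.

```python
# A sketch of def_Theta for a parameter-less scheme (our own encoding):
#   phi(U, sigma)          -> bool   (domain formula)
#   psi[i](U, sigma, u)    -> bool   (universe of layer i)
#   theta[(r, i1, ..., in)](U, sigma, u1, ..., un) -> bool
from itertools import product

def apply_scheme(U, sigma, k, phi, psi, theta):
    if not phi(U, sigma):
        return None                                 # input outside the domain
    U2 = {(u, i) for u in U for i in range(1, k + 1) if psi[i](U, sigma, u)}
    sigma2 = {}
    for key, pred in theta.items():
        layers = key[1:]                            # key = (r, i1, ..., in)
        cols = [[(u, i) for (u, i) in U2 if i == layer] for layer in layers]
        sigma2[key] = {tup for tup in product(*cols)
                       if pred(U, sigma, *[u for u, _ in tup])}
    return U2, sigma2

# example: two copies of each element, a binary (r, 1, 2)-edge linking them
U, sigma = {0, 1}, {}
U2, sigma2 = apply_scheme(
    U, sigma, 2,
    phi=lambda U, s: True,
    psi={1: lambda U, s, u: True, 2: lambda U, s, u: True},
    theta={("r", 1, 2): lambda U, s, u1, u2: u1 == u2})
assert ((0, 1), (0, 2)) in sigma2[("r", 1, 2)]
```

Adding parameters would only mean passing the chosen subsets as extra arguments to every predicate, one output structure per choice.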
\(\mathsf{C}\mathsf{MS}\)-definable)._ The below properties of transduction schemes are immediate from the definition: Proposition 3: _(1) The composition of \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable) transductions is \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable). (2) The domain-restriction of a \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable) transduction by a \(\mathsf{MS}\)-definable set is \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable). (3) The domain of a \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable) transduction is \(\mathsf{MS}\)-definable (resp. \(\mathsf{C}\mathsf{MS}\)-definable)._ ### The Canonical Evaluation is \(\mathsf{MS}\)-Definable Given a finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\) of source labels, we recall that \(\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\) denotes the set of trees with unary (i.e., \(\underline{\mathbf{0}}_{\tau}\), \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\mathsf{ss}_{\mathsf{ss}_{\mathsf{ss}_{ \mathsf{ss}}}}})}\)) and binary (i.e., \(\mathsf{restrict}_{\tau}\), \(\mathsf{reaname}_{\alpha}\)) labels involving only sources from \(\tau\) (in particular, each function \(\alpha\) is a \(\tau\)-permutation). Given a tree \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\), a source label \(s\) is _present_ in \(T\) if \(\mathbf{val}(T)\) has an \(s\)-source. The following lemma shows that the presence of a source label in a tree is an MS property: Lemma 8: _For each finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) of source labels and \(s\in\mathbb{S}\), one can build an MS sentence \(\phi\) such that \(\|T\|\models\phi\) iff \(s\) is present in \(T\), for each tree \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\)._ Proof: Let \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\) be a tree. The construction of \(\phi_{s}\) relies on the following equivalent condition, that can be easily expressed by a MS sentence: **Fact 1**: \(s\) _is present in_ \(T\) _iff there are_ \(n_{0},\ldots,n_{m}\in V_{T}\) _and_ \(s_{0},\ldots,s_{m}\in\tau\)_, such that:_ 1. \(n_{0}\) _is the root of_ \(T\) _and_ \(s_{0}=s\)_,_ 2. \(n_{i+1}\) _is a child of_ \(n_{i}\) _in_ \(T\)_, for all_ \(i\in[0,m-1]\)_,_ 3. \(n_{m}\) _is attached to an edge labeled by a unary label, which is either_ \(\underline{0}_{\tau}\) _and_ \(s_{m}\in\tau\)_, or_ \(\underline{a}_{(s^{\prime}_{1},\ldots,s^{\prime}_{k})}\) _and_ \(s_{m}\in\{s^{\prime}_{1},\ldots,s^{\prime}_{k}\}\)_,_ 4. _the edge between_ \(n_{i}\) _and_ \(n_{i+1}\) _is labeled by a binary label, either_ \(\mathsf{\mathsf{restrict}}_{\tau}\)_, and_ \(s_{i}=s_{i+1}\in\tau^{\prime}\)_, or_ \(\mathsf{\mathsf{rename}}_{\alpha}\) _and_ \(s_{i}=\alpha(s_{i+1})\)_._ Proof: "\(\Rightarrow\)" By Proposition 1, we have \(T=t^{\frac{\pi}{\pi}(\mathbb{B}_{\mathsf{parse}})}\), for some term \(t=\big{(}\parallel_{i\in I}\mathbf{c}_{i}\big{)}\parallel\big{(}\parallel_{j \in J}\mathsf{append}_{b_{j}}(t_{j})\big{)}\). 
Then \(s\) is present in \(T\) because either:

* \(\mathbf{c}_{i}=\underline{\mathbf{0}}_{\tau^{\prime}}\), for some \(i\in I\), such that \(s\in\tau^{\prime}\),
* \(\mathbf{c}_{i}=\underline{\mathbf{a}}_{(s^{\prime}_{1},\ldots,s^{\prime}_{k})}\), for some \(i\in I\), such that \(s\in\{s^{\prime}_{1},\ldots,s^{\prime}_{k}\}\),
* \(b_{j}=\underline{\mathsf{restrict}}_{\tau^{\prime}}\), for some \(j\in J\), such that \(s\in\tau^{\prime}\) and \(s\) is present in \(t_{j}^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\), or
* \(b_{j}=\underline{\mathsf{rename}}_{\alpha}\), for some \(j\in J\), such that \(s=\alpha(s^{\prime})\) and \(s^{\prime}\) is present in \(t_{j}^{\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})}\).

In the first two cases, we set \(m=0\), \(n_{0}\) the root of \(T\) and \(s_{0}\stackrel{{\mathit{def}}}{{=}}s\). In the last two cases, we set \(n_{0}\) as the root of \(T\), \(s_{0}\stackrel{{\mathit{def}}}{{=}}s\), and continue building \(n_{1},\ldots,n_{m}\) and \(s_{1},\ldots,s_{m}\) from \(t_{j}\). It is easy to check that the conditions (1-4) are satisfied by the sequences \(n_{0},\ldots,n_{m}\) and \(s_{0},\ldots,s_{m}\) built as described above.

"\(\Leftarrow\)" Let \(T_{0},\ldots,T_{m}\) be the subtrees of \(T\) rooted in \(n_{0},\ldots,n_{m}\), respectively. By condition (2), \(T_{i+1}\) is a subtree of \(T_{i}\), for each \(i\in[0,m-1]\). By induction on \(m-i\), one shows that \(s_{i}\) is present in \(T_{i}\), for all \(i\in[0,m]\). The base case \(i=m\) follows from condition (3). The inductive case \(i<m\) follows from condition (4). Since \(s_{0}=s\) and \(n_{0}\) is the root of \(T\), by condition (1), we have that \(s\) is present in \(T\).

Back to the construction of \(\phi\), the existence of a path starting in the root can be described by an \(\mathsf{MS}\) formula \(\psi(X)\) in the relational signature \(\mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}})\). Further, the local conditions (3) and (4) can be encoded by formulae \(\eta_{1}(x)\) and \(\eta_{2}(x,y)\), respectively. Note that \(\tau\) being a given finite set is crucial for encoding conditions such as \(s\in\tau^{\prime}\) and \(s=\alpha(s^{\prime})\), for \(\tau^{\prime}\subseteq\tau\) and \(\alpha\) a \(\tau\)-permutation, by finitely many formulae. Finally, we define:

\[\phi\stackrel{{\mathit{def}}}{{=}}\exists X\,.\;\psi(X)\wedge\big{(}\exists x\,.\;X(x)\wedge\eta_{1}(x)\big{)}\wedge\big{(}\forall x\,\forall y\,.\;X(x)\wedge X(y)\wedge\bigvee_{b\in\mathbb{B}^{\tau}_{\mathsf{parse}},\,\#b=2}r_{b}(x,y)\to\eta_{2}(x,y)\big{)}\]

A statement similar to the next lemma is proved in [9, Proposition 7.48]. For reasons of self-containment, we give a proof using our notation:

Lemma 9: _For every finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) of source labels, the function \(\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})}\) is an \(\mathsf{MS}\)-definable \((\mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}),\mathbb{R}^{\tau}_{\mathsf{graph}})\)-transduction._
Proof: We define the desired transduction in two steps. In the first step, we expand each node of the input tree into at most \(\operatorname{card}(\tau)+1\) many nodes, one for each source that is present in the respective subtree, plus one extra node that represents an edge. In the second step, we merge the nodes that are fused by the composition operations. The first step uses an extra binary relation symbol \(\equiv\) that keeps track of the nodes which are to be merged in the second step. This relation symbol is interpreted over different layers by formulae in the relational signature \(\mathbb{R}^{\tau}_{\mathsf{graph}}\). We now describe the first step. We use a transduction that creates \(\operatorname{card}(\tau)+1\) copies of the input structure. We will use the sources in \(\tau\) and an additional source label \(\square\in\mathbb{S}\setminus\tau\) to index the copies of the input structure. Formally, we define a transduction scheme \(\Theta=\langle\varphi,\{\psi_{s}\}_{s\in\tau\cup\{\square\}},\{\theta_{(a,s_{1},\ldots,s_{\#a})}\}_{a\in\mathbb{A},\,s_{1},\ldots,s_{\#a}\in\tau}\cup\{\theta_{(\equiv,s,t)}\}_{s,t\in\tau}\rangle\), as follows: * the transduction scheme is parameter-less, i.e., we do not use free variables \(X_{1},\ldots,X_{n}\). * \(\varphi\) specifies the domain of the transduction, i.e., \(\varphi\) expresses that the input graph is the encoding \(\|T\|\) of some tree \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\). It is easy to verify that such an \(\mathsf{MS}\)-formula can be built. * \(\psi_{s}\) defines the universe of the \(s\)-th layer of the result, for each \(s\in\tau\). Namely, \(\psi_{s}\) holds for an element of the universe of the input graph iff this element is a vertex and \(s\) is present in the subtree rooted at this vertex (these elements will represent the vertices of the output graph). Such a formula can be built according to Lemma 8. Moreover, \(\psi_{\square}\) holds for all elements of the input graph that are edges labeled by unary symbols \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\) (these elements will represent the edges of the output graph). * each \(\theta_{(a,s_{1},\ldots,s_{\#a})}\) has free variables \(x_{0},x_{1},\ldots,x_{\#a}\) and defines the interpretation of \((\mathsf{r}_{a},s_{1},\ldots,s_{\#a})\) in the result, for all \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\in\mathbb{B}^{\tau}\). We define \(\theta_{(a,s_{1},\ldots,s_{\#a})}\) to hold for tuples \(((u_{0},\square),(u_{1},s_{1}),\ldots,(u_{\#a},s_{\#a}))\) iff \(u_{0}\) represents a graph edge labeled by the unary symbol \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\), such that \(u_{1}=\ldots=u_{\#a}\) is a tree node incident to \(u_{0}\). It is easy to build an \(\mathsf{MS}\)-formula \(\theta_{(a,s_{1},\ldots,s_{\#a})}\) defining these properties. * each \(\theta_{(\equiv,s,t)}\), for \(s,t\in\tau\), has free variables \(x_{1}\) and \(x_{2}\), and defines the interpretation of \((\equiv,s,t)\) in the result.
For a tuple \(((u_{1},s),(u_{2},t))\), we define \(\theta_{(\equiv,s,t)}\) to hold iff \(u_{1}\) and \(u_{2}\) are tree nodes, such that \(u_{2}\) is the child of \(u_{1}\), for some tree edge labeled by one of the following symbols: * \(\mathsf{restrict}_{\tau^{\prime}}\), such that \(s\in\tau^{\prime}\) and \(s=t\), or * \(\mathsf{rename}_{\alpha}\), such that \(s=\alpha(t)\). It is easy to build an \(\mathsf{MS}\) formula \(\theta_{(\equiv,s,t)}\) defining these properties. The second step of the construction is a transduction that takes the least equivalence relation subsuming the \(\mathsf{MS}\)-definable relation \(\theta_{\equiv}\stackrel{{\text{def}}}{{=}}\bigvee_{s,t\in\tau}\theta_{(\equiv,s,t)}\) and constructs its quotient structure. It is well known that the quotient structure with regard to an \(\mathsf{MS}\)-definable equivalence relation can be expressed as an \(\mathsf{MS}\)-transduction, e.g., see [7, Lemma 2.4]. It is now routine to verify that the composition of the two transductions above has the desired properties. Moreover, by Proposition 3 (1), the composition of \(\mathsf{MS}\)-definable transductions is an \(\mathsf{MS}\)-definable transduction. ### Defining Context-Free Sets of Graphs via (C)MS-Transductions With Lemma 9 at hand, we obtain a characterization of context-free sets of graphs as images of \(\mathsf{MS}\)-definable transductions of recognizable sets of trees. Proposition 4: _Let \(\mathcal{L}\) be a context-free set of graphs. Then, there exists a finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) of source labels, a ranked set \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\) of trees recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\), and an \(\mathsf{MS}\)-definable \((\mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}),\ \mathbb{R}^{\tau}_{\mathsf{graph}})\)-transduction \(F\), such that \(\|\mathcal{L}\|=F(\|\mathcal{K}\|)\)._ Proof: Since \(\mathcal{L}\) is context-free, there exists a grammar \(\Gamma\) such that \(\mathcal{L}=\mathcal{L}_{U}(\Gamma)\), for some nonterminal \(U\) of \(\Gamma\). Let \(\tau\) be the finite set of source labels that occur in the rules of \(\Gamma\). The alphabet of edge labels is the finite set \(\mathbb{B}^{\tau}\) of edge labels containing only source labels from \(\tau\). By Lemma 6, there exists a ranked set of trees \(\mathcal{K}\) that is recognizable in \(\mathfrak{T}\) such that \(\mathcal{L}=\mathbf{val}(\mathcal{K})\). Since, moreover, each graph in \(\mathcal{L}\) is built using only source labels from \(\tau\), we obtain \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\), hence \(\mathcal{L}=\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})}(\mathcal{K})\). By Lemma 9, \(\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})}\) is an \(\mathsf{MS}\)-definable \((\mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}),\ \mathbb{R}^{\tau}_{\mathsf{graph}})\)-transduction. Courcelle and Engelfriet extended Proposition 4 to the following equivalent characterization [8, Theorems 1.10 and 2.1], restated below according to our definitions6: Footnote 6: The result of [8] considers trees with node labels, taken from a finite alphabet.
One can encode trees with node labels into trees with edge labels by appending an extra edge to the root and assigning to each edge between a node and a child the label of its child. Theorem 6 ([8]): _A set of graphs \(\mathcal{L}\subseteq\mathcal{G}^{\emptyset}\) is context-free iff there exists (1) a finite alphabet \(\mathbb{B}\) of edge labels, (2) a ranked set \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B})\) of trees recognizable in \(\mathfrak{T}(\mathbb{B})\) and (3) an \(\mathsf{MS}\)-definable \((\mathbb{R}_{\mathsf{tree}}(\mathbb{B}),\ \mathbb{R}^{\emptyset}_{\mathsf{graph}})\)-transduction \(F\), such that \(\|\mathcal{L}\|=F(\|\mathcal{K}\|)\)._ The following corollary extends this result from ranked to unranked trees. We recall that the unranked recognizable sets of trees are precisely the \(\mathsf{CMS}\)-definable ones (Theorem 4), hence they strictly subsume the ranked recognizable sets of trees, which are the \(\mathsf{MS}\)-definable ones. Corollary 2: _A set \(\mathcal{L}\) of graphs is context-free iff there exists (1) a finite alphabet \(\mathbb{B}\) of edge labels, (2) a set \(\mathcal{K}\subseteq\mathcal{T}(\mathbb{B})\) of trees recognizable in \(\mathfrak{T}(\mathbb{B})\) and (3) an \(\mathsf{MS}\)-definable \((\mathbb{R}_{\mathsf{tree}}(\mathbb{B}),\ \mathbb{R}^{\emptyset}_{\mathsf{graph}})\)-transduction \(F\), such that \(\|\mathcal{L}\|=F(\|\mathcal{K}\|)\)._ Proof: "\(\Rightarrow\)" By Proposition 4, since each ranked set of trees is also an unranked set of trees. "\(\Leftarrow\)" We recall that trees can be viewed as graphs. Then, let \(\mathcal{B}\subseteq\mathcal{T}(\mathbb{B}^{\{\mathsf{t},\mathsf{r}\}}_{\mathsf{parse}})\) be the set of binary trees (i.e., trees of rank at most two) that encode trees over the alphabet \(\mathbb{B}\) (we recall that the sources \(\{\mathsf{t},\mathsf{r}\}\) are sufficient to encode trees, see Section 2.3). We note that for every tree \(T\in\mathcal{T}(\mathbb{B})\) there is a tree \(T^{\prime}\in\mathcal{B}\) such that \(\mathbf{val}(T^{\prime})=T\). This is because every (unranked) tree can be encoded by a binary tree, as follows. Consider an arbitrary tree \(T^{\circ}\in\mathbf{val}^{-1}(\{T\})\). Then, take some term that represents \(T^{\circ}\) (note that such a term exists because every tree is representable). Clearly, the term representation is binary. Then, we add the operation \(\mathsf{rename}_{id}\) in front of every \(\|\) operation in this term, where \(id\) is the identity function. The tree corresponding to this modified term is the desired binary tree \(T^{\prime}\), as we clearly have \(\mathbf{val}(T^{\prime})=\mathbf{val}(T^{\circ})=T\). By Corollary 1, \(\mathcal{K}\) is recognizable in \(\mathfrak{G}\), hence it is also recognizable in \(\mathfrak{G}^{\{\mathsf{t},\mathsf{r}\}}\). By Lemma 4, \(\mathbf{val}^{-1}(\mathcal{K})\) is recognizable in \(\mathfrak{R}\) and also in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\), by Lemma 3. Moreover, the set of binary trees \(\mathcal{B}\) is recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\). Then, \(\mathcal{K}^{\prime}=\mathbf{val}^{-1}(\mathcal{K})\cap\mathcal{B}\) is recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\) and we have \(\mathbf{val}(\mathcal{K}^{\prime})=\mathcal{K}\). By Lemma 9, \(\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\{\mathsf{t},\mathsf{r}\}}_{\mathsf{parse}})}\) is an \(\mathsf{MS}\)-definable transduction.
Then, \(F\circ\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\{\mathsf{t},\mathsf{r}\}}_{\mathsf{parse}})}\) is also an \(\mathsf{MS}\)-definable transduction, by Proposition 3 (1), and we have \(F\circ\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\{\mathsf{t},\mathsf{r}\}}_{\mathsf{parse}})}(\mathcal{K}^{\prime})=F(\mathbf{val}(\mathcal{K}^{\prime}))=F(\mathcal{K})=\mathcal{L}\). By Theorem 6, \(\mathcal{L}\) is context-free, as the image of the ranked recognizable set of trees \(\mathcal{K}^{\prime}\) via the \(\mathsf{MS}\)-definable transduction \(F\circ\mathbf{val}|_{\mathcal{T}(\mathbb{B}^{\{\mathsf{t},\mathsf{r}\}}_{\mathsf{parse}})}\). ### Parsable Sets of Graphs The purpose of this paper is to characterize the graph languages that are both context-free and \(\mathsf{CMS}\)-definable. Following Courcelle [7], we introduce the following notion: Definition 4: Let \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) be a finite set of source labels. A set of graphs \(\mathcal{L}\subseteq\mathcal{G}\) is \(\tau\)-parsable iff there exists a \(\mathsf{CMS}\)-definable \((\mathbb{R}^{\tau}_{\mathsf{graph}},\ \mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}))\)-transduction \(\pi\) such that (1) \(\|\mathcal{L}\|=\mathrm{dom}(\pi)\), and (2) \((\|G\|,\|T\|)\in\pi\) only if \(\mathbf{val}(T)=G\). We call a set of graphs \(\mathcal{L}\subseteq\mathcal{G}\) _parsable_ if \(\mathcal{L}\) is \(\tau\)-parsable for some \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\). Note that any parsable set of graphs is \(\mathsf{CMS}\)-definable, by point (1) above, since the domain of a \(\mathsf{CMS}\)-definable transduction is \(\mathsf{CMS}\)-definable itself, by Proposition 3 (3). Courcelle introduces the closely related notion of _strongly context-free_ sets of graphs, obtained as the image of some set of trees \(\mathcal{K}\) under the canonical evaluation, such that there exists a transduction \(\pi\subseteq(\mathbf{val}|_{\mathcal{K}})^{-1}\) with the properties of Definition 4, where the term _parsable_ only refers to the existence of this transduction. We simplify the definition by leaving the set of trees \(\mathcal{K}\) implicit and by calling the sets of graphs parsable, instead of strongly context-free. A conjecture left open is whether the context-free \(\mathsf{CMS}\)-definable sets coincide with the parsable ones [7, Conjecture 3]. We prove this conjecture by first giving a downward-closure property of parsable sets: Lemma 10: _Let \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) be a finite set of source labels, \(\mathcal{L}\) be a \(\tau\)-parsable set of graphs and \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\) a set recognizable in \(\mathfrak{G}^{\tau}\). Then, \(\mathcal{L}^{\prime}\) is \(\tau\)-parsable._ Proof: Let \(\pi\) be the \(\mathsf{CMS}\)-definable \((\mathbb{R}^{\tau}_{\mathsf{graph}},\ \mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}))\)-transduction that witnesses the parsability of \(\mathcal{L}\), as in Definition 4. Let \(\mathcal{K}\stackrel{{\text{def}}}{{=}}\mathbf{val}^{-1}(\mathcal{L}^{\prime})\) be a set of trees. Then \(\mathcal{K}\) is recognizable in \(\mathfrak{R}\), by Lemma 4. By Lemma 3, \(\mathcal{K}\) is also recognizable in \(\mathfrak{T}(\mathbb{B}_{\mathsf{parse}})\), hence \(\mathcal{K}\) is \(\mathsf{CMS}\)-definable, by Theorem 4.
By Theorem 5, we obtain that \(\mathcal{L}^{\prime}=\pi^{-1}(\mathcal{K})\) is \(\mathsf{CMS}\)-definable, hence the domain-restriction of \(\pi\) to \(\mathcal{L}^{\prime}\) is \(\mathsf{CMS}\)-definable, by Proposition 3 (2). The proof of the "only if" direction of [7, Conjecture 3] is given next. The "if" direction will be proved as part of Theorem 9 (§5). Proposition 5: _Any \(\tau\)-parsable set of graphs is both \(\mathsf{CMS}\)-definable and context-free wrt a grammar with \(\mathcal{F}^{\tau}_{\mathsf{HR}}\)-operations._ Proof: Let \(\mathcal{L}\) be a \(\tau\)-parsable set of graphs, for a finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) of source labels, witnessed by a transduction \(\pi\) as in Definition 4. Then, \(\mathcal{L}\) is \(\mathsf{CMS}\)-definable because \(\pi\) is \(\mathsf{CMS}\)-definable, thus \(\|\mathcal{L}\|=\mathrm{dom}(\pi)\) is \(\mathsf{CMS}\)-definable. By Theorem 2, \(\mathcal{L}\) is recognizable in \(\mathfrak{G}\). Since \(\mathcal{L}\) is \(\tau\)-parsable, we obtain that \(\mathcal{L}\subseteq\mathcal{G}^{\tau}_{\mathsf{rep}}\). By Proposition 2, the set of graphs \(\mathcal{G}^{\tau}_{\mathsf{rep}}\) is context-free. Hence, \(\mathcal{L}=\mathcal{L}\cap\mathcal{G}^{\tau}_{\mathsf{rep}}\) is context-free, by Theorem 1. ### Parsing with Tree Decompositions The definition of parsable sets of graphs requires an \(\mathsf{MS}\)-definable transduction from graphs to trees that produces, for each input graph, a derivation tree of that graph relative to a grammar that does not depend on the input. An ingredient for building such a parse tree is a tree decomposition that witnesses the treewidth of the graph. We recover such an optimal tree decomposition from a seminal result of Bojanczyk and Pilipczuk that states the existence of an \(\mathsf{MS}\)-definable transduction which computes tree decompositions of width at most a given \(k\in\mathbb{N}\). The theorem below is the combination of [3, Theorem 2.4] and [4, Theorem 2.1]7: Footnote 7: These results are stated for graphs without source labels, i.e., of sort \(\tau=\emptyset\). As source labels are simply encoded as unary relations, their existence does not impact the cited results. Theorem 7 ([3,4]): _For every \(k\in\mathbb{N}\), there exists an \(\mathsf{MS}\)-definable \((\mathbb{R}^{\tau}_{\mathsf{graph}},\mathbb{R}^{\tau}_{\mathsf{decomp}})\)-transduction \(I\), such that the following holds:_ 1. \(\mathsf{S}\in\operatorname{dom}(I)\) _iff_ \(\mathsf{S}=\|G\|\) _for some graph_ \(G\) _with_ \(\operatorname{twd}(G)\leq k\)_,_ 2. _if_ \((\|G\|,\mathsf{S})\in I\) _for some graph_ \(G\)_, then we have_ \(\mathsf{S}=\|(G,T,\beta)\|\) _for some tree decomposition_ \((T,\beta)\) _of_ \(G\) _of width at most_ \(k\)_._ Bojanczyk and Pilipczuk use the above theorem to prove a conjecture of Courcelle [6], stating that _recognizability coincides with_ \(\mathsf{CMS}\)_-definability for graphs of bounded treewidth_. Here, we use it for proving that the set of graphs of treewidth at most \(k\) is parsable, for every \(k\geq 0\) (Theorem 8). We show next that a structure that encodes a graph together with a tree decomposition of it can be mapped back to a tree that evaluates to the input graph via the canonical evaluation: Lemma 11: _For every finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) of source labels, there is an \(\mathsf{MS}\)-definable \((\mathbb{R}^{\tau}_{\mathsf{decomp}},\ \mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}))\)-transduction \(J\), such that:_ 1.
\(\mathsf{S}\in\operatorname{dom}(J)\) _iff_ \(\mathsf{S}=\|(G,D,\beta)\|\) _for some graph_ \(G\) _with_ \(\operatorname{twd}(G)\leq\operatorname{card}(\tau)\)_, witnessed by a tree decomposition_ \((D,\beta)\) _such that every_ \(s\)_-source of_ \(G\)_, with_ \(s\in\tau\)_, appears in the bag associated to the root of_ \(D\)_, and_ 2. \((\|(G,D,\beta)\|,\|T\|)\in J\) _only if_ \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\) _and_ \(\operatorname{\mathbf{val}}(T)=G\)_._ Proof: Let \(k=\operatorname{card}(\tau)\). The idea of the transduction \(J\) is to use the tree \(D\), encoded by the interpretation of the node and parent relation symbols from \(\mathbb{R}^{\tau}_{\mathsf{decomp}}\), as the skeleton for the output tree \(T\). In order to label the edges of \(T\) with unary and binary edge labels, we guess a coloring of the vertices of the input graph, using the parameters \(\{X_{s}\}_{s\in\tau}\), such that every vertex is labeled by exactly one color \(X_{s}\). Given a node \(n\in V_{D}\), let \(\mathtt{colors}(n)\stackrel{{\text{def}}}{{=}}\{s\mid\text{there is a vertex }v\text{ colored by }X_{s}\text{ such that }\mathsf{bag}(v,n)\text{ holds}\}\) be the colors of the vertices in the bag \(\beta(n)\). Moreover, for every edge \(e\in E_{G}\), we let \(\mathtt{node}(e)\) be the closest node \(n\) to the root with \(\upsilon_{G}(e)_{i}\in\beta(n)\), for all \(1\leq i\leq\#a\), where \(a\) is the label of \(e\). Note that \(\mathtt{node}(e)\) exists, by Definition 3 (2), and that \(\mathtt{node}(e)\) is unique. We are going to use a transduction that creates three layers (i.e., copies of the input structure), indexed by the names vertex, source and edge, respectively. Then, \(J\) is the transduction defined by the scheme \[\Theta\stackrel{{\text{def}}}{{=}}\langle\varphi,\ \psi_{\mathsf{vertex}},\ \psi_{\mathsf{source}},\ \psi_{\mathsf{edge}},\ \{\theta_{\mathsf{restrict}_{\tau^{\prime}}}\}_{\tau^{\prime}\subseteq\tau},\ \{\theta_{\mathsf{rename}_{\alpha}}\}_{\alpha\text{ is a }\tau\text{-permutation}},\ \{\theta_{\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}}\}_{a\in\mathbb{A},\,s_{1},\ldots,s_{\#a}\in\tau},\ \{\theta_{\underline{\mathbf{0}}_{\tau^{\prime}}}\}_{\tau^{\prime}\subseteq\tau}\rangle,\] where: * \(\varphi(\{X_{s}\}_{s\in\tau})\) defines the domain of the transduction, i.e., it checks that (1) the bags of the tree decomposition are all of size at most \(k\), (2) for each edge of \(G\), the vertices attached to the edge belong to \(\beta(n)\), for some \(n\in V_{D}\), (3) for each vertex \(v\in V_{G}\), the set of nodes \(\{n\in V_{D}\mid v\in\beta(n)\}\) is non-empty and connected in \(D\), (4) the sets \(\{X_{s}\}_{s\in\tau}\) form a partition of \(V_{G}\) that is consistent with the tree decomposition, i.e., in each bag there is at most one vertex labeled by \(X_{s}\), for all \(s\in\tau\), and (5) for each \(s\)-source \(v\in V_{G}\), the color of \(v\) is indeed \(X_{s}\) and \(v\) belongs to the bag associated to the root of \(D\). * \(\psi_{\mathsf{vertex}}(x_{1})\stackrel{{\text{def}}}{{=}}\mathsf{node}(x_{1})\) represents the nodes of the output tree \(T\). * \(\psi_{\mathsf{source}}(x_{1})\stackrel{{\text{def}}}{{=}}\mathsf{node}(x_{1})\) represents the unary edges with labels \(\underline{\mathbf{0}}_{\tau^{\prime}}\) of \(T\).
* \(\psi_{\mathsf{edge}}(x_{1})\) holds for those elements where \(\mathsf{node}(x_{1})\) holds, except for the root of the tree; these elements represent the binary \(\mathsf{restrict}_{\tau^{\prime}}\) edges of \(T\). \(\psi_{\mathsf{edge}}(x_{1})\) further holds for the elements that encode the edges of \(G\); these elements represent the unary \(\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}\) edges of \(T\). * \(\theta_{\mathsf{restrict}_{\tau^{\prime}}}(x_{1},x_{2},x_{3},\{X_{s}\}_{s\in\tau})\) defines the interpretation of the ternary relation symbol \(\mathsf{r}_{\mathsf{restrict}_{\tau^{\prime}}}\) in \(\|T\|\), i.e., all triples \(\langle(n_{1},\mathsf{edge}),(n_{2},\mathsf{vertex}),(n_{3},\mathsf{vertex})\rangle\in V_{T}^{3}\), such that \(n_{3}\) is the parent of \(n_{2}\), \(n_{1}=n_{2}\) and \(\tau^{\prime}=\mathtt{colors}(n_{2})\cap\mathtt{colors}(n_{3})\). * the remaining formulae \(\theta_{\mathsf{rename}_{\alpha}}\), \(\theta_{\underline{\mathbf{a}}_{(s_{1},\ldots,s_{\#a})}}\) and \(\theta_{\underline{\mathbf{0}}_{\tau^{\prime}}}\) are defined analogously, using the coloring \(\{X_{s}\}_{s\in\tau}\) to recover the source labels. For the correctness of the construction, consider a node \(n\in V_{D}\) with children \(n_{1},\ldots,n_{k}\), let \(T_{n}\) be the subtree of the output tree \(T\) rooted at \(n\), and let \(\mathtt{graph}(n)\) be the subgraph of \(G\) consisting of the edges \(e\) such that \(\mathtt{node}(e)\) lies in the subtree of \(D\) rooted at \(n\), whose sources are the vertices of \(\beta(n)\), labeled by \(\mathtt{colors}(n)\). Let \(\tau_{0}\stackrel{{\text{def}}}{{=}}\mathtt{colors}(n)\) and \(\tau_{i}\stackrel{{\text{def}}}{{=}}\mathtt{colors}(n)\cap\mathtt{colors}(n_{i})\), for \(i=1,\ldots,k\), and let \(G_{0}\stackrel{{\text{def}}}{{=}}\mathbf{0}^{\mathfrak{G}}_{\tau_{0}}\parallel^{\mathfrak{G}}(\parallel^{\mathfrak{G}}_{i}\ \mathbf{a_{i}}(s^{i}_{1},\ldots,s^{i}_{\#a_{i}})^{\mathfrak{G}})\) be the subgraph consisting of the edges \(e\) with \(\mathtt{node}(e)=n\), for suitably chosen edge labels \(a_{i}\in\mathbb{A}\) and source labels \(s^{i}_{j}\in\tau\). By induction on the structure of \(D\), one proves that \(\mathbf{val}(T_{n})=\mathtt{graph}(n)\), for each node \(n\in V_{D}\). We observe that: \[\mathtt{graph}(n)=G_{0}\parallel^{\mathfrak{G}}(\parallel^{\mathfrak{G}}_{i=1..k}\ \mathsf{restrict}^{\mathfrak{G}}_{\tau_{i}}(\mathtt{graph}(n_{i})))\quad\text{(*)}\] By the definition of the canonical evaluation, we have: \[\mathbf{val}(T_{n})=\mathbf{0}^{\mathfrak{G}}_{\tau_{0}}\parallel^{\mathfrak{G}}(\parallel^{\mathfrak{G}}_{i}\ \mathbf{a_{i}}(s^{i}_{1},\ldots,s^{i}_{\#a_{i}})^{\mathfrak{G}})\parallel^{\mathfrak{G}}(\parallel^{\mathfrak{G}}_{i=1..k}\ \mathsf{restrict}^{\mathfrak{G}}_{\tau_{i}}(\mathbf{val}(T_{n_{i}})))\] The claim follows by the above equation, (*) and the inductive hypothesis. The proof is concluded by choosing \(n\) as the root of \(T\) in the above fact, which leads to \(\mathbf{val}(T)=G\), as required. Corollary 3: _For every finite set \(\tau\subseteq_{\text{fin}}\mathbb{S}\) of source labels, there exists an \(\mathsf{MS}\)-definable \((\mathbb{R}^{\tau}_{\mathsf{graph}},\ \mathbb{R}_{\mathsf{tree}}(\mathbb{B}^{\tau}_{\mathsf{parse}}))\)-transduction \(K\), such that:_ 1. \(\mathsf{S}\in\mathrm{dom}(K)\) _iff_ \(\mathsf{S}=\|G\|\)_, for some graph_ \(G\) _with_ \(\mathrm{twd}(G)\leq\mathrm{card}(\tau)\)_, witnessed by a tree decomposition_ \((D,\beta)\) _such that every_ \(s\)_-source of_ \(G\)_, with_ \(s\in\tau\)_, appears in the bag associated to the root of_ \(D\)_, and_ 2. \((\|G\|,\|T\|)\in K\) _only if_ \(T\in\mathcal{T}(\mathbb{B}^{\tau}_{\mathsf{parse}})\) _and_ \(\mathbf{val}(T)=G\)_._ Proof: The result is a consequence of composing the transductions \(I\) (Theorem 7) and \(J\) (Lemma 11). However, as \(I\) might produce a tree decomposition in which the \(s\)-sources, for \(s\in\tau\), do not appear in the bag associated to the root, we need to add some pre-processing that ensures that \(I\) produces a tree decomposition with this property.
We will define two further \(\mathsf{MS}\)-definable transductions \(A\) and \(B\), such that the desired transduction \(K\) is the result of composing \(A\), \(I\), \(B\) and \(J\) (in this order). The pre-processing requires the use of some fresh (temporary) edge label \(a\) of arity \(\mathrm{card}(\tau)\). We define \(A\) as the transduction that simply outputs the encoding of an input graph \(G\in\mathcal{G}_{\mathsf{rep}}^{\tau}\) and additionally adds an \(a\)-labelled edge attached to all \(s\)-sources, for \(s\in\tau\). This has the effect that every tree decomposition of the resulting graph (in particular, the output of the composition of the transductions \(A\) and \(I\)) has a node whose associated bag contains all \(s\)-sources of \(G\), for \(s\in\tau\). Next, we would like to apply the transduction \(J\). However, in order to do so, we need to ensure that the \(s\)-sources in fact appear in the root of the tree decomposition. We do so by defining the transduction \(B\) that inputs the encoding of a graph \(G\) and a tree decomposition \((D,\beta)\), and outputs an encoding of \(G\) and a tree decomposition \((D^{\prime},\beta^{\prime})\), obtained from \((D,\beta)\) by rotating the tree decomposition (i.e., suitably reversing the order of the parent relation), such that the node of \(D\) that contains the \(s\)-sources becomes the root of the tree decomposition; further, \(B\) deletes the \(a\)-labelled edges that have been added by \(A\). It is now easy to verify that the composition of \(A\), \(I\), \(B\) and \(J\) has the desired properties. The above corollary provides a powerful result. It states that every graph \(G\in\mathcal{G}^{\emptyset}\) of treewidth \(k\) is the image of a tree under an \(\mathsf{MS}\)-definable transduction, where it is sufficient to consider trees over some set of source labels \(\tau\) of cardinality \(k\). Moreover, condition 1 of Corollary 3 applies to all representable graphs: Proposition 6: _Let \(G\in\mathcal{G}_{\mathsf{rep}}^{\tau}\) be a graph. Then, \(\mathrm{twd}(G)\leq\mathrm{card}(\tau)\), witnessed by a tree decomposition \((D,\beta)\) such that every \(s\)-source of \(G\), with \(s\in\tau\), appears in the bag associated to the root of \(D\)._ Proof: We have \(\operatorname{twd}(G)\leq\operatorname{card}(\tau)\), see Lemma 7; moreover, the sources appear in the bag of the root of the tree decomposition constructed by the lemma. We are now ready to present the main result of this section: Theorem 8: _The set \(\mathcal{G}_{\mathsf{rep}}^{\tau}\) is \(\tau\)-parsable, for each finite set \(\tau\subseteq_{\text{fin}}\mathbb{S}\) of source labels._ Proof: By Proposition 6, we have \(\operatorname{twd}(G)\leq\operatorname{card}(\tau)\) for every graph \(G\in\mathcal{G}_{\mathsf{rep}}^{\tau}\), witnessed by a tree decomposition \((D,\beta)\) such that every \(s\)-source of \(G\), with \(s\in\tau\), appears in the bag associated to the root of \(D\). Hence, \(\|\mathcal{G}_{\mathsf{rep}}^{\tau}\|\subseteq\operatorname{dom}(K)\), where \(K\) is the \(\mathsf{MS}\)-definable transduction from Corollary 3. Moreover, we have that \((\|G\|,\|T\|)\in K\) implies \(T\in\mathcal{T}(\mathbb{B}_{\mathsf{parse}}^{\tau})\) and \(\operatorname{\mathbf{val}}(T)=G\). Hence, \(\|\mathcal{G}_{\mathsf{rep}}^{\tau}\|\supseteq\operatorname{dom}(K)\). Finally, \(K\) plays the role of the transduction \(\pi\) required by Definition 4.
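To summarize the construction behind Theorem 8 (a schematic recapitulation of ours, using the notation of the proof of Corollary 3 and writing composition right-to-left, so that \(A\) is applied first): the parsing transduction is \[K\;=\;J\circ B\circ I\circ A,\] where \(A\) attaches a temporary \(a\)-labelled hyperedge to the sources, \(I\) computes a tree decomposition of width at most \(\operatorname{card}(\tau)\) (Theorem 7), \(B\) rotates the decomposition so that the bag containing the sources becomes the root and removes the temporary hyperedge, and \(J\) converts the rooted decomposition into a parse tree that evaluates to the input graph (Lemma 11).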
## 5 Characterization of Context-free and \(\mathsf{CMS}\)-definable Graphs We combine the previously obtained results into a general characterization of the sets of graphs that are both context-free and \(\mathsf{CMS}\)-definable. We state this characterization in two versions. In the first theorem we explicitly keep track of the set of sources \(\tau\subseteq_{\text{fin}}\mathbb{S}\) that witnesses the context-freeness of a set of graphs \(\mathcal{L}\): Theorem 9: _For every set \(\mathcal{L}\subseteq\mathcal{G}\) of graphs and sort \(\tau\subseteq_{\text{fin}}\mathbb{S}\), the following statements are equivalent:_ 1. \(\mathcal{L}\) _is_ \(\mathsf{CMS}\)_-definable and context-free wrt a grammar with_ \(\mathcal{F}_{\mathsf{HR}}^{\tau}\)_-operations,_ 2. \(\mathcal{L}\) _is recognizable in_ \(\mathfrak{G}\) _and_ \(\mathcal{L}\subseteq\mathcal{G}_{\text{rep}}^{\tau}\)_,_ 3. \(\mathcal{L}\) _is recognizable in_ \(\mathfrak{G}^{\tau}\) _and_ \(\mathcal{L}\subseteq\mathcal{G}_{\text{rep}}^{\tau}\)_,_ 4. \(\mathcal{L}\) _is_ \(\tau\)_-parsable._ Proof: (1) \(\Rightarrow\) (2) By Theorem 2, every \(\mathsf{CMS}\)-definable set of graphs is recognizable in the algebra \(\mathfrak{G}\). Let \(\Gamma\) be a grammar such that \(\mathcal{L}=\mathcal{L}_{U}(\Gamma)\), for some nonterminal \(U\) of \(\Gamma\). By assumption, \(\Gamma\) uses only \(\mathcal{F}_{\mathsf{HR}}^{\tau}\)-operations. Hence, \(\mathcal{L}\subseteq\mathcal{G}^{\tau}\). Moreover, since \(\mathcal{L}_{U}(\Gamma)\) denotes the least solution, we have that every graph \(G\in\mathcal{L}_{U}(\Gamma)\) is representable. Thus, \(\mathcal{L}\subseteq\mathcal{G}_{\text{rep}}^{\tau}\). (2) \(\Rightarrow\) (3) By Lemma 2, since \(\mathfrak{G}^{\tau}\) is a subalgebra of \(\mathfrak{G}\). (3) \(\Rightarrow\) (4) By Theorem 8, the set of graphs \(\mathcal{G}_{\text{rep}}^{\tau}\) is \(\tau\)-parsable, and by Lemma 10, the restriction of a \(\tau\)-parsable set to a set recognizable in \(\mathfrak{G}^{\tau}\) is \(\tau\)-parsable. (4) \(\Rightarrow\) (1) By Proposition 5. The second theorem is more coarse, in that we implicitly quantify over some set \(\tau\subseteq_{\text{fin}}\mathbb{S}\), resp. some bound on the tree-width, in each item: Theorem 10: _For every set \(\mathcal{L}\subseteq\mathcal{G}^{\emptyset}\) of graphs with no sources, the following statements are equivalent:_ 1. \(\mathcal{L}\) _is_ \(\mathsf{CMS}\)_-definable and context-free,_ 2. \(\mathcal{L}\) _is recognizable and has bounded tree-width,_ 3. \(\mathcal{L}\) _is parsable,_ 4. _There exists a finite set_ \(\mathbb{B}\) _of edge labels, an_ \(\mathsf{MS}\)_-definable_ \((\mathbb{R}_{\text{tree}}(\mathbb{B}),\ \mathbb{R}_{\text{graph}}^{\emptyset})\)_-transduction_ \(F\) _and a_ \(\mathsf{CMS}\)_-definable_ \((\mathbb{R}_{\text{graph}}^{\emptyset},\ \mathbb{R}_{\text{tree}}(\mathbb{B}))\)_-transduction_ \(H\)_, such that 1)_ \(\operatorname{dom}(F\circ H)=\|\mathcal{L}\|\) _and 2)_ \(F\circ H\) _is the identity on_ \(\|\mathcal{L}\|\)_._ Proof: (1) \(\Rightarrow\) (2) As every grammar uses only \(\mathcal{F}_{\mathsf{HR}}^{\tau}\)-operations, for the set of sources \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) that appear in the grammar, we obtain that \(\mathcal{L}\) is recognizable in \(\mathfrak{G}\) and \(\mathcal{L}\subseteq\mathcal{G}_{\mathsf{rep}}^{\tau}\), from Theorem 9. Then, \(\operatorname{twd}(G)\leq\operatorname{card}(\tau)\), for every graph \(G\in\mathcal{G}_{\mathsf{rep}}^{\tau}\), by Lemma 7.
(2) \(\Rightarrow\) (3) Let \(k\geq 0\) be such that \(\operatorname{twd}(G)\leq k\) for all graphs \(G\in\mathcal{L}\). Because of the assumption \(\mathcal{L}\subseteq\mathcal{G}^{\emptyset}\), the graphs \(G\in\mathcal{L}\) do not have sources. Hence, condition 1 of Corollary 3 is satisfied for any finite set \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\) with \(k\leq\operatorname{card}(\tau)\). Then, condition 2 of Corollary 3 guarantees that for every \(G\in\mathcal{L}\) there is a \(T\in\mathcal{T}(\mathbb{B}_{\mathsf{parse}}^{\tau})\) with \(\operatorname{\mathbf{val}}(T)=G\). Hence, \(\mathcal{L}\subseteq\mathcal{G}_{\mathsf{rep}}^{\tau}\). Thus, \(\mathcal{L}\) is \(\tau\)-parsable, by Theorem 9. (3) \(\Rightarrow\) (4) We have that \(\mathcal{L}\) is \(\tau\)-parsable, for some \(\tau\subseteq_{\mathsf{fin}}\mathbb{S}\). Let the alphabet \(\mathbb{B}\) be \(\mathbb{B}_{\mathsf{parse}}^{\tau}\). Then, there exists a \(\mathsf{CMS}\)-definable \((\mathbb{R}_{\mathsf{graph}}^{\tau},\ \mathbb{R}_{\mathsf{tree}}(\mathbb{B}_{\mathsf{parse}}^{\tau}))\)-transduction \(H\) that witnesses the \(\tau\)-parsability of \(\mathcal{L}\). Moreover, \(F\stackrel{{\text{def}}}{{=}}\operatorname{\mathbf{val}}|_{\mathcal{T}(\mathbb{B}_{\mathsf{parse}}^{\tau})}\) is \(\mathsf{MS}\)-definable, by Lemma 9, and \(F\circ H\) is the identity on \(\|\mathcal{L}\|\), as required. (4) \(\Rightarrow\) (1) By Proposition 3 (1), \(F\circ H\) is a \(\mathsf{CMS}\)-definable transduction, hence \(\operatorname{dom}(F\circ H)=\|\mathcal{L}\|\) is \(\mathsf{CMS}\)-definable, by Proposition 3 (3). Then, \(\mathcal{K}\stackrel{{\text{def}}}{{=}}F^{-1}(\mathcal{L})\) is \(\mathsf{CMS}\)-definable, by Theorem 5. By Theorem 4, \(\mathcal{K}\) is recognizable in \(\mathfrak{T}(\mathbb{B})\) and \(\mathcal{L}\) is context-free, by Corollary 2. We remark that the idea of characterizing the recognizable sets of graphs of bounded tree-width in terms of a pair of \(\mathsf{MS}\)-definable transductions (item 4) has also been developed concurrently (and in more generality) in [2]. We further note that item 4 of Theorem 10 is missing from Theorem 9 because there is no easy upper bound on the tree-width of \(\mathcal{L}\). That is, we identify the following problem for future work: given \(\mathsf{MS}\)-definable transductions \(F\) and \(H\) as stated in item 4 of Theorem 10, compute a bound on the tree-width of \(\mathcal{L}\) based on \(F\) and \(H\). We remark that the construction of [8] can be used to derive an upper bound, but this bound is likely not optimal. On the other hand, an analogue of item 3 of Theorem 9 could be added to Theorem 10 (but we choose to omit it to make the theorem more concise). We further note that the problem whether one of the conditions of Theorem 10 holds for a given set of graphs is undecidable, by the following argument. The problem whether a given context-free word grammar defines a recognizable (and hence \(\mathsf{MS}\)-definable) word language is undecidable, according to a result by Greibach [14]. Then, an algorithm for the former problem would answer the latter, which is impossible. Several classes of graph languages are known to be parsable, such as those defined by _regular graph grammars_ (i.e., hyperedge replacement grammars with additional local connectivity requirements) or _series-parallel graphs_ (i.e., graphs with two sources that can be cascaded or overlapped) [7].
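As a concrete illustration of Theorem 10 (an aside of ours, relying on two standard facts): series-parallel graphs have no \(K_{4}\)-minor and hence treewidth at most \(2\), and the exclusion of a fixed minor is expressible in \(\mathsf{MS}\) logic over the encoding used here; together with Theorem 2, this places such a class within condition (2) of Theorem 10, which is consistent with the parsability claim of [7].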
The above theoretical impossibility can be partially circumvented by using the following notion of _invertible_ \(\mathsf{MS}\)-transduction, which provides a constructive method of obtaining new parsable sets from known ones: Definition 5: An \(\mathsf{MS}\)-definable \((\mathbb{R},\mathbb{R}^{\prime})\)-transduction \(F\) is _invertible_ iff there exists an \(\mathsf{MS}\)-definable \((\mathbb{R}^{\prime},\mathbb{R})\)-transduction \(G\) such that \(G\subseteq F^{-1}\) and \(\operatorname{dom}(G)=\operatorname{dom}(F^{-1})\). The composition of invertible transductions \(F_{1}\) and \(F_{2}\) is a partial operation, defined only if \(\operatorname{dom}(F_{2})\subseteq\operatorname{dom}(F_{1}^{-1})\). Proposition 7: _The composition of invertible \(\mathsf{MS}\)-transductions is an invertible \(\mathsf{MS}\)-transduction._ Proof: Let \(F_{1}\) and \(F_{2}\) be invertible \(\mathsf{MS}\)-transductions, such that \(\operatorname{dom}(F_{2})\subseteq\operatorname{dom}(F_{1}^{-1})\), and let \(F\stackrel{{\text{def}}}{{=}}F_{2}\circ F_{1}\). By Proposition 3 (1), \(F\) is \(\mathsf{MS}\)-definable. Let \(G_{i}\subseteq F_{i}^{-1}\) be \(\mathsf{MS}\)-transductions, such that \(\operatorname{dom}(G_{i})=\operatorname{dom}(F_{i}^{-1})\), for \(i=1,2\). Then \(G\stackrel{{\text{def}}}{{=}}G_{1}\circ G_{2}\) is \(\mathsf{MS}\)-definable, by Proposition 3 (1). We compute: \[G=G_{1}\circ G_{2}\subseteq F_{1}^{-1}\circ F_{2}^{-1}=(F_{2}\circ F_{1})^{-1}=F^{-1}\] \[\begin{aligned}\operatorname{dom}(F^{-1})&=\operatorname{dom}(F_{1}^{-1}\circ F_{2}^{-1})=\{\mathsf{S}\mid\exists\mathsf{S}^{\prime}\in\operatorname{dom}(F_{1}^{-1})\,.\ (\mathsf{S},\mathsf{S}^{\prime})\in F_{2}^{-1}\}\\ &=\{\mathsf{S}\mid\exists\mathsf{S}^{\prime}\in\operatorname{dom}(F_{1}^{-1})\cap\operatorname{dom}(F_{2})\,.\ (\mathsf{S}^{\prime},\mathsf{S})\in F_{2}\}\\ &=\{\mathsf{S}\mid\exists\mathsf{S}^{\prime}\in\operatorname{dom}(F_{2})\,.\ (\mathsf{S},\mathsf{S}^{\prime})\in F_{2}^{-1}\}\quad[\operatorname{dom}(F_{2})\subseteq\operatorname{dom}(F_{1}^{-1})]\\ &=\operatorname{dom}(F_{2}^{-1})=\operatorname{dom}(G_{2})\\ &=\{\mathsf{S}\mid\exists\mathsf{S}^{\prime}\in\operatorname{img}(G_{2})\,.\ (\mathsf{S},\mathsf{S}^{\prime})\in G_{2}\}\\ &=\{\mathsf{S}\mid\exists\mathsf{S}^{\prime}\in\operatorname{dom}(G_{1})\,.\ (\mathsf{S},\mathsf{S}^{\prime})\in G_{2}\}\quad[\operatorname{img}(G_{2})\subseteq\operatorname{dom}(F_{2})\subseteq\operatorname{dom}(F_{1}^{-1})=\operatorname{dom}(G_{1})]\\ &=\operatorname{dom}(G_{1}\circ G_{2})=\operatorname{dom}(G)\qquad\square\end{aligned}\] We obtain new parsable languages as images of known parsable languages under \(\mathsf{MS}\)-transductions that are both invertible and functional (i.e., parameter-less). Corollary 4: _Let \(\mathcal{L}\) be a parsable set of graphs and \(F\) be an invertible functional \(\mathsf{MS}\)-transduction, such that \(\operatorname{dom}(F)\subseteq\mathcal{L}\). Then \(F(\mathcal{L})\) is a parsable set of graphs._ Proof: By the equivalence of items (3) and (4) of Theorem 10 and Proposition 7. ## 6 Finite versus Locally Finite Recognizability of Graphs The notion of recognizability for graphs proposed by Courcelle (Definition 1) [6] requires the existence of a locally finite algebra, i.e., an algebra with a finite universe for each sort.
One cannot help but notice the fundamental difference with recognizability of words [5] and trees [11], which use finite automata, or equivalently, in algebraic terms, algebras with a finite global universe. Note that a locally finite algebra \(\mathfrak{B}=(\{\mathcal{B}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{B}}\}_{f\in\mathcal{F}})\) is finite iff all but finitely many sorts have an empty universe, i.e., \(\mathcal{B}^{\sigma}=\emptyset\) for all but finitely many \(\sigma\in\Sigma\). However, when the set of sorts \(\Sigma\) is infinite, one cannot use a finite algebra \(\mathfrak{B}\) to recognize sets of a (possibly infinite) algebra \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\), because the existence of a homomorphism \(h:\mathfrak{A}\to\mathfrak{B}\) implies \(h(\mathcal{A}^{\sigma})\subseteq\mathcal{B}^{\sigma}\), thus \(\mathcal{B}^{\sigma}\neq\emptyset\), for all \(\sigma\in\Sigma\). Hence, the only possibility of using a finite algebra \(\mathfrak{B}\) as a recognizer is when the set of sorts is finite. We recall that trees (and, implicitly, words) are defined as graphs over an algebra with a single sort \(\{\mathfrak{r}\}\) (Definition 2), which justifies using the standard notion of recognizability by finite algebras in this case. Theorem 9 proves the equivalence between locally finite (point 2) and finite (point 3) recognizability for sets of graphs of bounded treewidth. In other words, this means that bounded treewidth sets of graphs can be recognized using finite algebras, just like trees. This has been initially proved by Courcelle and Lagergren [10], using a different argument, and is also stated in [3] as a consequence of [3, Theorem 2.4] and [4, Theorem 2.1] (cited here as Theorem 7). In this section we prove that locally finite recognizability for graphs is the limit of recognizability in an infinite increasing sequence of finite underapproximations (Theorem 11). This means that the equivalence between locally finite and finite recognizability for bounded treewidth sets of graphs (points (2) and (3) of Theorem 9) is actually a cut-off in this infinite increasing sequence. We recall the classical definition of recognizability by locally finite congruences. Let \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\) be an algebra. An equivalence relation \(\cong\) on the universe of \(\mathfrak{A}\) is a _congruence_ iff (1) \(a\cong b\) only if \(\mathsf{sort}(a)=\mathsf{sort}(b)\), and (2) \(a_{i}\cong b_{i}\), for all \(1\leq i\leq\#f\), only if \(f^{\mathfrak{A}}(a_{1},\ldots,a_{\#f})\cong f^{\mathfrak{A}}(b_{1},\ldots,b_{\#f})\), for all \(f\in\mathcal{F}\). Note that an equivalence class of a congruence is necessarily a subset of a universe \(\mathcal{A}^{\sigma}\), for some sort \(\sigma\in\Sigma\). A congruence \(\cong\) is _locally finite_ iff \(\cong\) has finitely many equivalence classes of each sort. A congruence \(\cong\) _saturates_ a set \(\mathcal{L}\subseteq\mathcal{A}\) iff \(a\cong b\) only if \(a\in\mathcal{L}\iff b\in\mathcal{L}\), for all \(a,b\in\mathcal{A}\), i.e., \(\mathcal{L}\) is a union of equivalence classes of \(\cong\). It is straightforward to show that a set is recognizable (Definition 1) iff there exists a locally finite congruence that saturates it, see, e.g., [9, Proposition 3.64].
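For a familiar one-sorted instance of these notions (an illustration of ours): consider the algebra of finite words over \(\{a,b\}\) with concatenation. The relation \(u\cong v\) iff \(|u|\equiv|v|\pmod 2\) is a congruence, since concatenation preserves length parity; it has two equivalence classes, hence it is locally finite, and it saturates the language of even-length words. This is the algebraic counterpart of recognition by a two-state automaton.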
It is also well-known that, for any (not necessarily recognizable) set, there exists a unique coarsest congruence that saturates it: Definition 6 (Syntactic Congruence): The _syntactic congruence_ of a set \(\mathcal{L}\subseteq\mathcal{A}\) in an \(\mathcal{F}\)-algebra \(\mathfrak{A}=(\{\mathcal{A}^{\sigma}\}_{\sigma\in\Sigma},\{f^{\mathfrak{A}}\}_{f\in\mathcal{F}})\) is the relation defined by \(a\cong^{\mathfrak{A}}_{\mathcal{L}}b\) iff \(\mathsf{sort}(a)=\mathsf{sort}(b)\) and \(t^{\mathfrak{A}}(a,c_{1},\ldots,c_{k})\in\mathcal{L}\Leftrightarrow t^{\mathfrak{A}}(b,c_{1},\ldots,c_{k})\in\mathcal{L}\), for all \(\mathcal{F}\)-terms \(t\) and all \(c_{1},\ldots,c_{k}\in\mathcal{A}\). The proof that \(\cong^{\mathfrak{A}}_{\mathcal{L}}\) is a congruence and that any other congruence which saturates \(\mathcal{L}\) is included in \(\cong^{\mathfrak{A}}_{\mathcal{L}}\) is given in, e.g., [9, Proposition 3.66]. Then, \(\mathcal{L}\) is recognizable iff \(\cong^{\mathfrak{A}}_{\mathcal{L}}\) is locally finite. We specialize the above notions to the graph algebra \(\mathfrak{G}\) and sets \(\mathcal{L}\subseteq\mathcal{G}\) of graphs. The following lemma is an equivalent characterization of the syntactic congruence that uses only terms of a restricted form: Lemma 12: _Let \(\mathcal{L}\subseteq\mathcal{G}\) be a set of graphs. Then, \(G_{1}\cong^{\mathfrak{G}}_{\mathcal{L}}G_{2}\) iff \(\mathsf{sort}(G_{1})=\mathsf{sort}(G_{2})\) and \(\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{1}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\Leftrightarrow\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{2}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\), for all graphs \(G\), finite permutations \(\alpha:\mathbb{S}\rightarrow\mathbb{S}\) and sets \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\)._ Proof: We define the equivalence relation \(\cong\) by setting \(G_{1}\cong G_{2}\) iff \(\mathsf{sort}(G_{1})=\mathsf{sort}(G_{2})\) and \(\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{1}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\Leftrightarrow\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{2}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\), for all graphs \(G\), finite permutations \(\alpha:\mathbb{S}\rightarrow\mathbb{S}\) and sets of sources \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\). We now consider some congruence \(\equiv\) that saturates \(\mathcal{L}\). Then, \(G_{1}\equiv G_{2}\) implies \(G_{1}\cong G_{2}\), i.e., \(\equiv\subseteq\cong\) (*). This is because \(G_{1}\equiv G_{2}\) only if \(\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{1}\parallel^{\mathfrak{G}}G)\equiv\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{2}\parallel^{\mathfrak{G}}G)\) (as \(\equiv\) is a congruence), and hence \(\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{1}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\Leftrightarrow\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}(G_{2}\parallel^{\mathfrak{G}}G)\in\mathcal{L}\) (as \(\equiv\) saturates \(\mathcal{L}\)). We now establish that \(\cong\) saturates \(\mathcal{L}\): by the definition of \(\cong\), we have that \(G_{1}\cong G_{2}\) implies that \(G_{1}\in\mathcal{L}\) iff \(G_{2}\in\mathcal{L}\), since we can choose \(\alpha\) as the identity, \(\tau=\mathsf{sort}(G_{1})\) and \(G=\mathbf{0}_{\mathsf{sort}(G_{1})}\).
By (*) and the above, \(\cong\) is the coarsest relation that saturates \(\mathcal{L}\), provided it is a congruence. It remains to establish that \(\cong\) is a congruence. Let us consider some graphs \(G_{1}\cong G_{2}\). We show the closure under the operations of the graph algebra, by a case distinction: * \(G_{1}\parallel^{\mathfrak{G}}G\cong G_{2}\parallel^{\mathfrak{G}}G\) and \(G\parallel^{\mathfrak{G}}G_{1}\cong G\parallel^{\mathfrak{G}}G_{2}\), for all graphs \(G\): By the commutativity of \(\parallel^{\mathfrak{G}}\), it is sufficient to show one of the two claims. Let us assume \(G_{1}\cong G_{2}\) and let \(G\) be some graph. To establish that \(G_{1}\parallel^{\mathfrak{G}}G\cong G_{2}\parallel^{\mathfrak{G}}G\), consider some graph \(G^{\prime}\), finite permutation \(\alpha\) and \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), such that \(\mathsf{rename}^{\mathfrak{G}}_{\alpha}\circ\mathsf{restrict}^{\mathfrak{G}}_{\tau}((G_{1}\parallel^{\mathfrak{G}}G)\parallel^{\mathfrak{G}}G^{\prime})\in\mathcal{L}\).
The next step is proving that the syntactic congruences of a language of graphs of empty sort agree over the algebras \(\mathfrak{G}\) and \(\mathfrak{G}^{\tau}\), for any finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\): **Lemma 13**.: _Given a language \(\mathcal{L}\subseteq\mathcal{G}^{\mathfrak{\mathfrak{\mathfrak{\mathfrak{ \mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{ \mathfrak{ \mathfrak{ }}}}}}}}}}}}}\) and a finite set \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), we have \(G_{1}\cong^{\mathfrak{G}}_{\mathcal{L}}G_{2}\) iff \(G_{1}\cong^{\mathfrak{G}^{\tau}}_{\mathcal{L}}G_{2}\), for all graphs \(G_{1},\mathcal{G}_{2}\in\mathcal{G}^{\tau}\)._ Proof.: "\(\Rightarrow\)" Because \(\cong^{\mathfrak{G}}_{\mathcal{L}}\cap(\mathcal{G}^{\tau}\times\mathcal{G}^{ \tau})\) is a congruence that saturates \(\mathcal{L}\) w.r.t the algebra \(\mathfrak{G}^{\tau}\) and \(\cong^{\mathfrak{G}^{\tau}}_{\mathcal{L}}\) is the greatest such congruence. "\(\Leftarrow\)" Let us consider a graph \(G\), finite set \(\tau^{\prime}\subseteq_{\mathit{fin}}\mathbb{S}\) and finite permutation \(\alpha\), such that \(\mathsf{rename}^{\Theta}_{\alpha}\circ\mathsf{restrict}^{\Theta}_{\tau}(G_{1}\ \|^{\Theta}\ G)\in\mathcal{L}\). By Lemma 12, we need to show that \(\mathsf{rename}^{\Theta}_{\alpha}\circ\mathsf{restrict}^{\Theta}_{\tau}(G_{2}\ \|^{\Theta}\ G)\in\mathcal{L}\). Since \(\mathcal{L}\subseteq\mathcal{G}^{\mathfrak{\mathfrak{\mathfrak{\mathfrak{ \mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{ \mathfrak{\mathfrak{\mathfrak}}}}}}}}}}}}}\), we can assume that \(\mathsf{restrict}_{\tau^{\prime}}(G_{1}\ \|^{\Theta}\ G)\in\mathcal{L}\) and we need to show that \(\mathsf{restrict}^{\Theta}_{\tau}(G_{2}\ \|^{\Theta}\ G)\in\mathcal{L}\). Because of \(G_{1},\mathcal{G}_{2}\in\mathcal{G}^{\tau}\), it suffices to prove \(\mathsf{restrict}^{\Theta^{\tau}}_{\tau^{\prime}\cap\tau}(A\ \|^{\Theta}\ \mathsf{restrict}_{\tau^{\prime}\setminus\tau}(G))^{\Theta^{\tau}}\in \mathcal{L}\) implies that \(\mathsf{restrict}^{\Theta^{\tau}}_{\tau^{\prime}\cap\tau}(B\ \|^{\Theta}\ \mathsf{restrict}^{\Theta^{\tau}}_{\tau^{\prime} \setminus\tau}(G))\in\mathcal{L}\), which follows from \(G_{1}\cong^{\mathfrak{G}^{\tau}}_{\mathcal{L}}G_{2}\). Finally, we relate recognizability of a set of graphs of empty sort in the graph algebra \(\mathfrak{G}\) and any of its subalgebras \(\mathfrak{G}^{\tau}\): **Theorem 11**.: _Let \(\mathcal{L}\subseteq\mathcal{G}^{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{ \mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrakmathfrakmathfrak{\mathfrak{\mathfrak{\mathfrak{\mathfrak{}}}}}}}}}}}}}\) be a set of graphs of sort \(\emptyset\). Then, \(\mathcal{L}\) is recognizable in the graph algebra \(\mathfrak{G}\) iff \(\mathcal{L}\) is recognizable in the algebra \(\mathfrak{G}^{\tau}\), for each \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\)._ Proof.: "\(\Rightarrow\)" Immediate, by Lemma 2. "\(\Leftarrow\)" We observe that \(\cong^{\mathfrak{G}}_{\mathcal{L}}=\bigcup_{\tau\subseteq_{\mathit{fin}} \mathbb{S}}\cong^{\mathfrak{G}^{\tau}}_{\mathcal{L}}\) by Lemma 13. 
Moreover, \(\cong^{\mathfrak{G}}_{\mathcal{L}}\) is locally finite because, for each sort \(\tau\subseteq_{\mathit{fin}}\mathbb{S}\), the equivalence classes of \(\cong^{\mathfrak{G}}_{\mathcal{L}}\) of sort \(\tau\) are the same as the equivalence classes of \(\cong^{\mathfrak{G}^{\tau}}_{\mathcal{L}}\) and, moreover, there are finitely many such classes, by the assumption that \(\mathcal{L}\) is recognizable in \(\mathfrak{G}^{\tau}\). ## 7 Conclusions We give a five-point characterization of \(\mathsf{CMS}\)-definable context-free sets of graphs, by showing their equivalence with bounded treewidth recognizable sets (using either locally finite or finite algebras), parsable sets (where the parse trees can be recovered from the graph by an \(\mathsf{MS}\)-definable transduction) and images of recognizable unranked sets of trees under \(\mathsf{MS}\)-definable transductions whose inverses are \(\mathsf{MS}\)-definable as well. We conclude our study with a discussion on recognizability and a proof that locally finite recognizer algebras are limits of infinite sequences of finite recognizer algebras.
2308.01456
Decomposing a signed graph into rooted circuits
We prove a precise min-max theorem for the following problem. Let $G$ be an Eulerian graph with a specified set of edges $S \subseteq E(G)$, and let $b$ be a vertex of $G$. Then what is the maximum integer $k$ so that the edge-set of $G$ can be partitioned into $k$ non-zero $b$-trails? That is, each trail must begin and end at $b$ and contain an odd number of edges from $S$. This theorem is motivated by a connection to vertex-minors and yields two conjectures of M\'{a}\v{c}ajov\'{a} and \v{S}koviera as corollaries.
Rose McCarty
2023-08-02T22:12:36Z
http://arxiv.org/abs/2308.01456v1
# Decomposing a signed graph into rooted circuits ###### Abstract. We prove a precise min-max theorem for the following problem. Let \(G\) be an Eulerian graph with a specified set of edges \(S\subseteq E(G)\), and let \(b\) be a vertex of \(G\). Then what is the maximum integer \(k\) so that the edge-set of \(G\) can be partitioned into \(k\) non-zero \(b\)-trails? That is, each trail must begin and end at \(b\) and contain an odd number of edges from \(S\). This theorem is motivated by a connection to vertex-minors and yields two conjectures of Máčajová and Škoviera as corollaries. Key words and phrases: Eulerian graphs, signed graphs, decompositions, vertex-minors. 2020 Mathematics Subject Classification: 05C70. Rose McCarty is supported by the National Science Foundation under Grant No. DMS-2202961. ## 1. Introduction We prove a precise min-max theorem (stated as Theorem 4) for the following problem. We consider finite graphs that are allowed to have loops and multiple edges. Informally, a _signed graph_ is a graph whose edges are labelled by the two element group \(\mathbb{Z}_{2}\). (It is more standard to specify a signed graph by its set of edges with label \(1\), but this other formulation is more convenient for us.) A _circuit_ is a closed trail, that is, a walk which begins and ends at the same vertex and has no repeated edges (though it may visit vertices multiple times). A circuit _hits_ a vertex \(b\) if it has an edge incident to \(b\), and is _non-zero_ if the sum of the labels of its edges is \(1\) (instead of \(0\)). A _circuit-decomposition_ is a collection of circuits so that each edge of the graph is used by exactly one circuit in the collection. **Problem 1**.: _Given a signed graph with a vertex \(b\), what is the maximum size of a circuit-decomposition where each circuit is non-zero and hits \(b\)?_ If there is no such circuit-decomposition (for instance if the graph is not Eulerian), then we consider the maximum to be \(0\). A major motivation for Problem 1 is a connection with vertex-minors due to Bouchet [3] and Kotzig [12]. Roughly, the _vertex-minors_ of a simple graph \(G\) are the graphs that can be obtained from \(G\) by performing "local complementations at vertices" (that is, by replacing the induced subgraph on the neighborhood of a vertex by its complement) and by deleting vertices. Vertex-minors were discovered by Bouchet [1, 2] through his work on isotropic systems and have since found many applications. In particular, they have been used to characterize circle graphs [3] and classes of bounded rank-width [8] and rank-depth [13]. Oum [17, Question 6] conjectures that _every_ graph class which is closed under vertex-minors and isomorphism can be characterized by finitely many obstructions. This is true for classes of circle graphs (using well-quasi-ordering for immersion minors [20] and the work of Kotzig [12]) and for classes of bounded rank-width [16]. Motivated by Oum's conjecture, Geelen conjectures that every proper vertex-minor-closed class has a very simple structure; this conjecture is analogous to the Graph Minors Structure Theorem of Robertson and Seymour [19], but for vertex-minors instead of minors. The conjectured structure is based on the known cases mentioned above; see [15] for a formal statement. We believe that our main theorem, Theorem 4, will be useful in solving these conjectures of Oum and Geelen.
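To make local complementation concrete (a small example of ours): let \(G\) be the path \(a\)–\(b\)–\(c\). Locally complementing at \(b\) replaces the induced subgraph on \(N(b)=\{a,c\}\) by its complement, adding the edge \(ac\) and turning the path into a triangle; since the edges incident to \(b\) are unchanged, locally complementing at \(b\) again removes \(ac\) and recovers the path, so the operation is an involution. In particular, the triangle is a vertex-minor of the path on three vertices.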
Unfortunately, we also believe that we will need a more general theorem about decomposing graphs whose edges are labelled by the group \(\mathbb{Z}_{2}^{k}\) (that is, the unique abelian group with \(k\) generators, all of order \(2\)). In joint work with Jim Geelen and Paul Wollan, we believe that we can prove such a min-max theorem, even for arbitrary groups. That paper is in preparation, but we present the case of signed graphs separately for several reasons. First of all, the other proof (and even the other theorem statement) are considerably more technical for general groups. Secondly, the proof we present here for signed graphs relies on finding a matroid underlying Problem 1. While we believe that there is also a matroid underlying the group-labelled version of Problem 1, we do not know how to use that matroid to prove a min-max theorem for any other group. It is tempting to hope that this connection might eventually lead to a common generalization with the theorem of Chudnovsky, Geelen, Gerards, Goddyn, Lohman, and Seymour [4]; they used a similar matroid-based approach to prove a min-max theorem for vertex-disjoint non-zero "rooted" paths in group-labelled graphs. Their proof is in turn based on a short proof of the Tutte-Berge Formula using the matching matroid (see [4]). Moreover, the case of signed graphs already lets us prove two conjectures of Macajova and Skoviera [14] as corollaries. The first of these corollaries is particularly interesting because it does not have a fixed "root" vertex. **Corollary 2**.: _For any positive integer \(\ell\) and any connected \(2\ell\)-regular graph with an odd number of vertices, there exists a circuit-decomposition of size \(\ell\) where all circuits have an odd number of edges and begin and end at the same vertex._ We prove Corollary 2 by reducing it to (a very slight strengthening of) the other conjecture of Macajova and Skoviera from [14]. That other corollary does have a fixed root vertex and follows directly from our min-max theorem. Finally, we use the min-max theorem to relate Problem 1 to well-known "packing problems" for signed graphs. These problems ask for a collection of edge-disjoint circuits instead of a circuit-decomposition. Such problems have been particularly well-studied in relation to the Erdos-Posa Property; see [6, 10, 11, 18]. Furthermore, Churchley [5, Lemma 3.5] observed that a min-max theorem for the packing version of Problem 1 follows from the theorem in [4] mentioned above. The following corollary of Theorem 4 shows that "packing" and "decomposing" are related when the graph has an Eulerian circuit and a little edge-connectivity. **Corollary 3**.: _For any signed \(4\)-edge-connected Eulerian graph and any vertex \(b\), if there is a collection of \(\ell\) edge-disjoint non-zero circuits which hit \(b\), then there is a circuit decomposition of size \(\lceil\ell/2\rceil\) where each circuit is non-zero and hits \(b\)._ The bound is best possible, and \(4\)-edge-connectivity is necessary. This paper is adapted from Chapter 4 of the author's PhD thesis [15]. In Section 2 we give some important definitions and state the min-max theorem (Theorem 4). In Section 3 we define the matroid and prove that it is, in fact, a matroid. Finally, in Section 4 we complete the proof of Theorem 4, and in Section 5 we prove its corollaries. ## 2. The min-max theorem In this section we give some preliminary definitions, state the min-max theorem (Theorem 4), and outline its proof. 
### Preliminaries We use standard graph-theoretic notation; see Diestel [7]. For a graph \(G\) with a set of vertices \(X\), we write \(E(X)\) (respectively \(\delta(X)\)) for the set of edges of \(G\) with both ends (respectively, exactly one end) in \(X\). We write \(G-X\) for the induced subgraph of \(G\) on vertex-set \(V(G)-X\). If \(v\) is a vertex of \(G\), then we write \(G-v\) for \(G-\{v\}\) and \(\deg(v)\) for the degree of \(v\). We think of graphs as having half-edges; this formulation is used to resolve technical issues with loops. So an _edge_ is an unordered pair of half-edges and an _arc_ is an ordered pair of half-edges. (It is convenient to use arcs so that trails have a defined "beginning" and "end".) Thus an edge \(\{h_{1},h_{2}\}\) has two corresponding arcs, \((h_{1},h_{2})\) and \((h_{2},h_{1})\). The _tail_ (respectively _head_) of an arc \((h_{1},h_{2})\) is the vertex that is incident to \(h_{1}\) (respectively \(h_{2}\)). A _trail_ is then a sequence of arcs so that the corresponding edges are all distinct and the head of each arc, other than the last one, is the tail of the next. The _tail_ of a trail is the tail of its first arc, and the _head_ of a trail is the head of its last arc. A _subtrail_ of a trail \(T\) is any trail which can be obtained from \(T\) by deleting zero or more arcs at its beginning and end. A _circuit_ is a trail which has the same head and tail. If \(C\) is a circuit whose sequence of arcs is \(a_{1},\ldots,a_{t}\), then we say that any circuit of the form \(a_{i},a_{i+1},\ldots,a_{t},a_{1},a_{2},\ldots,a_{i-1}\) is _obtained from \(C\) by cyclically re-ordering its arcs_. Thus a circuit _hits_ a vertex \(v\) if it can be cyclically re-ordered so as to have \(v\) as its tail and head. A _circuit-decomposition_ is a collection of circuits so that each edge of the graph is used by exactly one circuit in the collection. A graph is _Eulerian_ if it is connected and every vertex has even degree (or, equivalently, if it has an Eulerian circuit). A _signed graph_ is a tuple \((G,\gamma)\) so that \(G\) is a graph and \(\gamma\) is a function from the edge-set of \(G\) to the \(2\)-element group \(\mathbb{Z}_{2}\). The function \(\gamma\) is called a _signature of \(G\)_. (This formulation is non-standard; \(\gamma\) is typically specified by the set of edges of \(G\) which are sent to \(1\). However we find this functional formulation more convenient for our purposes.) Given a signed graph \((G,\gamma)\), the _weight_ of an edge \(e\) is the corresponding group element \(\gamma(e)\). The _weight_ of a trail \(T\) is the sum (in \(\mathbb{Z}_{2}\)) of the weights of the edges of \(T\); it is denoted by \(\gamma(T)\). An edge or a trail is called _zero_ or _non-zero_ depending on its weight. Signatures are only used to specify which circuits of a graph are zero/non-zero; so there is an equivalence relation on signatures as follows. First, _shifting at_ a vertex means to add \(1\) to the weight of each incident non-loop edge (see Figure 1). A _shifting_ of a signature \(\gamma\) is any signature that can be obtained from \(\gamma\) by performing a sequence of shiftings at vertices. Equivalently, a shifting is obtained from \(\gamma\) by adding \(1\) to the weight of each edge in a cut. Thus shifting is an equivalence relation that does not change the weight of any circuit. (Harary [9] also proved a converse; if each circuit of a graph has the same weight according to two signatures, then the signatures are shiftings of each other.) 
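To make the half-edge formalism and the shifting operation concrete, here is a minimal Python sketch (our own illustration; the class and method names are invented, not the paper's). It stores each edge as a pair of half-edges, shifts at a vertex by toggling the weights of the incident non-loop edges, and checks on a triangle that shifting never changes the weight of a circuit.

```python
from itertools import count

class SignedGraph:
    """A multigraph with Z_2 edge weights, stored via half-edges."""

    def __init__(self):
        self._ids = count()   # fresh half-edge ids
        self.vertex_of = {}   # half-edge -> incident vertex
        self.edges = {}       # edge id -> (half-edge, half-edge)
        self.weight = {}      # edge id -> 0 or 1

    def add_edge(self, u, v, w):
        h1, h2 = next(self._ids), next(self._ids)
        self.vertex_of[h1], self.vertex_of[h2] = u, v
        e = len(self.edges)
        self.edges[e] = (h1, h2)
        self.weight[e] = w % 2
        return e

    def shift_at(self, v):
        """Add 1 (mod 2) to the weight of every non-loop edge at v."""
        for e, (h1, h2) in self.edges.items():
            ends = (self.vertex_of[h1], self.vertex_of[h2])
            if ends.count(v) == 1:  # loops at v are left unchanged
                self.weight[e] ^= 1

    def circuit_weight(self, circuit):
        """Weight of a circuit, given as a list of edge ids."""
        return sum(self.weight[e] for e in circuit) % 2

# A triangle with exactly one non-zero edge: shifting at a vertex
# changes individual edge weights but never the weight of the circuit.
G = SignedGraph()
c = [G.add_edge(0, 1, 1), G.add_edge(1, 2, 0), G.add_edge(2, 0, 0)]
G.shift_at(1)
assert G.circuit_weight(c) == 1
```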
Throughout the paper we are interested in "rooted Eulerian signed" graphs; so we call an _RES-graph_ a tuple \((G,\gamma,b)\) so that \(G\) is an Eulerian graph, \(\gamma\) is a signature of \(G\), and \(b\) is a vertex of \(G\). We call \(b\) the _root_ of \((G,\gamma,b)\). We write \(\tilde{\nu}(G,\gamma,b)\) for the answer to Problem 1; that is, \(\tilde{\nu}(G,\gamma,b)\) is the maximum size of a circuit-decomposition where each circuit is non-zero and hits \(b\). We call \(\tilde{\nu}(G,\gamma,b)\) the _flooding number_ of \((G,\gamma,b)\), and consider it to be zero if no such circuit-decomposition exists. ### Theorem statement Let us consider why an RES-graph \((G,\gamma,b)\) might have small flooding number. One reason is that, after shifting, there is a small edge-cut so that the side containing \(b\) has few non-zero edges. Formally, for a set of edges \(F\) and a shifting \(\gamma^{\prime}\) of \(\gamma\), we write \(\gamma^{\prime}(F)\) for the number of non-zero edges in \(F\) according to \(\gamma^{\prime}\). Figure 1. Shifting a signature at a vertex \(v\) and then a vertex \(u\). Throughout the paper, non-zero edges are depicted in bold red. Using this notation we can state the following upper bound: \[\tilde{\nu}(G,\gamma,b)\leq\min_{\gamma^{\prime},X}\left(\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|\right), \tag{1}\] where the minimum is taken over all shiftings \(\gamma^{\prime}\) of \(\gamma\) and all sets of vertices \(X\) which contain \(b\). If we did not require the edge-sets of the circuits to partition the edge-set of \(G\), but just to be disjoint, then inequality (1) would be tight; this fact was observed by Churchley [5, Lemma 3.5] following from [4]. For the flooding number, however, inequality (1) is not tight. Intuitively, this is because parity matters; since we are interested in circuit-decompositions, the flooding number must have the same parity as \(\gamma(E(G))\). So in Figure 2, for instance, the flooding number must be odd. Therefore, while that example has \(\deg(b)/2=4\) edge-disjoint non-zero circuits which hit \(b\), its flooding number is just three. It turns out that "parity" is the only possible problem; the min-max theorem says that if we subtract one for each component of \(G-X\) where "the parity is wrong", then inequality (1) becomes tight. To state this formally, let \((G,\gamma,b)\) be an RES-graph, and let \(\gamma^{\prime}\) be a shifting of \(\gamma\). A set of vertices \(Y\) is \(\gamma^{\prime}\)_-odd_ if the parity of \(\gamma^{\prime}(E(Y)\cup\delta(Y))\) is different from the parity of \(|\delta(Y)|/2\). Then, for a set of vertices \(X\) which contains \(b\), we write \(\operatorname{odd}_{\gamma^{\prime}}(G-X)\) for the number of components of \(G-X\) whose vertex-set is \(\gamma^{\prime}\)-odd. Now we can state the theorem. **Theorem 4**.: _For any RES-graph \((G,\gamma,b)\),_ \[\tilde{\nu}(G,\gamma,b)=\min_{\gamma^{\prime},X}\left(\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|-\operatorname{odd}_{\gamma^{\prime}}(G-X)\right),\] _where the minimum is taken over all shiftings \(\gamma^{\prime}\) of \(\gamma\) and all sets of vertices \(X\) which contain \(b\)._ We go ahead and prove the easy direction of Theorem 4 now: that the right-hand side of the equation is an upper bound for the flooding number. Figure 2. An RES-graph with four edge-disjoint non-zero circuits which hit \(b\), but with flooding number three. 
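Because the minimum in Theorem 4 ranges over finitely many shiftings and vertex sets, it can be evaluated by brute force on small examples. The Python sketch below is our own (the function name and the edge-list encoding are invented); it uses the fact, noted above, that a shifting is determined by a set \(A\) of vertices shifted at and toggles exactly the cut \(\delta(A)\), and it evaluates \(\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|-\operatorname{odd}_{\gamma^{\prime}}(G-X)\) over all choices. The input graph is assumed to be Eulerian.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def rhs_theorem4(V, E, gamma, b):
    """Brute-force the minimum in Theorem 4 for a small Eulerian graph.

    V: vertices; E: list of (u, v) edges (parallel edges and loops allowed);
    gamma: list of weights in {0, 1}, one per edge; b: the root.
    """
    best = None
    for A in subsets(V):  # shifting at the vertices of A toggles delta(A)
        A = set(A)
        g = [w ^ ((u in A) != (v in A)) for (u, v), w in zip(E, gamma)]
        for X0 in subsets([v for v in V if v != b]):
            X = set(X0) | {b}
            # Components of G - X via union-find on the outside vertices.
            parent = {v: v for v in V if v not in X}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, v in E:
                if u not in X and v not in X:
                    parent[find(u)] = find(v)
            odd = 0
            for r in {find(v) for v in parent}:
                Y = {v for v in parent if find(v) == r}
                touch = [i for i, (u, v) in enumerate(E) if u in Y or v in Y]
                cut = sum(1 for i in touch if (E[i][0] in Y) != (E[i][1] in Y))
                if sum(g[i] for i in touch) % 2 != (cut // 2) % 2:
                    odd += 1  # the component with vertex-set Y is gamma'-odd
            in_X = sum(g[i] for i, (u, v) in enumerate(E) if u in X and v in X)
            cut_X = sum(1 for u, v in E if (u in X) != (v in X))
            val = in_X + cut_X // 2 - odd
            best = val if best is None else min(best, val)
    return best

# Triangle with every edge non-zero: its unique flooding is one
# non-zero circuit, so the flooding number (and hence the minimum) is 1.
assert rhs_theorem4([0, 1, 2], [(0, 1), (1, 2), (2, 0)], [1, 1, 1], 0) == 1
```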
**Lemma 5**.: _For any RES-graph \((G,\gamma,b)\), shifting \(\gamma^{\prime}\) of \(\gamma\), and set of vertices \(X\) which contains \(b\),_ \[\tilde{\nu}(G,\gamma,b)\leq\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|-\mathrm{ odd}_{\gamma^{\prime}}(G-X). \tag{2}\] Proof.: Since shifting does not change the flooding number, we may assume that \(\gamma^{\prime}=\gamma\). Now let \(\mathcal{C}\) be a circuit-decomposition of size \(\tilde{\nu}(G,\gamma,b)\) so that each circuit in \(\mathcal{C}\) is non-zero and hits \(b\). We may assume that each circuit in \(\mathcal{C}\) has \(b\) as its tail and head by cyclically re-ordering it. We now "split up" the circuits in \(\mathcal{C}\) into a collection \(\mathcal{T}\) of edge-disjoint trails. First add a trail to \(\mathcal{T}\) for each edge in \(E(X)\), oriented the same way as in \(\mathcal{C}\). Then add each trail \(T\) which satisfies the following conditions to \(\mathcal{T}\): 1. \(T\) is a subtrail of a circuit in \(\mathcal{C}\), 2. the tail and head of \(T\) are in \(X\), and 3. the first and last edges of \(T\) are in \(\delta(X)\), and no other edges of \(T\) are. Observe that \(\mathcal{T}\) has size \(|E(X)|+|\delta(X)|/2\), and that the edge-sets of the trails in \(\mathcal{T}\) partition \(E(G)\). We now show that the number of non-zero trails in \(\mathcal{T}\) is at most the right-hand side of inequality (2); this will complete the proof of Lemma 5 since each circuit in \(\mathcal{C}\) contributes at least one non-zero trail to \(\mathcal{T}\). It is enough to show that, for each component of \(G-X\) whose vertex-set \(Y\) is \(\gamma\)-odd, there exists a trail in \(\mathcal{T}\) which has weight zero and hits a vertex in \(Y\). There are \(|\delta(Y)|/2\) trails in \(\mathcal{T}\) which hit a vertex in \(Y\). Moreover, the sum of their weights (in \(\mathbb{Z}_{2}\)) is equal to the parity of \(\gamma(E(Y)\cup\delta(Y))\). So at least one of these trails has weight zero since, by the definition of odd components, the parity of \(\gamma(E(Y)\cup\delta(Y))\) is different from the parity of \(|\delta(Y)|/2\). ### Proof outline We now outline the proof of Theorem 4. Let \((G,\gamma,b)\) be an RES-graph. A _certificate_ for \((G,\gamma,b)\) is a tuple \((X,\gamma^{\prime})\) so that \(\gamma^{\prime}\) is a shifting of \(\gamma\), \(X\) is a set of vertices that contains \(b\), and inequality (2) from Lemma 5 is tight. Thus, to prove Theorem 4, we just need to show that every RES-graph has a certificate. The key observation is that we can allow circuit-decompositions to contain zero circuits. Formally, a _flooding_ is a circuit-decomposition of size \(\deg(b)/2\) where each circuit has \(b\) as its tail and head. (This assumption is simply more convenient than saying that \(b\) is hit.) A flooding is _optimal_ if it contains the maximum number of non-zero circuits in any flooding of \((G,\gamma,b)\). This gives an alternate definition of the flooding number as follows. **Lemma 6**.: _For any RES-graph \((G,\gamma,b)\), the maximum number of non-zero circuits in a flooding is equal to \(\tilde{\nu}(G,\gamma,b)\)._ Proof.: First consider a circuit-decomposition \(\mathcal{C}\) of size \(\tilde{\nu}(G,\gamma,b)\) so that each circuit in \(\mathcal{C}\) is non-zero and hits \(b\). By "splitting up the circuits in \(\mathcal{C}\) at \(b\)", we can find a flooding that contains at least \(\tilde{\nu}(G,\gamma,b)\)-many non-zero circuits. (Each non-zero circuit yields an odd number of non-zero circuits after "splitting".) 
In the other direction, consider an optimal flooding \(\mathcal{C}\). All of the zero circuits in \(\mathcal{C}\) can be "combined" with a non-zero circuit in \(\mathcal{C}\) to obtain a circuit-decomposition where each circuit is non-zero and hits \(b\). (We may assume that \(\mathcal{C}\) contains a non-zero circuit since otherwise this direction holds trivially.) We work with this alternate definition of the flooding number from now on. Informally, our approach to Theorem 4 is to consider why the zero circuits in an optimal flooding cannot be "turned into" non-zero circuits. Every optimal flooding contains exactly \((\deg(b)/2-\tilde{\nu}(G,\gamma,b))\)-many zero circuits. In Section 3 we define a matroid of this rank. The bases of this matroid are obtained by selecting a "representative" for each zero circuit in an optimal flooding. Roughly, a representative is specified by 1) a "split" of the circuit into two subtrails, one with \(b\) as its tail and one with \(b\) as its head, and 2) the weight of those two subtrails (they must have the same weight since the circuit has weight zero). Then in Section 4 we prove Theorem 4 by reducing to the case that this matroid has rank 1. Thus \(\tilde{\nu}(G,\gamma,b)=\deg(b)/2-1\), and "parity" already shows that \((\{b\},\gamma)\) is a certificate. We discuss this reduction in a little more detail in the next section, after defining the matroid. For this approach, it is convenient to have some notation about how to "combine" and "split up" trails. If \(T_{1}\) and \(T_{2}\) are edge-disjoint trails so that the head of \(T_{1}\) is the tail of \(T_{2}\), then we can compose them into a new trail denoted \((T_{1},T_{2})\). If \(\gamma\) is a signature, then we denote the weight of \((T_{1},T_{2})\) by \(\gamma(T_{1},T_{2})\) for short. Likewise, we can reverse a trail \(T\) to obtain another trail denoted \(T^{-1}\). We also use this notation if \(f\) is an arc; so \(f^{-1}\) is the arc with the same edge, but in the reverse direction. As an example of this notation, we have \((T_{1},T_{2})^{-1}=(T_{2}^{-1},T_{1}^{-1})\). We use transitions to "split up" trails; a _transition_ is a set of two half-edges which are incident to the same vertex. If that vertex is \(v\), we say the transition is _at \(v\)_. The _transitions of a trail_\(T\) are the transitions \(\{h_{1},h_{2}\}\) so that \(T\) has two consecutive arcs of the form \((h_{1}^{\prime},h_{1})\) and \((h_{2},h_{2}^{\prime})\). So a trail with \(\ell\) arcs is fully determined by its first arc and its \(\ell-1\) transitions. The _transitions of a flooding_ are the transitions of its circuits. Finally, if \(R_{1},\ldots,R_{k}\) are distinct transitions of a trail \(T\), then there are unique non-empty trails \(T_{1},\ldots,T_{k+1}\) so that \(T=(T_{1},\ldots,T_{k+1})\) and none of \(R_{1},\ldots,R_{k}\) are transitions of any of \(T_{1},\ldots,T_{k+1}\). We say that \((T_{1},\ldots,T_{k+1})\) is the _split of \(T\) specified by \(R_{1},\ldots,R_{k}\)_. If \(\mathcal{T}\) is a collection of edge-disjoint trails and \(R_{1},\ldots,R_{k}\) are transitions of a single trail \(T\in\mathcal{T}\), then we also call \((T_{1},\ldots,T_{k+1})\) the _split of \(\mathcal{T}\) specified by \(R_{1},\ldots,R_{k}\)_. ## 3. The flooding matroid Let \((G,\gamma,b)\) be an RES-graph, and let \(C\) be a zero circuit which has \(b\) as its tail and head. 
A _representative for \(C\)_ is a tuple \((f,\alpha)\) so that \(f\) is an arc of \(C\) and \(\alpha\in\{0,1\}\) is the weight of the subtrail of \(C\) which is obtained by deleting all arcs after \(f\); see Figure 3. Figure 3. A circuit which is represented by \((f,0)\), where \(f\) is the third arc of the circuit. A _system of representatives_ for a flooding \(\mathcal{C}\) is a set \(B\) that consists of one representative for each zero circuit in \(\mathcal{C}\). We define the _flooding matroid_ \(M(G,\gamma,b)\) by its ground set and its bases. The ground set of \(M(G,\gamma,b)\) is the set of all tuples \((f,\alpha)\) so that \(f\) is an arc of \((G,\gamma,b)\) and \(\alpha\in\{0,1\}\). A set \(B\) is a basis of \(M(G,\gamma,b)\) if it is a system of representatives for an optimal flooding. (If the flooding number is equal to \(\deg(b)/2\), then we view the empty set as a system of representatives for an optimal flooding; this guarantees that \(M(G,\gamma,b)\) always has a basis.) Recall that, to prove Theorem 4, we will reduce to the case that \(M(G,\gamma,b)\) has rank \(1\). The key step is to show that if \((G,\gamma,b)\) is a counterexample to Theorem 4 which is, in a certain sense, "minimal", then for each arc \(f\) of \(G-b\), both \((f,0)\) and \((f,1)\) are non-loop elements of \(M(G,\gamma,b)\). Then we will use the transitivity of parallel pairs and the following key lemma. The proof of the lemma does not use the fact that \(M(G,\gamma,b)\) is a matroid, just the definition of its bases. **Lemma 7**.: _If \((G,\gamma,b)\) is an RES-graph and \(f_{0}\) and \(f_{1}\) are arcs with the same head, then there is no basis of \(M(G,\gamma,b)\) which contains both \((f_{0},0)\) and \((f_{1},1)\)._ Proof.: Suppose to the contrary that there is such a basis. Then there exists an optimal flooding \(\mathcal{C}\) that contains distinct zero circuits with \((f_{0},0)\) and \((f_{1},1)\) as their respective representatives. Thus there are trails \(T_{0},S_{0},T_{1},S_{1}\) so that \((T_{0},S_{0})\) and \((T_{1},S_{1})\) are distinct zero circuits in \(\mathcal{C}\), the trail \(T_{0}\) has weight \(0\), the trail \(T_{1}\) has weight \(1\), and \(T_{0}\) and \(T_{1}\) have the same head. We can obtain another flooding \(\mathcal{C}^{\prime}\) from \(\mathcal{C}\) by replacing \((T_{0},S_{0})\) and \((T_{1},S_{1})\) with the circuits \((T_{0},T_{1}^{-1})\) and \((S_{0}^{-1},S_{1})\). However this contradicts the optimality of \(\mathcal{C}\) as the two new circuits are both non-zero. Let \((G,\gamma,b)\) be an RES-graph. The rest of this section is dedicated to proving that \(M(G,\gamma,b)\) is a matroid. To do so, we will prove that the basis exchange axiom holds in Lemma 9. The proof reduces to the \(4\)-edge-connected case; \((G,\gamma,b)\) is \(4\)_-edge-connected_ if there is no set \(Y\subseteq V(G)-\{b\}\) so that \(|\delta(Y)|=2\). Then we use the following key lemma to find a transition which works for two different bases; a transition \(R\) _works for_ a basis \(B\) of \(M(G,\gamma,b)\) if there exists an optimal flooding which has \(R\) as a transition and \(B\) as a system of representatives. **Lemma 8**.: _For any \(4\)-edge-connected RES-graph \((G,\gamma,b)\), vertex \(v\neq b\), half-edge \(h\) incident to \(v\), and basis \(B\) of \(M(G,\gamma,b)\), more than half of the transitions at \(v\) which contain \(h\) work for \(B\)._ Proof.: Say that a transition is _valid_ if it is a transition at \(v\) that contains \(h\). 
So we are trying to show that more than half of the valid transitions work for \(B\). Fix an optimal flooding \(\mathcal{C}\) which has \(B\) as a system of representatives. There is a unique half-edge \(h^{\prime}\) so that \(\{h,h^{\prime}\}\) is a transition of \(\mathcal{C}\). So \(\{h,h^{\prime}\}\) works for \(B\). Furthermore, there are exactly \(\deg(v)-1\) valid transitions, and \(\deg(v)-1\) is odd. So it suffices to show that half of the other valid transitions also work for \(B\). We will do this by proving that for each transition \(\{r,r^{\prime}\}\neq\{h,h^{\prime}\}\) of \(\mathcal{C}\) at \(v\), either \(\{h,r\}\) or \(\{h,r^{\prime}\}\) works for \(B\). We break into two cases. _Case 1:_\(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\) are transitions of the same circuit in \(\mathcal{C}\). Let \((T_{1},L,T_{2})\) be the split of \(\mathcal{C}\) specified by \(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\); see Figure 4. Figure 4. The transitions \(\{r,r^{\prime}\}\) and \(\{h,h^{\prime}\}\) and the split \((T_{1},L,T_{2})\) from _Case 1_ of Lemma 8. Throughout the paper, transitions are depicted as thick dashed curves. We can obtain a new flooding from \(\mathcal{C}\) by replacing \((T_{1},L,T_{2})\) with either \((T_{1},L^{-1},T_{2})\) or its reversal \((T_{1},L^{-1},T_{2})^{-1}\). If \(B\) is a system of representatives for either of these floodings, then we are done; so we may assume otherwise. It follows that \((T_{1},L,T_{2})\) is a zero circuit, the arc of its representative is in \(L\), and \(\gamma(T_{1})\neq\gamma(T_{2})\). Next we use \(4\)-edge-connectivity to "find another place to put \(L\)"; this is the only time we will use \(4\)-edge-connectivity. Refer to Figure 5 for the following definitions. For this paragraph, consider only the transitions of the circuits in \(\mathcal{C}-\{(T_{1},L,T_{2})\}\cup\{(T_{1},T_{2})\}\). Let \(R\) be the transition which specifies the split \((T_{1},T_{2})\). Since \((G,\gamma,b)\) is \(4\)-edge-connected, there exists another transition \(R^{\prime}\neq R\) which is at a vertex \(u\) that is hit by \(L\). Let \(L^{\prime}\) be a circuit which begins and ends at \(u\) and is obtained by cyclically re-ordering \(L\). Let \((S_{1},S_{2})\) be the split specified by \(R^{\prime}\). (It is possible that \((S_{1},S_{2})=(T_{1},T_{2})\).) By "attaching \(L^{\prime}\) onto \((S_{1},S_{2})\)", we obtain a flooding \(\mathcal{C}^{\prime}\) so that \(R\) is a transition of \(\mathcal{C}^{\prime}\) and \((S_{1},L^{\prime},S_{2})\in\mathcal{C}^{\prime}\). Notice that when we removed \(L\), we gained the non-zero circuit \((T_{1},T_{2})\). So by the optimality of \(\mathcal{C}\), we must have lost a non-zero circuit when we added \(L^{\prime}\) to \((S_{1},S_{2})\). It follows that \(\mathcal{C}^{\prime}\) is optimal, \((S_{1},L^{\prime},S_{2})\) is a zero circuit, and \(B\) does not contain an element whose arc is in \((S_{1},S_{2})\). Thus, since \(L\) is a non-zero circuit that contains an arc of an element in \(B\), the set \(B\) must contain a representative for either \((S_{1},L^{\prime},S_{2})\) or \((S_{2}^{-1},L^{\prime},S_{1}^{-1})\). This completes the first case; we just replace \((S_{1},L^{\prime},S_{2})\) by \((S_{2}^{-1},L^{\prime},S_{1}^{-1})\) in \(\mathcal{C}^{\prime}\) if necessary. _Case 2:_\(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\) are transitions of different circuits in \(\mathcal{C}\). 
Let \((T_{1},T_{2})\) and \((S_{1},S_{2})\) be the splits of \(\mathcal{C}\) specified by \(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\), respectively. There are three ways to partition \(\{T_{1},T_{2},S_{1},S_{2}\}\) into two parts of size two, as depicted in Figure 6. Each of these three ways yields a unique flooding of \((G,\gamma,b)\), up to reversing the two circuits that contain any of \(h,h^{\prime},r,r^{\prime}\). We call these two circuits the _new circuits_ of the flooding. One of these three floodings is our original flooding \(\mathcal{C}\); we are interested in the other two floodings, which we denote by \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Note that reversing the new circuits of \(\mathcal{C}_{1}\) (respectively \(\mathcal{C}_{2}\)) does not affect whether or not \(\mathcal{C}_{1}\) (respectively \(\mathcal{C}_{2}\)) is optimal. However, it might affect whether or not \(B\) is a system of representatives. There is an obvious choice to make though; if \(L\) is a new circuit of \(\mathcal{C}_{1}\) (respectively \(\mathcal{C}_{2}\)) so that more elements of \(B\) have arcs in \(L^{-1}\) than in \(L\), then replace \(L\) with \(L^{-1}\). Figure 5. Finding another place to put \(L\) in _Case 1_ of Lemma 8. Figure 6. The three ways to partition \(\{T_{1},T_{2},S_{1},S_{2}\}\) into two parts of size two, from _Case 2_ of Lemma 8. We claim that, with this choice, there exists \(i\in\{1,2\}\) so that \(\mathcal{C}_{i}\) is an optimal flooding and \(B\) is a system of representatives for \(\mathcal{C}_{i}\). This will complete the proof of Lemma 8. We now break into cases. _Case 2.1:_ Both \((T_{1},T_{2})\) and \((S_{1},S_{2})\) are non-zero circuits. Then, as a multi-set, \(\{\gamma(T_{1}),\gamma(T_{2}),\gamma(S_{1}),\gamma(S_{2})\}=\{0,0,1,1\}\). So for some \(i\in\{1,2\}\), both of the new circuits of \(\mathcal{C}_{i}\) are non-zero. Then \(\mathcal{C}_{i}\) is an optimal flooding and \(B\) is a system of representatives for \(\mathcal{C}_{i}\). _Case 2.2:_ Exactly one of \((T_{1},T_{2})\), \((S_{1},S_{2})\) is a non-zero circuit. Then, as a multi-set, \(\{\gamma(T_{1}),\gamma(T_{2}),\gamma(S_{1}),\gamma(S_{2})\}\) is either \(\{0,0,0,1\}\) or \(\{0,1,1,1\}\). So in fact both \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are optimal. Let \(f\) be the arc of the element of \(B\) that represents whichever of \((T_{1},T_{2})\), \((S_{1},S_{2})\) is a zero circuit. Then \(f\) is in a zero circuit in either \(\mathcal{C}_{1}\) or \(\mathcal{C}_{2}\), and \(B\) is a system of representatives for that flooding. _Case 2.3:_ Both \((T_{1},T_{2})\) and \((S_{1},S_{2})\) are zero circuits. As \(\mathcal{C}\) is optimal, it follows that, as a multi-set, \(\{\gamma(T_{1}),\gamma(T_{2}),\gamma(S_{1}),\gamma(S_{2})\}\) is either \(\{0,0,0,0\}\) or \(\{1,1,1,1\}\). So both \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are optimal. Now let \(f_{1}\) (respectively \(f_{2}\)) be the arc of the element in \(B\) that represents \((T_{1},T_{2})\) (respectively \((S_{1},S_{2})\)). Then \(f_{1}\) and \(f_{2}\) are in distinct circuits in either \(\mathcal{C}_{1}\) or \(\mathcal{C}_{2}\), and \(B\) is a system of representatives for that flooding. This completes all possible cases and therefore the proof of Lemma 8. Now we are ready to prove that the basis exchange axiom holds, which is the final lemma of this section. As the flooding matroid always has a basis (possibly the empty set), this lemma proves that the flooding matroid is in fact a matroid. 
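The basis exchange axiom that Lemma 9 verifies can be illustrated generically. The following small Python checker (our own illustration, independent of the flooding-matroid machinery) tests the axiom on an explicit family of equal-size sets.

```python
def satisfies_basis_exchange(bases):
    """Check: for all B1, B2 in the family and b1 in B1 - B2, there is
    some b2 in B2 - B1 with (B1 - {b1}) | {b2} again in the family."""
    family = {frozenset(B) for B in bases}
    return all(
        any((B1 - {b1}) | {b2} in family for b2 in B2 - B1)
        for B1 in family for B2 in family for b1 in B1 - B2
    )

# The three singletons are the bases of a rank-1 matroid (compare the
# rank-1 case used in Section 4); the second family is not the basis
# family of any matroid, and the checker detects the failure.
assert satisfies_basis_exchange([{"a"}, {"b"}, {"c"}])
assert not satisfies_basis_exchange([{"a", "b"}, {"c", "d"}])
```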
**Lemma 9**.: _For any RES-graph \((G,\gamma,b)\), bases \(B_{1}\) and \(B_{2}\) of \(M(G,\gamma,b)\), and \(b_{1}\in B_{1}-B_{2}\), there exists \(b_{2}\in B_{2}-B_{1}\) so that \((B_{1}-\{b_{1}\})\cup\{b_{2}\}\) is a basis of \(M(G,\gamma,b)\)._ Proof.: Going for a contradiction, suppose that the lemma is false. Then choose a counterexample so that \((G,\gamma,b)\) has as few edges as possible, and, subject to that, as many vertices as possible. This assumption may seem strange now but will prove to be convenient later. Such a choice is possible since an Eulerian graph with \(m\) edges has at most \(m\) vertices. Our aim is to apply Lemma 8. So we need a vertex other than \(b\), and we need \((G,\gamma,b)\) to be \(4\)-edge-connected. We take care of these things now. **Claim 9.1**.: _There exists a vertex other than \(b\)._ Proof.: If not, then every zero circuit in a flooding consists of a single loop \(f\) and must be represented by \((f,0)\). We can reverse such a circuit to obtain a zero circuit represented by \((f^{-1},0)\). Then the element \(b_{1}\) is of the form \((f,0)\), and we can take \(b_{2}\) to be the element \((f^{-1},0)\in B_{2}-B_{1}\). The next claim is actually the hardest part of the proof. **Claim 9.2**.: _The graph \((G,\gamma,b)\) is \(4\)-edge-connected._ Proof.: Otherwise, there exists a set \(Y\subseteq V(G)-\{b\}\) with \(|\delta(Y)|=2\). Let \((\hat{G},\hat{\gamma},b)\) be the RES-graph that is obtained from \((G,\gamma,b)\) by deleting all vertices in \(Y\) and then adding a new edge \(\hat{e}\) whose ends are the neighbours of \(Y\) (possibly \(\hat{e}\) is a loop) and whose weight is the sum of the weights of the edges in \(E(Y)\cup\delta(Y)\). Note that \(\tilde{\nu}(\hat{G},\hat{\gamma},b)=\tilde{\nu}(G,\gamma,b)\). The proof of the claim is fairly straightforward from here; we apply Lemma 9 to the graph \((\hat{G},\hat{\gamma},b)\), which has fewer edges than \((G,\gamma,b)\). It is somewhat technical to state this precisely though. We begin by giving some definitions related to \(B_{1}\) and \(B_{2}\). So let \(i\in\{1,2\}\). Fix an optimal flooding \(\mathcal{C}_{i}\) of \((G,\gamma,b)\) so that \(B_{i}\) is a system of representatives for \(\mathcal{C}_{i}\). Let \(T_{i}\) be the unique subtrail of a circuit in \(\mathcal{C}_{i}\) so that the edge-set of \(T_{i}\) is \(E(Y)\cup\delta(Y)\). Then there exists an optimal flooding \(\hat{\mathcal{C}}_{i}\) of \((\hat{G},\hat{\gamma},b)\) which is obtained from \(\mathcal{C}_{i}\) by replacing \(T_{i}\) with an arc \(\hat{f}_{i}\) whose edge is \(\hat{e}\). Now, if no element of \(B_{i}\) has an arc in \(T_{i}\), then \(B_{i}\) is also a system of representatives for \(\hat{\mathcal{C}}_{i}\) and we set \(\hat{B}_{i}\coloneqq B_{i}\). Otherwise, let \((f_{i},\alpha_{i})\in B_{i}\) be the element whose arc is in \(T_{i}\); then there exists \(\hat{\alpha}_{i}\in\{0,1\}\) so that \((B_{i}-\{(f_{i},\alpha_{i})\})\cup\{(\hat{f}_{i},\hat{\alpha}_{i})\}\) is a system of representatives for \(\hat{\mathcal{C}}_{i}\), and we let \(\hat{B}_{i}\) be this set. This completes the definitions. Next we apply Lemma 9 to \((\hat{G},\hat{\gamma},b)\), which has fewer edges than \((G,\gamma,b)\). So let \(\hat{b}_{1}\) be the element of \(\hat{B}_{1}\) that corresponds to \(b_{1}\). It is possible that \(\hat{b}_{1}\) is in \(\hat{B}_{2}\). In this case, \(\hat{b}_{1}=(\hat{f}_{1},\hat{\alpha}_{1})=(\hat{f}_{2},\hat{\alpha}_{2})\), and \((B_{1}-\{b_{1}\})\cup\{(f_{2},\alpha_{2})\}\) is a basis of \(M(G,\gamma,b)\). 
To see this, note that it is a system of representatives for the flooding that is obtained from \(\hat{\mathcal{C}}_{1}\) by replacing the arc \(\hat{f}_{1}\) with the trail \(T_{2}\). So we may assume that \(\hat{b}_{1}\in\hat{B}_{1}-\hat{B}_{2}\). Then there exists \(\hat{b}_{2}\in\hat{B}_{2}-\hat{B}_{1}\) so that \((\hat{B}_{1}-\{\hat{b}_{1}\})\cup\{\hat{b}_{2}\}\) is a basis of \(M(\hat{G},\hat{\gamma},b)\). If \(\hat{b}_{2}\) is in \(B_{2}\) as well, then \((B_{1}-\{b_{1}\})\cup\{\hat{b}_{2}\}\) is a basis of \(M(G,\gamma,b)\); we replace \(\hat{f}_{1}\) or its reversal by \(T_{1}\) or its reversal. Otherwise, \(\hat{b}_{2}=(\hat{f}_{2},\hat{\alpha}_{2})\), and instead \((B_{1}-\{b_{1}\})\cup\{(f_{2},\alpha_{2})\}\) is a basis of \(M(G,\gamma,b)\); we replace \(\hat{f}_{2}\) by \(T_{2}\). We note that \((f_{2},\alpha_{2})\) is not in \(B_{1}\) simply because \((B_{1}-\{b_{1}\})\cup\{(f_{2},\alpha_{2})\}\) corresponds to an optimal flooding and therefore has the same size as \(B_{1}\). This completes the proof of Claim 9.2. Now, fix a vertex \(v\neq b\) and a half-edge \(h\) incident to \(v\). By Lemma 8 applied to \(B_{1}\) and \(B_{2}\), there exists a transition \(\{h,h^{\prime}\}\) at \(v\) so that there are optimal floodings \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) which both have \(\{h,h^{\prime}\}\) as a transition, and which have \(B_{1}\) and \(B_{2}\) (respectively) as a system of representatives. Let \((\hat{G},\hat{\gamma},b)\) be the RES-graph that is obtained from \((G,\gamma,b)\) by adding a new vertex \(v^{\prime}\) and making the half-edges \(h\) and \(h^{\prime}\) incident to \(v^{\prime}\) instead of \(v\). Then \(B_{1}\) and \(B_{2}\) are both bases of \(M(\hat{G},\hat{\gamma},b)\). Moreover, Lemma 9 holds for \((\hat{G},\hat{\gamma},b)\) since it has the same number of edges as \((G,\gamma,b)\) but more vertices. It follows that the lemma holds for \((G,\gamma,b)\) as well. This is a contradiction and completes the proof of Lemma 9. ## 4. Completing the proof As discussed earlier, we will prove Theorem 4 by reducing to the case that the flooding matroid has rank \(1\). The key step is to show that in a "minimum counterexample", every arc which is not incident to the root is in two non-loop elements of the flooding matroid. We accomplish this step in the next two lemmas. First we need to define the "reduction move"; refer to Figure 7 for an example. Let \((G,\gamma,b)\) be an RES-graph, and let \(e\) be an edge of \(G-b\). For convenience, let \(h\) and \(r\) be the half-edges so that \(e=\{h,r\}\). Then an \(e\)_-reduction of \((G,\gamma,b)\)_ is any RES-graph \((\hat{G},\hat{\gamma},b)\) that has a transition \(\{h^{\prime},r^{\prime}\}\) at \(b\) so that 1. \(\hat{G}\) is obtained from \(G\) by deleting \(e\) and adding the edges \(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\), and 2. the weights of \(\{h,h^{\prime}\}\) and \(\{r,r^{\prime}\}\) (according to \(\hat{\gamma}\)) sum to \(\gamma(e)\). Note that \((\hat{G},\hat{\gamma},b)\) is not unique since there are two ways to weight its new edges. Furthermore, \(\hat{G}-b\) has fewer edges than \(G-b\); this is the sense in which \(\hat{G}\) is "smaller" than \(G\). Sometimes we do not want to specify the edge \(e\); so we say that a _reduction of \((G,\gamma,b)\)_ is any RES-graph which is an \(e\)-reduction of \((G,\gamma,b)\) for some edge \(e\) of \(G-b\). 
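In an edge-list encoding that forgets the half-edge structure (and hence the choice of transition at \(b\)), an \(e\)-reduction simply replaces the edge \(e=uv\) of \(G-b\) by two edges \(ub\) and \(vb\) whose weights sum to \(\gamma(e)\). Here is a hedged Python sketch of this move (the names are our own); the two returned weightings are exactly the choice that is exploited in the proof of Lemma 11 later on.

```python
def e_reductions(edges, gamma, b, i):
    """The two e-reductions of an RES-graph, edge-list style.

    Edge i = (u, v) of G - b is deleted and replaced by new edges (u, b)
    and (v, b) whose Z_2 weights sum to gamma[i]; the two possible ways
    to distribute that weight give the two reductions.
    """
    u, v = edges[i]
    assert b not in (u, v), "e must be an edge of G - b"
    rest = [e for j, e in enumerate(edges) if j != i]
    g_rest = [w for j, w in enumerate(gamma) if j != i]
    return [
        (rest + [(u, b), (v, b)], g_rest + [w1, (gamma[i] - w1) % 2])
        for w1 in (0, 1)
    ]

# Reducing the only G - b edge of a triangle rooted at b = 0:
edges, gamma = [(1, 2), (0, 1), (0, 2)], [1, 0, 0]
(E0, g0), (E1, g1) = e_reductions(edges, gamma, 0, 0)
assert g0[-2:] == [0, 1] and g1[-2:] == [1, 0]  # the two weightings
```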
The first lemma shows that if there is a reduction of \((G,\gamma,b)\) whose flooding number is not too much larger, then we can "lift" its certificate to \((G,\gamma,b)\). **Lemma 10**.: _If \((G,\gamma,b)\) is an RES-graph which has a reduction \((\hat{G},\hat{\gamma},b)\) so that \((\hat{G},\hat{\gamma},b)\) has flooding number at most \(\tilde{\nu}(G,\gamma,b)+1\) and \((\hat{G},\hat{\gamma},b)\) has a certificate, then \((G,\gamma,b)\) also has a certificate._ Proof.: Let \(e\) be the edge of \(G-b\) so that \((\hat{G},\hat{\gamma},b)\) is an \(e\)-reduction of \((G,\gamma,b)\). Observe that \(\gamma(E(G))\) has the same parity as \(\hat{\gamma}(E(\hat{G}))\). So the flooding numbers also have the same parity and in fact \(\tilde{\nu}(\hat{G},\hat{\gamma},b)\leq\tilde{\nu}(G,\gamma,b)\). Figure 7. An RES-graph (left), and an \(e\)-reduction of it (right). Let \((X,\hat{\gamma}^{\prime})\) be a certificate of \((\hat{G},\hat{\gamma},b)\). By performing the same sequence of shiftings in \((G,\gamma,b)\), we can find a shifting \(\gamma^{\prime}\) of \(\gamma\) so that \((\hat{G},\hat{\gamma}^{\prime},b)\) is an \(e\)-reduction of \((G,\gamma^{\prime},b)\). We now show that \((X,\gamma^{\prime})\) is a certificate for \((G,\gamma,b)\) by breaking into cases based on where the ends of \(e\) "lie". _Case 1:_ Both ends of \(e\) are in \(X\). If \(\gamma^{\prime}(e)=0\) then certainly \((X,\gamma^{\prime})\) is a certificate. Otherwise, one of the new edges of \(\hat{G}\) is non-zero according to \(\hat{\gamma}^{\prime}\), and again \((X,\gamma^{\prime})\) is a certificate. _Case 2:_ Exactly one end of \(e\) is in \(X\). Then the other end of \(e\) is in a component of \(\hat{G}-X\); write \(Y\) for the vertex-set of that component. The only way \((X,\gamma^{\prime})\) might not be a certificate is if \(Y\) is odd in \((\hat{G},\hat{\gamma}^{\prime},b)\) but not in \((G,\gamma^{\prime},b)\). However, if this occurs, then the new edge of \((\hat{G},\hat{\gamma}^{\prime},b)\) which has both ends in \(X\) is non-zero. Therefore, it contributed to \(\hat{\gamma}^{\prime}(E(X))\) but not to \(\gamma^{\prime}(E(X))\); so again \((X,\gamma^{\prime})\) is a certificate. _Case 3:_ The ends of \(e\) are in the same component of \(\hat{G}-X\). Then \(|\delta_{G}(X)|=|\delta_{\hat{G}}(X)|-2\), and regardless of whether the vertex set of that component is odd in \((G,\gamma^{\prime},b)\), we still have that \((X,\gamma^{\prime})\) is a certificate. _Case 4:_ The ends of \(e\) are in different components of \(\hat{G}-X\). Let \(Y_{1}\) and \(Y_{2}\) be the vertex-sets of those two components. Again we have that \(|\delta_{G}(X)|=|\delta_{\hat{G}}(X)|-2\). So the only possible problem is if \(Y_{1}\) and \(Y_{2}\) are both odd in \((\hat{G},\hat{\gamma}^{\prime},b)\) and \(Y_{1}\cup Y_{2}\) is not odd in \((G,\gamma^{\prime},b)\). However this cannot occur since \(|\delta_{G}(Y_{1}\cup Y_{2})|/2=|\delta_{\hat{G}}(Y_{1})|/2+|\delta_{\hat{G}}( Y_{2})|/2-1\), while the parity of the number of relevant non-zero edges just sums (as in Figure 8). So indeed \((X,\gamma^{\prime})\) is a certificate for \((G,\gamma,b)\). This completes all of the cases and therefore also the proof of Lemma 10. The second lemma essentially says that if Lemma 10 cannot be applied, then the flooding matroid has many non-loop elements. 
**Lemma 11**.: _If \((G,\gamma,b)\) is an RES-graph whose reductions all have flooding number at least \(\tilde{\nu}(G,\gamma,b)+2\), then each arc of \(G-b\) is in two non-loop elements of the flooding matroid \(M(G,\gamma,b)\)._ Proof.: Let \(f\) be an arc of \(G-b\), and let \(e\) be the corresponding edge. We are trying to prove that for \(i=0,1\), there exists an optimal flooding of \((G,\gamma,b)\) that contains a zero circuit represented by \((f,i)\). Let \((\hat{G},\hat{\gamma}_{0},b)\) be an \(e\)-reduction of \((G,\gamma,b)\); such an RES-graph exists. Let \(\hat{e}_{1}\) and \(\hat{e}_{2}\) denote its two new edges. By adding \(1\) to the weights of \(\hat{e}_{1}\) and \(\hat{e}_{2}\), we can obtain another \(e\)-reduction \((\hat{G},\hat{\gamma}_{1},b)\) of \((G,\gamma,b)\). Both of these two new RES-graphs have flooding number at least \(\tilde{\nu}(G,\gamma,b)+2\). Let \(\hat{\mathcal{C}}_{0}\) and \(\hat{\mathcal{C}}_{1}\) be optimal floodings of \((\hat{G},\hat{\gamma}_{0},b)\) and \((\hat{G},\hat{\gamma}_{1},b)\), respectively. First we prove a claim. **Claim 11.1**.: _Neither \(\hat{e}_{1}\) nor \(\hat{e}_{2}\) is in a zero circuit in \(\hat{\mathcal{C}}_{0}\) or \(\hat{\mathcal{C}}_{1}\)._ Proof.: Going for a contradiction, suppose that for some \(i\in\{0,1\}\), the flooding \(\hat{\mathcal{C}}_{i}\) of \((\hat{G},\hat{\gamma}_{i},b)\) does contain such a zero circuit. If \(\hat{e}_{1}\) and \(\hat{e}_{2}\) are in different circuits in \(\hat{\mathcal{C}}_{i}\), then we obtain a contradiction to the fact that the reduction has larger flooding number. Otherwise, \(\hat{e}_{1}\) and \(\hat{e}_{2}\) are in the same circuit \(\hat{C}\in\hat{\mathcal{C}}_{i}\). Then in \((G,\gamma,b)\), the circuit \(\hat{C}\) becomes a circuit which contains \(e\). Since \((G,\gamma,b)\) is connected, this circuit can be "attached" back onto some other circuit in \(\hat{\mathcal{C}}_{i}-\{\hat{C}\}\). This again contradicts the fact that the reduction has larger flooding number. Now we break into two cases based on where \(\hat{e}_{1}\) and \(\hat{e}_{2}\) "lie" in \(\hat{\mathcal{C}}_{0}\) and \(\hat{\mathcal{C}}_{1}\). _Case 1:_ There exists \(i\in\{0,1\}\) so that \(\hat{e}_{1}\) and \(\hat{e}_{2}\) are in the same circuit in \(\hat{\mathcal{C}}_{i}\). Let \(\hat{C}\in\hat{\mathcal{C}}_{i}\) be that circuit. Then \(\hat{C}\) is non-zero by Claim 11.1. So, similarly to before, we can obtain a flooding \(\mathcal{C}\) of \((G,\gamma,b)\) by "attaching" this circuit onto some circuit of \(\hat{\mathcal{C}}_{i}-\{\hat{C}\}\). Since the flooding number of the reduction is at least two higher, \(\mathcal{C}\) is an optimal flooding of \((G,\gamma,b)\). Moreover, after possibly reversing a circuit in \(\mathcal{C}\), we can find trails \(T_{1},C,T_{2}\) so that \((T_{1},C,T_{2})\) is a zero circuit in \(\mathcal{C}\), and \(C\) is a non-zero circuit which contains the arc \(f\). We can obtain another optimal flooding of \((G,\gamma,b)\) by replacing \((T_{1},C,T_{2})\) with the circuit \((T_{2}^{-1},C,T_{1}^{-1})\). Then one of \((T_{1},C,T_{2})\), \((T_{2}^{-1},C,T_{1}^{-1})\) is represented by \((f,0)\), and the other is represented by \((f,1)\). That completes this case. _Case 2:_ The edges \(\hat{e}_{1}\) and \(\hat{e}_{2}\) are in distinct circuits in each of \(\hat{\mathcal{C}}_{0}\) and \(\hat{\mathcal{C}}_{1}\). Again we use the fact that these circuits are non-zero by Claim 11.1. 
Now, up to symmetry between \(\hat{e}_{1}\) and \(\hat{e}_{2}\), we may assume that \(f\) is an ordered pair of half-edges \((h_{1},h_{2})\), where \(h_{1}\) is a half-edge of \(\hat{e}_{1}\) and \(h_{2}\) is a half-edge of \(\hat{e}_{2}\). Then the flooding \(\hat{\mathcal{C}}_{0}\) of \((\hat{G},\hat{\gamma}_{0},b)\) yields an optimal flooding of \((G,\gamma,b)\) where \((f,1+\hat{\gamma}_{0}(\hat{e}_{2}))\) represents a zero circuit. Likewise, \(\hat{\mathcal{C}}_{1}\) yields an optimal flooding of \((G,\gamma,b)\) where \((f,1+\hat{\gamma}_{1}(\hat{e}_{2}))\) represents a zero circuit. Since \(\hat{\gamma}_{0}(\hat{e}_{2})\neq\hat{\gamma}_{1}(\hat{e}_{2})\), this completes the proof of Lemma 11. We are ready to prove Theorem 4, which is restated below for convenience. **Theorem 4**.: _For any RES-graph \((G,\gamma,b)\),_ \[\tilde{\nu}(G,\gamma,b)=\min_{\gamma^{\prime},X}\left(\gamma^{\prime}(E(X))+ \frac{1}{2}|\delta(X)|-\operatorname{odd}_{\gamma^{\prime}}(G-X)\right),\] _where the minimum is taken over all shiftings \(\gamma^{\prime}\) of \(\gamma\) and all sets of vertices \(X\) which contain \(b\)._ Proof.: Suppose for a contradiction that the theorem is false. Let \((G,\gamma,b)\) be a counterexample so that \(|E(G-b)|\) is as small as possible and, subject to that, so that \(|E(G)|\) is as small as possible. Recall that we have already shown one direction of the inequality in Lemma 5. By the choice of \((G,\gamma,b)\), all of its reductions have a certificate. So by Lemma 10, all reductions have flooding number at least \(\tilde{\nu}(G,\gamma,b)+2\). Thus, by Lemma 11, each arc of \(G-b\) is in two non-loop elements of the flooding matroid \(M(G,\gamma,b)\). Our goal is to show that \(M(G,\gamma,b)\) has rank \(1\) and thereby find a contradiction. However, first we need the following straightforward claim. **Claim 4.1**.: _There are no loops at \(b\), the graph \(G-b\) is connected, and \(E(G-b)\) is non-empty._ Proof.: There is no loop at \(b\) since, otherwise, any certificate for the graph obtained from \((G,\gamma,b)\) by deleting that loop also yields a certificate for \((G,\gamma,b)\). Now suppose for a contradiction that the graph \(G-b\) is not connected. Then there are RES-graphs \((G_{1},\gamma_{1},b)\) and \((G_{2},\gamma_{2},b)\) so that \(G=G_{1}\cup G_{2}\), the only vertex in common between \(G_{1}\) and \(G_{2}\) is \(b\), and both \(V(G_{1})-\{b\}\) and \(V(G_{2})-\{b\}\) are non-empty. Then by the choice of \((G,\gamma,b)\), it follows that there are certificates \((X_{1},\gamma_{1}^{\prime})\) and \((X_{2},\gamma_{2}^{\prime})\) for \((G_{1},\gamma_{1},b)\) and \((G_{2},\gamma_{2},b)\), respectively. We may assume that \(\gamma_{1}^{\prime}\) and \(\gamma_{2}^{\prime}\) are obtained without shifting at \(b\); any time we wish to shift at \(b\), we can instead shift at every vertex other than \(b\). Thus there exists a shifting \(\gamma^{\prime}\) of \(\gamma\) which agrees with \(\gamma_{1}^{\prime}\) on \(E(G_{1})\) and \(\gamma_{2}^{\prime}\) on \(E(G_{2})\). Then \(\tilde{\nu}(G,\gamma,b)=\tilde{\nu}(G_{1},\gamma_{1},b)+\tilde{\nu}(G_{2}, \gamma_{2},b)\), and \((X_{1}\cup X_{2},\gamma^{\prime})\) is a certificate for \((G,\gamma,b)\). This is a contradiction, which shows that \(G-b\) is connected. Finally, suppose for a contradiction that \(E(G-b)\) is empty. From the last two paragraphs, this means that \((G,\gamma,b)\) has two vertices and no loops. Let \(\mathcal{C}\) be an optimal flooding of \((G,\gamma,b)\). 
If \(\mathcal{C}\) has no zero circuits, then \(\tilde{\nu}(G,\gamma,b)=\deg(b)/2\) and \((\{b\},\gamma)\) is a certificate. So we may assume that \(\mathcal{C}\) contains a zero circuit \(C\). Then, after possibly shifting at \(b\), we may assume that both edges of \(C\) have weight zero. Then, since every other circuit of \(\mathcal{C}\) meets \(C\) at the vertex other than \(b\), this means that every zero circuit in \(\mathcal{C}\) has both of its edges of weight zero (otherwise \(\mathcal{C}\) would not be optimal). It follows that \((V(G),\gamma)\) is a certificate. This is again a contradiction and completes the proof of Claim 4.1. The next claim almost completes the proof of the theorem. **Claim 4.2**.: _The matroid \(M(G,\gamma,b)\) has rank \(1\)._ Proof.: Let \(F\) be the set of elements of \(M(G,\gamma,b)\) whose arc is not incident to \(b\); so \(F\) has no loops because each arc of \(G-b\) is in two non-loop elements of \(M(G,\gamma,b)\). Thus, for each arc \(f\) of \(G-b\), the four elements in \(F\) whose arc is \(f\) or \(f^{-1}\) are all parallel. It follows that \(F\) has rank \(1\) from Lemma 7, transitivity of parallel pairs, and the fact that \(G-b\) is connected and has an edge (see Claim 4.1). Now consider an arc \(f\) whose tail is \(b\). Note that \((f^{-1},1)\) is a loop element of the matroid. Furthermore, if \((f^{-1},0)\) is a non-loop element of the matroid, then so is \((f,\gamma(f))\). Furthermore, by Claim 4.1, there exists an arc of \(G-b\) with the same head as \(f\). Then as before, it follows from Lemma 7 that all of the non-loop elements of \(M(G,\gamma,b)\) with \(f\) or \(f^{-1}\) as an arc are in the parallel class of \(F\). Thus \(M(G,\gamma,b)\) has rank \(1\), as desired. Since \(M(G,\gamma,b)\) has rank \(1\), the flooding number of \((G,\gamma,b)\) is \(\deg(b)/2-1\). So the parity of \(\gamma(E(G))\) is different from the parity of \(\deg(b)/2\). Since there are no loops at \(b\) and \(G-b\) is connected by Claim 4.1, we get that \((\{b\},\gamma)\) is a certificate with one odd component. This is a contradiction, which completes the proof of Theorem 4. ## 5. Corollaries In this section we prove the corollaries of Theorem 4 which were mentioned in the introduction, as well as two more corollaries of interest. **Regular graphs.** We begin by proving the two conjectures of Macajova and Skoviera [14] about regular graphs. First we prove a corollary about "rooted" graphs which are \(2\ell\)-regular except for possibly the root, whose degree can be smaller. We consider the signature where every edge has weight \(1\). The corollary says that if the root has degree \(2d\) and the graph is \(2d\)-edge-connected and has an odd number of vertices, then the flooding number is \(d\). Note that we cannot replace the condition that "there are an odd number of vertices" with the condition that "\(|E(G)|\) and \(d\) have the same parity"; in that case the graph could be bipartite (for instance when \(d=\ell\) and \(\ell\) is even) and thus have flooding number zero. When \(d=\ell\) the following corollary is precisely Conjecture 2 from [14]. After proving the corollary, we use it to prove Corollary 2 from the introduction. 
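Before the formal statement, the claim can be sanity-checked on the smallest interesting case: \(K_{5}\) is \(4\)-regular (so \(\ell=2\)) with an odd number of vertices, and it decomposes into two closed trails through one vertex, each with an odd number of edges. The Python verifier below, with a hand-built decomposition, is our own illustration; with the all-ones signature, odd circuits are exactly the non-zero ones.

```python
def is_closed_odd_trail(circuit, root):
    """circuit: list of arcs (u, v) traversed in order; checks that it is
    a closed trail at root using an odd number of distinct edges."""
    edges = [frozenset(a) for a in circuit]
    return (len(circuit) % 2 == 1
            and len(set(edges)) == len(edges)
            and circuit[0][0] == root and circuit[-1][1] == root
            and all(circuit[k][1] == circuit[k + 1][0]
                    for k in range(len(circuit) - 1)))

K5 = {frozenset((u, v)) for u in range(5) for v in range(u + 1, 5)}
C1 = [(0, 1), (1, 2), (2, 0)]                                  # 3 edges
C2 = [(0, 3), (3, 1), (1, 4), (4, 2), (2, 3), (3, 4), (4, 0)]  # 7 edges
used = [frozenset(a) for C in (C1, C2) for a in C]
assert all(is_closed_odd_trail(C, 0) for C in (C1, C2))
assert set(used) == K5 and len(used) == len(K5)  # edge-sets partition E(K5)
```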
**Corollary 12**.: _For any positive integers \(\ell\) and \(d\) with \(d\leq\ell\) and any \(2d\)-edge-connected graph \(G\) with an odd number of vertices and a vertex \(b\) of degree \(2d\) so that every other vertex has degree \(2\ell\), there exists a circuit-decomposition of size \(d\) where each circuit has an odd number of edges and begins and ends at \(b\)._ Proof.: Throughout the proof we write \(n\equiv m\) to mean that integers \(n\) and \(m\) are equivalent modulo \(2\). The proof is straightforward, although the case analysis is somewhat tedious. First of all, we may assume that \(d\geq 2\) since otherwise the corollary holds just because \(G\) is an Eulerian graph with an odd number of edges. (Note that \(|E(G)|\equiv\ell(|V(G)|-1)+d\equiv d\) since \(|V(G)|\) is odd.) Now let \(\gamma\) denote the signature of \(G\) where every edge is given weight \(1\); thus we are trying to show that the RES-graph \((G,\gamma,b)\) has flooding number \(d\). By Theorem 4, this RES-graph has a certificate \((X,\gamma^{\prime})\). That is, \(X\) is a set of vertices which contains \(b\) and \(\gamma^{\prime}\) is a shifting of \(\gamma\) so that the flooding number is equal to \[\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|-\operatorname{odd}_{\gamma^{\prime}}(G-X). \tag{3}\] Shifting at a vertex outside of \(X\) does not change equation (3); in particular, it does not change the odd components because every vertex has even degree. (If we shift at a vertex inside a set \(Y\subseteq V(G)\), then the parity of \(\gamma^{\prime}(E(Y)\cup\delta(Y))\) does not change.) Thus we may assume that \(\gamma^{\prime}\) is obtained from \(\gamma\) by shifting once at each vertex inside a set \(A\subseteq X\). We set \(B\coloneqq X-A\) so that \((A,B)\) partitions \(X\). We may assume that \(|B|\leq|A|\); there is symmetry between \(A\) and \(B\) since we can also shift at all vertices in \(A\cup B\) without changing equation (3). Suppose first that \(X=V(G)\). Then since \(|V(G)|\) is odd we actually have \(|B|\leq|A|-1\). By counting the edges between \(A\) and \(B\) in two ways, we find that \[\sum_{v\in A}\deg(v)-2|E(A)|=\sum_{v\in B}\deg(v)-2|E(B)|\leq 2\ell|B|\leq 2\ell(|A|-1).\] Using the lower bound of \(2\ell(|A|-1)+2d\) for \(\sum_{v\in A}\deg(v)\), we obtain \(|E(A)|\geq d\). So the flooding number is at least \(d\) since \(\gamma^{\prime}(E(X))\geq|E(A)|\). Thus we may assume that \(G-X\) has at least one component. Let \(Y\) be the vertex-set of a component of \(G-X\). Notice that \(Y\) "contributes" either \(|\delta(Y)|/2-1\) or \(|\delta(Y)|/2\) to equation (3), depending on whether it is \(\gamma^{\prime}\)-odd. Since \(G\) is \(2d\)-edge-connected, each component must therefore "contribute" at least \(d-1\) to equation (3). So, since \(d\geq 2\), we may assume that \(Y=V(G)-X\), that \(Y\) is \(\gamma^{\prime}\)-odd and has \(|\delta(Y)|=2d\), and that \(\gamma^{\prime}(E(X))=0\) (or, equivalently, that \(E(A)\) and \(E(B)\) are empty). We now aim for a contradiction. Let \(k_{1}\) and \(k_{2}\) denote the number of edges between \(Y\) and \(A\) and between \(Y\) and \(B\), respectively. Using a counting argument for the equality step, we get that \[2\ell(|A|-1)\leq\sum_{v\in A}\deg(v)-2d\leq\sum_{v\in A}\deg(v)-k_{1}=\sum_{v\in B}\deg(v)-k_{2}\leq 2\ell|B|.\] So \(|A|-1\leq|B|\leq|A|\), and if \(|B|=|A|-1\) then every step of the inequality above is tight. We split into cases based on the size of \(B\). 
Both cases use the fact that \(k_{1}+1\equiv\ell|Y|\); this follows from the fact that \(Y\) is \(\gamma^{\prime}\)-odd and has \(|\delta(Y)|=2d\) (because then \(d+1\equiv\gamma^{\prime}(E(Y)\cup\delta(Y))\equiv|E(Y)\cup\delta(Y)|+k_{1}\equiv\ell|Y|+d+k_{1}\)). _Case 1:_ \(|B|=|A|-1\). Then \(|Y|\equiv 0\) since \(1\equiv|V(G)|\equiv|A|+|B|+|Y|\). So \(k_{1}+1\equiv\ell|Y|\equiv 0\). However, since every step of the inequality mentioned above is tight, we also have that \(k_{1}=2d\), a contradiction. _Case 2:_ \(|B|=|A|\). We may assume that \(b\in B\) since now \(A\) and \(B\) are symmetric again. So we have that \(k_{1}-k_{2}=2\ell-2d\); this can be verified from the equations, but intuitively the extra edges from \(A\) (rather than \(B\)) to \(Y\) must make up for the smaller degree of \(b\). We also have the equation \(k_{1}+k_{2}=|\delta(Y)|=2d\). Summing the corresponding sides, we obtain \(2k_{1}=2\ell\). However this again contradicts the fact that \(k_{1}+1\equiv\ell|Y|\) since \(|Y|\equiv|V(G)|+|A|+|B|\equiv|V(G)|\equiv 1\). This finishes the two cases and thus the proof of Corollary 12. Now we use Corollary 12 to prove the other conjecture of Macajova and Skoviera. This corollary was mentioned in the introduction and is re-stated below. **Corollary 2**.: _For any positive integer \(\ell\) and any connected \(2\ell\)-regular graph with an odd number of vertices, there exists a circuit-decomposition of size \(\ell\) where all circuits have an odd number of edges and begin and end at the same vertex._ Proof.: Going for a contradiction, suppose not. Choose a counterexample \(G\) with as few vertices as possible. Then \(G\) is not \(2\ell\)-edge-connected, since otherwise we could apply Corollary 12 with any "root" vertex. Now let \(S\) be a set of vertices so that both \(S\) and \(V(G)-S\) are non-empty; subject to that, choose \(S\) so that \(|\delta(S)|\) is as small as possible. Since \(G\) has an odd number of vertices, we may assume that \(|S|\) is odd and \(|V(G)-S|\) is even. Since \(G\) is Eulerian, there is a positive integer \(d\) so that \(|\delta(S)|=2d\). Finally, since \(G\) is not \(2\ell\)-edge-connected, we have \(d<\ell\). Now let \(H\) be the graph which is obtained from \(G\) by identifying \(S\) to a single vertex \(b\) and then deleting all loops at \(b\). (That is, \(H\) has vertex-set \((V(G)-S)\cup\{b\}\), the induced subgraph of \(H\) on \(V(G)-S\) is the same as the induced subgraph of \(G\) on \(V(G)-S\), and \(H\) has one edge with ends \(b\) and \(x\) for each edge in \(\delta(S)\) whose end outside of \(S\) is \(x\).) We claim that \(H\) and \(b\) satisfy all of the conditions of Corollary 12. The key point is that \(H\) is \(2d\)-edge-connected by the minimality of \(|\delta(S)|\) in \(G\); for each subset \(X\) of \(V(H)\) which contains \(b\), the number of edges in \(\delta(X)\) in \(H\) is equal to the number of edges in \(\delta((X-\{b\})\cup S)\) in \(G\). Thus, by Corollary 12, the graph \(H\) has a circuit-decomposition of size \(d\) where each circuit has an odd number of edges and begins and ends at \(b\). This yields a collection \(\mathcal{T}\) of \(d\) trails in \(G\) so that 1. \(E(G)-E(S)\) is the disjoint union of the edge-sets of the trails in \(\mathcal{T}\), and 2. each trail in \(\mathcal{T}\) has an odd number of edges and begins and ends in \(S\). Let \(G^{\prime}\) be the graph which is obtained from the subgraph of \(G\) induced by \(S\) by adding, for each trail \(T\in\mathcal{T}\), an edge with the same ends as \(T\). 
This graph \(G^{\prime}\) is \(2\ell\)-regular, has an odd number of vertices, and has fewer vertices than \(G\). It is also connected; otherwise, if \(Y\) was the vertex-set of one of its components, then in \(G\) we would have \(|\delta(Y)|<|\delta(S)|\) and a contradiction to the choice of \(S\). So, since \(G\) is a minimum counterexample to Corollary 2, the graph \(G^{\prime}\) has a circuit-decomposition \(\mathcal{C}^{\prime}\) of size \(\ell\) where all circuits have an odd number of edges and begin and end at the same vertex. We can obtain a circuit-decomposition \(\mathcal{C}\) of \(G\) with the same properties by replacing the \(d\) new edges of \(G^{\prime}\) with the corresponding trails in \(\mathcal{T}\). This contradicts the fact that \(G\) is a counterexample and completes the proof of Corollary 2. ### Packing and the Erdos-Posa property Now we prove the corollary from the introduction that relates "packing" and "decomposing". It is re-stated below for convenience. **Corollary 3**.: _For any signed \(4\)-edge-connected Eulerian graph and any vertex \(b\), if there is a collection of \(\ell\) edge-disjoint non-zero circuits which hit \(b\), then there is a circuit decomposition of size \(\lceil\ell/2\rceil\) where each circuit is non-zero and hits \(b\)._ Proof.: Let \((G,\gamma,b)\) be a \(4\)-edge-connected RES-graph so that there exists a collection of \(\ell\) edge-disjoint non-zero circuits which hit \(b\). By Theorem 4, there is a certificate \((X,\gamma^{\prime})\) for the flooding number. Moreover, since \(G\) is \(4\)-edge-connected, \(\operatorname{odd}_{\gamma^{\prime}}(G-X)\leq\frac{1}{4}|\delta(X)|\). So \[2\tilde{\nu}(G,\gamma,b) =2\gamma^{\prime}(E(X))+|\delta(X)|-2\operatorname{odd}_{ \gamma^{\prime}}(G-X)\] \[\geq 2\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|\] \[\geq\gamma^{\prime}(E(X))+\frac{1}{2}|\delta(X)|\] \[\geq\ell,\] since each of the \(\ell\) edge-disjoint non-zero circuits which hit \(b\) must use either a non-zero edge in \(E(X)\), or at least two edges in \(\delta(X)\). It follows that \(\tilde{\nu}(G,\gamma,b)\geq\lceil\ell/2\rceil\) since \(\tilde{\nu}(G,\gamma,b)\) is an integer. This proof of Corollary 3 also shows how to construct an example where the bound is tight; see Figure 9. A similar construction, but where each component of \(G-b\) has two edges to \(b\), shows that \(4\)-edge-connectivity is necessary. Figure 9. A \(4\)-edge-connected RES-graph where the bound in Corollary 3 is tight. We mentioned in the introduction that packing problems have been particularly well-studied in relation to the Erdos-Posa property. In fact, Corollary 3 can be combined with a theorem of Kakimura, Kawarabayashi, and Kobayashi [10] to obtain the Erdos-Posa property for the flooding number of a \(4\)-edge-connected RES-graph. The following final corollary of Theorem 4 obtains this type of property directly; Figure 9 shows that the bounds are tight. **Corollary 13**.: _If \((G,\gamma,b)\) is a \(4\)-edge-connected RES-graph with \(\tilde{\nu}(G,\gamma,b)\leq\ell\), then there exists a set \(F\) of at most \(3\ell\) edges so that \(G-F\) has no non-zero circuit which begins and ends at \(b\)._ Proof.: By Theorem 4, the RES-graph \((G,\gamma,b)\) has a certificate \((X,\gamma^{\prime})\). Let \(F_{1}\) be the set of all edges in \(E(X)\) which are non-zero according to \(\gamma^{\prime}\), and let \(F_{2}\) be any subset of \(\delta(X)\) which is obtained by deleting one edge incident to each component of \(G-X\). 
We mentioned in the introduction that packing problems have been particularly well-studied in relation to the Erdős-Pósa property. In fact, Corollary 3 can be combined with a theorem of Kakimura, Kawarabayashi, and Kobayashi [10] to obtain the Erdős-Pósa property for the flooding number of a \(4\)-edge-connected RES-graph. The following final corollary of Theorem 4 obtains this type of property directly; Figure 9 shows that the bounds are tight.

**Corollary 13**.: _If \((G,\gamma,b)\) is a \(4\)-edge-connected RES-graph with \(\tilde{\nu}(G,\gamma,b)\leq\ell\), then there exists a set \(F\) of at most \(3\ell\) edges so that \(G-F\) has no non-zero circuit which begins and ends at \(b\)._

Proof.: By Theorem 4, the RES-graph \((G,\gamma,b)\) has a certificate \((X,\gamma^{\prime})\). Let \(F_{1}\) be the set of all edges in \(E(X)\) which are non-zero according to \(\gamma^{\prime}\), and let \(F_{2}\) be any subset of \(\delta(X)\) which is obtained by deleting one edge incident to each component of \(G-X\). Then

\[3\ell\geq 3\tilde{\nu}(G,\gamma,b)\geq\gamma^{\prime}(E(X))+3\left(\frac{|\delta(X)|}{2}-\operatorname{odd}_{\gamma^{\prime}}(G-X)\right).\]

It is clear that \(\gamma^{\prime}(E(X))=|F_{1}|\). We claim that \(3(|\delta(X)|/2-\operatorname{odd}_{\gamma^{\prime}}(G-X))\geq|F_{2}|\). To see this, observe that a component of \(G-X\) with vertex-set \(Y\) "contributes" either \(3|\delta(Y)|/2-3\) or \(3|\delta(Y)|/2\) to the expression, depending on whether \(Y\) is \(\gamma^{\prime}\)-odd. Moreover, \(3|\delta(Y)|/2-3\geq|\delta(Y)|-1\) since \(G\) is \(4\)-edge-connected. The corollary follows.

## Acknowledgement

The author would like to express their deepest thanks to Jim Geelen and Paul Wollan for their input on this paper. The author would also like to thank Louis Esperet for suggesting the connection to the conjectures of Mácajová and Škoviera, and James Davies for feedback which improved the presentation.
2306.06695
The arc complexes of bicoloured polygons are balls
We prove that the arc complexes of a convex polygon and of a once-punctured polygon with a bicolouring are pseudo-manifolds with boundary and we also give a shelling order. As a consequence we get that the arc complex of an ideal decorated hyperbolic (possibly once-punctured) polygon is a closed ball.
Pallavi Panda
2023-06-11T14:59:15Z
http://arxiv.org/abs/2306.06695v1
# The arc complexes of bicoloured polygons are balls

Pallavi Panda

**Abstract.** We prove that the arc complexes of a convex polygon and of a once-punctured polygon with a bicolouring are pseudo-manifolds with boundary and we also give a shelling order. As a consequence we get that the arc complex of an ideal decorated hyperbolic (possibly once-punctured) polygon is a closed ball.

## 1 Introduction

Given a Euclidean polygon \(\mathcal{P}_{m}\) with \(m\geq 4\) vertices, its _arc complex_ \(\mathcal{A}(\mathcal{P}_{m})\) is a pure flag simplicial complex constructed using diagonals and their disjointness (see Definition 2.3). It is a classical result in combinatorics that this complex is a piecewise linear sphere of dimension \(m-4\). Penner [17] proved this result in the context of hyperbolic surfaces and Teichmüller theory. He studied the arc complex of an _ideal polygon_, which is the convex hull in the hyperbolic plane \(\mathbb{H}^{2}\) of finitely many points (called _ideal_) on the boundary \(\partial_{\infty}\mathbb{H}^{2}\). The diagonals in this case are bi-infinite hyperbolic geodesics with endpoints in this finite set. He attributes the original proof to Whitney.

Topologically, a polygon is a closed disk with \(n\) marked points on its boundary. More generally, for a finite-type orientable surface with finitely many marked points on its boundary (possibly also with punctures), one constructs its arc complex using embedded arcs whose endpoints are the marked points. These were first studied by Harer in [7]. He showed that a specific open dense subset of the arc complex, called the _pruned arc complex_, is an open ball of dimension one less than that of the deformation space of the surface. In [4], Fomin-Shapiro-Thurston established an important link between these arc complexes and cluster algebras. They proved that the arc complexes are subcomplexes of the cluster complexes of some cluster algebras. Furthermore, Fomin and Zelevinsky [5] gave a convex polytopal realisation of the cluster complexes in the finite case. The most famous one is the cluster complex of a convex polygon, whose dual is an associahedron. The associahedron was discovered by Tamari [22], and then rediscovered ten years later by Stasheff [21]. Its first polytopal realisation was given by Lee [8]. Another famous polytopal realisation, which is used in algebraic topology, was given by Loday in [9]. Sleator-Tarjan-Thurston [20] showed that the \(d\)-dimensional associahedron has diameter at most \(2d-4\) when \(d\geq 9\), using hyperbolic geometry. Later Pournin [18] proved the equality using combinatorial methods.

The relationship to cluster algebras was motivated by Penner's Decorated Teichmüller Theory [14],[15]. Penner defined the _lambda length_ of a _horoball connection_, which is a geodesic arc joining two ideal points decorated by horoballs. These lengths act as coordinates for the decorated Teichmüller space of a decorated _crowned_ hyperbolic surface. This is a non-compact hyperbolic surface with polygonal boundary, where the vertices (called _spikes_) are projections of ideal points decorated with horoballs. Furthermore, using the arc complex, he gave a cell-decomposition of the decorated Teichmüller space. In [17], Penner studied the topology of the quotient of the arc complex under the canonical action of the pure mapping class group of the surface.
He conjectured that this quotient space is a sphere of a certain dimension, but this was later proved to be false by Sullivan. There is a complete list of surfaces (see [16]) for which the statement is true, the ideal polygons and the once-punctured ideal polygons being among them. In the non-orientable setting, Dupont and Palesi [3] found an analogue of a cluster algebra associated to the arc complexes of non-orientable surfaces. Wilson [24] proved that the arc complex of a Möbius strip with marked points on the boundary is spherical. He also reproved the sphericity of the arc complexes of a convex polygon and a once-punctured polygon. The main ingredient he used was the shellability of pure simplicial complexes. This is an ordering of all the maximal simplices of the complex so that the intersection of the \(k\)-th simplex with the union of the first \((k-1)\) simplices is always a pure complex of codimension one. He gave shelling orders for the arc complexes of all three surfaces and then concluded using a result by Danaraj and Klee [1] which states that a shellable \(d\)-pseudo-manifold without boundary is a combinatorial sphere.

The convex core of a crowned hyperbolic surface without punctures is a compact hyperbolic surface with non-empty totally geodesic boundary. An _admissible deformation_ of such a hyperbolic surface is an infinitesimal deformation that uniformly lengthens all non-trivial closed geodesics. Goldman-Labourie-Margulis, in [6], proved that the subspace of admissible deformations forms an open convex cone called the _admissible cone_. One can construct the arc complex of this surface generated by the isotopy classes of embedded arcs with endpoints on the boundary. The pruned arc complex of such a surface once again forms an open ball of dimension one less than that of the Teichmüller space, which is obtained by reinterpreting Harer's result. Hyperbolic strip deformations were first introduced by Thurston in [23]. See, for example, Section 1.2 in [2] for the definition. Danciger-Guéritaud-Kassel [2] showed that the pruned arc complex parametrises the positively projectivised admissible cone. To a positively weighted arc, the authors associated a unique admissible deformation of the surface by performing hyperbolic strip deformations along the arc, whose strip width is given by the weight.

Motivated by the above works, in [10] we studied the arc complexes of a decorated ideal polygon \(\widehat{\Pi_{n}^{\diamond}}\) (\(n\geq 3\)) and a decorated once-punctured ideal polygon \(\widehat{\Pi_{n}^{\times}}\) (\(n\geq 2\)). These are generated by finite arcs whose endpoints lie on the boundary and infinite arcs with one end converging to a vertex and the finite endpoint on the boundary. In both of these cases the arc complexes are finite but their topologies were unknown. We proved that the pruned arc complex, which is just the interior of the arc complex in these cases, parametrises all the infinitesimal deformations that uniformly lengthen all horoball connections in these polygons. As a result, we found that the interiors of the complexes are open balls. In this paper we show that the full arc complexes are \(PL\)-balls. The arc complex of a decorated (resp. once-punctured) polygon \(\widehat{\Pi_{n}^{\diamond}}\) (resp. \(\widehat{\Pi_{n}^{\times}}\)) is combinatorially equivalent to a certain subcomplex of the arc complex of a Euclidean (once-punctured) polygon \(\mathcal{P}_{2n}\) (resp. \(\mathcal{P}_{2n}^{\times}\)), endowed with an alternate bicolouring (see Definition 2.2).
In this paper, we prove something more general:

**Theorem**.: The subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) of a polygon \(\mathcal{P}_{m}\) with any non-trivial bicolouring is a closed ball of dimension \(m-4\). Similarly, the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) of a once-punctured polygon \(\mathcal{P}_{m}^{\times}\) with any non-trivial bicolouring is a closed ball of dimension \(m-2\).

Our proofs are heavily motivated by the works of Wilson and Danaraj-Klee. We use purely combinatorial topology tools, while in [10] we use methods from hyperbolic geometry. As corollaries to these theorems we get that the arc complexes of the two decorated hyperbolic polygons are closed balls.

**Corollary 1.1**.: _For \(n\geq 3\), the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\diamond}}\right)\) of a decorated ideal polygon \(\widehat{\Pi_{n}^{\diamond}}\) is PL-homeomorphic to a closed ball of dimension \(2n-4\)._

**Corollary 1.2**.: _For \(n\geq 2\), the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\times}}\right)\) of a decorated once-punctured ideal polygon \(\widehat{\Pi_{n}^{\times}}\) is PL-homeomorphic to a closed ball of dimension \(2n-2\)._

These results have applications to the study of the boundary of the admissible cone of the decorated triangles using the boundary simplices of the arc complex. A facet on the boundary of the admissible cone corresponds to all those infinitesimal deformations that lengthen every horoball connection except one. As we mentioned above, the full arc complex is rarely a \(PL\)-manifold, so this approach is not possible for "bigger" surfaces. Nonetheless it will shed some useful light on how to parametrise the facets using arc complexes of subsurfaces.

The paper is structured into sections in the following way: Section 2 recapitulates the necessary vocabulary and results on simplicial complexes, arc complexes and hyperbolic geometry. Section 3 contains the proofs of our main theorems. Section 4 describes the link between the arc complexes of the Euclidean polygons and the decorated hyperbolic polygons.

Acknowledgements. This work was done at the Université du Luxembourg, supported by Luxembourg National Research Fund OPEN grant O19/13865598. I would like to thank my supervisor Hugo Parlier for giving me this opportunity. I would also like to thank Lionel Pournin for helpful discussions and encouragement.

## 2 Setup

### Simplicial complex

In this section we recall relevant definitions and results on finite simplicial complexes. A simplicial complex is called _pure_ if all of its maximal simplices have the same dimension. The _dual graph_ of a simplicial complex is the graph whose vertices are the maximal simplices and two vertices are joined by an edge if the corresponding maximal simplices share a codimension \(1\) face. A pure simplicial complex is said to be _strongly connected_ if its dual graph is connected. A _\(d\)-pseudo-manifold with boundary_ is a pure strongly connected \(d\)-simplicial complex in which every \((d-1)\)-simplex is contained in at most two \(d\)-simplices. Note that the boundary of such a simplicial complex is formed by all \((d-1)\)-simplices that are contained in exactly one \(d\)-simplex. Next we recall the definition of a shelling of pure simplicial complexes. Let \(X\) be a pure finite simplicial complex of dimension \(d\).
A _shelling_ of \(X\) is an enumeration of its maximal simplices \(\mathcal{T}:(C_{1},\ldots,C_{n})\) such that for every \(1\leq k\leq n\), the intersection \(\left(\bigcup\limits_{j=1}^{k-1}C_{j}\right)\bigcap C_{k}\) is a pure simplicial complex of dimension \(d-1\). The following is a lemma linking shellability and the join of two simplicial complexes that we shall use in the proof of our main theorems. See [19] for a proof.

**Lemma 2.1**.: _Two complexes \(X,Y\) are shellable if and only if \(X\bowtie Y\) is shellable._

We will use the following result by Danaraj and Klee [1] to prove our main theorems.

**Theorem 2.2**.: _A shellable \(d\)-pseudo-manifold with boundary is \(PL\)-homeomorphic to a closed ball of dimension \(d\)._

### Arcs and arc complexes

In this section we recall the relevant definitions and results on the arc complexes of polygons that will be used in the rest of the paper. We denote by \(S_{g,n}\) a surface with genus \(g\,(\geq 0)\) and \(n\,(\geq 1)\) marked points on its boundary.

**Definition 2.3**.: The arc complex \(\mathcal{A}(S_{g,n})\) of a finite-type surface \(S_{g,n}\) with marked points on its boundary is a simplicial complex defined in the following way: the \(0\)-skeleton is given by the embedded arcs with their endpoints on the marked points of the polygon, up to homotopy relative to the endpoints. For \(k\geq 1\), every \(k\)-simplex is given by a \((k+1)\)-tuple of pairwise disjoint and distinct arcs, up to homotopy.

In this section, we are going to consider only two types of surfaces: convex polygons \(\mathcal{P}_{m}\) (\(m\geq 4\)) and once-punctured convex polygons \(\mathcal{P}_{m}^{\times}\) (\(m\geq 2\)). In the case of a convex polygon, these homotopy classes are simply the diagonals joining two vertices of the polygon. To avoid confusion, we will refer to the arcs of a punctured polygon as diagonals as well. A _maximal_ diagonal of \(\mathcal{P}_{m}^{\times}\) is a diagonal with both its endpoints coinciding at a vertex of the polygon. The blue diagonal in the left panel of Fig. (2) is a maximal diagonal. The following is a classical fact from combinatorics. See, for instance, [17] for a proof by Penner.

**Theorem 2.4**.: _The arc complex of a convex polygon \(\mathcal{P}_{m}\) (\(m\geq 4\)) is PL-homeomorphic to a sphere of dimension \(m-4\)._

Fig. (1a) shows the diagonals and the arc complex of a hexagon. The diagonals corresponding to the \(0\)-skeleton of a maximal simplex of \(\mathcal{A}(\mathcal{P}_{m})\) decompose the polygon \(\mathcal{P}_{m}\) into triangles. In the case of a punctured polygon \(\mathcal{P}_{m}^{\times}\), the diagonals decompose it into triangles and a once-punctured disc with one marked point on its boundary. Hence a maximal simplex will alternatively be referred to as a _triangulation_ of the polygon. A triangulation, all of whose diagonals are incident on the same vertex, is called a _fan_ triangulation. The dual graph to \(\mathcal{A}(S_{g,n})\) is called a _flip graph_. In [11], [12] and [13], Parlier and Pournin study the diameter growth of flip graphs, up to the action of pure mapping class groups, of certain families \(S_{g,n}\). In the case of a convex polygon, the dual graph is the \(1\)-skeleton of a convex polytope, the associahedron, which we mentioned in the introduction. See Fig. (1b) for the associahedron of dimension \(3\).
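For a convex polygon, the disjointness underlying Definition 2.3 is a purely combinatorial interleaving condition on vertex labels, so the complex can be generated by brute force for small \(m\). A minimal Python sketch (the helper names are ours; vertices are labelled \(0,\ldots,m-1\)):

```python
from itertools import combinations

def diagonals(m):
    """Diagonals of a convex m-gon: vertex pairs that are not boundary edges."""
    return [(i, j) for i, j in combinations(range(m), 2)
            if (j - i) % m not in (1, m - 1)]

def crossing(d1, d2):
    """Two diagonals meet in the interior iff their endpoints strictly interleave."""
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return a < c < b < d or c < a < d < b

def triangulations(m):
    """Maximal simplices of A(P_m): (m - 3)-subsets of pairwise non-crossing diagonals."""
    return [T for T in combinations(diagonals(m), m - 3)
            if all(not crossing(x, y) for x, y in combinations(T, 2))]

assert len(triangulations(6)) == 14  # the hexagon: Catalan number C_4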
The following theorem about the arc complex of once-punctured polygons was proved by Penner in [17].

**Theorem 2.5**.: _The arc complex \(\mathcal{A}(\mathcal{P}_{m}^{\times})\) of a punctured \(m\)-gon (\(m\geq 2\)) is PL-homeomorphic to a sphere of dimension \(m-2\)._

Fig. (2) gives an illustration of the arc complex of a once-punctured quadrilateral. The blue diagonal in the left panel is a maximal arc.

Figure 2: The three types of diagonals and the full arc complex of \(\mathcal{P}_{4}^{\times}\)

Bicolourings. Given a convex (possibly punctured) polygon we consider all possible colourings of its vertices with two colours, say red (\(R\)) and blue (\(B\)), so that there is a vertex of each colour. Such a colouring is called a _bicolouring_. A bicolouring is called _non-trivial_ if there is at least one \(R-R\) diagonal. A trivial bicolouring can be of two types:

1. there is exactly one red vertex,
2. there are exactly two red vertices and they are consecutive.

See Fig. (3) for examples of trivial and non-trivial bicolourings of a quadrilateral \(\mathcal{P}_{4}\). One example of a non-trivial bicolouring of \(\mathcal{P}_{m}\) with \(m\) even is the _alternate bicolouring_: for every pair of consecutive vertices, exactly one is blue. This will be used in Section 4 to link the Euclidean polygons with their hyperbolic siblings.

We denote by \(\mathcal{Y}(\mathcal{P}_{m})\) (resp. \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\)) the subcomplex of \(\mathcal{A}(\mathcal{P}_{m})\) (resp. \(\mathcal{A}(\mathcal{P}_{m}^{\times})\)) generated by the \(R-B\) and \(B-B\) diagonals only. We call these diagonals _permitted_ and the \(R-R\) diagonals are called _rejected_. A simplex of \(\mathcal{Y}(\mathcal{P}_{m})\) using permitted diagonals is called _permissible_.

_Remark 2.1_.: Any fan triangulation of a bicoloured polygon based at a blue vertex is permissible.

_Remark 2.2_.: In the case of a trivial bicolouring, the subcomplex is the full arc complex because there are no rejected diagonals. This is why the bicolouring is named "trivial".

Figs. (4) and (5) show the subcomplexes \(\mathcal{Y}(\mathcal{P}_{6})\) and \(\mathcal{Y}(\mathcal{P}_{4}^{\times})\) when the polygons \(\mathcal{P}_{6}\), \(\mathcal{P}_{4}^{\times}\) have the alternate bicolouring. In both cases, they are closed 2-balls.
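Continuing the sketch above (and reusing `diagonals`), passing from \(\mathcal{A}(\mathcal{P}_{m})\) to \(\mathcal{Y}(\mathcal{P}_{m})\) only filters the \(0\)-skeleton; the encoding of a bicolouring as a dict is our own choice.

```python
def permitted_diagonals(m, colour):
    """Diagonals of P_m that are not R-R under the bicolouring `colour`,
    a map from vertex index to 'R' or 'B'."""
    return [d for d in diagonals(m)
            if not (colour[d[0]] == "R" and colour[d[1]] == "R")]

def is_nontrivial(m, colour):
    """A bicolouring is non-trivial iff some diagonal is rejected (R-R)."""
    return len(permitted_diagonals(m, colour)) < len(diagonals(m))

# Alternate bicolouring of the hexagon: even vertices red, odd vertices blue.
alt = {i: "R" if i % 2 == 0 else "B" for i in range(6)}
```

For this colouring, the three diagonals joining two red vertices are rejected, leaving the subcomplex \(\mathcal{Y}(\mathcal{P}_{6})\) of Fig. (4).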
### Shelling of finite arc complexes

Wilson gave an equivalent definition for shelling in [24], in the context of arc complexes.

_Property 1_ (Wilson).: Let \(X\) be a pure simplicial complex. There exists an enumeration of all the maximal faces of \(X\), denoted by \(\mathcal{T}:(C_{1},\ldots,C_{n})\), such that for every two positive integers \(j\) and \(k\) satisfying \(j<k\leq n\), there exists a positive integer \(i<k\) such that

1. \(C_{i}\cap C_{k}\) is a codimension one simplex,
2. \(C_{j}\cap C_{k}\subset C_{i}\cap C_{k}\).

In [24] (Proposition 3.5), Wilson proved that in the context of finite arc complexes, there is an equivalence between shellability and Property (1).

**Lemma 2.6**.: _Suppose that \(S_{g,n}\) is a surface with finite arc complex. Then the complex \(\mathcal{A}(S_{g,n})\) is shellable if and only if it satisfies Property (1)._

In the following lemma we show that the same holds for any pure subcomplex of the arc complex.

**Lemma 2.7**.: _A pure codimension zero subcomplex \(X\) of \(\mathcal{A}(S_{g,n})\) is shellable if and only if it satisfies Property (1)._

Proof.: We show that the enumeration \(\mathcal{T}=(C_{1},\ldots,C_{n})\) given by the property works as a shelling order. So we need to show that for every \(1\leq k\leq n\), the simplicial complex \(B_{k}:=\left(\bigcup\limits_{j=1}^{k-1}C_{j}\right)\bigcap C_{k}\) is a pure simplicial complex of dimension \(d-1\), where \(d\) is the dimension of the subcomplex \(X\) as well as of \(\mathcal{A}(S_{g,n})\). Let \(\sigma\subset B_{k}\) be any simplex of codimension more than one. Then it is contained in \(C_{j}\cap C_{k}\) for some \(j\in\{1,\ldots,k-1\}\). Since \(X\) is a subcomplex of the arc complex, the complex \(C_{j}\cap C_{k}\) is in fact a simplex. From Property (1) we get that there is an \(i<k\) such that \(C_{i}\cap C_{k}\) is of dimension \(d-1\) and \(C_{j}\cap C_{k}\subset C_{i}\cap C_{k}\subset B_{k}\). Hence we get that the simplex \(\sigma\) is not maximal in \(B_{k}\) and the dimension of any maximal simplex containing \(\sigma\) is \(d-1\).

Conversely, let us suppose that \(X\) is shellable with a shelling order \(\mathcal{T}_{n}=(T_{1},\ldots,T_{n})\). Consider two maximal simplices \(T_{j},T_{k}\) with \(j<k\leq n\). If \(T_{j}\cap T_{k}=\varnothing\), there is nothing to show. So we assume that \(T_{j}\cap T_{k}\neq\varnothing\). Once again, the intersection \(T_{j}\cap T_{k}\) is a simplex because \(X\) is a subcomplex of the arc complex. Let \(B_{k}:=\left(\bigcup\limits_{i=1}^{k-1}T_{i}\right)\cap T_{k}\) as before. From the hypothesis, \(B_{k}\) is a pure simplicial complex of dimension \(d-1\). So we get that \(T_{j}\cap T_{k}\) is contained in a \((d-1)\)-simplex \(T_{i}\cap T_{k}\), where \(T_{i}\) is a maximal simplex with \(i<k\). This \(T_{i}\) satisfies the two conditions of Property (1).

The orientable surfaces with finite arc complexes are the convex polygons \(\mathcal{P}_{m}\), the once-punctured polygons \(\mathcal{P}_{m}^{\times}\), and annuli with one marked point on one boundary component and \(m\) marked points on the other boundary component. Note that a once-punctured convex polygon is an annulus with marked points only on one boundary component. In [3], Dupont and Palesi introduce the _quasi_ arc complex of a non-orientable surface where they include any one-sided curve in the 0-skeleton. The only non-orientable surface with finite (quasi) arc complex is a Möbius strip with marked points on its boundary. Wilson proved the sphericity of this complex in [24], using shellability. In order to prove the shellability, he first gave shelling orders for the arc complexes of a convex polygon and a once-punctured polygon.

**Theorem 2.8** (Wilson, Proposition 2.11, Claim 1.4).: _The arc complex \(\mathcal{A}(\mathcal{P}_{m})\) of a convex polygon \(\mathcal{P}_{m}\) (\(m\geq 4\)) is an \((m-4)\)-pseudo-manifold with boundary and is shellable._

**Theorem 2.9** (Wilson, Proposition 2.11, 3.13).: _The arc complex \(\mathcal{A}(\mathcal{P}_{m}^{\times})\) of a punctured polygon \(\mathcal{P}_{m}^{\times}\) (\(m\geq 2\)) is an \((m-2)\)-pseudo-manifold with boundary and is shellable._
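Property (1), like the shelling condition itself, is finite and can be checked by brute force on small complexes, for instance on the facet lists produced by `triangulations` above. A hedged sketch (facets are encoded as sets of diagonals; the function name is ours):

```python
def satisfies_property_1(order):
    """Wilson's Property (1): for all j < k there is i < k with
    |C_i & C_k| = d - 1 and C_j & C_k contained in C_i & C_k,
    where d is the common facet size (the complex is pure)."""
    d = len(order[0])
    for k in range(1, len(order)):
        Ck = set(order[k])
        ridges = [set(Ci) & Ck for Ci in order[:k] if len(set(Ci) & Ck) == d - 1]
        for Cj in order[:k]:
            if not any(set(Cj) & Ck <= r for r in ridges):
                return False
    return True
```

By Lemmas 2.6 and 2.7, on a pure codimension zero subcomplex of an arc complex this check is equivalent to the enumeration being a shelling order.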
### Hyperbolic geometry

In this section we recall a few definitions from hyperbolic geometry which will be used in Section 4 to give the application of our main theorems in the context of hyperbolic polygons, as previewed in the introduction. The hyperbolic plane, denoted by \(\mathbb{H}^{2}\), is the unique (up to isometry) complete simply-connected Riemannian 2-manifold of constant curvature equal to \(-1\). Its boundary, denoted by \(\partial_{\infty}\mathbb{H}^{2}\), is homeomorphic to a circle \(\mathbb{S}^{1}\). Its orientation-preserving isometry group is isomorphic to \(\mathrm{PSL}(2,\mathbb{R})\). This group has three types of elements: elliptic (one fixed point in \(\mathbb{H}^{2}\)), parabolic (one fixed point in \(\partial_{\infty}\mathbb{H}^{2}\)) and hyperbolic (two fixed points in \(\partial_{\infty}\mathbb{H}^{2}\)). A horocycle based at a point in \(\partial_{\infty}\mathbb{H}^{2}\) is the orbit of a parabolic element fixing that point. A horoball is the convex hull of a horocycle.

An _ideal \(n\)-gon_, denoted by \(\Pi_{n}^{\diamond}\), is defined as the convex hull in \(\mathbb{H}^{2}\) of \(n\,(\geq 3)\) distinct points on \(\partial_{\infty}\mathbb{H}^{2}\). The points on the boundary are called _vertices_ and the _edges_ are infinite geodesics of \(\mathbb{H}^{2}\) joining two consecutive vertices. The restriction of the hyperbolic metric to an ideal polygon gives it a complete finite-area hyperbolic metric with geodesic boundary. For \(n\geq 2\), an _ideal once-punctured \(n\)-gon_, denoted by \(\Pi_{n}^{\times}\), is another non-compact complete hyperbolic surface with geodesic boundary, obtained from an ideal \((n+2)\)-gon by identifying two consecutive edges using a parabolic element of \(\mathrm{PSL}(2,\mathbb{R})\) that fixes the common vertex. The edges of the polygon are the connected components of the boundary. The vertices are the quotients of the vertices of \(\Pi_{n+2}^{\diamond}\).

## 3 Main theorems

### Bicoloured convex polygon

In this section we prove that the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\subset\mathcal{A}(\mathcal{P}_{m})\) is PL-homeomorphic to a closed ball. Firstly we show that this subcomplex is a pseudo-manifold with boundary when the bicolouring is non-trivial.

**Lemma 3.1**.: _For \(m\geq 4\) and any non-trivial bicolouring of \(\mathcal{P}_{m}\), the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) of \(\mathcal{A}(\mathcal{P}_{m})\) is pure of dimension \(m-4\)._

Proof.: Let \(\sigma\) be a simplex of \(\mathcal{Y}(\mathcal{P}_{m})\) of dimension \(k\geq 1\). The diagonals corresponding to the \(0\)-simplices of \(\sigma\) decompose the polygon \(\mathcal{P}_{m}\) into \(k+2\) regions, some of which are untriangulated polygons with at least \(4\) vertices. Since these diagonals are permitted, every smaller untriangulated polygon has a blue vertex. We triangulate each one of them with a fan triangulation based at one such blue vertex. Since the dimension of the maximal simplex in the uncoloured polygon is \(m-4\), we get that any maximal simplex containing \(\sigma\) is of dimension \(m-4\).

Next, we show that the subcomplex is strongly connected.

**Lemma 3.2**.: _For \(m\geq 4\) and any non-trivial bicolouring, the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) is strongly connected._

Proof.: We will prove that any permissible triangulation \(T\) is connected by flips to the fan triangulation based at any blue vertex, by induction on \(m\). For \(m=4\), there are two non-trivial bicolourings possible: either alternating \(R,B\) vertices or three \(R\) vertices. In each case the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) is a single \(0\)-simplex, which is a fan triangulation based at a blue vertex. Suppose that for any bicolouring of polygons with \(4,\ldots,m-1\) vertices, any permissible triangulation \(T\) is connected by flips to a fan triangulation. Now, consider the polygon \(\mathcal{P}_{m}\). We name \(v_{1}\) any blue vertex and we enumerate the rest of the vertices in the clockwise direction. Let \(T_{0}\) be the fan triangulation based at \(v_{1}\), with diagonals \(l_{i}\) joining \(v_{1}\) and \(v_{i}\). Suppose that \(l_{i}\in T\cap T_{0}\) for some \(i=3,\ldots,m-2\).
See the left panel of Fig. (6). It divides the polygon \(\mathcal{P}_{m}\) into two smaller polygons \(\mathcal{P}_{i},\mathcal{P}_{m-i+2}\) whose vertices are \(v_{1},v_{2},\ldots,v_{i}\) and \(v_{i},v_{i+1},\ldots,v_{1}\), respectively. The triangulation \(T\) triangulates these two polygons. It is possible that one of these smaller polygons has a trivial bicolouring. In this case, we use the fact that the dual of the full arc complex (the associahedron) is connected. In particular, the restriction of \(T\) to this smaller polygon is connected by flips to the fan triangulation based at a blue vertex. If both the smaller polygons have a non-trivial bicolouring, we use our induction hypothesis inside each of them to connect \(T\) by flips to the fan triangulation based at \(v_{1}\). So \(T\), as a triangulation of \(\mathcal{P}_{m}\), is connected by flips to the triangulation \(T_{0}\).

Now we suppose that there are no diagonals in \(T\) that are incident at \(v_{1}\). Since \(T\) is a permissible triangulation, the vertex \(v_{1}\) is contained in a unique triangle with its other vertices at \(v_{2}\) and \(v_{m}\), at least one of which is blue. The diagonal joining \(v_{2},v_{m}\) is an edge of exactly one other triangle, whose third vertex is some \(v_{j}\neq v_{1}\). Let \(T^{\prime}\) be the triangulation obtained by flipping the diagonal joining \(v_{2},v_{m}\) to the permissible diagonal \(l_{j}\) joining \(v_{1},v_{j}\). Now we are in the first case. This concludes the induction step.
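The flip move used in this proof (and repeatedly below) exchanges a diagonal for the other diagonal of the quadrilateral formed by its two adjacent triangles. A minimal sketch in the conventions of the earlier snippets (triangulations as sets of sorted vertex pairs; we assume `d` belongs to `T`):

```python
def flip(T, d, m):
    """Flip the diagonal d = (a, b) of a triangulation T of the m-gon:
    the two triangles a-b-x and a-b-y adjacent to d are replaced by
    a-x-y and b-x-y, i.e. d is exchanged for the diagonal (x, y)."""
    a, b = d

    def joined(u, v):  # boundary edge of the polygon, or a diagonal of T
        return (v - u) % m in (1, m - 1) or tuple(sorted((u, v))) in T

    # exactly two vertices are joined to both a and b: the two triangle apexes
    x, y = [w for w in range(m) if w not in (a, b)
            and joined(a, w) and joined(b, w)]
    return (T - {d}) | {tuple(sorted((x, y)))}
```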
Next we show the following.

**Lemma 3.3**.: _Every codimension one simplex of the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) is contained in at most two maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m})\)._

Proof.: Let \(\sigma\) be a \((d-1)\)-simplex, where \(d\) is the dimension of \(\mathcal{Y}(\mathcal{P}_{m})\). Since the complex \(\mathcal{A}(\mathcal{P}_{m})\) of an uncoloured polygon is a pseudo-manifold, we have that \(\sigma\) is contained in exactly two \(d\)-simplices of \(\mathcal{A}(\mathcal{P}_{m})\). These two simplices represent two triangulations which have all but one diagonal in common. These two diagonals can be either \(R-B\) and \(B-B\), or both \(B-B\). Indeed, if both were \(R-R\) diagonals or both were \(R-B\) diagonals, then there would be an \(R-R\) diagonal inside \(\sigma\) bounding the quadrilateral region where the two intersecting diagonals lie. This is impossible because \(\sigma\subset\mathcal{Y}(\mathcal{P}_{m})\). When both the diagonals are of type \(B-B\), then both the maximal simplices lie inside \(\mathcal{Y}(\mathcal{P}_{m})\). Otherwise only the maximal simplex generated by \(R-B,B-B\) diagonals lies in \(\mathcal{Y}(\mathcal{P}_{m})\). In this case, \(\sigma\) is a boundary simplex of \(\mathcal{Y}(\mathcal{P}_{m})\).

Using Lemmas (3.1)-(3.3), we get the following.

**Theorem 3.4**.: _The subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) is a \(d\)-pseudo-manifold with boundary._

As the last step, we prove that the subcomplex is shellable.

**Theorem 3.5**.: _For \(m\geq 4\), the simplicial complex \(\mathcal{Y}(\mathcal{P}_{m})\) satisfies Property (1)._

Proof.: We use induction on the number of vertices. For \(m=4\), there are two possible colourings. In both cases, the complex is a single \(0\)-simplex which satisfies the property trivially. Let us now suppose that the statement is true for \(4,\ldots,m-1\). Due to the bicolouring, there is an edge of the polygon joining one red vertex (the right one, say) and one blue vertex (the left one). Enumerate the vertices in the anticlockwise direction starting from the blue vertex so that the red vertex is numbered \(m\). See Fig. (7). For \(i=3,\ldots,m-2\), let \(\sigma_{i}\) be the simplex of \(\mathcal{Y}(\mathcal{P}_{m})\) generated by the two diagonals joining the vertex \(i\) to the vertices \(1\) and \(m\). In Fig. (7), the diagonals of \(\sigma_{3}\) are drawn in grey and those of \(\sigma_{5}\) are in green. If the vertex \(2\) is a blue vertex, then let \(\sigma_{2}\) be the \(0\)-simplex corresponding to the diagonal joining \((2,m)\). Let \(\sigma_{m-1}\) be the \(0\)-simplex corresponding to the diagonal joining \((1,m-1)\), which is either \(R-B\) or \(B-B\) because the vertex \(1\) is a blue vertex.

For each \(i\), the simplex \(\sigma_{i}\) decomposes the polygon \(\mathcal{P}_{m}\) into an \(i\)-gon, an \((m-i+1)\)-gon and a triangle with vertices at \(1,m,i\). So the star of the simplex \(\sigma_{i}\), St(\(\sigma_{i}\)), in the complex \(\mathcal{Y}(\mathcal{P}_{m})\) is the join of \(\mathcal{Y}(\mathcal{P}_{i})\) and \(\mathcal{Y}(\mathcal{P}_{m-i+1})\). From the induction hypothesis, there exist enumerations of the maximal simplices of \(\mathcal{Y}(\mathcal{P}_{i})\) and \(\mathcal{Y}(\mathcal{P}_{m-i+1})\), respectively, satisfying Property (1). From Lemma (2.7), the complexes \(\mathcal{Y}(\mathcal{P}_{i}),\mathcal{Y}(\mathcal{P}_{m-i+1})\) are shellable. From Lemma (2.1), we get that St(\(\sigma_{i}\)) is shellable. Again from Lemma (2.7), we get an enumeration \(\mathcal{T}^{i}\) of the maximal simplices of St(\(\sigma_{i}\)).

Let \(J\subset\{2,\ldots,m-1\}\) be the set of blue vertices. Consider the ordering \(\mathcal{T}:=\mathcal{T}^{m-1},\mathcal{T}^{i_{1}},\ldots,\mathcal{T}^{i_{p}}\), where \(i_{1}>i_{2}>\ldots>i_{p}\) are all the elements of \(J\). We claim that \(\mathcal{T}\) is an ordering of all the maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m})\) satisfying the conditions of Property (1). First we show that every permissible triangulation appears in \(\mathcal{T}^{i}\) for exactly one \(i\). Let \(T\) be any triangulation of \(\mathcal{P}_{m}\) generated by \(R-B\) and \(B-B\) diagonals. Then either there is no diagonal of \(T\) incident at the vertex \(1\), or there is at least one, in which case the last diagonal of \(T\) (in the anticlockwise direction) must join \(1\) to a blue vertex, say \(k\in\{2,\ldots,m-1\}\), of the polygon. In the former case, the triangle of \(T\) containing the vertex \(1\) has its third side joining \(2\) and \(m\); since \(T\) is permissible and the vertex \(m\) is red, the vertex \(2\) must be blue, so \(T\in\mathcal{T}^{2}\). In the latter case, in Fig. (7) for example, the last blue vertex joined to \(1\) is at \(5\). Since this diagonal is the last one, there must be a diagonal of \(T\) joining the vertices \(k\) and \(m\). Hence \(T\in\mathcal{T}^{k}\). Also, for every \(j\neq j^{\prime}\), we have \(\mathcal{T}^{j}\cap\mathcal{T}^{j^{\prime}}=\varnothing\) because the diagonal \((j,m)\) intersects the diagonal \((1,j^{\prime})\).

Now suppose that \(S,T\) are two maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m})\) such that \(S\) precedes \(T\). We need to find another maximal simplex \(P\) preceding \(T\), obtained by flipping a diagonal of \(T\), such that any diagonal common to \(S\) and \(T\) is also common to \(P\) and \(T\). If \(S,T\) belong to the same \(\mathcal{T}^{i}\), then we can conclude immediately by using the induction hypothesis. So we assume that \(S\in\mathcal{T}^{j}\) and \(T\in\mathcal{T}^{k}\) with \(j>k\). Any diagonal common to both the triangulations \(S\) and \(T\) must lie outside the quadrilateral \(1,k,j,m\), because it must be disjoint from the diagonals \((1,k)\), \((k,j)\), \((j,m)\). See Fig. (7).
Now let \(P\) be the triangulation obtained from \(T\) by flipping the diagonal \((m,k)\) to \((1,j)\). Since this operation happens inside the quadrilateral, we get that \(S\cap T\subset P\cap T\). Also, by construction, \(P\in\mathrm{St}(\sigma_{j})\), which precedes \(\mathrm{St}(\sigma_{k})\) and hence \(T\).

From Theorem 2.2, Theorem 3.4 and Theorem 3.5, we get the following.

**Theorem 3.6**.: _For \(m\geq 4\), the subcomplex \(\mathcal{Y}(\mathcal{P}_{m})\) of a convex polygon \(\mathcal{P}_{m}\) with a non-trivial bicolouring is PL-homeomorphic to a closed ball of dimension \(m-4\)._

### Punctured bicoloured polygons

In this section, we prove that the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\subset\mathcal{A}(\mathcal{P}_{m}^{\times})\) is a closed ball of dimension \(m-2\) for any non-trivial bicolouring. First we show that the subcomplex is a pseudo-manifold with boundary.

**Lemma 3.7**.: _For \(m\geq 2\) and any non-trivial bicolouring, the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) of \(\mathcal{A}(\mathcal{P}_{m}^{\times})\) is pure of dimension \(m-2\)._

Proof.: Let \(\sigma\) be any simplex of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\). If one of its 0-simplices corresponds to a maximal diagonal, one of the regions in its complement is a bicoloured unpunctured polygon. See Fig. (8). Using Lemma 3.1, we have that \(\sigma\) is contained in some permissible maximal simplex of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\). If none of the 0-simplices represents a maximal diagonal, let \(l\in\sigma\) be the diagonal that separates the puncture from the rest of the diagonals of \(\sigma\). At least one of the endpoints of \(l\) is a blue vertex. We add to \(\sigma\) the maximal diagonal based at this vertex. Again we use Lemma 3.1 on its complement to get a maximal simplex of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) containing \(\sigma\).

Figure 8: Complementary regions to a maximal diagonal

Next we show that the subcomplex is strongly connected.

**Lemma 3.8**.: _For \(m\geq 2\) and any non-trivial bicolouring, the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) is strongly connected._

Proof.: Let \(T\) be a permissible triangulation of \(\mathcal{P}_{m}^{\times}\). Then there is a maximal diagonal based at a blue vertex. Cutting along the diagonal we get a bicoloured unpunctured polygon, triangulated by the rest of the diagonals of \(T\). Let \(T_{0}\) be the fan triangulation based at one of the two blue vertices given by the maximal diagonal. By Lemma 3.2, \(T\) is connected to \(T_{0}\) by flips, and all the intermediate triangulations remain disjoint from the maximal diagonal.

Now we show the following.

**Lemma 3.9**.: _Every codimension one simplex of the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) is contained in at most two maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\)._

Proof.: Let \(\sigma\) be a codimension one simplex of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\). From Theorem 2.9 we get that \(\sigma\) is contained in exactly two codimension \(0\) simplices \(\eta,\eta^{\prime}\) of \(\mathcal{A}(\mathcal{P}_{m}^{\times})\). From Lemma 3.7, we know that at least one of \(\eta,\eta^{\prime}\) is inside \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\). If none of the \(0\)-simplices of \(\eta,\eta^{\prime}\) contains a rejected diagonal, then they are both inside \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\); otherwise only one of them is.

Finally we prove that the subcomplex is shellable.
**Theorem 3.10**.: _For \(m\geq 2\), the simplicial complex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) satisfies Property (1)._

Proof.: As in the proof of Theorem 3.5, we enumerate the vertices in the anticlockwise direction so that the first vertex is blue and the \(m\)-th is red. Let \(d_{i_{1}},\ldots,d_{i_{p}}\) be the maximal diagonals based at the blue vertices \(i_{1},\ldots,i_{p}\), respectively. The star \(\mathrm{St}(d_{i_{j}})\) is the coloured arc complex \(\mathcal{Y}(\mathcal{P}_{m+1})\) of the unpunctured bicoloured polygon \(\mathcal{P}_{m+1}\) obtained by cutting \(\mathcal{P}_{m}^{\times}\) along \(d_{i_{j}}\). Let \(\mathcal{T}^{i_{j}}\) be the shelling order given by Theorem 3.5. We claim that \(\mathcal{T}:=\mathcal{T}^{i_{1}},\ldots,\mathcal{T}^{i_{p}}\) is a shelling order of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) satisfying Property (1). Since every permissible triangulation of \(\mathcal{P}_{m}^{\times}\) contains exactly one maximal diagonal, based at some blue vertex, \(\mathcal{T}\) is an enumeration of the maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\).

Now suppose that \(S,T\) are two maximal simplices of \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) such that \(S\) precedes \(T\). If they belong to the same star \(\mathrm{St}(d_{i_{j}})\), then we conclude using the proof of Theorem 3.5. Now let \(S\in\mathrm{St}(d_{i_{j}})\) and \(T\in\mathrm{St}(d_{i_{k}})\) with \(j<k\leq p\). The diagonals of \(S\cap T\), if any, lie outside the region containing \(d_{i_{j}}\cup d_{i_{k}}\) bounded by the two diagonals joining the vertices \(i_{j},i_{k}\). See Fig. (9). Let \(P\) be the triangulation obtained from \(T\) by flipping the diagonal \(d_{i_{k}}\) to \(d_{i_{j}}\). So \(P\in\mathrm{St}(d_{i_{j}})\), hence preceding \(T\) in the order. Also, by the construction of \(P\), we have that \(S\cap T\subset P\cap T\). This concludes the proof.

See Fig. (10) for a shelling order for the subcomplex \(\mathcal{Y}(\mathcal{P}_{4}^{\times})\) when \(\mathcal{P}_{4}^{\times}\) is endowed with the alternate bicolouring. In this case, there are only two permissible maximal diagonals: \(d_{1}\) and \(d_{3}\). Cutting along \(d_{1}\), we get an unpunctured pentagon with a non-trivial bicolouring. As in the proof of Theorem (3.6), we consider the stars of \(\sigma_{2}\) (purple) and \(\sigma_{4}\) (orange). The subcomplex \(\mathcal{Y}(\mathcal{P}_{5})\) is a closed \(1\)-ball as shown in the bottom right panel. A shelling order of this subcomplex gives a shelling order for \(\mathrm{St}(d_{1})\). Similarly, we get a shelling order for \(\mathrm{St}(d_{3})\). Combining the two, we get a shelling order for \(\mathcal{Y}(\mathcal{P}_{4}^{\times})\).

From Lemmas (3.7), (3.8), (3.9) and Theorem (3.10), we get the following.

**Theorem 3.11**.: _For \(m\geq 2\), the subcomplex \(\mathcal{Y}(\mathcal{P}_{m}^{\times})\) of a once-punctured polygon \(\mathcal{P}_{m}^{\times}\) with any non-trivial bicolouring is a closed ball of dimension \(m-2\)._

Figure 10: A shelling order for \(\mathcal{Y}(\mathcal{P}_{4}^{\times})\) with the alternate bicolouring

## 4 Applications: decorated hyperbolic polygons

Decorated polygons. A vertex \(v\) of an ideal (possibly punctured) polygon is said to be _decorated_ if a horoball based at \(v\) is added. For \(n\geq 3\), a _decorated ideal \(n\)-gon_, denoted by \(\widehat{\Pi_{n}^{\diamond}}\), is an ideal polygon, all of whose vertices are decorated with pairwise disjoint horoballs.
Similarly, for \(n\geq 2\), a _decorated ideal once-punctured \(n\)-gon_, denoted by \(\widehat{\Pi_{n}^{\times}}\), is a once-punctured ideal \(n\)-gon, all of whose vertices are decorated with pairwise disjoint horoballs. See Fig. (11) for a decorated triangle and a decorated once-punctured bigon.

Figure 11: The two types of decorated hyperbolic polygons

On these decorated polygons we consider two types of arcs. An _arc_ on a hyperbolic polygon \(\Pi\) is an embedding \(\alpha\) of a closed interval \(I\subset\mathbb{R}\) into \(\Pi\). There are two possibilities depending on the nature of the interval:

1. \(I=[a,b]\): In this case, the arc \(\alpha\) is finite. We consider those finite arcs that verify: \(\alpha(a),\alpha(b)\in\partial\Pi\) and \(\alpha(I)\cap\partial\Pi=\{\alpha(a),\alpha(b)\}\).
2. \(I=[a,\infty)\): These are embeddings of hyperbolic geodesic rays in the interior of the polygon such that \(\alpha(a)\in\partial\Pi\). The infinite end converges to a vertex of the polygon.

An arc \(\alpha\) of a polygon \(\Pi\) with non-empty boundary is called _non-trivial_ if each connected component of \(\Pi\smallsetminus\{\alpha\}\) has at least one decorated vertex. Let \(\mathcal{A}\) be the set of all non-trivial arcs of the two types above. The _arc complex_ of a decorated hyperbolic polygon is a simplicial complex \(\mathcal{A}\left(\Pi\right)\) whose \(0\)-simplices are given by the isotopy classes of arcs in \(\mathcal{A}\) fixing the boundary and decorated vertices, and, for \(k\geq 1\), every \(k\)-simplex is given by a \((k+1)\)-tuple of pairwise disjoint and distinct isotopy classes.

Now we establish a link between these decorated polygons and Euclidean polygons with bicolourings. We start with the polygon \(\mathcal{P}_{2n}\) with \(n\geq 2\) and consider the alternate \(R-B\) bicolouring of its vertices. To every decorated polygon \(\widehat{\Pi_{n}^{\diamond}}\), one can associate the polygon \(\mathcal{P}_{2n}\) with an alternate \(B-R\) bicolouring of its vertices, in the following way:

* a decorated vertex of \(\widehat{\Pi_{n}^{\diamond}}\) corresponds to a red vertex of \(\mathcal{P}_{2n}\),
* an edge of \(\widehat{\Pi_{n}^{\diamond}}\) corresponds to a blue vertex of \(\mathcal{P}_{2n}\),

such that one \(R\)-vertex and one \(B\)-vertex are consecutive in \(\mathcal{P}_{2n}\) if and only if the corresponding edge and decorated vertex of \(\widehat{\Pi_{n}^{\diamond}}\) are consecutive. See Fig. (12). Then we have the bijections:

\[\left\{\text{Isotopy classes of edge-to-edge arcs of }\widehat{\Pi_{n}^{\diamond}}\right\}\leftrightarrow\left\{B-B\text{ diagonals}\right\}\]
\[\left\{\text{Isotopy classes of edge-to-vertex arcs of }\widehat{\Pi_{n}^{\diamond}}\right\}\leftrightarrow\left\{B-R\text{ diagonals}\right\}\]

So the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\diamond}}\right)\) is isomorphic to the subcomplex \(\mathcal{Y}(\mathcal{P}_{2n})\) of \(\mathcal{A}(\mathcal{P}_{2n})\). By starting with a punctured polygon \(\mathcal{P}_{2n}^{\times}\) with an alternate bicolouring of its vertices and by using the same argument as above, we get that the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\times}}\right)\) is isomorphic to the subcomplex \(\mathcal{Y}(\mathcal{P}_{2n}^{\times})\) of \(\mathcal{A}(\mathcal{P}_{2n}^{\times})\).
From Theorem 3.6 we get:

**Corollary 4.1**.: _For \(n\geq 3\), the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\diamond}}\right)\) of a decorated ideal polygon \(\widehat{\Pi_{n}^{\diamond}}\) is PL-homeomorphic to a closed ball of dimension \(2n-4\)._

Finally, from Theorem 3.11, we get:

**Corollary 4.2**.: _For \(n\geq 2\), the arc complex \(\mathcal{A}\left(\widehat{\Pi_{n}^{\times}}\right)\) of a decorated once-punctured polygon \(\widehat{\Pi_{n}^{\times}}\) is PL-homeomorphic to a closed ball of dimension \(2n-2\)._

Figure 12: From a decorated ideal \(n\)-gon to a Euclidean \(\mathcal{P}_{2n}\) with alternate bicolouring
2301.02454
Optimizing the generation of polarization squeezed light in nonlinear optical fibers driven by femtosecond pulses
Bright squeezed light can be generated in optical fibers utilizing the Kerr effect for ultrashort laser pulses. However, pulse propagation in a fiber is subject to nonconservative effects that deteriorate the squeezing. Here, we analyze two-mode polarization squeezing, which is SU(2)-invariant, robust against technical perturbations, and can be generated in a polarization-maintaining fiber. We perform a rigorous numerical optimization of the process and the pulse parameters using our advanced model of quantum pulse evolution in the fiber that includes various nonconservative effects and real fiber data. Numerical results are consistent with experimental results.
A. V. Andrianov, N. A. Kalinin, A. A. Sorokin, E. A. Anashkina, L. L. Sanchez-Soto, J. F. Corney, G. Leuchs
2023-01-06T10:35:52Z
http://arxiv.org/abs/2301.02454v1
Optimizing the generation of polarization squeezed light in nonlinear optical fibers driven by femtosecond pulses

###### Abstract

Bright squeezed light can be generated in optical fibers utilizing the Kerr effect for ultrashort laser pulses. However, pulse propagation in a fiber is subject to nonconservative effects that deteriorate the squeezing. Here, we analyze two-mode polarization squeezing, which is SU(2)-invariant, robust against technical perturbations, and can be generated in a polarization-maintaining fiber. We perform a rigorous numerical optimization of the process and the pulse parameters using our advanced model of quantum pulse evolution in the fiber that includes various nonconservative effects and real fiber data. Numerical results are consistent with experimental results.

1 Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod 603950, Russia
2 Max Planck Institute for the Science of Light, 91058 Erlangen, Germany
3 Advanced School of General and Applied Physics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod 603022, Russia
4 Departamento de Óptica, Facultad de Física, Universidad Complutense, 28040 Madrid, Spain
5 School of Mathematics and Physics, University of Queensland, Brisbane, Queensland 4072, Australia
6 Department of Physics, University of Erlangen-Nuremberg, 91058 Erlangen, Germany
*[email protected]

## 1 Introduction

Squeezed light is one of the most important resources in quantum optics, with many existing and foreseen applications, including improving the sensitivity and precision of optical metrology, quantum communications, and quantum computing with continuous variables [1, 2, 3, 4]. Squeezed light, as a theoretical concept, has been studied for a long time (see, e.g., [5] for a review). However, the first experimental observation of squeezing was made in 1986 [6]. Since then, the experimental methods have improved significantly, and quantum squeezing has already become a useful technology for applications. For example, modern gravitational wave detectors use squeezed light to enhance the sensitivity and increase the observable range in space [7]. Squeezed light also plays an important role in modern theoretical studies, e.g., cavity quantum electrodynamics, quantum phase transitions [8], and symmetry breaking in quantum systems [9].

Squeezed light can be generated by using various optical nonlinearities (see, e.g., [1] for a review), including second-order nonlinear processes, such as parametric down-conversion and oscillations [10, 11], parametric up-conversion [12, 13], third-order nonlinearity in atomic vapors and fibers, and also by direct intensity noise reduction by driving semiconductor lasers with an extremely low-noise current source [14]. In this work, we concentrate on the optical Kerr effect, which can produce squeezing in amorphous media and is not limited by phase-matching conditions, thus providing a larger bandwidth. In the simplest scheme, a coherent state of light is launched into the nonlinear Kerr medium. Amplitude-phase correlations are induced because the nonlinear phase shift is proportional to the intensity. These correlations result in the formation of a squeezed Wigner distribution with elliptic contours in phase space, in contrast to the rotationally symmetric Gaussian distribution of the initial coherent state.
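This ellipse formation can be illustrated with a toy single-mode Monte-Carlo in the Gaussian regime: sample the Wigner distribution of a coherent state, apply the intensity-dependent phase rotation, and compare the smallest quadrature variance with shot noise. A minimal sketch (the mean amplitude `alpha0` and Kerr phase per photon `kappa` are illustrative values of ours, not parameters from this work):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha0, kappa, n = 10.0, 2e-3, 200_000   # mean amplitude, phase per photon, samples

# Wigner samples of a coherent state: vacuum noise of variance 1/4 per quadrature
a = alpha0 + (rng.normal(size=n) + 1j * rng.normal(size=n)) / 2
a = a * np.exp(1j * kappa * np.abs(a) ** 2)   # Kerr: phase grows with intensity

cov = np.cov(np.vstack([a.real, a.imag]))     # 2x2 quadrature covariance
vmin = np.linalg.eigvalsh(cov).min()
print(f"smallest quadrature variance: {10 * np.log10(vmin / 0.25):.1f} dB vs shot noise")
```

The contours of the sampled distribution are (approximately) ellipses tilted with respect to the mean field, which is exactly the detection obstacle discussed below.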
The full quantum treatment shows that the Kerr interaction leads to a non-Gaussian periodic dynamics with the appearance of "cat states" and recurrence to the initial coherent state [15, 16]. However, for reasonable values of nonlinearities, light power and loss-limited distances, the Gaussian approximation can be used within a large margin. In the first fiber experiment continuous-wave (CW) light was used, and less than 1 dB of squeezing was achieved in a 114-m-long fiber [17]. This experiment required enormous efforts to overcome the destructive effects of losses and of the noise induced by Brillouin scattering on the thermally excited guided acoustic phonons in the fiber (GAWBS, guided acoustic wave Brillouin scattering) accumulated over the long fiber. It was then proposed to use pulsed light, because it is much easier to achieve high peak power while keeping the average power at a moderate level, thus requiring much shorter fibers and greatly reducing the effect of losses and GAWBS. Although most of the following fiber squeezing studies rely on short pulses, we note that in modern fibers based on glasses with very high nonlinearity and good transparency, e.g. chalcogenide and tellurite glasses, CW or long-pulse squeezing may be worth revisiting [18, 19, 20].

A quantum theory of pulse propagation in dispersive nonlinear media [21] suggested that quadrature squeezing can be achieved for pulsed light, especially for solitons that preserve their shape and peak intensity over long distances despite dispersion. Early experiments utilized both nonsoliton [22, 23] and soliton [24, 25] pulse propagation. One obstacle in using Kerr squeezing is that the squeezed ellipse is tilted in phase space with respect to the mean vector of the field amplitude, so that the output quantum state is not amplitude-squeezed, which hinders direct detection of the reduced noise with power detectors. Several methods to overcome this obstacle were proposed, such as using reflection from a highly dispersive cavity [17] or employing two-mode squeezing in Sagnac-type and Mach-Zehnder-type fiber interferometers to facilitate heterodyne detection [24, 22, 25, 26, 27, 28]. Symmetric Sagnac interferometers producing a nearly vacuum squeezed state [24, 25], as well as asymmetric interferometers producing bright coherent squeezed states [27, 29], were used. Another approach relies on the spectral filtering of the pulse after nonlinear propagation, which converts noise correlations between different spectral bands into directly detectable amplitude squeezing [30].

One of the most robust techniques relies on squeezing of the uncertainty of the polarization state. By generating two squeezed beams in two polarization modes of a polarization-maintaining fiber and appropriately transforming the output polarization state, the reduced uncertainty of the polarization state can be directly measured by power detectors [31, 32, 33, 34]. The best squeezing achieved so far with fibers was observed in such a system [32]. Ultrashort pulses propagating in fibers are susceptible to the nonconservative effects of spontaneous and stimulated Raman scattering. It was quickly recognized [35] and tested in experiments and simulations [32, 33] that the Raman effect is one of the most important factors limiting squeezing in optical fibers for ultrashort pulses. Whereas the electronic Kerr nonlinearity is not sensitive to the pulse duration, the delayed Raman contribution is.
It is known that the influence of the Raman effect on the classical properties of ultrashort fiber solitons scales with the pulse duration, being much more pronounced for shorter pulses. This suggests that increasing the pulse duration may also help reduce the detrimental Raman contribution to quantum squeezing. However, a comprehensive analysis of pulsed Kerr squeezing with optimization over the full set of pulse parameters has not been done yet. In this work we perform rigorous numerical simulations to test the dependence of the squeezing on the pulse energy and pulse duration as well as the fiber length, and we identify the regions of optimum parameters. We also propose simple analytical considerations that help to identify the role of the Raman effect and obtain the approximate scaling of the optimal pulse duration. The numerical results are supported by experimental data.

## 2 Polarization squeezing description and numerical modeling

We focus on two-mode polarization squeezing because its experimental realization is quite robust and less susceptible to various technical disturbances. The scheme we consider, both in our modeling and in the experiment, utilizes the propagation of two pulses with orthogonal polarizations aligned along the axes of a birefringent nonlinear fiber. Both pulses experience Kerr squeezing. Polarization squeezing relies on the fact that the quantum uncertainty of the polarization state of two properly combined Kerr squeezed states can in some direction be made smaller than the shot-noise limit. The polarization state and polarization fluctuations can be described in terms of the Stokes operators

\[\begin{split}\hat{S}_{0}&=\hat{a}_{H}^{\dagger}\hat{a}_{H}+\hat{a}_{V}^{\dagger}\hat{a}_{V},\qquad\hat{S}_{1}=\hat{a}_{H}^{\dagger}\hat{a}_{H}-\hat{a}_{V}^{\dagger}\hat{a}_{V},\\ \hat{S}_{2}&=\hat{a}_{H}^{\dagger}\hat{a}_{V}+\hat{a}_{V}^{\dagger}\hat{a}_{H},\qquad\hat{S}_{3}=i(\hat{a}_{V}^{\dagger}\hat{a}_{H}-\hat{a}_{H}^{\dagger}\hat{a}_{V}),\end{split}\tag{1}\]

where \(\hat{a}_{H/V}^{\dagger}\) and \(\hat{a}_{H/V}\) are the creation and annihilation operators of two field modes, corresponding to orthogonal horizontal/vertical polarization modes. The uncertainty relations for the polarization operators and the corresponding squeezing can be defined in an SU(2)-invariant manner [36]. The operators \(\hat{S}_{1,2,3}\) can be represented as Cartesian components of a Stokes operator vector \(\hat{\mathbf{S}}=(\hat{S}_{1},\hat{S}_{2},\hat{S}_{3})\), and \(\hat{S}_{0}\) represents the total photon number. We can define squeezing without explicit use of Cartesian projections of the Stokes operator vector, by introducing the component \(\hat{S}_{\parallel}\) parallel to the mean value \(\langle\hat{\mathbf{S}}\rangle\) and two components \(\hat{S}_{\perp 1}\), \(\hat{S}_{\perp 2}\) in the plane orthogonal to \(\langle\hat{\mathbf{S}}\rangle\) (the so-called "dark plane") [33]. The nontrivial uncertainty relation for the variances \(\Delta^{2}\hat{S}_{\perp 1}\), \(\Delta^{2}\hat{S}_{\perp 2}\) then reads as \(\Delta^{2}\hat{S}_{\perp 1}\Delta^{2}\hat{S}_{\perp 2}\geq|\langle\hat{S}_{\parallel}\rangle|^{2}\). The squeezing is observed if there are components in the dark plane that obey [36]

\[\Delta^{2}\hat{S}_{\perp 1}<|\langle\hat{S}_{\parallel}\rangle|<\Delta^{2}\hat{S}_{\perp 2}.\tag{2}\]
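In the trajectory-based modeling described below, the Stokes observables of Eq. (1) become functions of the sampled c-number amplitudes, so the criterion of Eq. (2) can be evaluated directly on an ensemble. A minimal sketch (one complex amplitude per polarization mode per trajectory; symmetric-ordering corrections are neglected, which is harmless for bright beams; the function name is ours):

```python
import numpy as np

def dark_plane_variances(aH, aV):
    """Mean |<S_par>| and the extremal Stokes variances in the dark plane,
    from arrays of sampled field amplitudes of the two polarization modes."""
    S = np.vstack([
        np.abs(aH) ** 2 - np.abs(aV) ** 2,   # S1
        2 * np.real(np.conj(aH) * aV),       # S2
        2 * np.imag(np.conj(aH) * aV),       # S3
    ])
    mean = S.mean(axis=1)
    e_par = mean / np.linalg.norm(mean)      # assumes a bright mean field
    # orthonormal basis of the plane orthogonal to <S> (the dark plane)
    e1 = np.cross(e_par, np.eye(3)[np.argmin(np.abs(e_par))])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(e_par, e1)
    proj = np.vstack([e1, e2]) @ (S - mean[:, None])
    v = np.linalg.eigvalsh(np.cov(proj))     # extremal dark-plane variances
    return np.linalg.norm(mean), v.min(), v.max()
```

Polarization squeezing in the sense of Eq. (2) then corresponds to the returned minimal variance being smaller than \(|\langle\hat{S}_{\parallel}\rangle|\).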
The SU(2) invariance implies that rotations of the polarization state, which can be done with the use of birefringent plates and polarization splitting and combining optics, do not destroy the polarization squeezing, provided losses are small. The squeezing can then be measured after an appropriate rotation of the polarization state and a measurement of the Stokes parameter \(\hat{S}_{1}\) using a polarization splitter and a balanced detector [33]. Moreover, this quantum polarization description can be mapped one-to-one onto the quantum description of SU(2) interferometers [37], for which it is known that the sensitivity can be enhanced by using squeezed light states [38]. This means that polarization squeezed light can be used for precision interferometric measurements. It was shown that bright squeezed light can be used for increasing the precision of polarimetry [39] and for enhancing the sensitivity of a polarization interferometer [40].

Efficient numerical modeling of the quantum dynamics leading to squeezed-state formation in the fiber requires certain assumptions and simplifications. We assume that the pulses propagate independently of each other in the two polarization modes of the fiber. We apply the truncated Wigner method to model the quantum dynamics. This method is based on reconstructing the Wigner distribution by gathering a large number of stochastic trajectories using the stochastic nonlinear Schrödinger equation [41, 42, 43, 44]. Our particular implementation of this equation takes into account fiber dispersion (up to the third order) and the nonlinear response mediated by both the Raman and instantaneous electronic interactions. We model this equation with the parameters of the particular fiber which was used in our experiment (second-order dispersion \(\beta_{2}=-10.5\) ps\({}^{2}\)/km, third-order dispersion \(\beta_{3}=0.155\) ps\({}^{3}\)/km, nonlinear coefficient \(\gamma=3\) W\({}^{-1}\)km\({}^{-1}\), and the Raman response function as in [44]). The pulse parameters were chosen in ranges covering the values accessible in our experiment. We adopt the polarization squeezing detection scheme and use the corresponding routine to calculate the squeezing from the numerically simulated data [33, 44].

In the numerical modeling we calculate the squeezing for various input pulse parameters and various fiber lengths. We prepared initial conditions in the form of hyperbolic-secant-shaped pulses \(A=A_{0}/\cosh{(t/\tau)}\) with different durations in the range \(T=0.11-0.5\) ps (\(T\) is the FWHM duration, \(T=1.763\tau\)) and with a pulse energy in the range \(E=22.5-120\) pJ. We calculate the quantum dynamics for a propagation distance of up to 30 m for each initial condition. For each set of initial conditions, we modeled 5000 realizations of stochastic trajectories to reconstruct the squeezing ellipse. In the process of modeling, we calculated the squeezing at intermediate distances along the fiber and recorded the obtained values. After the calculation of squeezing, we could also introduce the losses of the detection scheme, which are inevitable in the experiment.
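The deterministic backbone of such a simulation is a split-step Fourier integrator for the nonlinear Schrödinger equation with the fiber parameters quoted above, seeded with half a photon of vacuum noise per temporal mode. The following single-trajectory, single-polarization sketch keeps \(\beta_{2}\), \(\beta_{3}\) and the instantaneous Kerr term but deliberately omits the Raman response, losses and the detection routine, so it is a skeleton of the method of [41, 42, 43, 44] rather than the full model used in this work; grid sizes, step counts and the random seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# fiber parameters from the text, converted to ps and m
b2, b3, gam = -10.5e-3, 0.155e-3, 3e-3      # ps^2/m, ps^3/m, 1/(W m)

nt, twin = 2 ** 12, 40.0                     # time-grid points, window (ps)
dt = twin / nt
t = (np.arange(nt) - nt // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)     # angular frequencies (rad/ps)

tau = 0.2 / 1.763                            # sech width of a 0.2 ps FWHM pulse
P0 = abs(b2) / (gam * tau ** 2)              # fundamental-soliton peak power (W)
A = np.sqrt(P0) / np.cosh(t / tau)           # envelope in sqrt(W) units

# truncated-Wigner initial condition: half a photon of noise per time bin
hw0 = 1.27e-19                               # photon energy at 1.56 um (J)
A = A + np.sqrt(hw0 / (4 * dt * 1e-12)) * (rng.normal(size=nt)
                                           + 1j * rng.normal(size=nt))

L, nz = 7.2, 2000                            # fiber length (m), z-steps
dz = L / nz
lin = np.exp(1j * (b2 / 2 * w ** 2 + b3 / 6 * w ** 3) * dz)
for _ in range(nz):
    A = np.fft.ifft(lin * np.fft.fft(A))              # dispersion step
    A = A * np.exp(1j * gam * np.abs(A) ** 2 * dz)    # instantaneous Kerr step
```

Repeating the propagation for many independent noise seeds (5000 in the text) yields the stochastic trajectories from which the squeezing ellipse, and Stokes statistics of the kind sketched above, are reconstructed.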
Femtosecond pulses at the central wavelength of 1.56 \(\mu\)m from a mode-locked laser with an adjustable pulse width and energy were launched into both polarization axes with equal power. The laser signal was shot-noise limited above radio frequencies of a few MHz. Because of the fiber birefringence and the difference in the group velocities of the polarization modes, the pulses quickly separated in time and propagated in the fiber almost independently. To match the pulse arrival times at the output we used two consecutive fiber pieces of precisely equal length spliced together with swapped fast and slow axes (rotated by 90 degrees) [40]. This allowed us to make the setup simple and robust, eliminating the free-space interferometer required in the original scheme [33] to adjust the pulse arrival time. We tested two fiber lengths of 5.2 and 30 meters. To measure squeezing we first adjusted the polarization state and the orientation of the squeezing ellipse using waveplates (as described in [33]) and measured the noise in the Stokes parameter \(\hat{S}_{1}\) using a polarization beam splitter, a balanced photodetector and a radiofrequency spectrum analyzer. We used several laser settings providing different pulse durations, and for each setting the pulse energy was optimized for the best squeezing. ## 4 Results and analysis The analysis of the numerical data allowed us to identify the most important processes and parameters affecting squeezing. The entire set of simulations provides a 3D data set, with squeezing calculated as a function of pulse duration, pulse energy, and fiber length. The slices of the 3D data set showing the squeezing versus input pulse duration and energy at eight distances along the fiber are presented in Fig. 1. The simulation was carried out on a \(14\times 14\) grid, but we have used data interpolation for a better visual representation. We can see that at the very beginning of the pulse propagation the squeezing mainly depends on the peak power of the pulse. This behavior is consistent with simple considerations: at small distances the pulse-shaping effects are not pronounced, so the pulse mainly experiences self-phase modulation and hence acquires squeezing proportional to the peak power and the fiber length. To emphasize this, we add lines of constant peak power to the plot. At larger distances the soliton effects begin to play an important role, so the pulse dynamics becomes more complicated. Pronounced regions of better squeezing are formed along curved lines. Better squeezing is observed for pulse parameters close to the fundamental soliton. To demonstrate this, we plot dashed black lines corresponding to the soliton parameters \(T=1.763\tau=3.526|\beta_{2}|/\gamma E\). For two distances (7.2 m and 30 m) we also plot curves corresponding to the pulse energies maximizing squeezing at variable pulse duration (dotted lines in Fig. 1). Note that for small and intermediate fiber lengths and large durations, solitons do not have enough distance to form, so the squeezing for such pulses largely depends on the input pulse peak power. This results in a C-shaped optimal energy curve for the fiber length of 7.2 m. We also compared our numerical findings with experimental results. The experimental points, obtained for several pulse durations after optimizing the pulse energy, are shown in Fig. 1 for the fiber lengths of 7.2 m and 30 m. It is evident that the experimental points align very well with the curves of optimum pulse energy obtained in numerical modeling.
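The fundamental-soliton relation behind the dashed black lines, and the soliton number parameter \(N^{2}=\tau\gamma E/2|\beta_{2}|\) used in the analysis below, can be evaluated with a few lines of arithmetic. The sketch below does this for the quoted fiber parameters; the selected durations are illustrative.

```python
# Fundamental-soliton energy from T = 3.526*|beta2|/(gamma*E) (dashed lines
# in Fig. 1), in units where beta2 is in ps^2/m and W*ps = pJ.
beta2 = 10.5e-3   # |beta2|, ps^2/m  (= 10.5 ps^2/km)
gamma = 3e-3      # 1/(W m)          (= 3 W^-1 km^-1)

for T in (0.2, 0.3, 0.5):               # FWHM durations, ps
    E = 3.526 * beta2 / (gamma * T)     # soliton energy, pJ
    tau = T / 1.763
    N2 = tau * gamma * E / (2 * beta2)  # soliton number squared (= 1 here)
    print(f"T = {T:.2f} ps -> E_sol = {E:.1f} pJ, N^2 = {N2:.2f}")
```

The resulting soliton energies (roughly 25-62 pJ for these durations) lie within the simulated energy range, consistent with the location of the dashed lines in Fig. 1.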
They represent two distinct cases. For shorter distances, the optimum pulse energy increases as the pulse duration increases above \(\sim\)0.3 ps. For longer distances, the optimum pulse energy decreases as the pulse width increases, so that the pulse parameters stay close to those of the fundamental soliton. The absolute values of squeezing in the experiment are significantly smaller than the modeled ones, but this is because the simulation presented in Fig. 1 does not include losses. However, the trends in the optimal pulse parameters are similar, since the effect of losses on squeezing does not depend directly on the pulse energy or duration. We extracted the maximum squeezing and the corresponding pulse parameters for different fiber lengths, as shown in Fig. 2. It is seen that better squeezing can be achieved in longer fibers, requiring progressively larger pulse durations and lower pulse energies. From the data we calculated the soliton number parameter \(N^{2}=\tau\gamma E/2|\beta_{2}|\), which characterizes how close the pulse is to the fundamental soliton (\(N=1\) corresponds to the fundamental soliton). Note that at short propagation distances the best squeezing is observed at energies about 10% larger than the soliton energy. For large distances the best squeezing is achieved for pulses very close to fundamental solitons. Optical losses in the fiber, at the fiber output, and in the squeezing detector can strongly limit the squeezing, especially for the very large values demonstrated in lossless modeling. The effect of losses at the output and of below-unity detector efficiency can simply be added on top of the quantum dynamics modeling [45]. The effect of distributed fiber losses needs to be modeled directly in the propagation equation, but it can also be included approximately as lumped losses at the output, and we did so to speed up our modeling.

Figure 1: Simulated squeezing for different pulse energies and durations at eight distances along the fiber from 0.6 to 30 meters (color maps). Dashed cyan lines in the plot for 0.6 m correspond to constant peak power. Dashed black lines in the rest of the plots represent fundamental soliton parameters. Dotted lines in the plots for 7.2 m and 30 m correspond to the pulse energy maximizing squeezing for a given pulse duration. Black dots show data points obtained in the experimental optimization of the pulse energy. Measured squeezing values are shown next to each dot.

In Fig. 2 we show the squeezing calculated when different losses are included: intrinsic fiber losses of 1 dB/km and losses at the fiber output and in the detection scheme. Including only the fiber losses, the squeezing is still very strong, although it starts to roll off with increasing fiber length. With an additional loss of 20% (the estimated value for our experiment) the observed squeezing saturates at around \(-6\) dB, which is close to the experimentally measured result. With smaller additional losses of 5%, which seems feasible in a carefully optimized experiment, we can expect observed squeezing at the level of \(-10\) to \(-12\) dB. ## 5 Analysis of pulse duration limitations due to Raman effect Now we discuss the optimization of the pulse duration. From the simple picture of the squeezing building up due to the Kerr effect, one may expect that the squeezing would improve with increasing soliton energy (and a corresponding shortening of its duration).
At small propagation distances, regions of best squeezing are indeed observed for shorter durations and highest energies, but the optimum gradually shifts towards longer durations and smaller energies at larger distances. For shorter pulses the squeezing degrades abruptly. We illustrate this in Fig. 3, in which we plot the maximum achievable squeezing, optimized with respect to the pulse energy, as a function of the pulse duration and the fiber length. A well-defined region of best squeezing is observed. The optimum pulse duration shifts slowly towards larger values as the propagation distance increases. The observed behavior can be explained by evaluating the influence of the nonconservative Raman effects. The Raman effect for ultrashort solitons manifests itself in a gradual self-frequency shift of the pulse central frequency [46]. The rate of the soliton self-frequency shift is given by the approximate formula [47] \[\frac{d\Omega}{dz}=\frac{8T_{R}|\beta_{2}|}{15\tau^{4}}\,, \tag{3}\] where \(T_{R}\) characterizes the strength of the Raman response, \(T_{R}\sim 3-4\) fs depending on the particular shape of the response function. For the quantum evolution the influence of the Raman effect is more complicated, but some useful conclusions can be drawn from the following considerations. The squeezing is related to certain correlations between the frequency sidebands of the quantum noise. These correlations build up due to the Kerr effect during soliton propagation. The Raman effect redistributes the spectral components of the soliton and destroys these correlations. To be able to make an estimate, we assume that the correlations are destroyed, and the squeezing is reduced, when the soliton frequency spectrum is shifted by an amount comparable to the pulse spectral width.

Figure 2: Maximum squeezing as a function of the fiber length for different losses (a): no losses (black curve), intrinsic fiber losses only (red curve), fiber losses and external losses of 5% (green curve), fiber losses and external losses of 20% (blue curve). Optimal pulse parameters (b): energy (black curve, left axis), duration (blue curve, right axis), soliton number (red curve, left axis, multiplied by 100).

The FWHM spectral width \(\Delta\Omega\) is inversely proportional to the pulse duration, \(\Delta\Omega\approx 2/T\). This leads to the condition \[\frac{|\beta_{2}|T_{R}z}{T^{3}}\equiv K\ll 1, \tag{4}\] where the dimensionless coefficient \(K\) absorbs all the constants. This condition must be well fulfilled for the Raman effects to be negligible. Figure 3 shows lines of constant \(K\). Starting from long pulse durations, the squeezing improves as the soliton duration decreases at any fixed fiber length, corresponding to increasing \(K\). However, once \(K\) becomes too large, the squeezing saturates and then degrades. The contours of the best-squeezing region coincide very well with the analytical predictions. The threshold at which the Raman effect becomes important corresponds to \(K\sim 0.05\ldots 0.1\). Although the presented numerical modeling was carried out for particular fiber parameters, the analytical condition (4) is fairly universal and thus can be used as a guide in planning and optimizing experiments. ## 6 Discussion and conclusion Our numerical modeling provides useful insights into how to optimize fiber polarization squeezing.
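As a concrete illustration of how condition (4) can serve as such a guide, the short check below evaluates the pulse duration at which \(K\) reaches the empirically found threshold for the fiber parameters of Sec. 2; the value of \(T_{R}\) is an assumed representative number from the quoted range.

```python
# Shortest pulse duration before Raman effects become important, obtained by
# inverting K = |beta2|*T_R*z/T^3 at the threshold values K ~ 0.05-0.1.
beta2 = 10.5e-3   # |beta2|, ps^2/m
T_R = 3.5e-3      # Raman response time, ps (assumed, from the 3-4 fs range)

for z in (7.2, 30.0):                             # fiber lengths, m
    for K in (0.05, 0.1):
        T_min = (beta2 * T_R * z / K) ** (1 / 3)  # duration, ps
        print(f"z = {z:5.1f} m, K = {K:.2f} -> T_min ~ {T_min:.2f} ps")
```

The resulting durations, roughly 0.14-0.17 ps at 7.2 m and 0.22-0.28 ps at 30 m, are consistent with the slow shift of the optimal duration towards larger values with increasing fiber length seen in Fig. 3.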
The numerical results match the experimental observations fairly well in terms of the optimal combinations of pulse energy and pulse duration, for short as well as for long fiber lengths. However, the numerical results giving the best match were obtained for fiber lengths differing by about 30%. This can be explained by the fact that propagation in the orthogonal polarization modes is not completely independent. Near the input and output ends of the fiber the pulses overlap in time, so the cross-Kerr interaction leads to an increase in the effective nonlinearity experienced by the pulses. The distance at which the pulses separate in time is about 0.5 m. Along this distance, the cross-Kerr contribution induced by the orthogonal pulse with the same energy and peak power is added to the self-action of the considered pulse [47]. In our simplified modeling we assumed independent propagation and neglected the cross-Kerr effect, so a longer fiber length was required to achieve a similar effect. The experimentally measured squeezing is severely affected by losses. The squeezing saturates as it approaches the limit set by losses in the fiber and the detection scheme. However, the intrinsic fiber losses are quite low for the considered fiber lengths, and the theoretically achievable squeezing remains quite strong even with these internal losses taken into account.

Figure 3: Simulated squeezing optimized with respect to the pulse energy as a function of the pulse duration and the fiber length. Black dashed lines correspond to constant values of \(K\) in (4).

The modeling presented here thus shows that a significant increase of the observed squeezing is realistic for a new experimental setup with largely reduced external losses. In conclusion, we performed numerical simulations of polarization quantum squeezing in a nonlinear fiber aimed at optimizing the squeezing with respect to the pulse duration and energy as well as the fiber length and losses. Based on the analysis of the 3D data space obtained in the modeling, we identified the parameter areas for the best squeezing and described general trends covering a wide range of pulse and fiber parameters. We proposed a simple analytical approximation which takes into account the Raman effect and provides the optimal pulse duration for given fiber parameters. **Funding.** Ministry of Science and Higher Education of the Russian Federation, contract 075-15-2022-316. **Disclosures.** The authors declare no conflicts of interest. **Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2310.02315
Measuring scalar charge with compact binaries: High accuracy modelling with self-force
Using the self-force approach, we present the premier first-post-adiabatic accuracy formalism for modelling compact binaries in theories with a massless scalar field non-minimally coupled to gravity. We limit the binary secondary to being a non-spinning compact body with no scalar dipole (we will address the spinning and scalar dipole cases in an upcoming paper). By producing an ansatz for the scalar charged point particle action, we derive first- and second-order perturbative field equations and equations of motion for the secondary compact object. Under our assumptions, implementing this formalism will produce sufficiently accurate waveform templates for precision measurements of the scalar charge of the secondary with LISA data on extreme-mass-ratio inspirals. Our formalism is consistent with almost general scalar-tensor theories of gravity. Implementing our formalism builds on self-force models in General Relativity; we show the incorporation into the two-timescale formalism is straightforward. Excitingly, implementation poses no significantly more challenging barriers than computing first-post adiabatic waveforms in General Relativity.
Andrew Spiers, Andrea Maselli, Thomas P. Sotiriou
2023-10-03T18:00:05Z
http://arxiv.org/abs/2310.02315v1
# Measuring scalar charge with compact binaries: High accuracy modelling with self-force ###### Abstract Using the self-force approach, we present the premier first-post-adiabatic accuracy formalism for modelling compact binaries in theories with a massless scalar field non-minimally coupled to gravity. We limit the binary secondary to being a non-spinning compact body with no scalar dipole (we will address the spinning and scalar dipole cases in an upcoming paper). By producing an ansatz for the scalar charged point particle action, we derive first- and second-order perturbative field equations and equations of motion for the secondary compact object. Under our assumptions, implementing this formalism will produce sufficiently accurate waveform templates for precision measurements of the scalar charge of the secondary with LISA data on extreme-mass-ratio inspirals. Our formalism is consistent with almost general scalar-tensor theories of gravity. Implementing our formalism builds on self-force models in General Relativity; we show the incorporation into the two-timescale formalism is straightforward. Excitingly, implementation poses no significantly more challenging barriers than computing first-post adiabatic waveforms in General Relativity. ## I Introduction The detection of a new fundamental field via its imprint on compact objects and the gravitational wave signals they produce would be a ground-breaking discovery. Indeed, such searches are among the key goals of current and future gravitational wave detectors [1; 2; 3; 4; 5]. Asymmetric binaries are particularly promising sources in this context [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. They consist of a larger body, the primary, of mass \(M\), and a much smaller body, the secondary, of mass \(\mu\)1. Their mass ratio Footnote 1: In this paper \(\mu\) is the zeroth-order-in-\(\varepsilon\) mass; we allow for higher-order corrections to the mass, which come from the presence of the scalar field. \[\varepsilon=\frac{\mu}{M}\, \tag{1}\] is then very small, with the more extreme cases, known as extreme mass ratio inspirals (EMRIs), reaching \(\varepsilon\sim 10^{-6}\). EMRIs could execute some \(10^{5}\) orbits while in LISA's band and be continuously observed for very long periods -- several months or even years. Hence, they are expected to be excellent probes of the properties of their source [18]. Until recently, attention was strongly focused on EMRIs' potential to probe the spacetime around, and hence the properties of, the primary black hole [19; 20; 21]. However, it was recently pointed out in [22] that the scalar charge of the secondary could leave a strong imprint on the EMRI waveforms -- strong enough to make the charge measurable by LISA [23]. It has also been shown in [22], using an effective field theory framework, that, to leading order in \(\varepsilon\), the charge of the primary is negligible. Hence, the primary is adequately approximated by a Kerr black hole, and the charge of the secondary fully controls the deviations from GR. This drastically simpler framework has already been extended to eccentric orbits [24] and massive scalar fields [25]. We will discuss the assumptions underpinning this framework in detail below, but its most important practical limitation is that it does not include any post-adiabatic corrections that appear at second order in \(\varepsilon\). Indeed, a burgeoning method for modelling compact binaries is the self-force approach in black hole perturbation theory.
This method provides an accurate approximation in the extreme-mass-ratio limit, \(\varepsilon\lesssim 10^{-4}\), where it is exorbitantly computationally expensive to produce full inspiral waveforms using Numerical Relativity due to the binary's disparate length scales. Tackling this problem using perturbation theory is advantageous because \(\varepsilon\) is a naturally small expansion parameter. World-leading perturbative self-force models reach first-post-adiabatic accuracy. Such accuracy provides waveforms with \(\mathcal{O}(\varepsilon)\) phase error over the course of an inspiral of \(\mathcal{O}(\frac{1}{\varepsilon})\) orbits. To date, waveforms to first-post adiabatic accuracy have been computed for quasi-circular inspirals of Schwarzschild black holes in GR. These waveforms show impressive agreement with Numerical Relativity waveforms for mass ratios smaller than \(\mathcal{O}(\frac{1}{10})\). Significant efforts are ongoing to extend these results to generic orbits around a Kerr primary black hole [26; 27; 28; 29; 30; 31; 32; 33], and to include effects such as resonances [34] and the spin of the secondary [35; 36; 37; 38; 39]. Extending the perturbative self-force approach to theoretical scenarios that include new fundamental fields is a fledgling field of research of clear importance. Constraining the existence of new fundamental fields with LISA measurements of EMRIs requires high-accuracy waveform templates, including the effects of these new fundamental fields. As in GR, one has to reach first-post adiabatic accuracy. As a first, critical step in this direction, in this paper we push the approach of [22; 23] to post-adiabatic order. By generalising the perturbative self-force approach and using a new ansatz for the point-particle action, we derive the field equations and equations of motion for the secondary. That is, we provide the required equations for modelling inspirals to first-post adiabatic accuracy in scenarios with a massless scalar field nonminimally coupled to gravity. This is the first scheme for modelling binaries perturbatively to first-post adiabatic accuracy beyond GR and the Standard Model, and it provides a roadmap to full calculations. The paper is organised as follows. In Sec. II, we discuss the theoretical setting in which we model asymmetric binaries beyond GR. In Sec. III, we derive a formalism for calculating the first- and second-order self-force in a large class of scalar-tensor theories of gravity. We exploit our formalism to build a two-timescale approximation for efficient first-post adiabatic accurate modelling in Sec. V. In appendix A, we briefly review the self-force approach within black hole perturbation theory in GR, derive the metric perturbation field equations and perturbative equations of motion from our action approach, and demonstrate why the first-post adiabatic models provide high-accuracy waveforms ready for LISA observations. ## II Asymmetric binaries and scalar fields ### Action and field equations Following Ref.
[22], our starting point will be a general action which describes a real scalar field \(\varphi\) non-minimally coupled to gravity: \[S[\mathbf{g}_{ab},\varphi,\Psi]=S_{0}[\mathbf{g}_{ab},\varphi]+ \alpha S_{c}[\mathbf{g}_{ab},\varphi]+S_{\rm m}[\mathbf{g}_{ab},\varphi,\Psi]\, \tag{2}\] where \[S_{0}[\mathbf{g}_{ab},\varphi]=\int\frac{\sqrt{-\mathbf{g}}}{16 \pi}\Big{(}R-\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi\Big{)}d^{4 }x\, \tag{3}\] \(\mathbf{g}\) is the metric determinant, and we work in geometric units \(G=c=\hbar=1\). \(S_{\rm c}\) contains any additional (self-)interactions for the scalar, whereas \(S_{\rm m}\) describes the matter fields \(\Psi\). For concreteness, we consider here a massless scalar, omit the bare mass term in \(S_{0}\), and consistently assume that \(S_{\rm c}\) respects shift symmetry. However, this approach can be straightforwardly generalized to massive scalars [25], and we will return to this point later. We take the coupling constant \(\alpha\) to have dimensions \([{\rm length}]^{n}\) where \(n\geq 2\). This corresponds to negative mass dimensions in particle physics units; _i.e._, we assume that these interactions are suppressed by some characteristic energy scale. Note that there are no terms that would correspond to \(n\leq 1\) that are consistent with local Lorentz symmetry [40]. We apply no other conditions to \(S_{\rm c}\), and hence our approach covers a very broad range of theoretical scenarios. For the time being, we leave \(S_{\rm m}\) generic. To derive field equations for the metric, one varies the action (2) with respect to the metric, which yields \[G_{ab}[\mathbf{g}_{cd}]=T^{\rm scal}_{ab}[\varphi]+\alpha T^{\rm c }_{ab}[\mathbf{g}_{cd},\varphi]+T^{\rm m}_{ab}[\mathbf{g}_{cd},\varphi,\Psi]\, \tag{4}\] where \(G_{ab}[\mathbf{g}_{cd}]\) is the Einstein operator, \[T^{(i)}_{ab}=-\frac{8\pi}{\sqrt{-\mathbf{g}}}\frac{\delta S_{(i)}}{\delta g^ {ab}}\, \tag{5}\] where \(i\in\{{\rm c},{\rm m}\}\), and \[T^{\rm scal}_{ab}[\varphi]=\frac{1}{2}\partial_{a}\varphi\partial_{b}\varphi -\frac{1}{4}\mathbf{g}_{ab}\partial_{c}\varphi\partial^{c}\varphi. \tag{6}\] To derive the scalar field equation one varies the action (2) with respect to \(\varphi\), yielding \[\Box_{\mathbf{g}}\varphi=\alpha\Sigma_{\rm c}+\Sigma\,, \tag{7}\] where \(\Box_{\mathbf{g}}=\mathbf{g}^{ab}\nabla^{\mathbf{g}}_{a}\nabla^{\mathbf{g}}_ {b}\), \(\nabla^{\mathbf{g}}_{a}\) is the covariant derivative associated with the metric \(\mathbf{g}_{ab}\), and \[\Sigma=-\frac{16\pi}{\sqrt{-\mathbf{g}}}\frac{\delta S_{\rm m}}{\delta \varphi},\qquad\Sigma_{\rm c}=-\frac{16\pi}{\sqrt{-\mathbf{g}}}\frac{\delta S _{\rm c}}{\delta\varphi}. \tag{8}\] ### The primary We assume that the primary is a black hole. The action \(S_{0}\) is covered by no-hair theorems [41; 42], so any scalar hair would have to be introduced by terms in \(S_{\rm c}\). For shift-symmetric theories, however, \(S_{\rm c}\) is also covered by the no-hair theorem, for static and spherically symmetric [43] and for slowly rotating [44] asymptotically flat black holes. The only interaction that evades this theorem is a linear coupling between \(\varphi\) and the Gauss-Bonnet invariant \(\mathcal{G}\equiv R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^{2}\)[45].
Adding the term \(\alpha_{\rm GB}\,\varphi\,\mathcal{G}\) to \(S_{0}\) introduces a non-constant scalar field to black holes, described by the black hole having a scalar charge. The scalar charge (which can be thought of as the scalar monopole) is, however, not an independent parameter; instead, it is fixed by a regularity condition on the horizon and is determined by the mass and spin of the black hole and \(\alpha_{\rm GB}\) [46; 44] (see also [47] for earlier work without shift symmetry). The charge per unit mass \(d\) scales as \(\alpha_{\rm GB}/M^{2}\) in geometric units [44]. Adding additional shift-symmetric interactions will change the scalar configuration [48], but the regularity conditions that fix the charge persist. The charge per unit mass is then given by an integral over the horizon \(\mathcal{H}\), [40] \[d=\frac{\alpha_{\rm GB}}{4\pi M}\int_{\mathcal{H}}n^{a}\mathcal{G}_{a}\ d\Omega \tag{9}\] where \(n^{a}\) is the horizon generator and \(\mathcal{G}_{a}\) is defined through \(\mathcal{G}=\nabla_{a}\mathcal{G}^{a}\). This implies that any terms in \(S_{\mathrm{c}}\) other than the linear coupling with \(\mathcal{G}\), controlled by a coupling \(\alpha_{i}\), contribute to \(d\) with a factor of \(\alpha_{\mathrm{GB}}\alpha_{i}\). ### The secondary To define the matter action in (2), which describes the secondary body in the EMRI, we use the conventional _skeletonized_ approach2 [50]. The skeletonized description of a compact object replaces the matter action, \(S_{\mathrm{m}}\) (assuming no other matter fields are present), with a point particle action \(S_{\mathrm{p}}\). Ref. [50] presented a point-particle action for a massive, scalar-charged compact object: Footnote 2: The skeletonized formalism was first developed for electromagnetism and gravity [49] and has previously been extended to scalar-tensor theories with multiple fields [50]. \[S_{\mathrm{p}}=-\int_{\gamma}m[\varphi]ds=-\int_{\gamma}m[\varphi]\sqrt{{\bf g }_{ab}{\bf u}^{a}{\bf u}^{b}}d\tau, \tag{10}\] where \(\gamma\) is the worldline of the compact object, \({\bf u}^{\alpha}=\frac{dz^{\alpha}}{d\tau}\) is the four-velocity of the compact object in \({\bf g}_{ab}\), and \(\tau\) is the proper time in \({\bf g}_{ab}\). Eq. (10) introduces a mass function \(m[\varphi]\), which depends on the scalar field, generating the scalar charge. We will show that Eq. (10) is sufficient for deriving the linear field equations but encounters issues beyond linear order. With our point-particle action in hand, we can now derive the stress-energy tensor and scalar charge density that will appear in the field equations. Varying Eq. (10) with respect to the metric, \({\bf g}_{ab}\), yields the stress-energy tensor of a scalar-charged point particle, \[T^{ab}_{\mathrm{m}}=8\pi\int_{\gamma}m[\varphi]\frac{\delta^{4}[x^{\mu}-z^{ \mu}_{\mathrm{p}}[\tau]]}{\sqrt{-{\bf g}}}{\bf u}^{a}{\bf u}^{b}d\tau, \tag{11}\] while varying Eq. (10) with respect to the scalar field, \(\varphi\), yields the point-particle scalar-charge density, \[\Sigma=16\pi\int_{\gamma}m^{\prime}[\varphi]\frac{\delta^{4}[x^{\mu}-z^{\mu}_ {\mathrm{p}}[\tau]]}{\sqrt{-{\bf g}}}d\tau. \tag{12}\] ## III Perturbative expansion The contribution \(\alpha\) makes to \({\bf g}_{ab}\) and \(\varphi\) must be dimensionless, as \({\bf g}_{ab}\) and \(\varphi\) are dimensionless. As \(M\) is the only length scale associated with the background spacetime, \(\alpha\) must be accompanied by a factor of \(\frac{1}{M^{n}}\) in the leading-order contribution.
We introduce the dimensionless coupling \[\zeta=\frac{\alpha}{M^{n}}. \tag{13}\] This can be expressed in terms of the mass ratio \(\varepsilon\) as \[\zeta=\frac{\alpha}{M^{n}}=\varepsilon^{n}\frac{\alpha}{\mu^{n}}. \tag{14}\] \(\zeta\) represents the non-minimal coupling perturbation parameter for \({\bf g}_{ab}\) and \(\varphi\). If we assume that solutions of the field equations are continuously connected to GR solutions as \(\alpha\to 0\), our earlier assumptions for \(S_{\mathrm{c}}\), that \([\alpha]=[\mathrm{length}]^{n}\), with \(n\geq 2\), and the expression for \(d\) in Eq. (9), imply that deviations from GR are controlled by \(\zeta\). In particular, so long as the length-(energy-)scale associated with a particular coupling \(\alpha_{i}\) of a term in \(S_{\mathrm{c}}\) (\(\alpha\) denotes them collectively) is not significantly larger (smaller) than the scale associated with \(\alpha_{\mathrm{GB}}\), \(d\propto\alpha_{\mathrm{GB}}/M^{2}\) to leading order in \(M^{-1}\). Existing constraints inferred from astrophysical observations imply that \(\frac{\alpha}{\mu^{n}}=\mathcal{O}(1)\) or smaller [51], as \(\mu\) corresponds to solar mass bodies. Therefore, the mass ratio, \(\varepsilon\), the natural bookkeeping parameter for the self-force approach, can be used as the sole perturbative parameter for the problem at hand. That is, conservatively, \(\zeta\sim\varepsilon^{n}\). To build our formalism, we will consistently expand the field equations (4)-(7), as well as the metric and the scalar field, up to the second order in \(\varepsilon\). An expansion of a generic tensor, \({\bf A}\), takes the form \[{\bf A}={\bf A}^{(0)}+\varepsilon{\bf A}^{(1)}+\varepsilon^{2}{\bf A}^{(2)}+ \mathcal{O}(\varepsilon^{3}). \tag{15}\] The metric expansion is written as \[{\bf g}_{ab}=g_{ab}+\varepsilon h^{(1)}_{ab}+\varepsilon^{2}h^{(2)}_{ab}+ \mathcal{O}(\varepsilon^{3}), \tag{16}\] where \(g_{ab}\) is the background metric (which we take to be the Kerr metric) and \(h^{(n)}_{ab}\) are the metric perturbations. The background metric is used to raise and lower the indices of all tensors and tensor perturbations, such as \(h^{(n)}_{ab}\). We label all the tensor perturbations with a subscript or superscript number in brackets, which denotes the order in \(\varepsilon\) of the perturbation. In practice, the \(\varepsilon\) dependence is implicit in the labelled tensors, and the explicit factors of \(\varepsilon\) in Eq. (16) are used as a counting parameter. That is, \(\varepsilon\) is set to \(1\) before computing calculations. The scalar field expansion takes the form \[\varphi=\varphi^{(0)}+\varepsilon\varphi^{(1)}+\varepsilon^{2}\varphi^{(2)}+ \mathcal{O}(\varepsilon^{3}). \tag{17}\] Note that \(\varphi^{(0)}\) corresponds to the contribution from the action \(S_{0}\) for an isolated black hole, which is covered by no-hair theorems. Hence, \(\varphi^{(0)}\) is constant and can be set to zero by a constant shift without loss of generality. Our procedure requires a specific treatment for the mass function \(m[\varphi]\) that appears in the secondary's stress-energy tensor (23). To this end, we expand \(m[\varphi]\) as: \[m[\varphi]=m_{[0]}+m_{[1]}\varphi+m_{[2]}\varphi^{2}+\mathcal{O}[\varphi^{3}], \tag{18}\] where \(m_{[0]}\), \(m_{[1]}\) and \(m_{[2]}\) are constant coefficients. In our setup, \(m_{[0]}=\mu\); \(\mu\) can be considered as the secondary mass in GR (that is, for \(\varphi=0\)).
Note that \(\mu\) is not the total mass of the secondary in scalar-tensor theories of gravity, as \(m_{[1]}\) and \(m_{[2]}\) can contribute to the mass when \(\varphi\neq 0\), as we will show. We assume \(m_{[1]}\) and \(m_{[2]}\) have the same (stellar-mass) scale as \(m_{[0]}\); that is, \(m_{[0,1,2]}/M=\mathcal{O}(\varepsilon)\). We can derive the explicit expression of \(m[\varphi]\) up to the second order in \(\varepsilon\) by inserting (17) (with \(\varphi^{(0)}=0\)) into Eq. (18), obtaining: \[m[\varphi]=\mu+\varepsilon m_{[1]}\varphi^{(1)}+\varepsilon^{2}\big{(}m_{[2]}( \varphi^{(1)})^{2}+m_{[1]}\varphi^{(2)}\big{)}. \tag{19}\] ### Field regularization We now return to the matter action and show how the conventional _skeletonized_ point particle action is problematic in the self-force context. Equation (10) poses a typical problem within self-force: near the worldline (\(\gamma\)), the singular nature of the metric and scalar field makes \(\mathbf{g}_{ab}\) and \(m[\varphi]\) ill-defined, whereas the four-velocity is only defined on the worldline [52]. Hence, Eq. (10) is ill-defined. To solve this problem and extend Eq. (10) beyond linear order, we assume the existence of a _singular_ (\(\mathcal{S}\)) and _regular_ (\(\mathcal{R}\)) split of the metric and scalar field perturbations [53, 54]: \(h_{ab}^{(n)}=h_{ab}^{(n)\mathcal{S}}+h_{ab}^{(n)\mathcal{R}}\), \(\varphi^{(n)}=\varphi^{(n)\mathcal{S}}+\varphi^{(n)\mathcal{R}}\). A motivation for this assumption is that such a split exists in the decoupled case [53]. We also define \[h_{ab}^{\mathcal{R}}=\sum_{n=1}\varepsilon^{n}h_{ab}^{\mathcal{R}(n)}\quad, \quad\varphi^{\mathcal{R}}=\sum_{n=1}\varepsilon^{n}\varphi^{(n)}_{\mathcal{R}}. \tag{20}\] \(h_{ab}^{\mathcal{R}}\) and \(\varphi^{\mathcal{R}}\) identify the regular part of the field perturbations (the part that generates the self-force) [52]. Ref. [53] provides an extensive analysis of the singular-regular decomposition to linear order, which has been generalised to the non-linear regime in GR [55, 56, 57]. Conventionally, \(G_{ab}^{(1)}[h_{cd}^{(1)\mathcal{S}}]=8\pi T_{ab}^{\mathrm{m}(1)}\) and \(G_{ab}^{(1)}[h_{cd}^{(1)\mathcal{R}}]=0\) are satisfied. This definition does not fully fix the fields; additionally, \(h_{ab}^{\mathcal{S}}\) is chosen to depend only on the instantaneous state and position of the particle, and \(h_{ab}^{\mathcal{R}}\) on the compact object's causal past [53]. We expect similar definitions to hold for the metric perturbation and scalar field in our formalism. With Eqs. (20) in hand we define an effective metric and scalar field: \[\tilde{g}_{ab}=g_{ab}+h_{ab}^{\mathcal{R}}\quad,\quad\tilde{\varphi}=\varphi^ {\mathcal{R}}. \tag{21}\] First- and second-order self-force calculations in GR have found that compact objects move as test bodies in the effective metric [58, 59, 60, 61]. Replacing \(\mathbf{g}_{ab}\) and \(\varphi\) with \(\tilde{g}_{ab}\) and \(\tilde{\varphi}\) in Eq. (10), we obtain our _effective_ point-particle action for a scalar-charged compact object: \[S_{\mathrm{p}}=-\int_{\gamma}m[\tilde{\varphi}]d\tilde{s}=-\int_{\gamma}m[ \tilde{\varphi}]\sqrt{\tilde{g}_{ab}\tilde{u}^{a}\tilde{u}^{b}}d\tilde{\tau}, \tag{22}\] where \(\tilde{\tau}\) is the proper time in the effective spacetime and \(\tilde{u}^{\alpha}=dz^{\alpha}/d\tilde{\tau}\). Equation (22) represents our ansatz for the point particle action.
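As an aside, the truncation (19) that enters the source terms below can be verified symbolically. The following is a minimal sketch (the symbol names are ours, chosen for illustration):

```python
import sympy as sp

# Insert the scalar-field expansion (17) with vanishing background,
# phi = eps*phi1 + eps^2*phi2, into the mass function (18) and truncate.
eps = sp.symbols('epsilon', positive=True)
m0, m1, m2, phi1, phi2 = sp.symbols('m_0 m_1 m_2 phi_1 phi_2')

phi = eps * phi1 + eps**2 * phi2
m = m0 + m1 * phi + m2 * phi**2        # Eq. (18), up to quadratic order
m_trunc = sp.series(sp.expand(m), eps, 0, 3).removeO()
print(sp.collect(m_trunc, eps))
# -> m_0 + eps*m_1*phi_1 + eps**2*(m_1*phi_2 + m_2*phi_1**2), i.e. Eq. (19)
```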
For \(\tilde{\varphi}=0\), the ansatz (22) reduces to the point particle action in GR, Eq. (A.2), which we show to be consistent with self-force calculations up to second order [60, 61, 31] in App. A. Varying the action (22) with respect to the effective metric, \(\tilde{g}_{ab}\), yields \[T_{\mathrm{m}}^{ab}=8\pi\int_{\gamma}m[\tilde{\varphi}]\frac{\delta^{4}[x^{ \mu}-z_{\mathrm{p}}^{\mu}[\tilde{\tau}]]}{\sqrt{-\tilde{g}}}\tilde{u}^{a} \tilde{u}^{b}d\tilde{\tau}, \tag{23}\] while varying with respect to the effective scalar field, \(\tilde{\varphi}\), yields \[\Sigma=16\pi\int_{\gamma}m^{\prime}[\tilde{\varphi}]\frac{\delta^{4}[x^{\mu}- z_{\mathrm{p}}^{\mu}[\tilde{\tau}]]}{\sqrt{-\tilde{g}}}d\tilde{\tau}, \tag{24}\] which replace Eqs. (11) and (12), respectively. Eqs. (23) and (24) inform us that \(m[\tilde{\varphi}]\) and \(m^{\prime}[\tilde{\varphi}]\) are, respectively, the Bondi mass and the scalar charge of the secondary compact object. We remark that our approach is motivated by Ref. [62], which applied the effective metric approach to the point-particle stress-energy of a massive compact object in GR. Their main result was a conjecture for the second-order stress-energy tensor of a compact object in self-force in GR. This conjecture was later proven to hold in regular gauges by Ref. [31]. ### The equation of motion for the secondary Using our point particle action, Eq. (22), we can also derive the equation of motion of the compact object. We follow the approach of [54], varying the whole action (2) with respect to the body's path, \(x^{\mu}\to x^{\mu}+\delta x^{\mu}\). This yields: \[\delta_{x^{\mu}}S_{\mathrm{p}} =\int_{\gamma}\bigg{[}-(\tilde{g}^{\alpha}_{\ \mu}+\tilde{u}^{\alpha}\tilde{u}_{\mu})\frac{\partial m[\tilde{\varphi}]}{ \partial\tilde{\varphi}}\frac{\partial\tilde{\varphi}}{\partial x^{\alpha}} \delta x^{\mu}\] \[+\delta x^{\mu}m[\tilde{\varphi}]\Big{(}\tilde{\Gamma}_{\mu\alpha\nu}\tilde{u }^{\alpha}\tilde{u}^{\nu}-\tilde{g}_{\mu\nu}\frac{d^{2}x^{\nu}}{d\tilde{\tau}^ {2}}\Big{)}\bigg{]}d\tilde{\tau}\ +\mathcal{O}(\varepsilon^{3}). \tag{25}\] Note that the contribution coming from the non-minimal action is at least of order \(\mathcal{O}(\varepsilon^{3})\), based on our earlier assumptions that \(\zeta=\mathcal{O}(\varepsilon^{2})\) and that \(S_{\mathrm{c}}\) contains at least one copy of \(\varphi\), which is itself order \(\varepsilon\). Requiring stationarity under first-order variations yields \[m[\tilde{\varphi}]\tilde{a}^{a}=m^{\prime}[\tilde{\varphi}](\tilde{g}^{ab}+ \tilde{u}^{a}\tilde{u}^{b})\partial_{b}\tilde{\varphi}+\mathcal{O}( \varepsilon^{3})\, \tag{26}\] where \(\tilde{a}^{a}=\tilde{u}^{b}\tilde{\nabla}_{b}\tilde{u}^{a}\) and \(\tilde{\nabla}_{b}\) is the covariant derivative of the effective metric. Eq. (26) is equivalent to the standard self-force equation of motion for a point scalar charge [63, 64, 65], but extended to at least second order. The charge moves as a point particle pushed away from geodesic motion in the effective spacetime by a self-force generated by the effective scalar field. We can also derive evolution equations for the mass and scalar charge of the secondary compact object [54]: \[\frac{dm}{d\tau}=\frac{\partial m}{\partial\tilde{\varphi}}\frac{ \partial\tilde{\varphi}}{\partial\tau}=m^{\prime}[\tilde{\varphi}]u^{a}\nabla_ {a}\tilde{\varphi}, \tag{27}\] \[\frac{dm^{\prime}}{d\tau}=\frac{\partial m^{\prime}}{\partial \tilde{\varphi}}\frac{\partial\tilde{\varphi}}{\partial\tau}=m^{\prime\prime} [\tilde{\varphi}]u^{a}\nabla_{a}\tilde{\varphi}. \tag{28}\] Note, Eqs.
(23)-(24) and (26) reduce to the GR limit, Eq. (100), when \(\tilde{\varphi}\to 0\). The validity of the equations of motion for black holes and self-gravitating extended compact objects in an effective spacetime has been assessed in GR up to the second order in the self-force expansion [58; 59; 60; 61]. ### First order perturbations Expanding the field equations (4) and (7) to first order in \(\varepsilon\), the non-minimal contributions drop out (they are at least \(\mathcal{O}(\varepsilon^{2})\)), leaving the linearized Einstein equation sourced by the point-particle stress-energy, \[\delta G_{ab}[h^{(1)}_{cd}]=8\pi\int_{\gamma}\mu\frac{\delta^{4}[x^{\mu}-z^{ \mu}_{\rm p}[\tau]]}{\sqrt{-g}}u_{a}u_{b}d\tau\,, \tag{32}\] and the first-order scalar field equation \[\Box\varphi^{(1)}=16\pi\int_{\gamma}m_{[1]}\frac{\delta^{4}[x^{\mu}-z^{\mu}_{ \rm p}[\tau]]}{\sqrt{-g}}d\tau. \tag{33}\] Writing the source of Eq. (33) in terms of the scalar charge per unit mass \(d\) of the secondary, \[\Box\varphi^{(1)}=-4\pi\int_{\gamma}\mu d\,\frac{\delta^{4}[x^{\mu}-z^{\mu}_{ \rm p}[\tau]]}{\sqrt{-g}}d\tau\,, \tag{34}\] identifies \[m_{[1]}=-\frac{\mu d}{4}. \tag{35}\] One can then compute the first-order self-force using
the first-order equation of motion. Expanding Eq. (26) gives the first-order equations of motion4 Footnote 4: Note, Eq. (36) cannot be derived directly from Eq. (10) because of the lack of regularisation; that is, the effective action, Eq. (22), is required. The expansion also requires Eq. (12) \[a^{a}_{(1)}=a^{a}_{(1)\text{grav}}+a^{a}_{(1)\text{scal}}\, \tag{36}\] where the gravitational and the scalar components are given by: \[a^{a}_{(1)\text{grav}}=-\frac{1}{2}(g^{ab}+u^{a}u^{b})(2h^{(1)\mathcal{R}}_{bd ;e}-h^{(1)\mathcal{R}}_{de;b})u^{d}u^{e}\, \tag{37}\] which is identical to Eq. (102), and \[a^{a}_{(1)\text{scal}}=m_{[1]}(g^{ab}+u^{a}u^{b})\nabla_{b}\varphi^{(1)}_{ \mathcal{R}}. \tag{38}\] Calculations for both \(a^{(1)}_{\text{grav}}\) and \(a^{a}_{(1)\text{scal}}\) have been carried out for generic orbits in the Kerr background in the literature [72; 73; 74; 75]. These results can be exploited to derive the full first-order self-force in scalar-tensor theories, which includes conservative corrections to an EMRI's evolution. ### Second order perturbations Using our expansion of Eq.(4) (including the expansions in App. B), we can express the field's equations for the second-order metric perturbation \[\delta G_{ab}[h^{(2)}_{cd}]=-\delta^{2}G_{ab}[h^{(1)}_{cd},h^{(1) }_{cd}]+\frac{1}{2}\partial_{a}\varphi^{(1)}\partial_{b}\varphi^{(1)}\] \[-\frac{g_{ab}}{4}\partial_{c}\varphi^{(1)}\partial^{c}\varphi^{( 1)}+4\pi\int_{\gamma}\frac{\delta^{4}[x^{\mu}-z^{\mu}_{\text{p}}[\tau]]}{ \sqrt{-g}}\bigg{[}2m_{[1]}\varphi^{(1)}_{\mathcal{R}}u_{a}u_{b}\] \[+\mu\big{(}4h^{\mathcal{R}(1)}_{ac}u^{c}u_{b}-u_{a}u_{b}(g^{cd}_{ (0)}-u^{c}u^{d})h^{\mathcal{R}(1)}_{cd}\big{)}\bigg{]}d\tau. \tag{39}\] Note, \(G^{(2)}_{ab}=\delta G_{ab}[h^{(2)}_{cd}]+\delta^{2}G_{ab}[h^{(1)}_{cd},h^{(1) }_{cd}]\). For \(m_{[1]}=0\) and \(\varphi^{(1)}_{\mathcal{R}}=0\) the right hand side of Eq. (39) reduces to the GR form, Eq. (103). Expanding the scalar field equation, Eq. (7) (including the expansions in App. B), to second-order, we obtain: \[\square\varphi^{(2)} =-\frac{8\pi\alpha^{(2)}}{\sqrt{-g}}\mathcal{G}^{(0)}-h^{ab}_{(1) }\nabla_{a}\nabla_{b}\varphi^{(1)}-(\nabla^{a}h^{(1)}_{ab})\nabla^{b}\varphi^ {(1)}\] \[\quad+\frac{1}{2}(\nabla^{b}h_{(1)})\nabla_{b}\varphi^{(1)}+16\pi \int_{\gamma}\Big{[}m_{[2]}\varphi^{(1)}_{R}\] \[\quad-\frac{1}{2}m_{[1]}(g^{ab}+u^{a}u^{b})h^{R(1)}_{ab}\Big{]} \frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{\text{p}}[\tau]]}{\sqrt{-g}}d\tau. \tag{40}\] where \(h_{(1)}=g^{ab}_{(0)}h^{(1)}_{ab}\). As discussed earlier, \(\zeta S_{\text{c}}\) is at least \(\mathcal{O}(\varepsilon^{3})\), as \(\zeta\) is at least order \(\varepsilon^{2}\) and \(S_{\text{c}}\) contains at least one copy of \(\varphi\). Due to this, there are no contributions from \(S_{\text{c}}\) to Eq. (39) and the only contribution to Eq. (40) comes from a linear coupling between \(\varphi\) and \(\mathcal{G}\)[45]. That is \[T^{\text{c}}_{ab} =\mathcal{O}(\varepsilon^{3}), \tag{41}\] \[\Sigma_{\text{c}} =\varepsilon^{2}\frac{8\pi\alpha^{(2)}}{\sqrt{-g}}\mathcal{G}^{(0 )}+\mathcal{O}(\varepsilon^{3}). \tag{42}\] For the leading order contribution in Eq. (42), \(\alpha\) has dimension \([\text{length}]^{2}\), so we label \(\alpha\to\alpha^{(2)}\) as \(\alpha\mathcal{G}^{(0)}=\mathcal{O}(\varepsilon^{2})\). In a Kerr background \[\mathcal{G}^{(0)}=24M^{2}\big{(}(r-ia\cos[\theta])^{-6}+(r+ia\cos[\theta])^{- 6}\big{)}. \tag{43}\] The expansion of the equation of motion, Eq. (26) (using Eq. 
(125)), to second order gives \[a^{a}_{(2)}=a^{a}_{(2)\text{grav}}+a^{a}_{(2)\text{scal}}\, \tag{44}\] with \[a^{a}_{(2)\text{grav}}=-\frac{1}{2}\Big{[}(g^{ab}+u^{a}u^{b})(2 h^{(2)\mathcal{R}}_{bd;e}-h^{(2)\mathcal{R}}_{de;b})\\ -(g^{ab}+u^{a}u^{b})h^{\ c}_{b(1)\mathcal{R}}(2h^{(1)\mathcal{R}} _{cd;e}-h^{(1)\mathcal{R}}_{de;c})\Big{]}u^{d}u^{e}\, \tag{45}\] and \[\mu a^{a}_{(2)\text{scal}}= (g^{ab}_{(0)}+u^{a}u^{b})\Big{(}m_{[1]}\nabla_{b}\varphi^{(2)}_{ \mathcal{R}}\] \[+2m_{[2]}\varphi^{(1)}_{\mathcal{R}}\nabla_{b}\varphi^{(1)}_{ \mathcal{R}}-\frac{m^{2}_{[1]}}{\mu}\varphi^{(1)}_{\mathcal{R}}\nabla_{b} \varphi^{(1)}_{\mathcal{R}}\Big{)}\] \[+m_{[1]}\big{(}h^{\mathcal{R}}_{cd}u^{c}u^{d}u^{a}u^{b}-h^{ab}_{ \mathcal{R}}\big{)}\nabla_{b}\varphi^{(1)}_{\mathcal{R}}. \tag{46}\] Eq. (45) is equivalent to the second-order self-force in GR, Eq. (101)6. Footnote 6: The second-order self-force equations, Eqs. (45) and (46) (and similarly the calculation of \(h^{(2)\mathcal{R}}_{ab}\) and \(\varphi^{(2)}_{\mathcal{R}}\)), may be unnecessary if flux-balance laws can be derived to extract the dissipative piece of the second-order self-force directly from \(h^{(2)}_{ab}\) and \(\varphi^{(2)}\). The interpretation of \(m_{[2]}\) is similar to that of \(m_{[1]}\), except that its contribution to the scalar charge is suppressed by an order in \(\varepsilon\), as there is a further coupling to the scalar field \(\varphi^{(1)}_{\mathcal{R}}\). Examining the \(m_{[2]}\) piece in Eq. (40), we see it will also generate a term in \(\varphi^{(2)}\) equivalent to Eq. (34). Note that the other delta-function term in Eq. (40) (which is coupled to \(h^{\mathcal{R}(1)}_{ab}\)) will also contribute a term equivalent to Eq. (34). Therefore, \[\varepsilon\mu d=-4\bigg{(}\varepsilon m_{[1]}\\ +\varepsilon^{2}\Big{(}m_{[2]}\varphi_{\mathcal{R}}^{(1)}-\frac{1}{2}m_{[1]}( g^{ab}+u^{a}u^{b})h^{\mathcal{R}(1)}_{ab}\Big{)}\bigg{)}. \tag{47}\] That is, \(m_{[2]}\), which we hereafter call the _"charge coupling"_, and the coupling to \(h^{\mathcal{R}(1)}_{ab}\) provide an \(\mathcal{O}(\varepsilon)\) correction to the scalar charge \(d\). We can expand \(d\), \[d=d^{(0)}+\varepsilon d^{(1)}+\mathcal{O}(\varepsilon^{2}). \tag{48}\] We re-define Eq. (35) as \[m_{[1]}=-\frac{\mu d^{(0)}}{4}, \tag{49}\] and define \[d^{(1)}=-\frac{4}{\mu}\Big{(}m_{[2]}\varphi_{\mathcal{R}}^{(1)}-\frac{1}{2}m_ {[1]}(g^{ab}+u^{a}u^{b})h^{\mathcal{R}(1)}_{ab}\Big{)}. \tag{50}\] Interestingly, the only piece in the second-order equations that does not derive from a scalar field minimally coupled to the metric is the Gauss-Bonnet term in Eq. (40), which is stationary in a Kerr spacetime. Its effect on \(\varphi^{(2)}\) in Eq. (40) is then also stationary and, hence, it will not affect the dissipative piece of the second-order self-force. It can, therefore, be neglected for first-post adiabatic accurate modelling. Hence, our formalism is independent of the choice of scalar-tensor theory among those obeying our assumptions. In general (including in the GR limit), the most challenging part of solving Eq. (39) is computing the mode decomposition of \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) near the worldline. This problem derives from \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) being quadratic in the first-order metric perturbation, which is singular on \(\gamma\). Ref.
[78] describes and addresses this problem in detail by splitting \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) into three pieces: (i) the regular term, \(\delta^{2}G_{ab}[h^{(1)\mathcal{R}}_{ab},h^{(1)\mathcal{R}}_{ab}]\); (ii) the mildly singular term, \(\delta^{2}G_{ab}[h^{(1)\mathcal{S}}_{ab},h^{(1)\mathcal{R}}_{ab}]\), computed by casting \(h^{(1)\mathcal{S}}_{ab}\) and \(h^{(1)\mathcal{R}}_{ab}\) as sums of modes; and (iii) the very singular term, \(\delta^{2}G_{ab}[h^{(1)\mathcal{S}}_{ab},h^{(1)\mathcal{S}}_{ab}]\), which can be calculated using a 4-dimensional expression for \(h^{(1)\mathcal{S}}_{ab}\) associated with the mode expansion of \(h^{(1)\mathcal{S}}_{ab}\). While this approach has recently been used to calculate the first-post adiabatic quasi-circular inspiral of a Schwarzschild binary system [79; 80; 81], the method remains computationally expensive and highly technical. Overcoming this issue remains a major obstacle for self-force waveform modelling in GR. One may have expected this problem to be exacerbated in scalar-tensor theories of gravity, as introducing additional degrees of freedom might have resulted in \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) being different for different theories. However, the decoupling of scales on which our approach builds provides key simplifications. As the first-order metric perturbation, \(h^{(1)}_{ab}\), is the same in GR and in all the theories of gravity specified by the action (2), the expression for \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) remains the same. Therefore, we can use the same \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) constructed in GR, significantly reducing the computational burden of our formalism. Nonetheless, additional singular terms appear at second order, as quadratic functions of the scalar field contribute to Eq. (39). Additionally, mixed products between the metric and the scalar linear perturbations appear in Eq. (40). These terms are similarly challenging to calculate, as they contain products of singular functions on the worldline. However, their mode decomposition can be computed by adopting the same method used in [78]. Remarkably, since \(\varphi^{(1)}\) is independent of the specific theory of gravity, up to an amplitude rescaling given by the scalar charge, all these contributions are invariant across the scalar-tensor theories we consider. ## V Two-timescale expansion The two-timescale expansion is an example of a multi-scale expansion [82]. In the EMRI context, Ref. [83] used the two-timescale expansion to argue that first-post adiabatic models are necessary to model inspirals accurately. Since then, there has been growing interest in applying the two-timescale approximation to the EMRI problem [84; 85; 86; 36]. A two-timescale expansion was implemented to produce the first-post-adiabatic waveform models for Schwarzschild black holes in a quasi-circular inspiral in GR [79; 80; 81]. Here, we apply the two-timescale approximation to our formalism in a similar way. We show that the close resemblance to the two-timescale framework in GR means that calculations in scalar-tensor theories of gravity require only supplementary terms. As discussed in Appendix A.1, EMRI dynamics allow us to identify two distinct timescales [83; 84]: (i) the _fast-timescale_ over which the orbital phases evolve, and (ii) the _slow-timescale_ that dictates the change of the orbital frequencies and of the physical parameters of the system.
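To see concretely why forcing terms at different orders matter on these two timescales, consider the following toy integration; it is our illustration, not the paper's formalism, and the forcing functions are made-up power laws standing in for the self-force coefficients introduced below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: evolve one frequency with dW/dt = eps*F0(W) + eps^2*F1(W) and a
# phase with dphi/dt = W, over a fast-time span of O(1/eps). Dropping the
# O(eps^2) ("first-post-adiabatic") forcing leaves an O(1) phase error.
eps = 1e-3
F0 = lambda W: W**(11 / 3)           # adiabatic forcing (chirp-like toy law)
F1 = lambda W: -10.0 * W**(13 / 3)   # post-adiabatic correction (toy law)

def rhs(t, y, post_adiabatic):
    W, phi = y
    dW = eps * F0(W) + (eps**2 * F1(W) if post_adiabatic else 0.0)
    return [dW, W]

t_span, y0 = (0.0, 0.3 / eps), [1.0, 0.0]
sol1 = solve_ivp(rhs, t_span, y0, args=(True,), rtol=1e-10, atol=1e-12)
sol0 = solve_ivp(rhs, t_span, y0, args=(False,), rtol=1e-10, atol=1e-12)

err = abs(sol1.y[1, -1] - sol0.y[1, -1])
print(f"total phase ~ {sol1.y[1, -1]:.0f} rad; "
      f"error from dropping F1 ~ {err:.2f} rad (vs eps = {eps})")
```

Over an inspiral lasting a fast time of \(\mathcal{O}(1/\varepsilon)\), the neglected \(\mathcal{O}(\varepsilon^{2})\) forcing accumulates a phase error of order unity, which is why first-post adiabatic terms are required for waveforms with \(\mathcal{O}(\varepsilon)\) phase accuracy.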
Due to the near periodicity of each EMRI orbit, the time evolution on the fast-timescale is effectively periodic. In contrast, the slow-time evolution is non-trivial but contributes beyond the leading order (as \(\tilde{t}=\mathcal{O}(\frac{1}{\varepsilon})\)) [84]. The position of the compact object at any given time can be represented by the three orbital phases, \[\phi_{i}:=\{\phi_{r},\phi_{\theta},\phi_{\phi}\}. \tag{51}\] The evolution of the phases, \(\phi_{i}\), can be expressed in terms of their respective frequencies \(\Omega_{i}:=\{\Omega_{r},\Omega_{\theta},\Omega_{\phi}\}\), \[\phi_{i}=\int\Omega_{i}[\tilde{t}]dt, \tag{52}\] where \(\Omega_{i}\) evolves on the slow time \(\tilde{t}\). We assume \(\Omega_{i}\) is approximately constant on the fast-timescale. This assumption is valid because the background spacetime is Kerr and geodesics in Kerr are triperiodic. An appropriate choice of _fast times_ is the set of phases, \(\phi_{i}\), as they evolve on the orbital timescale. The frequencies, \(\Omega_{i}\), can be expressed in terms of the three constants of motion of Kerr geodesics: the energy, angular momentum and Carter constant, \(J_{i}:=\{E,J_{z},Q\}\) [87]. That is, \[\Omega_{i}:=\Omega_{i}[J_{i}]. \tag{53}\] As the compact object does not remain on a geodesic over the course of the inspiral, the three constants of motion evolve over the slow-timescale. Their evolution can be computed through the self-force. In turn, the evolution of the frequencies of motion, \(\Omega_{i}\), can be constructed from the self-force, \[\frac{d\Omega_{i}}{dt}=\varepsilon F_{\Omega_{i}}^{\{0\}}[\Omega_{i}]+ \varepsilon^{2}F_{\Omega_{i}}^{\{1\}}[\Omega_{i},\delta M_{A}]+\mathcal{O} (\varepsilon^{3}), \tag{54}\] where \(F_{\Omega_{i}}^{\{n\}}\) are the \(n\)th-post-adiabatic-order self-force coefficients. Note that the adiabatic self-force coefficients depend only on the frequencies. In contrast, the first-post adiabatic coefficients depend on the frequencies and on the change in the system's physical parameters, \(\delta M_{A}\), which we explain next. Additional physical parameters of the binary evolve on the slow-timescale of an EMRI; one must account for their evolution to achieve first-post adiabatic accurate models. In the GR case, the mass and spin of the primary black hole evolve as gravitational waves pass into the primary black hole horizon. The changes in mass and spin are labelled as \[\delta M_{A}[\tilde{t}]:=\{\delta M,\delta J\}. \tag{55}\] We now assess whether additional physical parameters of an EMRI need to be evolved in scalar-tensor theories of gravity. In scalar-tensor theories of gravity, the primary black hole also absorbs scalar radiation. This scalar radiation carries energy and angular momentum, which must be accounted for in the evolution of \(\delta M_{A}[\tilde{t}]\). One may question whether the scalar field can carry scalar charge into the supermassive black hole. However, in general, scalar charge does not tend to be a free parameter in scalar-tensor theories of gravity. Instead, when present, its value is fixed in terms of the mass and spin of the black hole by regularity conditions on the horizon (see _e.g._ [44; 45; 46; 47]). Hence, we expect the scalar charge of the supermassive black hole to evolve consistently with the evolution of the mass and angular momentum of the black hole. The scaling arguments in Secs. II.2 and III, that multiple orders of \(\varepsilon\) suppress the scalar charge of the supermassive black hole, still hold.
As the evolution of the mass is an \(\mathcal{O}(\varepsilon)\) effect, we can neglect the evolution of the scalar charge of the supermassive black hole, as it is a higher-order effect (at least an \(\mathcal{O}(\varepsilon^{3})\) effect for \(n\geq 2\)). A potential caveat of this argument is that the expectation that the scalar charge is fixed with respect to the mass and spin is based on the properties of stationary black holes. It is known that relaxing the assumption of stationarity can allow for hair formation in principle [88], but it is reasonable to expect that the charge per unit mass of the primary introduced by the absorption of scalar radiation will be negligible in the present scenario. Further investigation of whether regular (near the horizon) perturbative modes could excite further independent scalar-charge degrees of freedom within this multi-scale formalism would test this hypothesis. We can also test this hypothesis against time-domain self-force evolutions, with which the formalism in the main body of this paper is also consistent. Fully nonlinear numerical investigations of a supermassive black hole absorbing scalar radiation arising from an asymmetric binary would also be of interest in this respect. We next turn our attention to whether the physical characteristics of the secondary evolve on the orbital timescale. For EMRIs in GR, the evolution of the mass of the secondary object is a high-order effect [36]. This is due to the length scales of the orbital dynamics and the radiation wavelength being much larger than the scale of the secondary. This argument extends to scalar-tensor theories of gravity, so we expect the secondary object's parameters \(\mu\) and \(m_{[1]}\) (related to \(\mu\) by Eq. (49)) to remain constant throughout the inspiral for first-post adiabatic modelling. This is not to say that the mass (\(m[\tilde{\varphi}]\)) and scalar charge (\(m^{\prime}[\tilde{\varphi}]\)) of the secondary remain constant; they evolve via Eqs. (27) and (28), respectively. That is, their evolution is determined solely by the evolution of the scalar field, which is accounted for in the two-timescale formalism. Hence, no additional slowly evolving parameters are required in our formalism; that is, Eq. (55) holds. The evolution of the EMRI parameters, \(\delta M_{A}\), can also be constructed from the self-force, \[\frac{d\delta M_{A}}{dt}=\varepsilon F_{A}^{\{1\}}[\Omega_{j}]+\mathcal{O}( \varepsilon^{2}). \tag{56}\] More precisely, the evolution of \(\delta M\) and \(\delta J\) can be calculated from the first-order scalar and gravitational fluxes that pass through the supermassive black hole horizon. To implement the two-timescale approximation we need to re-express the first- and second-order field equations (Eqs. (32), (33), (39), and (40)) and the equations of motion (Eqs. (36) and (44)). In the two-timescale approximation, the field variables are expressed as [84] \[h_{ab}^{(1)} =\sum_{p,q,m}h_{ab}^{(1),\omega_{p,q,m}}[\Omega_{i},x^{i}]e^{-ik^ {i}\phi_{i}}, \tag{57}\] \[\varphi^{(1)} =\sum_{p,q,m}\varphi^{(1),\omega_{p,q,m}}[\Omega_{i},x^{i}]e^{-ik^ {i}\phi_{i}}, \tag{58}\] and \[h^{(2)}_{ab} =\sum_{p,q,m}h^{(2),\omega_{p,q,m}}_{ab}[\Omega_{i},\delta M_{A},x^{ i}]e^{-ik^{i}\phi_{i}}, \tag{59}\] \[\varphi^{(2)} =\sum_{p,q,m}\varphi^{(2),\omega_{p,q,m}}[\Omega_{i},\delta M_{A}, x^{i}]e^{-ik^{i}\phi_{i}}, \tag{60}\] where \(k^{i}:=\{p,q,m\}\), \(\omega_{p,q,m}=k^{i}\Omega_{i}\), and \(p\), \(q\), and \(m\) are integers to be summed over. Eqs.
(57)-(60) are discrete Fourier series in terms of the phases of motion, whose coefficients evolve on the slow-timescale. The advantage of the two-timescale approximation becomes apparent when one acts with a time derivative on the field variables; for example, a time derivative of \(\varphi^{(n)}\) gives \[\frac{d\varphi^{(n)}}{dt}=\sum_{p,q,m}\bigg{(}\varphi^{(n),\omega_{p,q,m}}[\tilde{t},x^{i}]\frac{\partial\phi_{j}}{\partial t}(-ik^{j})e^{-ik^{i}\phi_{i}}+\Big{(}\frac{\partial\Omega_{j}}{\partial t}\frac{\partial\varphi^{(n),\omega_{p,q,m}}}{\partial\Omega_{j}}+\frac{\partial\delta M_{A}}{\partial t}\frac{\partial\varphi^{(n),\omega_{p,q,m}}}{\partial\delta M_{A}}\Big{)}e^{-ik^{i}\phi_{i}}\Bigg{)}. \tag{61}\] Evaluating the partial derivatives in Eq. (61): \(\frac{\partial\phi_{j}}{\partial t}=\Omega_{j}\) using Eq. (52); \(\frac{\partial\Omega_{j}}{\partial t}=\varepsilon F^{\{0\}}_{\Omega_{j}}[\Omega_{i}]+\mathcal{O}(\varepsilon^{2})\) from Eq. (54); and \(\frac{\partial\delta M_{A}}{\partial t}=\varepsilon F^{\{1\}}_{A}[\Omega_{j}]+\mathcal{O}(\varepsilon^{2})\) from Eq. (56). As the background spacetime and background scalar field are stationary, and the perturbations are of the form in Eqs. (57)-(60), we can split all time derivatives into an algebraic \(\mathcal{O}(\varepsilon^{0})\) piece and a differential \(\mathcal{O}(\varepsilon)\) piece, \[\frac{\partial}{\partial t}\rightarrow-ik^{j}\Omega_{j}+\varepsilon\frac{\partial}{\partial\tilde{t}}\, \tag{62}\] where we have defined the _slow-time derivative_, \(\frac{\partial}{\partial\tilde{t}}\), such that when it acts on a field variable, \[\frac{\partial\varphi^{(n),\omega_{p,q,m}}}{\partial\tilde{t}}:=F^{\{0\}}_{\Omega_{j}}[\Omega_{i}]\frac{\partial\varphi^{(n),\omega_{p,q,m}}}{\partial\Omega_{j}}+F^{\{1\}}_{A}[\Omega_{j}]\frac{\partial\varphi^{(n),\omega_{p,q,m}}}{\partial\delta M_{A}}. \tag{63}\] Note that the slow-time derivative in Eq. (62) contributes at one order in \(\varepsilon\) higher than the _fast-time derivative_ (the algebraic part of Eq. (62)). We now use a labelling convention [89] to denote the number of slow-time derivatives in a differential operator. Take, as a simple example, the operator \[A[\varphi^{(n)}]:=\frac{\partial^{2}\varphi^{(n)}}{\partial t^{2}}. \tag{64}\] As \(\varphi^{(n)}\) can be expressed using Eqs. (57)-(60), we replace the time derivatives with fast and slow-time derivatives using Eq. (62), giving \[A[\varphi^{(n)}]=\Big{(}-ik^{j}\Omega_{j}+\varepsilon\frac{\partial}{\partial\tilde{t}}\Big{)}\Big{(}-ik^{j}\Omega_{j}+\varepsilon\frac{\partial}{\partial\tilde{t}}\Big{)}\varphi^{(n)} \tag{65}\] \[=-(k^{j}\Omega_{j})^{2}\varphi^{(n)}-2i\varepsilon k^{j}\Omega_{j}\frac{\partial\varphi^{(n)}}{\partial\tilde{t}}+\varepsilon^{2}\frac{\partial^{2}\varphi^{(n)}}{\partial\tilde{t}^{2}}. \tag{66}\] \(A[\varphi^{(n)}]\) can thus be expressed in orders of slow-time derivatives: \[A[\varphi^{(n)}]=A^{\langle 0\rangle}[\varphi^{(n)}]+A^{\langle 1\rangle}[\varphi^{(n)}]+A^{\langle 2\rangle}[\varphi^{(n)}], \tag{67}\] where \[A^{\langle 0\rangle}[\varphi^{(n)}]:=-(k^{j}\Omega_{j})^{2}\varphi^{(n)}, \tag{68}\] \[A^{\langle 1\rangle}[\varphi^{(n)}]:=-2i\varepsilon k^{j}\Omega_{j}\frac{\partial\varphi^{(n)}}{\partial\tilde{t}}, \tag{69}\] \[A^{\langle 2\rangle}[\varphi^{(n)}]:=\varepsilon^{2}\frac{\partial^{2}\varphi^{(n)}}{\partial\tilde{t}^{2}}, \tag{70}\] where the number in angular brackets denotes the number of slow-time derivatives in the differential operator.
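The splitting in Eqs. (62)-(70) is purely mechanical and can be checked symbolically. Below is a minimal sympy sketch (our illustration, not part of the formalism) that applies the replacement rule of Eq. (62) twice to a single Fourier mode and collects terms by powers of \(\varepsilon\), i.e. by the number of slow-time derivatives; for simplicity it lets the slow-time derivative act only on the mode amplitude, suppressing the \(\Omega_{j}\) and \(\delta M_{A}\) dependence that Eq. (63) tracks.

```python
# Minimal sympy sketch (illustration only): verify the splitting of
# d^2/dt^2 in Eqs. (64)-(70) by applying Eq. (62) twice to one Fourier
# mode and collecting powers of epsilon.
import sympy as sp

eps, k, Omega = sp.symbols('varepsilon k Omega', positive=True)
ts = sp.Symbol('t_slow')                 # slow time, t_slow ~ eps * t
phi = sp.Function('phi')(ts)             # mode amplitude phi^{(n), omega_{p,q,m}}

def dt(expr):
    """One application of Eq. (62): algebraic fast part + eps * slow derivative."""
    return -sp.I * k * Omega * expr + eps * sp.diff(expr, ts)

A = sp.expand(dt(dt(phi)))               # the operator of Eq. (64) acting on one mode

A0 = A.coeff(eps, 0)                     # zero slow derivatives -> Eq. (68)
A1 = eps * A.coeff(eps, 1)               # one slow derivative   -> Eq. (69)
A2 = eps**2 * A.coeff(eps, 2)            # two slow derivatives  -> Eq. (70)

assert sp.simplify(A0 + (k * Omega)**2 * phi) == 0
assert sp.simplify(A1 + 2 * sp.I * eps * k * Omega * sp.diff(phi, ts)) == 0
assert sp.simplify(A2 - eps**2 * sp.diff(phi, ts, 2)) == 0
print(A0, A1, A2, sep='\n')
```

The three collected terms reproduce Eqs. (68)-(70) exactly; the same bookkeeping applies to any differential operator in the field equations.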
We also need to expand additional quantities that appear in our equations: \[z^{\alpha}=z^{\alpha}_{(0)}+z^{\alpha}_{(1)}+... \tag{71}\] \[u^{\alpha}=u^{\alpha}_{(0)}+u^{\alpha}_{(1)}+... \tag{72}\] \[\Big{(}\frac{d\tau}{dt}\Big{)}=\Big{(}\frac{d\tau}{dt}\Big{)}_{(0)}+\Big{(}\frac{d\tau}{dt}\Big{)}_{(1)}+... \tag{73}\] with the usual multi-scale expansion definitions [85]. The operators in our field equations, Eqs. (32), (39), (33), and (40), separate into pieces with various numbers of slow-time derivatives. Taking Eq. (32), we can re-express it in the two-timescale approximation as \[\delta G^{\langle 0\rangle}_{ab}[h^{(1)}_{ab}]+\delta G^{\langle 1\rangle}_{ab}[h^{(1)}_{ab}]=8\pi\varepsilon\int_{\gamma}\mu\Bigg{(}\Big{(}u^{(0)}_{a}u^{(0)}_{b}+2\varepsilon u^{(1)\perp}_{(a}u^{(0)}_{b)}\Big{)}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}-u^{(0)}_{a}u^{(0)}_{b}z^{\gamma}_{(1)\perp}\nabla_{\gamma}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}\Bigg{)}d\tau_{(0)}+\mathcal{O}(\varepsilon^{3}), \tag{74}\] where \(z^{\gamma}_{(1)\perp}=(g^{\gamma}_{\ \alpha}+u^{\gamma}_{(0)}u^{(0)}_{\alpha})z^{\alpha}_{(1)}\) and \(u^{\gamma}_{(1)\perp}=(g^{\gamma}_{\ \alpha}+u^{\gamma}_{(0)}u^{(0)}_{\alpha})u^{\alpha}_{(1)}\). Note that Eq. (32) is a first-order-in-\(\varepsilon\) equation, but implementing the two-timescale approximation has introduced \(\varepsilon^{2}\) pieces in Eq. (74). The second-order-in-\(\varepsilon\) pieces can be promoted to the second-order field equation, Eq. (39), giving \[\delta G^{\langle 0\rangle}_{ab}[h^{(1)}_{ab}]=8\pi\int_{\gamma}\mu\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}u^{(0)}_{a}u^{(0)}_{b}d\tau_{(0)}, \tag{75}\] \[\delta G^{\langle 0\rangle}_{ab}[h^{(2)}_{ab}]=-\delta^{2}G^{\langle 0\rangle}_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]-\delta G^{\langle 1\rangle}_{ab}[h^{(1)}_{ab}]+\frac{1}{2}\partial^{\langle 0\rangle}_{a}\varphi^{(1)}\partial^{\langle 0\rangle}_{b}\varphi^{(1)}-\frac{1}{4}g_{ab}(\partial^{\langle 0\rangle}\varphi^{(1)})^{2}+4\pi\int_{\gamma}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}\Big{(}2m_{[1]}\varphi^{(1)}_{R}u^{(0)}_{a}u^{(0)}_{b}+\mu\big{(}4h^{R(1)}_{c(a}u^{(0)}_{b)}u^{c}_{(0)}-u^{(0)}_{a}u^{(0)}_{b}(g^{cd}_{(0)}-u^{c}_{(0)}u^{d}_{(0)})h^{R(1)}_{cd}\big{)}\Big{)}d\tau_{(0)}+16\pi\int_{\gamma}\mu\bigg{(}2u^{(1)\perp}_{(a}u^{(0)}_{b)}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}-u^{(0)}_{a}u^{(0)}_{b}z^{\gamma}_{(1)\perp}\nabla_{\gamma}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{p}[\tau]]}{\sqrt{-g}}\bigg{)}d\tau_{(0)}, \tag{76}\] where we have also expanded \(\delta^{2}G_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]\) and \(\partial_{\mu}\) in orders of slow-time derivatives. As we are interested in first-post adiabatic modelling here, we have neglected any terms that are \(\mathcal{O}(\varepsilon^{3})\). For example, \(\delta^{2}G^{\langle 1\rangle}_{ab}[h^{(1)}_{ab},h^{(1)}_{ab}]=\mathcal{O}(\varepsilon^{3})\); generally, \(\delta^{n}G^{\langle m\rangle}_{ab}[h^{(i)}_{ab}]=\mathcal{O}(\varepsilon^{ni+m})\). Deriving higher-order equations with this algorithm is straightforward. Applying the slow-time derivative expansion algorithm to the scalar perturbation field equations, Eqs.
(33) and (40), gives \[\square^{\langle 0\rangle}\varphi^{(1)}=16\pi\int_{\gamma}m_{[1]}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{\mathrm{p}}[\tau]]}{\sqrt{-g}}d\tau_{(0)}, \tag{77}\] \[\square^{\langle 0\rangle}\varphi^{(2)}=-\frac{8\pi\alpha^{(n)}}{\sqrt{-g}}\mathcal{G}^{(0)}-\square^{\langle 1\rangle}\varphi^{(1)}-h^{ab}_{(1)}\nabla_{a}\nabla_{b}\varphi^{(1)}-(\nabla^{a}h^{(1)}_{ab})\nabla^{b}\varphi^{(1)}+\frac{1}{2}(\nabla^{b}h_{(1)})\nabla_{b}\varphi^{(1)}+16\pi\Bigg{(}\int_{\gamma}m_{[2]}\varphi^{(1)}_{R}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{\mathrm{p}}[\tau]]}{\sqrt{-g}}d\tau_{(0)}-\frac{1}{2}\int_{\gamma}m_{[1]}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{\mathrm{p}}[\tau]]}{\sqrt{-g}}(g^{ab}+u^{a}_{(0)}u^{b}_{(0)})h^{R(1)}_{ab}d\tau_{(0)}-\int_{\gamma}m_{[1]}z^{\gamma}_{(1)\perp}\nabla_{\gamma}\frac{\delta^{(4)}[x^{\mu}-z^{\mu}_{\mathrm{p}}[\tau]]}{\sqrt{-g}}d\tau_{(0)}\Bigg{)}. \tag{78}\] From the field variable coefficients, \(\varphi^{(1),\omega_{p,q,m}}\), \(\varphi^{(2),\omega_{p,q,m}}\), \(h^{(1),\omega_{p,q,m}}_{ab}\), and \(h^{(2),\omega_{p,q,m}}_{ab}\), one can calculate the orbit-averaged self-force: \[F^{a}_{SF}=\varepsilon F^{a}_{(1)}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}}]+\varepsilon^{2}F^{a}_{(2)}[h^{(2),\omega_{p,q,m}}_{ab},\varphi^{(2),\omega_{p,q,m}}]+\mathcal{O}(\varepsilon^{3}), \tag{79}\] where we have extended the self-force expansion of App. A to include the self-force from the scalar field. We use the arguments in Ref. [83] to split the self-force into dissipative and conservative pieces in a post-adiabatic expansion, \[F^{a}_{SF}=F^{a}_{\{0\}(1)\mathrm{diss}}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}}]+F^{a}_{\{1\}(1)\mathrm{cons}}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}}]+F^{a}_{\{1\}(2)\mathrm{diss}}[h^{(2),\omega_{p,q,m}}_{ab},\varphi^{(2),\omega_{p,q,m}}]+... \tag{80}\] where the numbers in the curly brackets denote the post-adiabatic order of the self-force contribution. We can input Eq. (80) into Eq. (104) to calculate the phase evolution. In practice, the phases evolve via Eq. (52); that is, the evolution of the orbital frequencies is determined by the self-force coefficients \(F^{\{0\}}_{\Omega_{i}}\) and \(F^{\{1\}}_{\Omega_{i}}\) in Eq. (54). The self-force coefficients \(F^{\{n\}}_{\Omega_{i}}\) and \(F^{\{1\}}_{A}\) can be determined from the field variable coefficients (similarly to Eq. (80)): \[F^{\{0\}}_{\Omega_{i}}[\Omega_{j}]=F^{\{0\}}_{\Omega_{i}}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}}], \tag{81}\] \[F^{\{1\}}_{\Omega_{i}}[\Omega_{j},\delta M_{A}]=F^{\{1\}}_{\Omega_{i}}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}},h^{(2),\omega_{p,q,m}}_{ab},\varphi^{(2),\omega_{p,q,m}}], \tag{82}\] \[F^{\{1\}}_{A}[\Omega_{j}]=F^{\{1\}}_{A}[h^{(1),\omega_{p,q,m}}_{ab},\varphi^{(1),\omega_{p,q,m}}]. \tag{83}\] The force coefficients in Eqs. (81) and (82) can be derived from the perturbative equations of motion, Eqs. (36) and (44). However, one must account for slow-time derivative contributions to Eq. (44). To find the additional contribution, one can examine Eq. (36) using Eqs.
(62), (71), and (72), giving \[a^{a}_{(2)\mathrm{slow}}=m^{(1)}_{[1]}\big{(}g^{ab}+u^{a}_{(0)}u^{b}_{(0)}+2u^{(a}_{(1)}u^{b)}_{(0)}\big{)}\partial^{\langle 1\rangle}_{b}\varphi^{(1)}_{\mathcal{R}}-\frac{1}{2}\big{(}g^{ab}+u^{a}_{(0)}u^{b}_{(0)}\big{)}\big{(}2\partial^{\langle 1\rangle}_{d}h^{(1)\mathcal{R}}_{bc}-\partial^{\langle 1\rangle}_{b}h^{(1)\mathcal{R}}_{cd}\big{)}u^{c}_{(0)}u^{d}_{(0)}-\frac{1}{2}\big{(}g^{ab}+u^{a}_{(0)}u^{b}_{(0)}\big{)}\big{(}2h^{(1)\mathcal{R}}_{bc;d}-h^{(1)\mathcal{R}}_{cd;b}\big{)}2u^{(c}_{(1)}u^{d)}_{(0)}. \tag{84}\] The second-order equation of motion then becomes \[a^{a}_{(2)}=a^{a}_{(2)\mathrm{grav}}+a^{a}_{(2)\mathrm{scal}}+a^{a}_{(2)\mathrm{slow}}. \tag{85}\] From Eqs. (36) and (85) one can derive precise relations for Eqs. (81) and (82). The force coefficients in Eq. (83) can be determined from the orbit-averaged fluxes entering the primary black hole horizon, in terms of \(h^{(1),\omega_{p,q,m}}_{ab}\) and \(\varphi^{(1),\omega_{p,q,m}}\). This completes the two-timescale expansion of our first-post adiabatic modelling scheme for non-spinning binaries in scalar-tensor theories of gravity.

## VI Summary and conclusions

In this paper, we have derived a framework for first-post adiabatic binary modelling in theories with a massless scalar field non-minimally coupled to gravity. As discussed in detail in Sec. II, our main assumptions are that any interaction terms for the scalar are suppressed by a coupling of mass-dimension 2 or higher, that the scalar field is shift symmetric, and that the solutions of the theory continuously connect to those of general relativity with a minimally coupled scalar field. We have used perturbative scaling arguments, based on the suppression of a black hole's scalar charge by its mass, to isolate the adiabatic and first-post adiabatic contributions. This allows us to ignore most of the general coupling between the metric and scalar field. This approach is similar to that of Refs. [22; 23; 24], which developed and implemented this method to adiabatic order. We have produced an ansatz for the matter action of a compact object with a scalar charge, which is the foundation of our formalism. The long-established scalar charged point particle action, Eq. (10) [50], becomes problematic in the self-force approach because of divergences in the field perturbations on the worldline. We have combined the scalar charged point particle action with the effective metric approach [62] to produce our ansatz for the particle action, Eq. (22). The effective metric and effective scalar field are regular on the worldline, making our action well defined. In the limit \(\varphi\to 0\), Eq. (22) is equivalent to a point-mass effective action in GR, which is consistent with the first- and second-order self-force method [58; 59; 60; 61; 62; 31]. Our ansatz and formalism could be checked by calculating a matched asymptotic expansion [60; 90; 91; 92; 93] for a scalar charged compact binary in scalar-tensor theories of gravity. In Sec. IV, we derived field equations for the first- and second-order metric and scalar perturbations, Eqs. (32), (39), (33), and (40). We also derived the first- and second-order equations of motion for the secondary object, Eqs. (36) and (44). Our formalism also trivially extends to higher orders. Because Eq. (32) is identical to its GR counterpart, the first-order Teukolsky equation holds in our formalism [94; 95]. It is straightforward to take the second-order metric perturbation field equations (Eqs.
(75) or (76)) and convert them to second-order Teukolsky equations by applying the methods used in Refs. [26; 27; 96]. Ref. [97] presents an alternative method for deriving a Teukolsky equation in alternative theories of gravity. Also, the metric reconstruction methods developed in GR hold in our formalism (see CCK [68; 69], GHZ [27; 28], AAB [70], and Lorenz gauge [29] metric reconstruction). In Sec. V, we integrated our formalism into the two-timescale approximation, allowing for efficient first-post adiabatic calculations. Our formalism is also consistent with the time-domain [84] and self-consistent [98] approaches. An additional modelling problem that needs addressing is resonances [99]. We expect that the methods used to address resonances in GR, such as those implemented in Ref. [34], will also be applicable to our formalism. Remarkably, our formalism has added only one additional parameter, \(m_{[2]}^{(1)}\), to the adiabatic order formalism of Ref. [22]. Hence, it appears that the scalar charge \(d\) and \(m_{[2]}^{(1)}\) capture the effects of the scalar field to first-post adiabatic order in a very large class of theories. Our understanding of \(m_{[2]}^{(1)}\) is currently limited, and it would be interesting to investigate how different theories of gravity generate a non-zero \(m_{[2]}^{(1)}\) and how significant this contribution is to an EMRI model. It is conceivable that in a subset of theories, the only significant extra parameter at first-post adiabatic order is indeed \(d\). It would also be interesting to extend our formalism to include a mass for the scalar field and interactions that do not respect shift symmetry. The effect of the former has already been studied at adiabatic order in [24]. We also intend to publish a follow-up paper which will extend our formalism to include the first-post adiabatic effects of the spin and scalar dipole of the secondary compact object. The main motivation for this work is to model EMRI waveforms to first-post adiabatic accuracy for LISA data analysis. Accuracy requirement arguments suggest that calculating the second-order self-force to \(\sim 1\%\) accuracy will likely be sufficient for LISA data analysis [100]. If the second-order scalar self-force (and the effect of the scalar field on the second-order gravitational self-force) are suppressed by two orders of magnitude compared to the gravitational self-force, then their effect may be neglected. Ref. [22] found the adiabatic scalar self-force is \(\mathcal{O}(1\%)\) of the gravitational self-force for \(d=0.3\); for smaller \(d\) the scalar self-force is further suppressed. If a similar relationship is found at first-post adiabatic order, then the conservative piece of the first-order scalar self-force and the dissipative piece of the second-order scalar self-force may be negligible. Nevertheless, the adiabatic contribution (the dissipative piece of the first-order self-force) will still be significant. That is, waveforms in scalar-tensor theories of gravity will be significantly different from those in GR, but it may be even easier to model binaries in scalar-tensor theories of gravity than our formalism suggests. Implementing our formalism will be of similar difficulty to computing first-post adiabatic models in GR. The most challenging part of such calculations is calculating \(\delta^{2}G[h_{ab}^{(1)},h_{ab}^{(1)}]\) near the worldline. We have shown that this calculation is identical in GR and our formalism.
A method for calculating \(\delta^{2}G[h_{ab}^{(1)},h_{ab}^{(1)}]\) near the worldline is given in Ref. [78], but it is inefficient and highly technical. There are additional pieces in our formalism that are similarly challenging to calculate (again containing products of divergences on the worldline): \(\frac{1}{2}\partial_{a}\varphi^{(1)}\partial_{b}\varphi^{(1)}-\frac{1}{4}g_{ab}(\partial\varphi^{(1)})^{2}\) in Eq. (39) and \(h_{(1)}^{ab}\nabla_{a}\nabla_{b}\varphi^{(1)}-(\nabla^{a}h_{ab}^{(1)})\nabla^{b}\varphi^{(1)}+\frac{1}{2}(\nabla^{b}h_{(1)})\nabla_{b}\varphi^{(1)}\) in Eq. (40). We propose again using the method in Ref. [78], or any new methods developed to tackle the problem in GR (this is currently an active area of research in the self-force community). Our formalism could also be used to produce intermediate-mass-ratio inspiral waveforms. The results in Refs. [79; 80; 81] show encouraging agreement between Numerical Relativity and first-post adiabatic self-force waveforms in GR for quasi-circular inspirals of Schwarzschild black holes in the mass-ratio regime of \(1:10\). These results will soon be used to help future LVK data analysis (as LVK begins to probe deeper into the disparate mass ratio regime). Implementing our formalism, even for the simpler case of a non-spinning (or linear-in-spin) primary black hole, would be an important step towards detecting or constraining the existence of a new fundamental scalar field.

## Appendix A Self-force formalism in GR

In this appendix, we summarise the self-force approach for binary black holes in GR. We focus on the relevance of first-post adiabatic waveform models for gravitational wave observations. Additionally, we show how the first- and second-order field equations and equations of motion can be derived from an effective action. In black hole perturbation theory, the metric (\(\mathbf{g}_{ab}\)) is expressed as an expansion in orders of a small parameter, as in Eq. (16). In the perturbative self-force approach, the small parameter is the mass ratio, \(\varepsilon=\mu/M\). Working in GR, \(g_{ab}^{(0)}\) is a solution to the Einstein field equations. Here, we take \(g_{ab}^{(0)}\) to be the Kerr metric [101]. The presence of the secondary compact object produces the metric perturbations. These perturbations impart a force on the compact object, causing it to deviate from geodesic motion in the background spacetime. The so-called self-force per unit mass (\(F_{SF}^{a}\)) can also be expressed as an expansion in orders of the mass ratio: \[F_{\rm SF}^{a}=\varepsilon F_{(1)}^{a}\big{[}h_{ab}^{(1)}\big{]}+\varepsilon^{2}F_{(2)}^{a}\big{[}h_{ab}^{(2)}\big{]}+\mathcal{O}(\varepsilon^{3}). \tag{10}\] The effect of the self-force on the motion of the inspiraling object is described by the equation of motion, \[u^{b}\nabla_{b}u^{a}=a^{a}=\varepsilon F_{(1)}^{a}+\varepsilon^{2}F_{(2)}^{a}+\mathcal{O}(\varepsilon^{3}), \tag{11}\] where \(u^{a}\) is the four-velocity of the compact object in the background spacetime, \(a^{a}\) is the four-acceleration, and \(\nabla_{a}\) is the covariant derivative of the background metric. Eq. (11) is given by the MiSaTaQuWa equation [58; 59] to first order, and by the second-order equation of motion [60; 61] to second order.

### The self-force action approach

Here, we show that it is possible to derive the GR self-force equations of motion and field equations, up to second order, directly from an action.
We begin with the action, \[S[\mathbf{g}_{ab},\Psi]=S_{EH}[\mathbf{g}_{ab}]+S_{\rm m}[\mathbf{g}_{ab},\Psi]\, \tag{12}\] where \(S_{\rm m}\) is the matter action, \(\Psi\) are the matter fields, and \[S_{EH}[\mathbf{g}_{ab}]=\int\frac{\sqrt{-\mathbf{g}}}{16\pi}R\ d^{4}x\, \tag{13}\] is the Einstein-Hilbert action. In the compact binary problem, spacetime is vacuum everywhere except at the position of the compact object (the worldline). The matter action in Eq. (12) can be replaced by the effective point particle action \[S_{\rm p}=-\int_{\gamma}\mu\,d\tilde{s}=-\int_{\gamma}\mu\sqrt{-\tilde{g}_{ab}\tilde{u}^{a}\tilde{u}^{b}}\,d\tilde{\tau}, \tag{14}\] where \(\tilde{g}_{ab}\) is the effective metric defined in Eq. (21), \(\tilde{\tau}\) is the proper time in the effective spacetime, and \(\tilde{u}^{\alpha}=dz^{\alpha}/d\tilde{\tau}\). Varying the action (12) with respect to the body's path, \(x^{\mu}\to x^{\mu}+\delta x^{\mu}\), results in the equation of motion \[\mu\tilde{a}^{a}=0, \tag{15}\] where \(\tilde{a}^{a}=\tilde{u}^{b}\tilde{\nabla}_{b}\tilde{u}^{a}\). Expanding Eq. (15) using the expansion in Eq. (21), and those in App. B, one recovers the MiSaTaQuWa equation, \[a_{(1)}^{a}=-\frac{1}{2}\big{(}g^{ab}_{(0)}+u^{a}u^{b}\big{)}\big{(}2h^{(1)\mathcal{R}}_{bc;d}-h^{(1)\mathcal{R}}_{cd;b}\big{)}u^{c}u^{d}\, \tag{16}\] and the second-order equation of motion, \[a_{(2)}^{a}=-\frac{1}{2}\Big{[}\big{(}g^{ab}_{(0)}+u^{a}u^{b}\big{)}\big{(}2h^{(2)\mathcal{R}}_{bc;d}-h^{(2)\mathcal{R}}_{cd;b}\big{)}+\big{(}g^{ab}_{(0)}+u^{a}u^{b}\big{)}h_{(1)\mathcal{R}\,b}{}^{e}\big{(}2h^{(1)\mathcal{R}}_{ec;d}-h^{(1)\mathcal{R}}_{cd;e}\big{)}\Big{]}u^{c}u^{d}. \tag{17}\] Additionally, the field equations can be derived by varying the action (12). This process is more delicate because one must vary \(S_{EH}[\mathbf{g}_{ab}]\) with respect to the metric (\(\mathbf{g}_{ab}\)) and \(S_{\rm m}[\mathbf{g}_{ab}]=S_{p}[\tilde{g}_{ab}]\) with respect to \(\tilde{g}_{ab}\). The resulting field equation is [62; 31] \[G_{ab}[\mathbf{g}]=8\pi\int_{\gamma}\mu\frac{\delta^{(4)}[x^{\mu}-z_{\rm p}^{\mu}[\tilde{\tau}]]}{\sqrt{-\tilde{g}}}\tilde{u}_{a}\tilde{u}_{b}\,d\tilde{\tau}. \tag{18}\] Geroch's theorem [102] tells us Eq. (18) is not formally well defined as a partial differential equation, but it will be useful for deriving the correct perturbative equations (to at least second order). A perturbative expansion of Eq. (18) using the expansions in Eqs. (16) and (21), and those in App. B, gives the first- and second-order field equations: \[\delta G_{ab}[h_{ab}^{(1)}]=8\pi\int_{\gamma}\mu\frac{\delta^{(4)}[x^{\mu}-z_{\rm p}^{\mu}[\tau]]}{\sqrt{-g}}u_{a}u_{b}\,d\tau, \tag{19}\] \[\delta G_{ab}[h_{ab}^{(2)}]=-\delta^{2}G_{ab}[h_{ab}^{(1)},h_{ab}^{(1)}]+4\pi\int_{\gamma}\frac{\delta^{(4)}[x^{\mu}-z_{\rm p}^{\mu}[\tau]]}{\sqrt{-g}}\mu\bigg{(}4h_{c(a}^{\mathcal{R}(1)}u_{b)}u^{c}-u_{a}u_{b}(g_{(0)}^{cd}-u^{c}u^{d})h_{cd}^{\mathcal{R}(1)}\bigg{)}d\tau. \tag{20}\]

### Why second-order self-force and first-post adiabatic accuracy

Next, we summarise which parts of the self-force expansion are required for an accurate model. Following Ref. [83], we characterise an accurate model as having a small error in the final position of the compact object over the course of the inspiral. An inspiral is generally considered to evolve on a so-called _slow-timescale_ (\(\tilde{t}\)) related to the radiation reaction timescale, \(\tilde{t}\sim t_{rr}\sim\frac{M}{\varepsilon}=\mathcal{O}(\frac{1}{\varepsilon})\)[84].
This relation can be derived by considering the rate at which orbital energy is dissipated from the system through gravitational waves. The rate of energy dissipation scales as \(\dot{E}=\frac{dE}{dt}\sim(h_{ab}^{(1)})^{2}\sim\varepsilon^{2}\)[103]. The orbital energy of the compact object is \(E\sim\mu\). As \(t\sim\frac{E}{\dot{E}}\) and \(\frac{E}{\dot{E}}\sim\frac{\mu}{\varepsilon^{2}}\sim\frac{M}{\varepsilon}\), we have \(t\sim\frac{M}{\varepsilon}\). The error in the final position (\(\delta z^{\mu}\)) relates to the slow-timescale and the error in the acceleration (\(\delta a^{\mu}\)) [56]: \[\delta z^{\mu}\sim t^{2}\delta a^{\mu}\sim\tfrac{1}{\varepsilon^{2}}\delta a^{\mu}. \tag{101}\] For \(\delta z^{\mu}\) to be small, \(\delta a^{\mu}=\mathcal{O}(\varepsilon^{3})\) is necessary. Hence, the acceleration must be calculated through second order to achieve an accurate model over an entire inspiral. That is, the first- and second-order self-force is required. In practice, the position of the compact object is expressed using orbital phases, \(\phi_{i}\). For generic orbits in Kerr, there are three orbital phases, that is, \(i=\{1,2,3\}\), or \(i=\{r,\theta,\phi\}\). The phases obey the expansion [83] \[\phi_{i}[t,\varepsilon]=\frac{1}{\varepsilon}\phi_{i}^{\{0\}}[t,\varepsilon]+\phi_{i}^{\{1\}}[t,\varepsilon]+\mathcal{O}(\varepsilon). \tag{102}\] For the error in Eq. (102) to be small as \(\varepsilon\to 0\), the coefficients \(\phi_{i}^{\{0\}}[t,\varepsilon]\) and \(\phi_{i}^{\{1\}}[t,\varepsilon]\) must be known. The appearance of \(\frac{1}{\varepsilon}\) terms in the expansion of \(\phi_{i}[t,\varepsilon]\) arises from the inspiral evolving over a slow-timescale. Ref. [83] showed that \(\phi_{i}^{\{0\}}[t,\varepsilon]\), the so-called _adiabatic contribution_, depends on the dissipative piece of the first-order self-force. They additionally showed that \(\phi_{i}^{\{1\}}[t,\varepsilon]\) depends on the conservative piece of the first-order self-force and the dissipative piece of the second-order self-force: \[\phi_{i}^{\{0\}}[t,\varepsilon]=\phi_{i}^{\{0\}}\big{[}F_{(1)\text{diss}}^{a}[h_{ab}^{(1)}]\big{]}, \tag{103}\] \[\phi_{i}^{\{1\}}[t,\varepsilon]=\phi_{i}^{\{1\}}\big{[}F_{(1)\text{cons}}^{a}[h_{ab}^{(1)}],F_{(2)\text{diss}}^{a}[h_{ab}^{(2)}]\big{]}. \tag{104}\] The reason the conservative self-force is suppressed by one order in \(\varepsilon\) is that the conservative self-force averages out over a generic Kerr geodesic [83]. Eqs. (102), (103), and (104) show that first-post adiabatic accurate models require the full first-order self-force and the dissipative piece of the second-order self-force. Ref. [83] also provides the framework for implementing a two-timescale approximation to produce first-post adiabatic self-force binary waveform models in GR. Recently, first-post adiabatic waveforms have shown remarkable agreement with Numerical Relativity waveforms for quasi-circular inspirals of Schwarzschild black holes, even in the \(1:10\) mass-ratio regime [79; 80; 81]. These ground-breaking results suggest that first-post adiabatic models will play a key role in future gravitational wave science, across a mass-ratio range much wider than expected.

## Appendix B Necessary expansions

In Eq. (23), \(\tilde{g}\) appears explicitly in \(\sqrt{-\tilde{g}}\) and implicitly in \(\tilde{u}^{a}\) and \(d\tilde{\tau}\).
We expand each \(\tilde{g}\) dependence perturbatively around \(g_{ab}^{(0)}\) as follows [31], \[\frac{1}{\sqrt{-\tilde{g}}}=\frac{1}{\sqrt{-g}}\Big{(}1-\frac{\varepsilon}{2}g^{ab}h_{ab}^{R(1)}\Big{)}+\mathcal{O}(\varepsilon^{2}), \tag{105}\] \[\frac{d\tau}{d\tilde{\tau}}=\frac{1}{\sqrt{1-h_{ab}^{R}u^{a}u^{b}}}=1+\frac{\varepsilon}{2}h_{ab}^{R(1)}u^{a}u^{b}+\frac{3\varepsilon^{2}}{8}\big{[}h_{ab}^{R(1)}u^{a}u^{b}\big{]}^{2}+\mathcal{O}(\varepsilon^{3}), \tag{106}\] \[\frac{d\tilde{\tau}}{d\tau}=\sqrt{1-h_{ab}^{R}u^{a}u^{b}}=1-\frac{\varepsilon}{2}h_{ab}^{R(1)}u^{a}u^{b}+\mathcal{O}(\varepsilon^{2}), \tag{107}\] noting that \(\tilde{u}^{a}=\frac{d\tau}{d\tilde{\tau}}u^{a}\) and \(d\tilde{\tau}=\frac{d\tilde{\tau}}{d\tau}d\tau\).7 Footnote 7: We require the expansion in Eq. (106) to one order higher than Eqs. (105) and (107) for the expansion of Eq. (26). \(T_{ab}^{\text{m}}\) appears in Eq. (4) with indices down, whereas in Eq. (23) it is expressed with indices up. Ref. [31] showed that the indices of the stress-energy tensor are raised and lowered by the effective metric (not the background metric). That is, \[T_{ab}^{\text{m}}=\tilde{g}_{ac}\tilde{g}_{bd}T_{\text{m}}^{cd}=\varepsilon g_{ac}^{(0)}g_{bd}^{(0)}T_{\text{m}(1)}^{cd}+\varepsilon^{2}\left[g_{ac}^{(0)}g_{bd}^{(0)}T_{\text{m}(2)}^{cd}+2h_{a(c}^{R(1)}g_{d)b}^{(0)}T_{\text{m}(1)}^{cd}\right]+\mathcal{O}(\varepsilon^{3}). \tag{108}\]

## Acknowledgments

AS and TS acknowledge the partial support from the STFC Consolidated Grant no. ST/V005596/1. AS would like to thank Adam Pound for helpful discussions and comments.
arXiv:2308.00748v1 - Quadratic Dirac fermions and the competition of ordered states in twisted bilayer graphene
Julian Ingham, Tommy Li, Mathias S. Scheurer, Harley D. Scammell (2023-08-01)
http://arxiv.org/abs/2308.00748v1
# Quadratic Dirac fermions and the competition of ordered states in twisted bilayer graphene

###### Abstract

Magic-angle twisted bilayer graphene (TBG) exhibits a captivating phase diagram as a function of doping, featuring superconductivity and a variety of insulating and magnetic states. The bands host Dirac fermions with a reduced Fermi velocity; experiments have shown that the Dirac dispersion reappears near integer fillings of the moire unit cell -- referred to as the _Dirac revival_ phenomenon. The reduced velocity of these Dirac states leads us to propose a scenario in which the Dirac fermions possess an approximately quadratic dispersion. The quadratic momentum dependence and particle-hole degeneracy at the Dirac point result in a logarithmic enhancement of interaction effects, which does not appear for a linear dispersion. The resulting non-trivial renormalisation group (RG) flow naturally produces the qualitative phase diagram as a function of doping - with nematic and insulating states near integer fillings, which give way to superconducting states past a critical relative doping. The RG method further produces different results to strong-coupling Hartree-Fock treatments: producing T-IVC insulating states for repulsive interactions, explaining the results of very recent STM experiments, alongside nodal \(A_{2}\) superconductivity near half-filling, whose properties explain puzzles in tunnelling studies of the superconducting state. The model explains a diverse range of additional experimental observations, unifying many aspects of the phase diagram of TBG.

## I Introduction

Twisted bilayer graphene (TBG) has become a central focus of theoretical and experimental condensed matter physics [1-98]. Since the original discovery of unconventional superconductivity and correlated insulating states [1; 2], intense experimental scrutiny has uncovered a rich phase diagram as a function of temperature, electron density, and magnetic field - featuring orbital magnetism, nematic ordering, and Kekule textures [1-7; 88-98]. When two sheets of graphene are stacked and twisted by a relative angle \(\theta\), the composite system is no longer periodic with the lattice constant of monolayer graphene, but is periodic at a larger moire scale \(\sim a/(2\sin(\theta/2))\), folding the monolayer graphene dispersion into a mini Brillouin zone [99-109]. The coupling between the two layers hybridises the monolayer Dirac points of graphene, forming mini bands, and when the twist angle is reduced to the so-called magic angle, the fourfold degenerate bands near charge neutrality flatten and the velocity of the Dirac points grows small, enhancing interaction effects and giving rise to a diverse collection of interesting material properties. Compressibility measurements observe an interesting property of TBG referred to as the _Dirac revival_ - when the density of electrons is increased to an integer number of electrons per unit cell, additional electrons 'reset' to the charge neutrality point, and are described by the Dirac dispersion but with a reduced degeneracy [110].
For instance, at one electron per moire unit cell \(\nu=1\) (Fig. 2), one of the four-fold degenerate bands spontaneously becomes fully occupied, and as the density is increased the remaining three-fold degenerate bands refill starting from the charge neutrality point [158]. These revived Dirac fermions appear at much higher temperatures (\(\approx 20\) K) than the insulating and superconducting states (\(\approx 1\) K), and constitute the parent state of the correlated physics. While the magic angle tunes the Fermi velocity to be small, there is nonetheless a nonzero bandwidth; we suggest that the smallness of the linear term in the Dirac dispersion makes the dispersion approximately quadratic in an energy window near the Dirac points (Fig. 1c). Due to the Dirac revival, the quadratic Dirac dispersion near charge neutrality then characterises the physics of TBG near all integer fillings. A quadratic dispersion plus particle-hole degeneracy at the Fermi level in two dimensions results in a logarithmic enhancement of interaction effects, and a non-trivial renormalisation group (RG) flow. In this work we analyse the RG flow of these quadratic Dirac fermions and derive a number of interesting results:

1. Near integer fillings, the particle-hole degeneracy of the Dirac point causes insulating and nematic states, driven by particle-hole fluctuations, to compete strongly against superconductivity. Doping above the Dirac point lifts the particle-hole degeneracy, leaving superconductivity as the dominant order.
2. The result is a phase diagram with nematic or insulating states at integer fillings and superconductors in between (see Fig. 1b), consistent with the experimentally obtained phase diagram of TBG.
3. The quadratic Dirac theory predicts different ground states to pre-existing mean-field analyses; we list the resulting order parameters in Table 1. For instance, Hartree-Fock studies of the Coulomb repulsion favour K-IVC over T-IVC [61-70], requiring phonon coupling for T-IVC to contend, whereas the RG flow can favour T-IVC for purely repulsive interactions. Very recent STM experiments have found evidence for T-IVC rather than K-IVC order near \(\nu=2\)[21; 22]; our mechanism therefore provides a natural explanation of this puzzle.
4. Intervalley \(A_{2}\) and \(E\) superconductors appear as instabilities alongside T-IVC order near \(\nu\gtrsim 2\); the properties of the \(A_{2}\) state can explain the transition from U- to V-shaped tunnelling profiles seen in studies of the superconducting state [18; 19; 20].

Incorporating a small finite Dirac velocity does not dramatically modify the RG behaviour of the Dirac theory, simply introducing an IR cutoff \(\Lambda_{v}\) on the RG flow. The enhancement of the interactions occurs within a window of temperatures associated with the energy window in which the dispersion appears quadratic (c.f. Fig. 1c). We contrast our theory with previous theoretical models. Firstly, interacting theories with linearly-dispersing Dirac fermions (e.g. Refs. [95; 96; 97; 98]) do not naturally feature insulating and nematic states as weak coupling instabilities. As we explain in Sec. III, the quadratic scaling of the dispersion is essential to the presence of insulating and nematic states near integer fillings. Secondly, the starting point of our analysis is to treat the bands as dispersive rather than approximately flat (c.f.
Refs. [68-72]), motivated by the experimental observation of dispersive Dirac states [110], and a bandwidth \(\approx 40\) meV, much larger than that predicted by bandstructure [8]. Thirdly, we stress that van Hove singularities (vHS) and Fermi surface nesting [53-60] do not feature in our model. Since the Dirac revival resets the dispersion to that of the Dirac point near integer fillings, in this regime the Fermi surface is neither nested nor located near a vHS. Experiments have found that superconductivity is seen near resets, and consistently suppressed at twist angles where revivals disappear and vHS are observed [159]. By contrast, our model argues that the superconducting and insulating states arise from the RG flow from a quadratic dispersion, with superconductivity dominant when the insulating states are suppressed via doping. Superconductivity therefore does not originate from an insulating parent state, but appears as a competing phase. The competing order scenario is supported by the presence of superconductivity in the absence of insulating states at smaller twist angles or when TBG is strongly screened by external gates [111; 112; 113] (though we comment that interpretation of these experiments is complicated by the presence of disorder), the appearance of the insulating state under the superconducting dome when superconductivity is suppressed by a magnetic field, and the comparable magnitudes of the superconducting and insulating \(T_{c}\). Given this last point, it is particularly notable that our framework allows a simultaneous treatment of the insulating and superconducting states on an equal footing, unlike Hartree-Fock studies, which are well-suited to describing the insulating states. We lastly note that signatures of the Dirac revival phenomena are also seen in twisted trilayer graphene (tTLG) [114-118], and so we anticipate that our analysis is likely relevant to a range of moire systems.

Figure 1: **Theoretical model.** Left: two stacks of graphene with a relative twist of \(\theta\). Middle: our proposed phase diagram features interlaced superconductors (red) and correlated insulators (blue), including nematic/K-IVC order at \(\nu=0\) and T-IVC order with proximate \(E/A_{2}\) superconductivity near \(\nu=2\). Right: our model for the Dirac fermions near integer filling - the dispersion is linear beneath an infrared cutoff \(\Lambda_{v}\), and quadratic between \(\Lambda_{v}\) and an ultraviolet cutoff \(\Lambda\); in the temperature window \(\Lambda_{v}<T<\Lambda\), interactions are logarithmically enhanced.

## II Model and symmetry constraints

### Quantum numbers and symmetries

The bands near charge neutrality are four-fold degenerate, originating from the spin and monolayer valley degeneracy. The conduction and valence bands exhibit Dirac points at the moire \(K\)-points; we index these band touching points by sublattice \(\sigma\), monolayer valley \(\tau\), and moire valley \(\eta=\pm\), corresponding to the Bloch states near quasimomenta \(\tau R_{\pm\theta/2}\mathbf{K}\), where \(\mathbf{K}\) is the monolayer valley momentum. Counting the number of Dirac cones gives \(N_{f}=8\) species of Dirac fermions (see Fig. 2 left). We describe the valley and spin quantum numbers as "flavours"; after each Dirac revival, a flavour is projected out, reducing the degeneracy by two, as shown in Fig. 2, i.e. \(N_{f}=2(4-\lfloor\nu\rfloor)\) for \(\nu>0\).
In our analysis, we will not attempt to explain the origins of the revivals, but take the polarised Dirac theory as an input parent state. It is observed (e.g. Ref. [33]) that this parent state appears at different twist angles for electron (\(\nu>0\)) and hole (\(\nu<0\)) doping. Our argument that the correlated phases arise from the revived Dirac parent state therefore naturally explains the observed electron-hole asymmetry of the superconducting phase diagram. In what follows we will take \(\nu>0\), with the understanding that our results apply to all integer \(-4<\nu<4\) at which revivals occur. TBG possesses threefold rotational symmetry in the plane \(C_{3z}\), twofold rotational symmetry about the \(x\) axis \(C_{2x}\), and twofold rotational symmetry in the plane \(C_{2z}\), i.e. the \(D_{6}\) point group, along with time-reversal symmetry \(\Theta\) (TRS). The system maintains SU(2) spin rotational symmetry due to the absence of spin-orbit coupling. In addition, TBG has approximate symmetries, which we shall take to be exact in our model: independent spin rotations in the two monolayer valleys result in an enlarged SU(2)\(\times\) SU(2) spin symmetry. This symmetry is broken in experiment by the small yet finite Hund's coupling \(J_{H}\). In the small twist angle limit, TBG also possesses a particle-hole symmetry \(\mathcal{P}\)[160]; combining particle-hole symmetry and TRS gives an anti-commuting chiral symmetry represented by \(\mathcal{S}=\mathcal{P}\Theta=\sigma_{x}\tau_{x}\eta_{y}\), with action \(\mathcal{S}\mathcal{H}_{0}(\mathbf{k})\mathcal{S}^{\dagger}=-\mathcal{H}_{0}(\mathbf{k})\) on the single-particle Hamiltonian \(\mathcal{H}_{0}\).

### Single-particle Hamiltonian

The above symmetries allow us to construct the most general single-particle Hamiltonian describing the Dirac states near the moire \(K\)-points, which to quadratic order we find to be \[\mathcal{H}_{0}=v\tau_{z}(k_{+}\alpha_{-}+k_{-}\alpha_{+})+i\beta\eta_{z}\left(k_{+}^{2}\alpha_{+}-k_{-}^{2}\alpha_{-}\right) \tag{1}\] where \((\alpha_{x},\alpha_{y})=(\sigma_{x},\tau_{z}\sigma_{y})\), \(\alpha_{\pm}=\alpha_{x}\pm i\alpha_{y}\), and \(k_{\pm}=k_{x}\pm ik_{y}\). Strikingly, we find that restricting to quadratic order in the momentum expansion results in an emergent commuting chiral symmetry \([\mathcal{C},\mathcal{H}_{0}]=0\) with \(\mathcal{C}=-i\sigma_{z}\mathcal{S}=\sigma_{y}\tau_{x}\eta_{y}\); terms which break this symmetry may only appear at cubic and higher order in momentum. The symmetry \(\mathcal{C}\) has been studied in previous works, where a so-called 'chiral limit' [46] results in \(\mathcal{C}\) as an exact symmetry [161]. Here we do not impose the chiral limit, yet we find that this symmetry appears as an emergent low-energy symmetry of the Dirac effective theory. Our approach shall be to assume \(\Lambda_{v}=v^{2}/\beta\) is small compared to the UV cutoff \(\Lambda\gg\Lambda_{v}\), so that there is a range of energies in which the dispersion can be treated as quadratic, allowing us to neglect the linear term, Fig. 1c. In TBG, there are natural reasons to expect \(\Lambda\gg\Lambda_{v}\) - in the limit where \(\mathcal{P}\) is taken to be an exact symmetry, it has been shown [119] that the velocity can be made to vanish by tuning only a single parameter. Motivated by these results, in the Supplementary Material we show that for a wide range of tunnelling couplings, the \(\mathcal{P}\)-symmetric Bistritzer-MacDonald model [100] possesses a twist angle at which the Dirac points exhibit a quadratic dispersion [162].
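As a quick illustration of the resulting band structure, the following numpy sketch (ours; the values of \(v\) and \(\beta\) are arbitrary placeholders, not fits to TBG) diagonalises the \(\tau=\eta=+1\) block of Eq. (1), for which \(\alpha_{\pm}\) reduce to \(\sigma_{\pm}=\sigma_{x}\pm i\sigma_{y}\), and shows the dispersion crossing over from linear to quadratic around \(k\sim v/\beta\), i.e. around the energy scale \(\Lambda_{v}\) of Fig. 1c.

```python
# Minimal numerical sketch (placeholder parameters) of Eq. (1) restricted to
# the tau = eta = +1 block. Along k_y = 0 the exact eigenvalues are
# E = +/- 2|k| sqrt(v^2 + beta^2 k^2): linear below k ~ v/beta, quadratic above.
import numpy as np

v, beta = 0.05, 1.0                                   # illustrative values only
sp_ = np.array([[0, 2], [0, 0]], dtype=complex)       # sigma_+ = sigma_x + i*sigma_y
sm_ = sp_.conj().T                                    # sigma_-

def h0_block(kx, ky):
    """tau = eta = +1 block of Eq. (1)."""
    kp, km = kx + 1j * ky, kx - 1j * ky
    return v * (kp * sm_ + km * sp_) + 1j * beta * (kp**2 * sp_ - km**2 * sm_)

for k in [0.01, 0.05, 0.2, 1.0]:                      # in units where Lambda ~ 1
    E = np.linalg.eigvalsh(h0_block(k, 0.0))[1]       # upper band
    print(f"k={k:5.2f}  E={E:8.4f}  linear 2vk={2*v*k:8.4f}  quadratic 2bk^2={2*beta*k**2:8.4f}")
```

In these conventions the crossover energy is \(\sim 2v^{2}/\beta\), which plays the role of \(\Lambda_{v}\).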
However, our results are not reliant on the exact values of \(v\) and \(\beta\) - we shall leave them as phenomenological constants, which may feasibly be investigated experimentally through compressibility measurements [157].

### Interactions

Projecting the Coulomb interaction onto the basis of states near the Dirac points \(|\sigma,\tau,\eta\rangle\) gives \[V=\tfrac{1}{2}\sum_{1,2,3,4}\sum_{\mathbf{k},\mathbf{p},\mathbf{q}}V_{13;24}\,\psi_{1,\mathbf{k}}^{\dagger}\psi_{3,\mathbf{k}-\mathbf{q}}\psi_{2,\mathbf{p}}^{\dagger}\psi_{4,\mathbf{p}+\mathbf{q}} \tag{2}\] where 1, 2, 3, 4 are shorthand for the indices \(\{\sigma,\tau,\eta\}\). A powerful approach is to write the interactions in the adjoint representation: \[V_{13;24}=V_{\mu\nu}\left(\Omega^{\mu}\right)_{13}(\Omega^{\nu})_{24} \tag{3}\] where \(\Omega^{\mu}\in\{\sigma_{a}\tau_{b}\eta_{c}\}\), representing the Coulomb potential as a sum of tensor products in \(\sigma\tau\eta\) space [120; 121; 122].

Figure 2: **Dirac revival.** At each moire valley \(\eta=\pm\) there are four flavours corresponding to the spin \(s=\pm\) and monolayer valley \(\tau=\pm\) degeneracy, which for \(|\nu|\gtrsim 0\) populate equally as the density increases (Left). When the bands are quarter filled at \(|\nu|=1\), all the density is spontaneously transferred to one flavour, and any additional electrons are added near the band touching point of the remaining Dirac states (Right).

The potential is constrained by the requirement that only symmetry-invariant tensor products appear; in the Supplementary Material, we list the full set of symmetry-allowed products of bilinears. Under the assumption of a real Coulomb potential, only \(\Omega^{\mu}\) which commute with \(\mathcal{S}\) and \(\mathcal{C}\) may appear. These constraints result in only three possible bilinears: \(\Omega^{\mu}\in\{\sigma_{0}\tau_{0}\eta_{0},\sigma_{z}\tau_{0}\eta_{z},\sigma_{z}\tau_{0}\tilde{\eta}_{\pm}\}\), where \(\tilde{\eta}_{\pm}=\eta_{x}\pm i\tau_{z}\eta_{y}\). Renormalisation of the interactions, which we discuss further in the next section, generates the additional vertices \(\eta_{z}\alpha_{x}\Omega^{\mu},\eta_{z}\alpha_{y}\Omega^{\mu},\alpha_{z}\Omega^{\mu}\), which commute with \(\mathcal{C}\) but not \(\mathcal{S}\). This results in a set of nine coupling constants, \[V=g_{o}(\sigma_{0}\tau_{0}\eta_{0}\otimes\sigma_{0}\tau_{0}\eta_{0})+g_{x\tau}(\alpha_{\pm}\tau_{z}\eta_{0}\otimes\alpha_{\mp}\tau_{z}\eta_{0})+g_{z}(\alpha_{z}\tau_{0}\eta_{0}\otimes\alpha_{z}\tau_{0}\eta_{0})\] \[+v_{\sigma\tau}(\sigma_{0}\tau_{z}\tilde{\eta}_{\pm}\otimes\sigma_{0}\tau_{z}\tilde{\eta}_{\mp})+v_{x}(\alpha_{\pm}\tau_{0}\tilde{\eta}_{\pm}\otimes\alpha_{\mp}\tau_{0}\tilde{\eta}_{\mp}+\alpha_{\pm}\tau_{0}\tilde{\eta}_{\mp}\otimes\alpha_{\mp}\tau_{0}\tilde{\eta}_{\pm})+v_{z}(\sigma_{z}\tau_{0}\tilde{\eta}_{\pm}\otimes\sigma_{z}\tau_{0}\tilde{\eta}_{\mp})\] \[+u_{\sigma\tau}(\sigma_{0}\tau_{z}\eta_{z}\otimes\sigma_{0}\tau_{z}\eta_{z})+u_{x}(\alpha_{\pm}\tau_{0}\eta_{z}\otimes\alpha_{\mp}\tau_{0}\eta_{z})+u_{z}(\sigma_{z}\tau_{0}\eta_{z}\otimes\sigma_{z}\tau_{0}\eta_{z}). \tag{4}\] Based on the above arguments, our expectation is that \(g_{o}\) and \(v_{z}\) are likely the largest couplings near \(\nu=0\), but after each Dirac revival the bare values of these couplings likely change. Our theory for TBG near integer filling comprises the single-particle Hamiltonian Eq. (1) along with the four-fermion interactions of Eq. (4), \(\mathcal{H}=\mathcal{H}_{0}+V\).
We argue this describes the normal state at each integer filling out of which the insulating and superconducting phases develop. Prior studies of Dirac theories [95; 96; 97; 98] have not explored the combination of quadratic band touching, \(\eta\)-dependent scattering, and filling factor-dependent degeneracy \(N_{f}=2(4-\left\lfloor\nu\right\rfloor)\), which we now elucidate.

## III Renormalisation flow equations

The field theory we derived has a number of interesting properties. The combination of the band-touching Dirac states and the momentum scaling of the energy \(\propto k^{2}\) in two dimensions results in a logarithmic enhancement of interaction effects [123-127], analogous to how a linear dispersion in one dimension results in strongly interacting physics in the theory of Luttinger liquids [128]. Corrections to the interaction constants are proportional to the so-called particle-particle and particle-hole susceptibilities, \[\chi_{pp}(T)=\sum_{n}\!\int\!\!\mathcal{G}(i\omega_{n},\mathbf{k})\mathcal{G}(-i\omega_{n},-\mathbf{k})\,d^{2}k \tag{5}\] \[\chi_{ph}(T)=\sum_{n}\!\int\!\mathcal{G}(i\omega_{n},\mathbf{k})\mathcal{G}(i\omega_{n},\mathbf{k})\,d^{2}k \tag{6}\] where \(\mathcal{G}(i\omega_{n},\mathbf{k})\) is the Matsubara Green's function, \[\mathcal{G}(i\omega_{n},\mathbf{k})=\frac{1}{i\omega_{n}+\mu-i\beta\eta_{z}\left(k_{+}^{2}\alpha_{+}-k_{-}^{2}\alpha_{-}\right)}, \tag{7}\] and \(\omega_{n}=(2n+1)\pi T\) are fermionic Matsubara frequencies. When the chemical potential is placed near the band-touching point, i.e. \(\mu=0\), one finds that the scaling of the numerator \(\sim d^{2}k\), and denominator \(\sim\beta k^{2}\), results in \(\chi_{pp}(T),\chi_{ph}(T)\rightarrow\log(\Lambda/T)/(4\pi\beta)\) as \(T\to 0\), where \(\Lambda\) is the UV cutoff, i.e. the corrections to the couplings diverge logarithmically. Doping away from the band-touching point via \(\mu\neq 0\) weakens the divergence in \(\chi_{ph}\) by removing the degeneracy of particle and hole excitations, while \(\chi_{pp}\) remains logarithmically divergent. By comparison, a linear dispersion would result in \(\chi_{ph}\sim T\) as \(T\to 0\), i.e. the associated corrections to the couplings would scale towards zero. In experiment, a small but finite velocity is observed; the effects of a finite velocity can be roughly incorporated as an IR cutoff on the RG flow \(\Lambda_{v}\sim v^{2}/\beta\) - as the temperature is lowered from \(\Lambda\) to \(\Lambda_{v}\), the quadratic dispersion results in a logarithmic enhancement of interactions, and for temperatures lower than \(\Lambda_{v}\) the enhancement ceases. Hence, there exists a window of temperatures in which the RG flow is controlled by the quadratic dispersion, c.f. Fig. 1. To track the evolution of the effective couplings with temperature, we use the functional renormalisation group (fRG) method [129-135]; we derive the method from a path integral treatment in the Supplementary Material. The couplings become functions of the dimensionless RG time \(t=\Lambda/T\), where the values at \(t=1\) are the unrenormalised values, and \(t\rightarrow\infty\) describes the low temperature behaviour of the theory. We find the RG equations can be written in the simple form reflected in Fig.
3; we obtain the analytic expression, \[\frac{d}{dt}V_{\alpha\beta}\,\Omega_{\alpha}\otimes\Omega_{\beta}=V_{\mu\nu}V_{\rho\lambda}(\dot{\Xi}^{pp}_{\mu\nu;\rho\lambda}+\dot{\Xi}^{ph}_{\mu\nu;\rho\lambda}+\dot{\Xi}^{rpa}_{\mu\nu;\rho\lambda}+\dot{\Xi}^{vert}_{\mu\nu;\rho\lambda}) \tag{8}\] where Einstein summation is implied for repeated indices, and the matrix-valued RG kernels \(\Xi_{\mu\nu;\rho\lambda}\) correspond to the Feynman diagrams in Fig. 3. The RG procedure is to take the bare interactions and evolve them according to (8) until they grow large, resulting in a diverging susceptibility for some order parameters and a concomitant phase transition to an ordered state (see Sec. IV). At weak coupling, this occurs as \(t\rightarrow\infty\), and in this limit the fRG equations reduce to the well-known parquet equations [136-143]. The RG flow predicts a divergence of the renormalised couplings as the flow proceeds into the deep IR, resulting in a re-emergence of strong coupling and a possible instability towards an ordered state. The diagram \(\Xi^{pp}_{\mu\nu;\rho\lambda}\) is the Cooper channel diagram \(\propto\chi_{pp}\) familiar from Fermi liquid theory - the internal lines have opposite momenta, and the diagram is proportional to the "Cooper logarithm" which drives the superconducting instability. The other diagrams \(\Xi^{ph}_{\mu\nu;\rho\lambda}\), \(\Xi^{rpa}_{\mu\nu;\rho\lambda}\), \(\Xi^{vert}_{\mu\nu;\rho\lambda}\propto\chi_{ph}\) are the so-called "particle-hole" diagrams, which diverge as a result of particle-hole degeneracy and the quadratic dispersion. As one dopes away from the band touching point, the contribution of these diagrams is weakened via a cut-off on the logarithmic divergence. We encode this effect of doping by multiplying the particle-hole diagrams by a constant \(d=d(\mu)\leq 1\) which equals \(1\) at the band touching point and grows smaller with increased doping away from the band touching point, i.e. increasing deviation from particle-hole degeneracy - a standard approximation in parquet RG [163]. Secondly, the RPA bubble diagram \(\Xi^{rpa}_{\mu\nu;\rho\lambda}\) contains a fermionic trace which produces a factor \(N_{f}\). After each Dirac revival, \(N_{f}\) reduces by \(2\), changing the renormalisation flow by weakening the RPA diagram, and altering the preferred ordered states near each integer filling.

## IV Ordering Instabilities of the Dirac Theory

The ground state becomes unstable to an ordered phase when the associated order parameter develops a diverging susceptibility. The critical temperature for the ordered phase is given by \(T_{c}=\Lambda/t_{c}\), where \(t_{c}\) is the RG time at which the susceptibility diverges. The onset of an ordered state can be described by RG flow equations for the order parameter vertices, corresponding to the diagrams in Fig. 4, and take the form \[\partial_{t}\mathcal{O}_{i}=\lambda_{i}(t)\,\mathcal{O}_{i} \tag{9}\] \[\partial_{t}\Delta_{i}=\tilde{\lambda}_{i}(t)\,\Delta_{i} \tag{10}\] where \(\lambda_{i}(t)\) and \(\tilde{\lambda}_{i}(t)\) are henceforth referred to as 'order parameter eigenvalues', and are expressions involving the renormalised couplings, as well as \(\chi_{pp}\) and \(\chi_{ph}\) (see the Supplementary Material). The susceptibilities for superconducting orders \(\Delta_{i}\in\langle\psi\sigma_{\mu}\tau_{\nu}\eta_{\rho}\psi\rangle\) are driven to diverge by the particle-particle diagram \(\propto\chi_{pp}\) in Fig.
4a, while susceptibilities for particle-hole orders \(\mathcal{O}_{i}\in\langle\psi^{\dagger}\sigma_{\mu}\tau_{\nu}\eta_{\rho}\psi\rangle\) are driven to diverge by the particle-hole diagrams \(\propto\chi_{ph}\) in Fig. 4b; the logarithmic divergences in these diagrams mean the \(\Delta_{i}\) and \(\mathcal{O}_{i}\) compete as weak coupling instabilities. The \(\chi_{ph}\) in Fig. 4b are proportional to \(d\) and get weaker away from integer filling; decreasing \(d\) suppresses the tendency towards \(\mathcal{O}_{i}\). Considering now the role of Dirac revivals: First, the RPA diagram \(\propto N_{f}\) changes after each Dirac revival, which has a non-trivial influence on the order parameters. Second, doping away from the band touchings at integer fillings decreases \(\chi_{ph}\) via the factor \(d\), suppressing the tendency towards \(\mathcal{O}_{i}\); in other words, near the band-touching point, fluctuations of the degenerate particle and hole states promote insulating states \(\mathcal{O}_{i}\), while doping away from the band-touching point weakens these fluctuations, allowing superconductivity \(\Delta_{i}\) to dominate. Lastly, after each Dirac revival, the order parameters and couplings are projected onto flavour polarised bands. Denoting the projection operator onto the remaining flavours as \(\mathcal{P}_{f}\), the order parameters transform as \(\mathcal{O}\rightarrow\mathcal{P}_{f}\mathcal{O}\mathcal{P}_{f}^{\dagger}\), \(\Delta\rightarrow\mathcal{P}_{f}\Delta\mathcal{P}_{f}^{T}\). Since the operators \(\mathcal{P}_{f}\) commute with the Hamiltonian, we can solve the RG equations in the unpolarised basis, then project the resulting order parameters onto the flavour polarised bands at a given filling. To determine the leading instabilities, we employ two approaches. Firstly, at long RG times \(t\rightarrow\infty\) the diverging couplings tend towards fixed constant ratios of each other, referred to as _fixed rays_ of the RG flow. All possible choices of initial coupling values flow to one of these possible sets of ratios in the deep infrared, which therefore represent universal properties of the model. At a fixed ray, the eigenvalues \(\lambda_{i}\) (\(\tilde{\lambda}_{i}\)) which diverge sufficiently fast (see the Supplementary Material) at a given filling produce a corresponding ordered state. However, fixed rays are only approached at long RG times, and stronger initial couplings and/or a larger IR cutoff set by the Dirac velocity may mean that the flow is terminated by an instability before fixed ray behaviour is attained. Hence, in addition to describing the full set of fixed rays in our interacting model, a second approach is to explicitly integrate the RG equations and identify the leading diverging order parameter vertices, given some initial values of the couplings. In the next section we will present the full set of fixed rays, and also analyse explicit solutions of the RG equations at specific filling factors.

Figure 3: **Flow equation.** The approximately quadratic dispersion and particle-hole symmetry result in logarithmic divergences in each diagram. Removing particle-hole degeneracy by doping above the band-touching point weakens the latter three diagrams; we encode this effect approximately through a prefactor \(d<1\). The RPA diagram \(\propto N_{f}\) decreases after each Dirac revival.
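To make the role of \(d(\mu)\) concrete, the following toy numerical evaluation (ours - a caricature using only the bubbles of Eqs. (5)-(6) for the isotropic quadratic bands \(E=\pm\beta k^{2}\), not the full nine-coupling fRG) reduces both bubbles to energy integrals over \(\varepsilon=\beta k^{2}\) with the constant 2D density of states \(\nu_{0}=1/(4\pi\beta)\). Up to an overall \(O(1)\) factor from the sublattice and flavour trace, it reproduces the degenerate \(\log(\Lambda/T)\) growth of \(\chi_{pp}\) and \(\chi_{ph}\) at \(\mu=0\), and shows \(\chi_{ph}\) saturating near \(\log(\Lambda/\mu)\) once \(T\lesssim\mu\) while \(\chi_{pp}\) keeps growing - precisely the asymmetry that the factor \(d\) encodes.

```python
# Toy numerical sketch (ours, not the paper's fRG code): particle-particle and
# interband particle-hole bubbles for E = +/- beta*k^2, reduced to energy
# integrals with the constant 2D DOS nu0 = 1/(4*pi*beta).
import numpy as np

beta, Lam = 1.0, 1.0
nu0 = 1.0 / (4 * np.pi * beta)
eps = np.logspace(-8, 0, 20001) * Lam                 # energy grid, eps = beta*k^2

def safe_tanh_ratio(x, T):
    # tanh(x/2T)/(2x), with its finite x -> 0 limit, 1/(4T)
    return np.where(np.abs(x) > 1e-12, np.tanh(x / (2 * T)) / (2 * x), 1 / (4 * T))

def fermi(x, T):
    return 1.0 / (np.exp(np.clip(x / T, -60, 60)) + 1.0)

def chi_pp(T, mu):
    # Cooper bubble around the Fermi level, xi = eps - mu: log(1/T) at any mu
    return nu0 * np.trapz(safe_tanh_ratio(eps - mu, T), eps)

def chi_ph(T, mu):
    # interband bubble [f(-eps-mu) - f(eps-mu)]/(2*eps): cut off once T <~ mu
    return nu0 * np.trapz((fermi(-eps - mu, T) - fermi(eps - mu, T)) / (2 * eps), eps)

for T in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"T={T:7.0e}  mu=0:    pp={chi_pp(T, 0):5.3f}  ph={chi_ph(T, 0):5.3f}   "
          f"mu=0.05: pp={chi_pp(T, 0.05):5.3f}  ph={chi_ph(T, 0.05):5.3f}")
```

In the RG language, the saturation of \(\chi_{ph}\) is what terminates the flow of the particle-hole channels at finite relative doping, leaving the Cooper logarithm to select superconductivity away from integer filling.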
## V Properties of the ordered states

### Order parameters

Table 1 contains the full set of order parameter structures which appear as a fixed ray in a non-zero subrange of filling \(0\leq\nu<4\). The order parameters are classified by the irreducible representations (irreps) of the spinless point group \(D_{6}\), which is strictly only applicable in the unpolarised case of \(0\leq\nu<1\), but straightforwardly modified at other filling ranges. The full set of parent ordered states includes spin singlet and triplet T-IVC and K-IVC insulating states consisting of a gap which hybridises the two valleys - phases which have been discussed in many prior works on TBG and multi-layer extensions [68; 69; 70; 71; 72; 73]. The singlet K-IVC state breaks TRS - consisting of a pattern of magnetisation currents which triple the graphene unit cell - but preserves a modified 'Kramers'-like TRS, consisting of TRS combined with a \(U(1)\) \(\tau\)-rotation. By contrast, the T-IVC state consists of a spatial modulation of charge which triples the graphene unit cell, but preserves TRS [148; 149]. Triplet, or 'spin', order parameters also appear (S-K-IVC and S-T-IVC) with opposite behaviour under TRS. In addition to the IVC states, RG-driven instabilities exist for moire charge density waves (MDW\({}_{-}\)), as well as polarised states (S-/MSLP\({}_{\pm}\) and SLP\({}_{\pm}\)), which consist of Chern insulating, quantum spin Hall, and topologically trivial gaps. In experiment, multiple nearly degenerate Chern insulating states are seen near each filling factor, with the topologically non-trivial states typically stabilised by a small applied magnetic field [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. The SLP\({}_{-}\) state exhibits Chern numbers \(C=4-\left\lfloor\nu\right\rfloor\), a sequence observed in experiment. A combination of IVC and moire polarised order can account for the full set of observed Chern numbers; a more careful investigation of the signatures of these gaps is left for future studies. Lastly, we also find nematic states of the form \(\propto\eta_{0,z}\tau_{z}\mathbf{v}\cdot\mathbf{\alpha}\), where \(\mathbf{\alpha}=(\sigma_{x},\tau_{z}\sigma_{y})\), referred to as "graphene nematicity" in [144; 145]. These states do not open up a gap but instead split the quadratic band-touching into four Dirac points separated by the 'nematic director' \(\mathbf{v}\), spontaneously breaking threefold rotational symmetry.

The last column of Table 1 associates each order parameter with a fixed ray eigenvalue; Table 2 provides the filling regions in which each eigenvalue, and therefore its associated order parameter(s), appears as a leading instability. One sees from Table 1 that several distinct \(D_{6}\) irreps have the same fixed ray eigenvalues, arising due to the additional symmetries of our chosen model. First, the SU(2)\({}_{+}\times\)SU(2)\({}_{-}\) spin rotation symmetry results in degeneracies between 'spin' and 'charge' order, as has been discussed many times before [54; 68; 69; 70; 71; 72; 73; 74]. For instance, the T-IVC and S-T-IVC states are degenerate as they can be related by a spin-rotation in only one of the two valleys. In experiment, a small but finite inter-valley Hund's coupling [17] will split the degeneracy between these "Hund's partners". Secondly, as discussed in Sec. II, our interacting model possesses a U(1) symmetry generated by \(\mathcal{C}\), i.e. \(\mathcal{U}_{\mathcal{C}}(\psi)=\exp(i\psi\mathcal{C}/2)\).
For example, the T-IVC state and MSLP\({}_{-}\) are related by \(\mathcal{O}_{\text{T-IVC}}=\mathcal{U}_{\mathcal{C}}(\pi/2)\mathcal{O}_{\text{MSLP}_{-}}\mathcal{U}_{\mathcal{C}}^{\dagger}(\pi/2)\), and hence degenerate. This degeneracy is also lifted in a physical setting by finite subleading corrections which break particle-hole symmetry. Flavour polarisation is compactly treated in Table 1 by use of the projection operators. However, this compact notation obscures certain subtleties - for instance, since the projection operator \(\mathcal{P}_{f}\) can break \(C_{2z}\) or TRS by imbalancing the two valleys, it is possible for the ordered states for \(\nu\geq 1\), e.g. \(\mathcal{P}_{f}\mathcal{O}_{i}\mathcal{P}_{f}^{\dagger}\), to break time reversal or inversion symmetry even when \(\mathcal{O}_{i}\) does not.

### \(\nu=0\)

We begin by discussing the unpolarised case corresponding to the charge neutrality point \(\nu=0\) (CNP). Comparing with Table 2, neither T- nor K-IVC appear as fixed rays, i.e. weak coupling instabilities. Rather, nematic order, moire density wave, and sublattice polarised order are the leading particle-hole orders. In Fig. 5 we illustrate a characteristic example of an RG flow plot, demonstrating that for \(g_{o}=v_{z}=0.25\) the leading instability is moire polarised nematic (MPN) order. However, at early RG times, moire density wave (MDW), T- and K-IVC compete closely, so one may imagine that in the strong coupling regime - where an instability is reached at shorter RG times - these may be candidate ground states as well. The presence of nematic order as a candidate state explains the observation of nematic order near \(\nu=0\) [9; 10], and the twofold reduction in the Landau fan degeneracy [92]. Additionally, in recent STM studies, it was found that strained devices exhibit a gapless CNP, while very low strain devices feature a gap at the CNP [21]. This is quite natural in our description: a gap may be produced by a leading tendency towards K-IVC or MDW, while strain - which couples to the nematic susceptibility - should promote nematic order, leaving the CNP gapless. Interestingly, the only superconducting states which appear as fixed rays are exotic - the finite-\(Q\) pair density wave states \(E/E_{1}/E_{2}\) in Table 1 [164]. Since pair density wave order is more susceptible to disorder, our prediction of this type of superconductor near \(\nu=0\) is consistent with the fact that superconductivity is less commonly seen near this filling compared with the vicinity of \(\nu=2\).

Figure 4: **Gap equations.** Diagrams contributing to the superconducting and particle-hole order parameters. The particle-hole diagrams weaken upon doping away from integer filling, i.e. decreasing \(d\), while the bubble diagram \(\propto N_{f}=2(4-\left\lfloor\nu\right\rfloor)\).

### \(\nu=2\)

At \(\nu=2\), a flavour polarisation in the parent state which does not break time-reversal symmetry is possible - namely, anti-alignment of the spins in opposite valleys, \(s\tau=++,--\), as illustrated in Fig. 6. This scenario is supported by (1) the observation of antiferromagnetic intervalley Hund's coupling in electron spin resonance [17], and (2) the lack of hysteresis seen in unaligned TBG at \(\nu=2\) [165].
\begin{table} \begin{tabular}{c l l l l l l l} \hline \hline & Label & Name & Abbreviation & Order Parameter & IR of \(D_{6}\) & IR of \(D_{3}\) & \(\lambda_{j}\) \\ \hline & \(O_{7}\) & \(\Theta\)-odd intervalley coherent & K-IVC & \(\sigma_{x}\eta_{y}(\tau_{x},\tau_{y})\) & \(A_{2},B_{2}\) & \(A_{2}\) & \(\lambda_{7}\) \\ & \(O_{7s}\) & \(\Theta\)-even spin-polarised intervalley coherent & S-K-IVC & \(\sigma_{x}\eta_{y}(\tau_{x},\tau_{y})\mathbf{s}\) & \(A_{2},B_{2}\) & \(A_{2}\) & \(\lambda_{7}\) \\ & \(O_{8}\) & \(\Theta\)-even intervalley coherent & T-IVC & \(\sigma_{x}\eta_{x}(\tau_{x},\tau_{y})\) & \(A_{1},B_{1}\) & \(A_{1}\) & \(\lambda_{8}\) \\ & \(O_{8s}\) & \(\Theta\)-odd spin-polarised intervalley coherent & S-T-IVC & \(\sigma_{x}\eta_{x}(\tau_{x},\tau_{y})\mathbf{s}\) & \(A_{1},B_{1}\) & \(A_{1}\) & \(\lambda_{8}\) \\ \hline & \(O_{11}\) & \(\Theta\)-even/odd moiré-valley, sublattice polarised & MSLP\({}_{\pm}\) & \(\sigma_{z}\eta_{z}(\tau_{0},\tau_{z})\) & \(B_{1},A_{1}\) & \(A_{1}\) & \(\lambda_{11},\lambda_{8}\) \\ & \(O_{11s}\) & \(\Theta\)-odd/even spin, moiré-valley, sublattice polarised & S-MSLP\({}_{\mp}\) & \(\sigma_{z}\eta_{z}(\tau_{0},\tau_{z})\mathbf{s}\) & \(B_{1},A_{1}\) & \(A_{1}\) & \(\lambda_{8}\) \\ & \(O_{12}\) & \(\Theta\)-even/odd sublattice polarised & SLP\({}_{\pm}\) & \(\sigma_{z}(\tau_{0},\tau_{z})\) & \(B_{2},A_{2}\) & \(A_{2}\) & \(\lambda_{7},\lambda_{12}\) \\ & \(O_{12s}\) & \(\Theta\)-odd/even spin, sublattice polarised & S-SLP\({}_{\mp}\) & \(\sigma_{z}(\tau_{0},\tau_{z})\mathbf{s}\) & \(B_{2},A_{2}\) & \(A_{2}\) & \(\lambda_{7}\) \\ \hline & \(O_{9}\) & \(\Theta\)-odd moiré density wave & MDW\({}_{-}\) & \((\tau_{z}\eta_{x},\eta_{y})\) & \(-\) & \(E\) & \(\lambda_{9}\) \\ & \(O_{1}\) & \(\Theta\)-odd graphene nematic & N\({}_{-}\) & \((\tau_{z}\sigma_{x},\sigma_{y})\) & \(E_{1}\) & \(E\) & \(\lambda_{1}\) \\ & \(O_{6}\) & \(\Theta\)-even moiré-polarised graphene nematic & MPN\({}_{+}\) & \(\eta_{z}(\sigma_{x},\sigma_{y}\tau_{z})\) & \(E_{2}\) & \(E\) & \(\lambda_{6}\) \\ \hline & \(\Delta_{5\tau\tau}\) & \(A_{2}\) intervalley spin-singlet & \(A_{2}\)-SSC & \(\eta_{z}\tau_{z}is_{y}\) & \(A_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ & \(\Delta_{5\tau\tau}^{\perp}\) & \(B_{2}\) intervalley spin-triplet & \(B_{2}\)-TSC & \(\eta_{z}\tau_{y}\mathbf{s}is_{y}\) & \(B_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ & \(\Delta_{6\tau\tau}\) & \(A_{1}\) intervalley spin-singlet & \(A_{1}\)-SSC & \(\tau_{z}is_{y}\) & \(A_{1}\) & \(A_{1}\) & \(\tilde{\lambda}_{6}\) \\ & \(\Delta_{6\tau\tau}^{\perp}\) & \(B_{1}\) intervalley spin-triplet & \(B_{1}\)-TSC & \(\tau_{y}\mathbf{s}is_{y}\) & \(B_{1}\) & \(A_{1}\) & \(\tilde{\lambda}_{6}\) \\ \hline & \(\Delta_{4\tau\tau}\) & \(E\) inter-moiré-valley spin-singlet & \(E\)-Q\({}_{M}\)-SSC & \(\sigma_{z}(\eta_{x},\eta_{y}\tau_{z})\tau_{x}is_{y}\) & \(-\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ & \(\Delta_{4\tau\tau}^{\perp}\) & \(E\) inter-moiré-valley spin-triplet & \(E\)-Q\({}_{M}\)-TSC & \(\sigma_{z}(\eta_{x}\tau_{z},\eta_{y}\tau_{z})\tau_{x}is_{y}\) & \(-\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ & \(\Delta_{4\tau\tau}^{\perp}\) & \(E\) inter-moiré-valley spin-triplet & \(E\)-Q\({}_{M}\)-TSC & \(\sigma_{z}(\eta_{x}\tau_{z},\eta_{y})\tau_{x}is_{y}\) & \(-\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ & \(\Delta_{4\tau\tau}^{\perp}\) & \(E_{2}\) intravalley spin-singlet & \(E_{2}\)-Q-SSC & \(\sigma_{x}(\eta_{0},\eta_{z})is_{y}\) & \(E_{2}\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ & \(\Delta_{4\tau\tau}^{\perp}\) & \(E_{1}\) intravalley spin-singlet & \(E_{1}\)-Q-SSC & \(\sigma_{x}(\eta_{0},\eta_{z})\tau_{z}is_{y}\) & \(E_{1}\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ & \(\Delta_{5\tau\tau}^{\perp}\) & \(B_{2}\) intravalley spin-triplet & \(B_{2}\)-Q-TSC & \(\sigma_{y}\eta_{x}\mathbf{s}is_{y}\) & \(B_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ & \(\Delta_{5\tau\tau}^{\perp}\) & \(A_{2}\) intravalley spin-triplet & \(A_{2}\)-Q-TSC & \(\sigma_{y}\eta_{x}\tau_{z}\mathbf{s}is_{y}\) & \(A_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ & \(\Delta_{6\tau\tau}^{\perp}\) & \(B_{1}\) intravalley spin-triplet & \(B_{1}\)-Q-TSC & \(\sigma_{y}\eta_{y}\mathbf{s}is_{y}\) & \(B_{1}\) & \(A_{1}\) & \(\tilde{\lambda}_{6}\) \\ & \(\Delta_{6\tau\tau}\) & \(A_{1}\) intravalley spin-singlet & \(A_{1}\)-Q-SSC & \(\sigma_{y}\eta_{y}\tau_{z}is_{y}\) & \(A_{1}\) & \(A_{1}\) & \(\tilde{\lambda}_{6}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Parent order parameters.** Names and transformation properties of the order parameters which appear as fixed ray solutions of the RG equations. After each Dirac revival, the parent order parameters are multiplied by the projector onto the flavour-polarised subspace.

Assuming this spin-valley locked polarisation, the projection operator for \(2\leq\nu<3\) reads \[\mathcal{P}_{f}^{\nu=2}=\tfrac{1}{2}(\tau_{0}s_{0}+\tau_{z}s_{z}). \tag{11}\] We show the projected fixed-ray order parameters for \(2\leq\nu<3\) in Table 3, using a convenient choice of notation in which we define an 'isovalley' quantum number \(|\gamma=\pm\rangle\equiv|\tau=s=\pm\rangle\). The order parameters can no longer be categorised as 'spin' and 'charge', as spin triplet and singlet mix after projection - which we may interpret as a consequence of broken \(C_{2z}\). However, the system retains a spinful \(D_{6}^{*}\); we classify the fixed ray orders by their \(D_{6}^{*}\) irreps in Table 3.

The fixed ray analysis is 'unbiased' - it makes no assumption on the bare couplings. To complement this, we calculate an explicit RG flow which allows us to establish which order parameters are dominant given a physically motivated set of bare interaction couplings. Hence, in addition to the fixed rays of Table 3, we show an RG plot in Fig. 7 for \(\nu=2\), i.e. we set \(N_{f}=4\). We demonstrate the emergence of T-IVC order for purely repulsive interactions: taking \(g_{o}=u_{z}=0.2\), \(v_{z}=0.05\), \(v_{x\tau}=0.12\), i.e. positive couplings, but with \(v_{x\tau}>v_{z}\). We find that the competition between K-IVC and T-IVC is determined by the relative magnitude of \(v_{x\tau}\) and \(v_{z}\): when these couplings are approximately equal, the IVC orders are nearly degenerate, while increasing the bare value of \(v_{x\tau}\) (\(v_{z}\)) tends to promote T-IVC (K-IVC). As mentioned earlier, the polarised state MSLP\({}_{-}\) is degenerate with T-IVC, as a result of \(\mathcal{C}\)-symmetry. Sub-leading to T-IVC and MSLP\({}_{-}\) order are \(E\) superconductivity, along with K-IVC, MPN\({}_{+}\) and \(A_{2}\) superconductivity; these conclusions are true for a range of coupling values varied around the choice shown in Fig. 7.
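As a concrete illustration of the statement above that spin singlet and triplet mix after projection, the following sketch checks the action of \(\mathcal{P}_{f}^{\nu=2}\) on intervalley vertices. It is our own illustrative construction (the basis ordering is an assumption, and the sublattice and moiré-valley factors are suppressed since the projector acts only on valley and spin): a purely 'charge' intervalley vertex \(\tau_{x}\) is annihilated by the projector, while the mixed spin-valley combination \((\tau_{x}s_{x}-\tau_{y}s_{y})/2\) survives and acts as the isovalley operator \(\gamma_{x}\) on the locked pair.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Basis |tau, s> ordered (++, +-, -+, --); first factor valley, second spin.
P = 0.5 * (np.kron(s0, s0) + np.kron(sz, sz))   # Eq. (11): projector onto tau = s

charge_ivc = np.kron(sx, s0)                          # spinless intervalley vertex tau_x
mixed_ivc = 0.5 * (np.kron(sx, sx) - np.kron(sy, sy)) # tau and spin flipped together

print(np.allclose(P @ charge_ivc @ P, 0))    # True: tau_x alone is projected out
print(np.round((P @ mixed_ivc @ P).real, 3)) # nonzero only between |++> and |-->
```

In this toy check the projected 'charge' and 'spin' vertices are no longer independent: only the combinations that flip valley and spin together survive, which is why the \(\nu=2\) orders are labelled by the isovalley operators \(\gamma_{x,y,z}\) in Table 3.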
Previous studies based on Hartree-Fock treatments and perturbation theory around the flat-band limit in twisted bilayer [68; 69] and trilayer graphene [70; 71; 73] have led to the conventional wisdom that Coulomb interactions should favour K-IVC over T-IVC order - with the T-IVC state estimated to be significantly higher in energy (\(\sim 5\) meV) than K-IVC order [68; 70]. It has been proposed that phonon-mediated attractive interactions combined with spin-valley polarisation can promote T-IVC [66], though T- and K-IVC remain close contenders. The RG flow enhances T-IVC, demonstrating that repulsive electronic interactions, without phonons, can allow T-IVC to dominate K-IVC order. We reiterate that recent STM studies of the spatial texture of the insulating state near \(\nu=2\) see T-IVC order in low-strain devices [21; 22] - our results provide a natural resolution of this puzzle.

Upon doping away from the band touching point, so that \(\nu\gtrsim 2\), the leading superconducting instabilities are the \(E\) states, with \(A_{2}\) appearing as a close competitor. While \(E/E_{1,2}\) are the leading instabilities, as already noted they are finite momentum states and so may be more susceptible to disorder than \(A_{2}\). The presence of \(A_{2}\) superconductivity can resolve a second experimental puzzle, related to tunnelling conductance measurements [18; 19; 20] of the superconducting state near \(\nu=2\) which show a 'U-shaped' density of states attributed to a full superconducting gap, that can become 'V-shaped' upon doping - typically attributed to gap nodes (aside from thermal fluctuations as a possible origin [147]). Since the \(A_{2}\) state is odd under \(C_{2x}\), the gap function has opposite signs at the two moire valleys and vanishes at the \(C_{2x}\)-invariant \(\Gamma M\) line in the Brillouin zone.

\begin{table} \begin{tabular}{l l} Filling region & Fixed ray eigenvalues \\ \hline \(0\leq\nu<1\) & \(\lambda_{1},\lambda_{6},\lambda_{9},\lambda_{11},\lambda_{12};\ \tilde{\lambda}_{4}\) \\ \(1\leq\nu<2\) & \(\lambda_{6},\lambda_{7},\lambda_{9},\lambda_{11},\lambda_{12};\ \tilde{\lambda}_{4},\tilde{\lambda}_{5},\tilde{\lambda}_{6}\) \\ \(2\leq\nu<3\) & \(\lambda_{7},\lambda_{8},\lambda_{9},\lambda_{11},\lambda_{12};\ \tilde{\lambda}_{4},\tilde{\lambda}_{5},\tilde{\lambda}_{6}\) \\ \(3\leq\nu<4\) & \(\lambda_{7},\lambda_{8},\lambda_{9},\lambda_{11},\lambda_{12};\ \tilde{\lambda}_{5}\) \\ \end{tabular} \end{table} Table 2: **Fixed ray eigenvalues.** List of order parameter eigenvalues which dominate on a fixed ray, at and between each integer filling. The associated order parameters are presented in Table 1, and the dependence on doping relative to integer filling is presented in the Supplementary Material.

Figure 5: **RG flow near \(\nu=0\).** RG flow of the quantities \(\log(\chi(t)/\chi(0))\), where \(\chi(t)\) is an order parameter susceptibility, for the 18 distinct order parameter structures with initial conditions \(g_{o}=v_{z}=0.25\), and \(N_{f}=8\), \(\mu=0\) as appropriate for the vicinity of \(\nu=0\). Moiré-polarised nematic (MPN) order is the leading instability, with moiré density wave and K-IVC closely contending at short RG times.

Figure 6: **Flavour polarisation at \(\nu=2\).** Experimental evidence suggests the remaining flavours after the Dirac revival at \(\nu=2\) consist of opposite spins at opposite valleys.
While our theory becomes increasingly unjustified as doping is increased away from the Dirac point, if we assume that the superconducting order does not undergo a phase transition to a different irrep then the gap closes as the nodal lines of the \(A_{2}\) state approach the Fermi level, which may explain the observed tunnelling conductance. Furthermore, we expect that the symmetry-imposed sign change of the \(A_{2}\) intervalley state can also lead to subgap peaks as seen in tunnelling experiments in the strong-coupling limit [146; 19]. However, we leave a quantitative analysis of this aspect to future work.

### \(\nu=1,3\)

We will now describe the differences between the leading order parameters observed at \(\nu=1\) and \(3\). The simplest case is the region \(3\leq\nu<4\), in which the Dirac revival has necessarily polarised all but one flavour, i.e. only a single species of spin and a single species of valley remain. Letting the remaining flavour be \(\{s\tau\}=++\), the associated projection operator is \(\mathcal{P}_{f}^{\nu=3}=(s_{0}+s_{z})(\tau_{0}+\tau_{z})/4\). After projection there are only two insulating states which appear, \(\sigma_{z}\mathcal{P}_{f}^{\nu=3}\) and \(\sigma_{z}\eta_{z}\mathcal{P}_{f}^{\nu=3}\), which have Chern numbers \(\pm 1\) and \(0\) respectively - both of which have been seen in experiment near \(\nu=3\) [23]. Any possible superconducting state is necessarily intra-flavour; the possible fixed-ray superconducting states significantly reduce to \(\Delta_{5,\tau\tau}=(\sigma_{y}\eta_{x})\mathcal{P}_{f}^{\nu=3}\), i.e. an \(A_{2}\) \(Q\neq 0\) superconductor. To the best of our knowledge, superconductivity has been observed near \(\nu=0,1,2\) but not \(\nu=3\) - consistent with the theoretical fragility of this state, though we speculate it may be observable in low-disorder devices [166]. Near \(\nu=1\), the projection operator may, without loss of generality, be written as \(\mathcal{P}_{f}^{\nu=1}=s_{0}\tau_{0}-(s_{0}-s_{z})(\tau_{0}+\tau_{z})/4\), i.e. we project out the flavour \(\{s\tau\}=+-\). The formation of an IVC state at \(\nu\gtrsim 1\) leaves one flavour ungapped, which means a fully gapped state near either \(\nu=1,3\) requires an order parameter \(\propto\sigma_{z}\). We speculate that the states \(\propto\sigma_{z}\) are more fragile to disorder, as experimental studies have seen a gap at \(\nu=1,3\) mainly in low-disorder devices; further, many studies have found these insulating states do not appear in transport, but do appear in local compressibility measurements - suggesting the formation of local regions in which the insulating state forms, but which are shorted due to disorder-induced conductive channels [23].

\begin{table} \begin{tabular}{l l l l l l} Name & Abbreviation & Order Parameter & IR of \(D_{6}^{*}\) & IR of \(D_{3}^{*}\) & \(\lambda_{j}\) \\ \hline \(\Theta\)-odd intervalley coherent & K-IVC & \(\sigma_{x}\eta_{y}(\gamma_{x},\gamma_{y})\) & \(B_{2},A_{2}\) & \(A_{2}\) & \(\lambda_{7}\) \\ \(\Theta\)-even intervalley coherent & T-IVC & \(\sigma_{x}\eta_{x}(\gamma_{x},\gamma_{y})\) & \(B_{1},A_{1}\) & \(A_{1}\) & \(\lambda_{8}\) \\ \hline \(\Theta\)-odd moiré-valley, sublattice polarised & MSLP\({}_{-}\) & \(\sigma_{z}\eta_{z}\gamma_{z}\) & \(B_{1}\) & \(A_{1}\) & \(\lambda_{8}\) \\ \(\Theta\)-even moiré-valley, sublattice polarised & MSLP\({}_{+}\) & \(\sigma_{z}\eta_{z}\) & \(A_{1}\) & \(A_{1}\) & \(\lambda_{11}\) \\ \(\Theta\)-even sublattice polarised & SLP\({}_{+}\) & \(\sigma_{z}\) & \(A_{2}\) & \(A_{2}\) & \(\lambda_{7}\) \\ \(\Theta\)-odd sublattice polarised & SLP\({}_{-}\) & \(\sigma_{z}\gamma_{z}\) & \(B_{2}\) & \(A_{2}\) & \(\lambda_{12}\) \\ \hline \(\Theta\)-odd moiré density wave & MDW\({}_{-}\) & \((\gamma_{z}\eta_{x},\eta_{y})\) & \(-\) & \(E\) & \(\lambda_{9}\) \\ \hline \(A_{2}\) intervalley & \(A_{2}\)-SC & \(\eta_{z}\gamma_{y}\) & \(A_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ \(A_{1}\) intervalley & \(A_{1}\)-SC & \(\gamma_{y}\) & \(A_{1}\) & \(A_{1}\) & \(\tilde{\lambda}_{6}\) \\ \hline \(E\) intervalley & \(E\)-Q\({}_{M}\)-SC & \(\sigma_{z}(\eta_{x},\eta_{y}\gamma_{z})\gamma_{y}\) & \(-\) & \(E\) & \(\tilde{\lambda}_{4}\) \\ \(A_{2}\) intravalley & \(A_{2}\)-Q-SC & \(\sigma_{y}\eta_{x}\) & \(A_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ \(B_{2}\) intravalley & \(B_{2}\)-Q-SC & \(\sigma_{y}\eta_{x}\gamma_{z}\) & \(B_{2}\) & \(A_{2}\) & \(\tilde{\lambda}_{5}\) \\ \end{tabular} \end{table} Table 3: **Dominant flavour-polarised order parameters at \(\nu=2\).** We assume the time-reversal-symmetric flavour polarisation \(\{\tau s\}=\{++,--\}\), with corresponding projection operator \(\mathcal{P}=(\tau_{0}s_{0}+\tau_{z}s_{z})/2\). We define an _isovalley_ quantum number \(\gamma=\tau=s\) as described in the main text; working in this basis automatically implements the flavour-projection. Due to projection, for \(\nu=2\) the point group and TRS have a different representation than at \(\nu=0\).

Figure 7: **RG flow near \(\nu=2\).** RG flow of the quantities \(\log(\chi(t)/\chi(0))\), where \(\chi(t)\) is an order parameter susceptibility, for the \(18\) distinct order parameter structures with initial conditions \(g_{o}=u_{z}=0.2\), \(v_{z}=0.05\), \(v_{x\tau}=0.12\), and \(N_{f}=4\), \(\mu=0\) as appropriate for the vicinity of \(\nu=2\). T-IVC order dominates with a subleading \(E_{2}\) superconducting instability; moiré-polarised nematic order and \(A_{2}\) superconductivity appear as subleading competitors.

## VI Wess-Zumino-Witten terms

Having described the ordering instabilities of the Dirac theory, we now consider the interplay of the insulating and superconducting states. A direct second-order transition between certain insulating and superconducting states is possible when the Landau-Ginzburg free energy possesses a so-called Wess-Zumino-Witten (WZW) term - a scenario which has been recently discussed in several studies on graphene-based systems [150; 151; 152; 153; 154; 155]. The presence of this term results in the skyrmion defects of a three-component particle-hole order parameter \(\mathbf{m}\) carrying charge \(\mathcal{N}e\) with \(\mathcal{N}\in\mathbb{N}\), so that the proliferation of these charged skyrmions - and the associated destruction of the particle-hole order - leads to superconductivity [150]. Conversely, superconducting vortices carry the quantum numbers of the associated particle-hole order. We emphasise that our mechanism for superconductivity is a Fermi liquid instability via the RG-enhanced Coulomb interaction, arising from the quadratic Dirac dispersion - our analysis of skyrmion defects in the insulating phases will serve to demonstrate that continuous transitions between our superconducting and insulating phases are possible.
The WZW term can be written explicitly by defining \(\mathbf{n}=(m_{1},m_{2},m_{3},\text{Re}\Delta,\text{Im}\Delta)\), and adding an auxiliary dimension \(u\) to the spacetime dimensions \((\tau,x,y)\): \[\mathcal{S}_{\text{WZW}}=i\frac{2\pi\mathcal{N}}{\Omega_{4}}\int_{0}^{1}du\int d\tau\,dx\,dy\sum_{abcde=1}^{5}\epsilon_{abcde}\,n_{a}\partial_{u}n_{b}\partial_{\tau}n_{c}\partial_{x}n_{d}\partial_{y}n_{e}\,, \tag{12}\] where \(\Omega_{4}=8\pi^{2}/3\). The general criterion for the emergence of a WZW term in TBG as well as all possible compatible choices of \(\mathbf{m}\) and zero-momentum superconducting order parameters have been worked out in Ref. [153], which we will next apply to our results (we present further details in the Supplementary Material).

We first note that there is no single set of particle-hole and superconducting orders among the fixed rays for the region \(0\leq\nu<1\) around CNP consistent with a WZW term. This results from the fact that there is only one possible fixed-ray eigenvalue (\(\tilde{\lambda}_{4}\)) associated with superconductivity in this filling range, see Table 2, and the associated superconducting order parameters in Table 1 are all inconsistent with a WZW term. This observation might provide another reason for why superconductivity is less commonly observed for \(|\nu|<1\) in experiment. Moving on to \(\nu=2\), our RG-dominant \(E\) intervalley superconductor does not allow for a skyrmion-mediated transition; however, the closely competing and less fragile \(A_{2}\) state does. Most importantly, a WZW term between the \(A_{2}\) intervalley superconductor, the two components of T-IVC order, and \(\text{SLP}_{+}\) is possible. This is the most plausible scenario for a skyrmion-mediated critical point within our analysis, though we note that the \(\text{SLP}_{+}\) is not close in energy to the T-IVC state within the RG without fine-tuning of parameters. Apart from this, the only other WZW term we find is the one between the \(A_{1}\) intervalley superconductor, K-IVC, and \(\text{SLP}_{+}\) order - the scenario of Ref. [152]. Our conclusions are that superconducting vortices in our primary candidate superconductor, the \(A_{2}\) intervalley state, can carry quanta of the T-IVC [150]; they could therefore exhibit a Kekule pattern similar to that seen in STM analysis of the superconducting state near \(\nu=2\) [21]. The analysis further shows that the transition from T-IVC to \(A_{2}\) superconductivity may be second order.

## VII Discussion

We have argued that TBG near integer fillings is described by a Dirac theory as a result of the observed revivals, with the assumption that the Fermi velocity is small enough that the fermions have a quadratic dispersion in some range of momenta near the Dirac points. The result is a non-trivial renormalisation group flow across an associated window of temperatures: the particle-hole fluctuations near the band touching points result in nematic and insulating states, while doping away from the band-touching allows superconductivity to dominate. Our theory is able to simultaneously describe both the insulating and superconducting states, a major advantage over alternative methods such as Hartree-Fock [61; 62; 63; 64; 65; 66; 67; 68; 69; 70].
We motivated our assumption of a quadratic energy regime by appealing to an approximate particle-hole symmetry TBG possesses; however, a direct verification of the physical values of \(v\) and \(\beta\) is experimentally feasible via measurements of the electronic compressibility [157]. The theory can explain a great deal of the observed phenomena in TBG. Firstly, the theory provides a unified explanation of the phase diagram of TBG throughout the entire region \(-4<\nu<4\), which consists of interlaced insulating/nematic and superconducting states. The theory explains why insulating/nematic states appear near each integer filling, and why superconducting states have been observed in the absence of insulating states - the phases have a common origin, rather than a 'parent-child' relationship. Furthermore, the theory naturally accounts for a gapped CNP in low-strain devices and a gapless nematic CNP in the presence of strain, consistent with experiment, and can account for the sequence of Chern numbers associated to the insulating states near integer filling.

Secondly, recent STM studies of the insulating states near \(\nu=2\) have found evidence of T-IVC order in low-strain devices [21; 22]. Prior mean-field treatments and strong-coupling calculations with Coulomb interactions have favoured the K-IVC state over T-IVC (see, e.g., [68; 69; 70; 71; 72; 73]), but here we find that RG provides a mechanism for the appearance of T-IVC order, relying on repulsive interactions rather than resorting to phonons. The T-IVC state exhibits a spatial pattern known as a Kekule distortion, and in the presence of strain can result in a spatial texture known as an 'incommensurate Kekule spiral' (IKS) - a state introduced in Ref. [86], which [21; 22] also observed in strained devices.

Thirdly, our results suggest a resolution of another recent experimental puzzle. Tunnelling conductance measurements of the superconducting state near \(|\nu|=2\) show a transition from a V-shaped density of states to a U-shaped density of states as a function of doping [18; 19; 20], indicating a transition between nodal and fully-gapped superconductivity. Our prediction of \(A_{2}\) superconductivity in the Dirac theory near \(|\nu|=2\) provides a possible microscopic mechanism which naturally accounts for these features. Note that the U-shaped regime has only been reported in tTLG; however, it is generally believed that this system shares the same pairing symmetry as TBG.

Fourthly, our proposed link between revivals and superconductivity can explain the asymmetry of the phase diagram between electron (\(\nu>0\)) and hole (\(\nu<0\)) doping. Experiments have observed that the Dirac revivals appear in a different window of angles for \(\nu>0\) and \(\nu<0\) - e.g. Ref. [33] observed revivals for \(\nu>0\) in the range \(\theta\approx 0.88^{\circ}-1.04^{\circ}\), but observed revivals for \(\nu<0\) in the range \(\theta\approx 0.97^{\circ}-1.23^{\circ}\) [167]. In our theory, the Dirac revivals create the parent state from which superconductivity emerges at low temperatures - i.e. the quadratic momentum regime near the Dirac point - consistent with the observed asymmetry in the superconducting phase diagram.
Fifthly, the Dirac revival picture also offers two possible explanations of why the superconducting states at \(\nu=0,1,3\) appear to be less robust - firstly that the leading superconducting orders which appear are finite momentum states fragile to disorder, and secondly that the Dirac revival does not always appear at \(|\nu|=1,3\). Finally, the theory explains why superconductivity is generally absent when TBG is aligned with an hBN substrate, which breaks \(C_{2z}\) symmetry and gaps out the Dirac points, obviating the interaction physics of the band-touching point.

A host of other moire systems - including twisted multilayer graphene and twisted transition metal dichalcogenides - are characterised by Dirac particles near charge neutrality with flattened dispersions. Our RG results open up a possible approach to studying the interaction physics of these systems - in fact, experiments on twisted trilayer graphene also indicate signatures of Dirac revivals at integer fillings [114; 115; 116; 117], as well as multi-layer graphene proximitised with WSe\({}_{2}\) [118], so we anticipate the physics of quadratic Dirac fermions is directly relevant in these systems as well. Moreover, Bernal-stacked bilayer graphene proximitised with WSe\({}_{2}\) - recently found to exhibit superconductivity [156] - also possesses a flavour-polarised Fermi surface characterised by a spin-valley locking equivalent to our scenario for TBG near \(|\nu|=2\). Our analysis suggests that flavour polarisation and band-touching Dirac states are the essential ingredients in the emergence of insulating and proximate superconducting states in these systems.

## Acknowledgements

The authors thank Eva Andrei, Maine Christos, Shahal Illani, Eslam Khalaf, Yves Kwan, Ryan Lee, Kevin Nuckolls, Raquel Queiroz, and Senthil Todadri for discussions and comments on the manuscript. M.S.S. acknowledges funding by the European Union (ERC-2021-STG, Project 101040651--SuperCorr). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
2303.05542
Transcendence measure of $e^{1/n}$
For a given transcendental number $\xi$ and for any polynomial $P(X)=: \lambda_0+\cdots+\lambda_k X^k \in \mathbb{Z}[X]$, we know that $ P(\xi) \neq 0.$ Let $k \geq 1$ and $\omega (k, H)$ be the infimum of the numbers $r > 0$ satisfying the estimate $$ \left|\lambda_0+\lambda_1 \xi+\lambda_2 \xi^{2}+ \ldots +\lambda_k\xi^{k}\right| > \frac{1}{H^r}, $$ for all $(\lambda_0, \ldots ,\lambda_k)^T \in \mathbb{Z}^{k+1}\setminus\{\overline{0}\}$ with $\max_{1\le i\le k} \{|\lambda_i|\} \le H$. Any function greater than or equal to $\omega (k, H)$ is a {\it transcendence measure of $\xi$}. In this article, we find a transcendence measure of $ e^{1/n}$ which improves a result proved by Mahler (\cite{Mahler}) in 1975.
Marta Dujella, Anne-Maria Ernvall-Hytönen, Linda Frey, Bidisha Roy
2023-03-09T19:04:38Z
http://arxiv.org/abs/2303.05542v1
# Transcendence measure of \(e^{1/n}\)

###### Abstract.

For a given transcendental number \(\xi\) and for any polynomial \(P(X)=:\lambda_{0}+\dots+\lambda_{k}X^{k}\in\mathbb{Z}[X]\), we know that \(P(\xi)\neq 0.\) Let \(k\geq 1\) and \(\omega(k,H)\) be the infimum of the numbers \(r>0\) satisfying the estimate \[\left|\lambda_{0}+\lambda_{1}\xi+\lambda_{2}\xi^{2}+\dots+\lambda_{k}\xi^{k}\right|>\frac{1}{H^{r}},\] for all \((\lambda_{0},\dots,\lambda_{k})^{T}\in\mathbb{Z}^{k+1}\setminus\{\overline{0}\}\) with \(\max_{1\leq i\leq k}\{|\lambda_{i}|\}\leq H\). Any function greater than or equal to \(\omega(k,H)\) is a _transcendence measure of \(\xi\)_. In this article, we find a transcendence measure of \(e^{1/n}\) which improves a result proved by Mahler [7] in 1975.

We thank the organizers of the conference Women in Numbers Europe 4, especially but not exclusively Valentijn Karemaker and Nirvana Coppola. This conference set the foundation for this article. Furthermore, we thank the Universiteit Utrecht for granting unlimited coffee.

In [3], Ernvall-Hytonen, Leppala and Matala-aho established a transcendence measure for \(e\); as a corollary, they also derived a transcendence measure for integer powers of \(e\). However, we were not able to find any transcendence measures tailored for _roots of \(e\)_ in the literature. There are some general results in the literature, for instance, by Mahler [7] which can be used to derive a bound. Also, the generalized transcendence measure by Ernvall-Hytonen, Leppala and Matala-aho can be used to derive a bound. Our bound will be compared to these bounds in Section 2. In this article, we prove the following bound:

**Theorem 1**.: _Assume \(k\geq n\geq 2\). We have_ \[\left|\lambda_{0}+\lambda_{1}e^{1/n}+\lambda_{2}e^{2/n}+\ldots+\lambda_{k}e^{k/n}\right|>\frac{1}{H^{r}}, \tag{2}\] _where \(r>\omega(k,H)\) and we can choose_ \[\omega(k,H)=k+\frac{k^{2}\log k}{\log\log H}\left(1+\frac{0.69}{\log k-1}\right),\] _for \(k\geq 5\) and \(\omega(k,H)=k+\frac{k^{2}\log k}{\log\log H}d(k)\), where_ \[d(k)=\left\{\begin{array}{rl}&3.319\text{ for }k=2\\ &1.145\text{ for }k=3\\ &1.114\text{ for }k=4\end{array}\right.\] _and \(\log H\geq s(n,k)e^{s(n,k)}\) with \(s(n,k)=(k+n)(\log(k+n))^{2}\)._

We follow the approach used in [3].

## 2. Earlier results and comparisons to our bound

The following result can be obtained as a corollary of a much more general result presented by Mahler in 1975.

**Theorem 2** (Theorem 1 in [7]).: _Take \(a_{i}=i\) for \(i=0,\ldots,k\) (with \(k\geq 2\)) and \(a=n\) a positive integer. Let \(\lambda_{0},\ldots,\lambda_{k}\) be integers not all zero (in Mahler's paper \(x_{0},\ldots,x_{k}\)), \(C(r)=(k+1)^{2}r\sqrt{\log(n+k+1)\log r}\) and \(T\) be the product of the non-zero \(\lambda_{i}\). Then for \(r\) the smallest integer for which_ \[\frac{(r-1)!}{e^{2C(r-1)}}\leq\max\{|\lambda_{i}|\}<\frac{r!}{e^{2C(r)}}\] _we have_ \[|T(\lambda_{0}+\lambda_{1}e^{\frac{1}{n}}+\ldots+\lambda_{k}e^{\frac{k}{n}})|>\frac{\max\{|\lambda_{i}|\}}{e^{(2(k+1)-\frac{1}{4})C(r)}}. \tag{3}\]

The following result is due to Ernvall-Hytonen, Leppala and Matala-aho, and can be obtained as a corollary of a much more general result presented in [2].
**Theorem 3** (Corollary of [2]).: _We have_ \[\left|\lambda_{0}+\lambda_{1}e^{1/n}+\cdots+\lambda_{k}e^{k/n}\right|>\frac{M^{1-\hat{\delta}(M)}}{h_{0}h_{1}\ldots h_{k}},\] _where \(M=\max_{0\leq i\leq k}\{|\lambda_{i}|\}\), \(\hat{\delta}(M)\leq\frac{\hat{B}(\overline{\alpha})}{\sqrt{\log\log M}}\leq c_{k}k^{2}\sqrt{\log(g_{1}(\overline{\alpha})(1+g_{3}(\overline{\alpha})))}/\sqrt{\log\log M}\) and \(h_{i}=\max\{1,|\lambda_{i}|\}\), for \(i=0,\ldots,k\). Moreover, \(c_{k}=13\) if \(k<3\) and \(12\) otherwise._

In particular, for \(\overline{\alpha}=(0,\frac{1}{n},\ldots,\frac{k}{n})\) we have \(g_{1}(\overline{\alpha})=n\) and \(g_{3}(\overline{\alpha})=\frac{k}{n}\). Therefore, \[\left|\lambda_{0}+\lambda_{1}e^{1/n}+\cdots+\lambda_{k}e^{k/n}\right|>\frac{M}{h_{0}\cdots h_{k}M^{\frac{c_{k}k^{2}\sqrt{\log(n(1+k/n))}}{\sqrt{\log\log M}}}}.\] Let us now compare these results with our bound.

**Example 4**.: _Let us look at the family of polynomials with \(\frac{H}{2}\leq|\lambda_{i}|\leq H\) for all coefficients \(\lambda_{i}\) when \(1\leq i\leq k\) and compare our result with the results in [2] and [7]._

_Our result gives the bound_ \[\left|\lambda_{0}+\lambda_{1}e^{1/n}+\cdots+\lambda_{k}e^{k/n}\right|>H^{-k-\frac{k^{2}\log k}{\log\log H}\left(1+\frac{0.69}{\log k-1}\right)}\] _The bound by Ernvall-Hytonen, Leppala and Matala-aho:_ \[\left|\lambda_{0}+\lambda_{1}e^{1/n}+\cdots+\lambda_{k}e^{k/n}\right|>\frac{H}{h_{0}\cdots h_{k}H^{\frac{c_{k}k^{2}\sqrt{\log(n(1+k/n))}}{\sqrt{\log\log H}}}}\] _This bound is certainly not better than_ \[\left(\frac{H}{2}\right)^{-k-\frac{12k^{2}\sqrt{\log(n(1+k/n))}}{\sqrt{\log\log H}}}=H^{-k+k\frac{\log 2}{\log H}-\frac{12k^{2}\sqrt{\log(n+k)}}{\sqrt{\log\log H}}},\] _which is weaker than ours for large values of \(H\), because \(\sqrt{\log\log H}\) grows slower than \(\log\log H\)._

_Mahler 1975:_ \[|\lambda_{0}+\lambda_{1}e^{\frac{1}{n}}+\ldots+\lambda_{k}e^{\frac{k}{n}}|>\frac{\max\{|\lambda_{i}|\}}{Te^{(2(k+1)-\frac{1}{4})C(r)}}, \tag{4}\] _where \(T\) is the product of all non-zero \(\lambda_{i}\)'s._ \[\frac{\max\{|\lambda_{i}|\}}{Te^{(2(k+1)-\frac{1}{4})C(r)}}\leq\frac{1}{\left(\frac{H}{2}\right)^{k}e^{(2(k+1)-1/4)C(r)}}. \tag{5}\] _Mahler gives the bound \(\frac{\log x}{\log\log x}<r<\frac{6\log x}{\log\log x}\), where in his notation \(x\) is the maximum of the absolute values of the coefficients of the polynomial. In our setting, this inequality approximately translates to_ \[\frac{\log H}{\log\log H}<r<\frac{6\log H}{\log\log H}.\] _We lose some accuracy here because we only assumed the coefficients of the polynomial to lie in the interval \([\frac{H}{2},H]\). However, for the current purposes, this is not an issue._

_The denominator of (5) can now be written as_ \[\left(\frac{H}{2}\right)^{k}e^{(2(k+1)-1/4)C(r)}=H^{k-\frac{k\log 2-(2(k+1)-1/4)C(r)}{\log H}}=H^{k-\frac{k\log 2}{\log H}+\frac{(2(k+1)-1/4)C(r)}{\log H}}.\] _Let us now look at the expression \(\frac{(2(k+1)-1/4)C(r)}{\log H}\). Let us use the expression for \(C(r)\) and the bound for \(r\):_ \[C(r)=(k+1)^{2}r\sqrt{\log(n+k+1)\log r}\approx(k+1)^{2}\frac{\log H}{\log\log H}\sqrt{\log(n+k+1)\log\frac{\log H}{\log\log H}}\approx(k+1)^{2}\frac{\log H\sqrt{\log k}}{\sqrt{\log\log H}},\] _where we used the bound \(n\leq k\) to estimate \(\log(n+k+1)\approx\log k\), and that for large \(H\), \(\log\frac{\log H}{\log\log H}\approx\log\log H\)._
_Hence,_ \[\frac{(2(k+1)-1/4)C(r)}{\log H}\approx\frac{(2(k+1)-1/4)}{\log H}\cdot(k+1)^{2}\frac{\log H\sqrt{\log k}}{\sqrt{\log\log H}}\approx\frac{2(k+1)^{3}\sqrt{\log k}}{\sqrt{\log\log H}}.\] _Hence, our bound is also better than Mahler's bound, because \(\sqrt{\log\log H}\) grows slower than \(\log\log H\), and the numerator is bigger (dependence on \(k^{3}\) instead of \(k^{2}\))._

## 3. Preliminaries and the outline of the method

Ernvall-Hytonen, Leppala and Matala-aho [3] used the following approach: Assume that there is a sequence of simultaneous approximations \[L_{m,j}(h)=B_{m,0}(h)\Theta_{j}+B_{m,j}(h)\] with \(m=0,1,\ldots,k\) and \(j=1,2,\ldots,k\). Further assume \(B_{m,j}(h)\in\mathbb{Z}\) for all \(m,j\in\{0,1,\ldots,k\}\). Assume further that the coefficients \(B_{m,j}\) satisfy the following determinant condition: \[\begin{vmatrix}B_{0,0}&B_{0,1}&\cdots&B_{0,k}\\ B_{1,0}&B_{1,1}&\cdots&B_{1,k}\\ \vdots&\vdots&\ddots&\vdots\\ B_{k,0}&B_{k,1}&\cdots&B_{k,k}\end{vmatrix}\neq 0.\] Pick the functions \(Q(h)\), \(q(h)\), \(R(h)\) and \(r(h)\) to be such that they satisfy the following inequalities: \[B_{m,0}(h)\leq Q(h)=e^{q(h)}\] and \[\sum_{j=1}^{k}|L_{m,j}(h)|\leq R(h)=e^{-r(h)},\] for all \(h\geq h_{0}\), where the functions are of the form \[q(h)=ah\log h+bh\] and \[-r(h)=-ch\log h+dh.\] Assume that \(z(y)\) is the inverse function of the function \(y(z)=z\log z\). Further, denote \[B=b+\frac{ad}{c},\quad C=a,\quad D=a+b+ae^{-s(m)},\quad F^{-1}=2e^{D},\quad v=c-\frac{d}{s(m)},\quad h_{1}=\max\{h_{0},e,e^{s(m)}\}.\] Our choice will be \(s(n,k)=(n+k)(\log(n+k))^{2}\), and we will actually have \(h_{1}=e^{s(n,k)}\). Under the assumptions above, they proved the following lemma:

**Lemma 5** ([3]).: _Let \(m\geq 1\) and \(\log(2H)\geq vh_{1}\log h_{1}\). Then under the assumptions above_ \[|\lambda_{0}+\lambda_{1}\Theta_{1}+\dots+\lambda_{m}\Theta_{m}|>F(2H)^{-\frac{a}{c}-\epsilon(H)}, \tag{6}\] _where_ \[\epsilon(H)\log(2H)=Bz\left(\frac{\log(2H)}{v}\right)+C\log\left(z\left(\frac{\log(2H)}{v}\right)\right).\]

Furthermore, they gave the following construction for the approximations in the case of \(e^{\alpha_{j}}\): Write \(\overline{\alpha}=(\alpha_{0},\dots,\alpha_{k})\) and set \[\Omega(x,\overline{\alpha})=\prod_{j=0}^{m}(\alpha_{j}-x)^{\ell_{j}}=\sum_{i=0}^{L}\sigma_{i}x^{i}, \tag{7}\] where \(L=\ell_{0}+\ell_{1}+\dots+\ell_{m}\) and \(\sigma_{i}=\sigma_{i}(\overline{\ell},\overline{\alpha})\). Then choosing \[A_{0}(t)=\sum_{i=\ell_{0}}^{L}t^{L-i}i!\sigma_{i},\] we get \[e^{\alpha_{j}t}A_{0}(t)-A_{j}(t)=R_{j}(t), \tag{8}\] where \(A_{j}(t)\) is a polynomial with integer coefficients and \[\begin{cases}\deg A_{0}(t)=L-\ell_{0}\\ \deg A_{j}(t)=L-\ell_{j}\\ \operatorname{ord}_{t=0}R_{j}(t)\geq L+1.\end{cases}\] Notice that the polynomials depend on the values of \(\ell_{0},\ell_{1},\dots,\ell_{m}\) and on \(\overline{\alpha}\). We will explicitly describe \(A_{j}\) and \(R_{j}\) in the following chapter. In the following, we will be choosing \(\Theta_{j}=e^{j/n}\) for some \(n\geq 2\). We will then proceed in the same fashion as in [3] to construct the explicit polynomials used in the simultaneous approximations and to bound them. Finally, we simplify the estimate given by (6).

## 4. Explicit polynomial construction

We start by constructing the simultaneous approximations of the powers of the roots of \(e\). For estimating the required term, we set \(\overline{\alpha}=(\alpha_{0},\dots,\alpha_{k})\) with \(\alpha_{s}=s/n\), for \(s=0,1,\dots,k\).
Let \(\overline{\ell}=(\ell_{0},\dots,\ell_{k})\in\mathbb{Z}_{\geq 1}^{k+1}\) and \(L=\ell_{0}+\dots+\ell_{k}\). As explained in the previous chapter, we get the following approximation formulas for \(j=1,\dots,k\) \[e^{\alpha_{j}t}A_{0}(t)-A_{j}(t)=L_{j}(t),\] where \[A_{0}(t)=\sum_{i=\ell_{0}}^{L}t^{L-i}i!\sigma_{i}.\] With a direct computation (similarly as in [3]), we obtain \[\sigma_{i}=(-1)^{i}\sum_{\ell_{0}+i_{1}+\ldots+i_{k}=i}\binom{\ell_{1}}{i_{1}}\ldots\binom{\ell_{k}}{i_{k}}\left(\frac{1}{n}\right)^{\ell_{1}-i_{1}}\ldots\left(\frac{k}{n}\right)^{\ell_{k}-i_{k}}=(-1)^{i}\sum_{\ell_{0}+i_{1}+\ldots+i_{k}=i}\binom{\ell_{1}}{i_{1}}\ldots\binom{\ell_{k}}{i_{k}}n^{-L+i}2^{\ell_{2}-i_{2}}\ldots k^{\ell_{k}-i_{k}}.\] Furthermore, \(\sigma_{i}=0\) when \(0\leq i<\ell_{0}\) and so \[A_{0}(t)=\sum_{i=0}^{L}t^{L-i}i!\sigma_{i}.\] We now wish to bound the polynomials. The Laplace transform gives us a tool to switch from sums to integrals, which is helpful in estimates. Since \(\frac{i!\sigma_{i}(\overline{\ell},\overline{\alpha})}{t^{i+1}}=\mathcal{L}(\sigma_{i}(\overline{\ell},\overline{\alpha})x^{i})(t)\) (where \(\mathcal{L}\) denotes the Laplace transform), we have \[A_{0}(t)=\sum_{i=0}^{L}t^{L-i}i!\sigma_{i}=t^{L+1}\sum_{i=0}^{L}\mathcal{L}(\sigma_{i}x^{i})(t)=t^{L+1}\int_{0}^{\infty}e^{-xt}\Omega(x)dx,\] where \(\Omega(x):=\Omega(x,\overline{\alpha})\) is given by (7). Now for any \(\alpha_{j}\), we have \[e^{\alpha_{j}t}A_{0}(t)=t^{L+1}\int_{0}^{\infty}e^{(\alpha_{j}-x)t}\Omega(x)dx=t^{L+1}\left(\int_{0}^{\alpha_{j}}+\int_{\alpha_{j}}^{\infty}\right)e^{(\alpha_{j}-x)t}\Omega(x)dx.\] Changing the variable in the second integral, \(y=x-\alpha_{j}\), gives us: \[e^{\alpha_{j}t}A_{0}(t)=t^{L+1}\int_{0}^{\alpha_{j}}e^{(\alpha_{j}-x)t}\Omega(x)dx+t^{L+1}\int_{0}^{\infty}e^{-yt}\Omega(y+\alpha_{j})dy.\] Hence we get \[A_{j}(t)=t^{L+1}\int_{0}^{\infty}e^{-yt}\Omega(y+\alpha_{j})dy\] and \[L_{j}(t)=t^{L+1}\int_{0}^{\alpha_{j}}e^{(\alpha_{j}-x)t}\Omega(x)dx\] for \(j=1,\ldots,k\). We can make the result stronger if the terms in (8) are as small as possible. At the same time, we want to keep the coefficients of \(A_{j}(t)\) integral. Therefore, we try to find common factors of the coefficients that are as large as possible. To do that, we proceed as in [3]. We start by picking very specific values of \(\ell_{0},\ell_{1},\dots,\ell_{k}\) in relation to each other. For any \(u\) with \(0\leq u\leq k\), we take \(\ell_{s}^{(u)}=\begin{cases}\ell-1&\text{if }s=u\\ \ell&\text{otherwise}\end{cases}\) and \(\overline{\ell}^{(u)}=(\ell_{0}^{(u)},\ldots,\ell_{k}^{(u)})\). For these values of \(\overline{\ell}\), we denote \(A_{j}(t)=A_{\overline{\ell},j}(t)\) by \(A_{u,j}(t)\) and \(L_{j}(t)=L_{\overline{\ell},j}(t)\) by \(L_{u,j}(t)\). \[A_{0}(t)=\sum_{i=\ell_{0}}^{L}t^{L-i}i!(-1)^{i}\sum_{\ell_{0}+i_{1}+\ldots+i_{k}=i}\binom{\ell_{1}}{i_{1}}\ldots\binom{\ell_{k}}{i_{k}}n^{-L+i}2^{\ell_{2}-i_{2}}\ldots k^{\ell_{k}-i_{k}}.\] For our chosen \(\overline{\ell}\)-s we always have \(\ell_{0}\in\{\ell,\ell-1\}\), so we can see that \[\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,0}(t)\in\mathbb{Z}[t].\] Similarly, we can look at the polynomials \(A_{u,j}(t)\) for \(j=1,\ldots,k\). We have \[A_{u,j}(t)=t^{L+1}\int_{0}^{\infty}e^{-yt}\Omega(y+\alpha_{j})dy=t^{L+1}\sum_{i=0}^{L}\mathcal{L}(\sigma_{i}(\overline{\ell},\overline{\beta^{(j)}})x^{i})(t)=\sum_{i=0}^{L}t^{L-i}i!\sigma_{i}(\overline{\ell},\overline{\beta^{(j)}}),\] where \(\overline{\beta^{(j)}}=(\alpha_{0}-\alpha_{j},\ldots,\alpha_{k}-\alpha_{j})\) for each \(j\).
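As a quick numerical sanity check of this construction, the following minimal sketch in Python (the parameters \(n=2\), \(k=3\) and the small values of \(\ell\) are illustrative choices, and exact rational arithmetic is used to avoid cancellation) builds the \(\sigma_{i}\) by expanding \(\Omega\), forms \(A_{0}(1)\) and \(A_{j}(1)\) from the shifted tuples \(\overline{\beta^{(j)}}\), and observes that \(A_{j}(1)/A_{0}(1)\) approximates \(e^{j/n}\) with rapidly improving accuracy as \(\ell\) grows:

```python
from fractions import Fraction
from math import exp, factorial

def sigma(alphas, ells):
    """Coefficients sigma_i of Omega(x) = prod_j (alpha_j - x)^{ell_j}."""
    poly = [Fraction(1)]
    for a, l in zip(alphas, ells):
        for _ in range(l):
            new = [Fraction(0)] * (len(poly) + 1)
            for i, c in enumerate(poly):
                new[i] += a * c    # contribution of the constant alpha_j
                new[i + 1] -= c    # contribution of -x
            poly = new
    return poly

def A_at_1(alphas, ells):
    """A(1) = sum_i i! * sigma_i, the Laplace-transform representation at t = 1."""
    return sum(factorial(i) * s for i, s in enumerate(sigma(alphas, ells)))

n, k = 2, 3
alphas = [Fraction(s, n) for s in range(k + 1)]
for ell in (1, 2, 3):
    ells = [ell] * (k + 1)     # balanced choice; ell^{(u)} lowers one entry by 1
    A0 = A_at_1(alphas, ells)
    worst = max(abs(float(A_at_1([a - alphas[j] for a in alphas], ells) / A0)
                    - exp(j / n)) for j in range(1, k + 1))
    print(f"ell = {ell}:  max_j |A_j(1)/A_0(1) - e^(j/n)| = {worst:.2e}")
```

Increasing \(\ell\) further, the printed error decays roughly in line with the bound on \(\sum_{j}|L_{u,j}^{*}|\) derived below, until double precision becomes the limiting factor in the comparison.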
Since the coefficients \(\sigma_{i}(\overline{\ell},\overline{\beta^{(j)}})\) are defined using the polynomial \(\Omega(\overline{\ell},\overline{\beta^{(j)}})\), and in the product representation of \(\Omega\) the term \((t-\beta_{j}^{(j)})^{\ell_{j}}=(t-(\alpha_{j}-\alpha_{j}))^{\ell_{j}}=t^{\ell_{j}}\) will define the lowest degree of terms occurring in \(\Omega(\overline{\ell},\overline{\beta})\), and thereby in \(A_{u,j}\), we have \(\sigma_{i}(\overline{\ell},\overline{\beta})=0\) unless \(i\geq\ell_{j}\). Hence, all the coefficients in the representation of \(A_{u,j}(t)\) have the factor \(\ell_{j}!\), where \(\ell_{j}\) is again either \(\ell\) or \(\ell-1\). On the other hand, these terms have the denominator \(n^{L-i}\), so again \[\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,j}(t)\in\mathbb{Z}[t].\]

## 5. Estimation of \(A_{u,0}(1)\)

Next, we would like to estimate the term \(A_{u,0}(t)\), for which its representation as an integral will be useful. We have fixed \(\overline{\alpha}=(0,1/n,\ldots,k/n)\) and so \(\Omega(x)=\prod_{j=0}^{k}\left(j/n-x\right)^{\ell_{j}}\). Furthermore, for the choices of \(\overline{\ell}\) as in the previous section we have \(L+1=(k+1)\ell\). Therefore, \(A_{u,0}(t)\) looks like \[A_{u,0}(t)=t^{(k+1)\ell}\int_{0}^{\infty}e^{-xt}\prod_{j=0}^{k}\left(\frac{j}{n}-x\right)^{\ell_{j}^{(u)}}dx=t^{(k+1)\ell}\int_{0}^{\infty}e^{-xt}(-x)^{\ell}\left(\frac{1}{n}-x\right)^{\ell}\cdots\left(\frac{u}{n}-x\right)^{\ell-1}\cdots\left(\frac{k}{n}-x\right)^{\ell}dx.\] Note that \(\left|\frac{x^{\ell}(x-1/n)^{\ell}\cdots(x-k/n)^{\ell}}{(x-u/n)}\right|\leq x^{(k+1)\ell-1}\leq x^{(k+1)\ell}\), for \(x>\frac{k}{n}.\) This gives us an idea of how the function inside the integral behaves: while \(0\leq x\leq\frac{k}{n}\), the function stays relatively small, and it touches zero at points \(0,\frac{1}{n},\ldots,\frac{k}{n}\). However, when \(x\geq\frac{k}{n}\), it starts behaving roughly as \(x^{(k+1)\ell-1}e^{-x}\). Therefore, we split the above integral in the following way \[\int_{0}^{\infty}e^{-xt}\prod_{j=0}^{k}\left(\frac{j}{n}-x\right)^{\ell_{j}^{(u)}}dx=\left(\int_{0}^{k/n}+\int_{k/n}^{2(k+1)\ell}+\int_{2(k+1)\ell}^{\infty}\right)e^{-xt}\prod_{j=0}^{k}\left(\frac{j}{n}-x\right)^{\ell_{j}^{(u)}}dx. \tag{9}\] Now we treat the above integrals with \(t=1\). In that case, \[\left|\left(\int_{k/n}^{2(k+1)\ell}+\int_{2(k+1)\ell}^{\infty}\right)e^{-x}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-x\right)^{\ell}}{\left(\frac{u}{n}-x\right)}dx\right|\leq\left(\int_{k/n}^{2(k+1)\ell}+\int_{2(k+1)\ell}^{\infty}\right)e^{-x}x^{(k+1)\ell-1}dx:=I_{1}+I_{2}. \tag{10}\] For estimating the above integrals, we first consider \(I_{1}\). In that case, \[|I_{1}|=\left|\int_{k/n}^{2(k+1)\ell}e^{-x}x^{(k+1)\ell-1}dx\right|<2(k+1)\ell e^{-(k+1)\ell+1}((k+1)\ell-1)^{(k+1)\ell-1}\] because the expression \(e^{-x}x^{(k+1)\ell-1}\) is maximal when \(x=(k+1)\ell-1\). Let us now move to \(I_{2}\). **Lemma 6**.: _Let \(c>1\) be a constant.
We have_ \[\int_{c\ell(k+1)}^{\infty}e^{-x}\frac{\left(\prod_{j=0}^{k}\left|\frac{j}{n}-x\right|\right)^{\ell}}{\left|\frac{k^{\prime}}{n}-x\right|}dx\leq\frac{c}{c-1}e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-1}\] _for any \(0\leq k^{\prime}\leq k\)._ Proof.: Since each factor \(\left|\frac{j}{n}-x\right|\) is at most \(x\) on the domain of integration, the integrand is bounded by \(e^{-x}x^{\ell(k+1)-1}\). We can partially integrate: \[\int e^{-x}x^{t}dx=\left[-e^{-x}x^{t}\right]+\int e^{-x}tx^{t-1}dx,\] which gives us the series expansion for the integral above: \[\int_{c\ell(k+1)}^{\infty}e^{-x}x^{\ell(k+1)-1}dx=e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-1}+(\ell(k+1)-1)e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-2}+(\ell(k+1)-1)(\ell(k+1)-2)e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-3}+\ldots\leq e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-1}\left(1+\frac{(k+1)\ell-1}{c(k+1)\ell}+\frac{(k+1)\ell-1}{c(k+1)\ell}\cdot\frac{(k+1)\ell-2}{c(k+1)\ell}+\ldots\right)\leq\frac{c}{c-1}e^{-c(k+1)\ell}(c(k+1)\ell)^{\ell(k+1)-1}.\] Notice that if we pick \(c=2\), we get the following corollary: **Corollary 7**.: _We have the following estimate_ \[|I_{2}|\leq\int_{2\ell(k+1)}^{\infty}e^{-x}x^{\ell(k+1)-1}dx\leq 2e^{-2(k+1)\ell}(2(k+1)\ell)^{\ell(k+1)-1}.\] Next, it remains to get a bound for \[\int_{0}^{k/n}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy.\] **Lemma 8**.: _Assume \(k\geq 5\). Now_ \[\int_{0}^{k/n}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy\leq\frac{(k!)^{\ell}}{120^{\ell-1}n^{k\ell-5(\ell-1)}}c(n)^{\ell-1},\] _where \(c(n)=\max_{0\leq y\leq 1}\prod_{s=0}^{5}\left|\frac{s}{n}-y\right|.\) Furthermore, \(|c(n)|\leq 1\)._ Proof.: Observe that \(\max\limits_{v\leq y\leq v+1}\prod_{s=0}^{k}\left|\frac{s}{n}-y\right|^{\ell-1}\leq\max\limits_{0\leq y\leq 1}\prod_{s=0}^{k}\left|\frac{s}{n}-y\right|^{\ell-1},\) for any integer \(v\) with \(0\leq v\leq k-1\). Therefore, \[\left|\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}\right|\leq\frac{k!}{n^{k}}\max\limits_{0\leq y\leq 1}\prod_{s=0}^{k}\left|\frac{s}{n}-y\right|^{\ell-1}.\] Hence \[\left|\int_{0}^{k/n}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy\right|\leq\int_{0}^{k/n}e^{-y}\frac{k!}{n^{k}}\max\limits_{0\leq y\leq 1}\prod_{s=0}^{k}\left|\frac{s}{n}-y\right|^{\ell-1}dy\leq\frac{k!}{n^{k}}\frac{(k!)^{\ell-1}}{(5!)^{\ell-1}n^{(k-5)(\ell-1)}}\int_{0}^{k/n}e^{-y}\max\limits_{0\leq y\leq 1}\prod_{s=0}^{5}\left|\frac{s}{n}-y\right|^{\ell-1}dy\leq\frac{(k!)^{\ell}}{120^{\ell-1}n^{k\ell-5(\ell-1)}}c(n)^{\ell-1},\text{ writing }c(n)=:\max\limits_{0\leq y\leq 1}\prod_{s=0}^{5}\left|\frac{s}{n}-y\right|.\] By checking small values individually, and bounding \[\prod_{s=0}^{5}\left|\frac{s}{n}-y\right|\leq 1\] for \(n\geq 5\), we obtain \(|c(n)|\leq 1\). **Lemma 9**.: _Assume \(k\leq 4\).
Now_ \[\int_{0}^{k/n}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy\leq\frac{k!}{n^{k}}c(n,k)^{\ell-1},\] _where_ \[c(n,k)=\begin{cases}0.049&\text{for }(n,k)=(2,2)\\ \frac{1}{16}&\text{for }(n,k)=(2,3)\\ \frac{1}{81}&\text{for }(n,k)=(3,3)\\ 0.114&\text{for }(n,k)=(2,4)\\ 0.015&\text{for }(n,k)=(3,4)\\ 0.004&\text{for }(n,k)=(4,4)\end{cases}\] Proof.: We simply bound \[\int_{0}^{k/n}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy\leq\frac{k!}{n^{k}}\max\limits_{0\leq y\leq 1}\prod_{s=0}^{k}\left|y-\frac{s}{n}\right|^{\ell-1}\int_{0}^{k/n}e^{-y}dy,\] where the integral can be bounded to be at most \(1\), and the individual maxima can be determined using WolframAlpha. **Lemma 10**.: _Assume \(k\geq 2\). We have_ \[\int_{0}^{\infty}e^{-y}\frac{\prod_{s=0}^{k}\left(\frac{s}{n}-y\right)^{\ell}}{\left(\frac{u}{n}-y\right)}dy\leq\exp\left(\log 4+(k+1)\ell\log 2+(\ell(k+1)-1)\log((k+1)\ell)-\ell(k+1)+1\right)\] Proof.: Assume first \(k\geq 5\). Now taking the above three estimations into account, we obtain \[|A_{u,0}(1)|=\left|\int_{0}^{\infty}e^{-y}(-y)^{\ell}(\frac{1}{n}-y)^{\ell}\cdots(\frac{u}{n}-y)^{\ell-1}\cdots(\frac{k}{n}-y)^{\ell}dy\right|\leq\frac{(k!)^{\ell}}{120^{\ell-1}n^{k\ell-5(\ell-1)}}c(n)^{\ell-1}+2(k+1)\ell e^{-(k+1)\ell+1}((k+1)\ell-1)^{(k+1)\ell-1}+2e^{-2(k+1)\ell}(2(k+1)\ell)^{\ell(k+1)-1}\leq\frac{(k!)^{\ell}}{n^{k\ell-5(\ell-1)}}\left(\frac{c(n)}{120}\right)^{\ell-1}+\left(2(k+1)\ell+2\cdot 2^{\ell(k+1)-1}\right)e^{-(k+1)\ell+1}((k+1)\ell)^{\ell(k+1)-1}\leq\frac{(k!)^{\ell}}{n^{k\ell-5(\ell-1)}}\left(\frac{c(n)}{120}\right)^{\ell-1}+3\cdot 2^{(k+1)\ell}e^{-(k+1)\ell+1}((k+1)\ell)^{\ell(k+1)-1}\leq 4\cdot 2^{(k+1)\ell}e^{-(k+1)\ell+1}((k+1)\ell)^{\ell(k+1)-1},\qquad\text{for }n\geq 2\] \[\leq\exp\left(\log 4+(k+1)\ell\log 2+(\ell(k+1)-1)\log((k+1)\ell)-\ell(k+1)+1\right)\] For \(k\in\{2,3,4\}\) the only thing that changes is the first term, but because \(c(n,k)<1\) for all choices of \(k\) and \(n\) that interest us, we have that \[\frac{k!}{n^{k}}c(n,k)^{\ell-1}\leq 2^{(k+1)\ell}e^{-(k+1)\ell+1}((k+1)\ell)^{\ell(k+1)-1},\] so the final bound is also valid for these values. Finally, we actually need to get a bound for \(A_{u,0}^{*}(1)\). The following corollary gives us the desired bound. **Corollary 11**.: _For \(k\geq 3\) and \(\ell\geq\exp(s(n,k))\), we have_ \[\left|\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,0}(1)\right|\leq\exp\left(\ell k\log\ell+\ell\left[k\log k+k\log n+0.72k+0.000003\right]\right)\] _For \(k=2\), we have \(\left|\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,0}(1)\right|\leq\exp(2\ell\log\ell+3.377257\ell+2\ell\log n).\)_ Proof.: Applying the previous lemma and Stirling's formula we have \[\log|A_{u,0}(1)|-\log\left((\ell-1)!\right)\leq\ell k\log\ell+\ell\left((k+1)\log(k+1)+\log\ell-\log(\ell-1)-k+(k+1)\log 2\right)-\log\ell-\log(k+1)+\frac{1}{2}\log(\ell-1)+\log 4-\log\sqrt{2\pi}\] We first deal with the case when \(k\geq 3\). With this assumption, because \(\log\ell\geq s(n,k)\geq s(2,3)\) we have that \[\log(\ell)-\log(\ell-1)=\int_{\ell-1}^{\ell}\frac{dx}{x}\leq\frac{1}{\ell-1}<0.000003.\] Furthermore, \[(k+1)\log(k+1)-k\log(k)=k\log(1+1/k)+\log(k+1)\leq 1+\log(k+1).\] and \[-k+(k+1)\log 2+1+\log(k+1)<0.72k.\] Additionally, for all \(k\geq 2\) \[\frac{-\log\ell-\log(k+1)+\frac{1}{2}\log(\ell-1)+\log 4-\log\sqrt{2\pi}}{\ell}<0\] and goes to \(0\) as \(\ell\) grows.
Thus, multiplying by \(n^{L-\ell+1}=n^{k\ell}\) we get \[\left|\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,0}(1)\right|\leq\exp\left(\ell k\log\ell+\ell\left[k\log k+k\log n+0.72k+0.000003\right]\right)\] Let us now move to the case \(k=2\). We have \[\log(\ell)-\log(\ell-1)=\int_{\ell-1}^{\ell}\frac{dx}{x}\leq\frac{1}{\ell-1}<0.00046.\] For \(k=2\) we see that \[\log\left|A_{u,0}(1)\right|-\log\left((\ell-1)!\right)\leq(\log 4-3\ell+3\ell\log(6\ell))-(\ell-1)\log(\ell-1)+(\ell-1)-\frac{1}{2}\log(\ell-1)-\log\sqrt{2\pi}\leq 3\ell\log\ell+\ell\left(3\log 6-3+\frac{\log 4-1}{\ell}-\frac{\ell\log(\ell-1)}{\ell}+1+\frac{\log(\ell-1)}{2\ell}-\frac{\log\sqrt{2\pi}}{\ell}\right)\leq 2\ell\log\ell+\ell\left(3\log 6-2+\frac{\log 4/\sqrt{2\pi}-1}{\ell}+\log(\frac{\ell}{\ell-1})+\frac{\log(\ell-1)}{2\ell}\right)\leq 2\ell\log\ell+3.377\ell.\] Therefore, in this case, \(\left|\frac{n^{L-\ell+1}}{(\ell-1)!}A_{u,0}(1)\right|\leq\exp(2\ell\log\ell+3.377257\ell+2\ell\log n)\).

## 6. Integrals corresponding to the terms \(L_{u,j}^{*}\)

Next we need to get a suitable bound on the terms \(L_{u,j}^{*}\), or more precisely their sum \(\sum_{j=1}^{k}|L_{u,j}^{*}|\). The following lemma will be useful. **Lemma 12**.: _Let \(k\geq 3\) and \(n\geq 2\). Then_ \[\max_{0<x<k/n}\left|x\left(\frac{1}{n}-x\right)\left(\frac{2}{n}-x\right)\cdot\ldots\cdot\left(\frac{k}{n}-x\right)\right|\leq\frac{k!}{6n^{k+1}}.\] _If \(k=2\), then we have \(\max_{0<x<2/n}\left|x(\frac{1}{n}-x)(\frac{2}{n}-x)\right|\leq\frac{2}{3\sqrt{3}n^{3}}\)._ Proof.: By doing a change of variable \(y=nx\) the expression on the left becomes \[\max_{0<x<k/n}\left|x\left(\frac{1}{n}-x\right)\cdot\ldots\cdot\left(\frac{k}{n}-x\right)\right|=\frac{1}{n^{k+1}}\max_{0<y<k}\left|y(1-y)\cdot\ldots\cdot(k-y)\right|.\] By analyzing the function \(\left|y(1-y)(2-y)\cdot\ldots\cdot(k-y)\right|\) we see that its maximum on the interval \((0,k)\) is attained for the first time already on the interval \((0,1)\). If \(k\geq 3\) we have the following \[\max_{0<y<k}|y(1-y)\ldots(k-y)|\leq\max_{0<y<1}|(4-y)\ldots(k-y)|\cdot\max_{0<y<1}|y(1-y)(2-y)(3-y)|\leq\frac{k!}{3!}\max_{0<y<1}y(1-y)(2-y)(3-y).\] By taking the derivative we can see that the function \(y(1-y)(2-y)(3-y)\) achieves its maximum \(1\) for \(y=(3\pm\sqrt{5})/2\), which finally implies that \[\max_{0<x<k/n}\left|x\left(\frac{1}{n}-x\right)\cdot\ldots\cdot\left(\frac{k}{n}-x\right)\right|\leq\frac{k!}{6n^{k+1}}.\] Similarly for \(k=2\) we need to analyze the function \(y(1-y)(2-y)\), whose maximum \(\frac{2}{3\sqrt{3}}\) is achieved for \(y=1\pm\frac{1}{\sqrt{3}}\), from which the claim follows. **Lemma 13**.: _Let \(k\geq 2\) and \(n\geq 2\). Then_ \[|L^{*}_{u,j}(1)|\leq n^{L-\ell+1}\frac{(e^{\frac{j}{n}}-1)(k!)^{\ell}}{(\ell-1)!(c(k)n^{k+1})^{\ell-1}n^{k}},\] _where \(c(k)=6\) for \(k\geq 3\) and \(c(2)=3\sqrt{3}\)._ Proof.: Let \(j\in\{1,\ldots,k\}\). By the definition of \(|L^{*}_{u,j}(1)|\) we have \[|L^{*}_{u,j}(1)|(\ell-1)!
=n^{L-\ell+1}e^{\frac{j}{n}}\int_{0}^{\frac{j}{n}}e^{-x}\frac{\prod_{r=0}^{k}|\frac{r}{n}-x|^{\ell}}{|\frac{u}{n}-x|}dx\] \[=n^{L-\ell+1}e^{\frac{j}{n}}\int_{0}^{\frac{j}{n}}e^{-x}\prod_{r=0}^{k}|\frac{r}{n}-x|^{\ell-1}\frac{\prod_{r=0}^{k}|\frac{r}{n}-x|}{|\frac{u}{n}-x|}dx\] \[\leq n^{L-\ell+1}e^{\frac{j}{n}}\int_{0}^{\frac{j}{n}}e^{-x}\prod_{r=0}^{k}|\frac{r}{n}-x|^{\ell-1}\frac{k!}{n^{k}}dx.\] Because \(j\leq k\), we have that \(\max_{0<x<j/n}\prod_{r=0}^{k}|\frac{r}{n}-x|\leq\max_{0<x<k/n}\prod_{r=0}^{k}|\frac{r}{n}-x|\), which is at most \(\frac{k!}{c(k)n^{k+1}}\) by the previous lemma. So we further have \[|L^{*}_{u,j}(1)|(\ell-1)!\leq n^{L-\ell+1}e^{\frac{j}{n}}\int_{0}^{\frac{j}{n}}e^{-x}\left(\frac{k!}{c(k)n^{k+1}}\right)^{\ell-1}\frac{k!}{n^{k}}dx\] \[\leq n^{L-\ell+1}\frac{(k!)^{\ell}}{(c(k)n^{k+1})^{\ell-1}n^{k}}e^{\frac{j}{n}}\int_{0}^{\frac{j}{n}}e^{-x}dx\] \[\leq n^{L-\ell+1}\frac{(k!)^{\ell}}{(c(k)n^{k+1})^{\ell-1}n^{k}}(e^{\frac{j}{n}}-1)\] **Lemma 14**.: _Let \(k\geq 2\) and let \(c(k)\) be as in the previous lemma. We have_ \[\sum_{j=1}^{k}|L^{*}_{u,j}|\leq\frac{(k!)^{\ell}}{c(k)^{\ell-1}(\ell-1)!}n^{2-\ell}e^{(k+1)/n}.\] Proof.: We have the following: \[\sum_{j=1}^{k}(e^{j/n}-1)<\sum_{j=1}^{k}e^{j/n}=\frac{e^{(k+1)/n}-e^{1/n}}{e^{1/n}-1}<\frac{e^{(k+1)/n}}{e^{1/n}-1}.\] Since \[e^{1/n}-1=\int_{0}^{1/n}e^{x}dx>\frac{1}{n},\] this can be further estimated as \[\sum_{j=1}^{k}(e^{j/n}-1)<ne^{(k+1)/n}.\] Summing up the estimate of the previous lemma over \(j=1,\ldots,k\), we get \[\sum_{j=1}^{k}|L^{*}_{u,j}|\leq n^{L-\ell+1}\frac{(k!)^{\ell}}{(c(k)n^{k+1})^{\ell-1}n^{k}(\ell-1)!}ne^{(k+1)/n}.\] We can simplify this expression further by noticing that \[\frac{n^{L-\ell+1}n}{(n^{k+1})^{\ell-1}n^{k}}=n^{2-\ell},\] which follows from \(L=(k+1)\ell-1\). This finally gives us the bound \[\sum_{j=1}^{k}|L^{*}_{u,j}|\leq\frac{(k!)^{\ell}}{c(k)^{\ell-1}(\ell-1)!}n^{2-\ell}e^{(k+1)/n}.\] To make this bound suitable for use in the term \(\exp(-r(\ell))\) below, we need to simplify it further. **Lemma 15**.: _Let \(k\geq 3\) and \(\ell\geq e^{s(n,k)}=e^{(k+n)(\log(k+n))^{2}}\). We have_ \[\sum_{j=1}^{k}|L^{*}_{u,j}|\leq\exp\left(-\ell\log\ell+\ell(k\log k-0.81k-\log n+0.17)\right).\] _For \(k=2\) we have_ \[\sum_{j=1}^{2}|L^{*}_{u,j}|\leq\exp\left(-\ell\log\ell-0.64\ell\right).\] Proof.: First let \(k\geq 3\). We need to simplify the following expression: \[\log\left(\frac{(k!)^{\ell}}{6^{\ell-1}(\ell-1)!}n^{2-\ell}e^{(k+1)/n}\right)=\ell\log(k!)-(\ell-1)\log 6-\log(\ell-1)!+(2-\ell)\log n+\frac{k+1}{n}.\] We have \(\log(\ell-1)!=\log\ell!-\log\ell\). 
Further, we can use Stirling's formula to bound the factorials: \[\sqrt{2\pi\ell}\left(\frac{\ell}{e}\right)^{\ell}e^{1/(12\ell+1)}<\ell!<\sqrt{2\pi\ell}\left(\frac{\ell}{e}\right)^{\ell}e^{1/(12\ell)}.\] Hence \[\log\ell!>\frac{1}{2}\log(2\pi)+\frac{1}{2}\log\ell+\ell\log\ell-\ell+\frac{1}{12\ell+1}\] and similarly for \(k!\): \[\log k!<\frac{1}{2}\log(2\pi)+\frac{1}{2}\log k+k\log k-k+\frac{1}{12k}.\] Hence, we have \[\log\left(\frac{(k!)^{\ell}}{6^{\ell-1}(\ell-1)!}n^{2-\ell}e^{(k+1)/n}\right)\leq\ell\left(\frac{1}{2}\log(2\pi)+\frac{1}{2}\log k+k\log k-k+\frac{1}{12k}\right)-(\ell-1)\log 6\] \[-\log\ell!+\log\ell+(2-\ell)\log n+\frac{k+1}{n}\] \[\leq\ell\left(\frac{1}{2}\log(2\pi)+\frac{1}{2}\log k+k\log k-k+\frac{1}{12k}\right)-(\ell-1)\log 6+\log\ell\] \[-\left(\frac{1}{2}\log(2\pi)+\frac{1}{2}\log\ell+\ell\log\ell-\ell+\frac{1}{12\ell+1}\right)+(2-\ell)\log n+\frac{k+1}{n}\] \[\leq-\ell\log\ell+\ell(k\log k-0.81k-\log n+0.16)+\frac{1}{2}\log\ell\] \[+2\log n+\frac{k+1}{n}+0.88,\] because \(\log 6-\frac{1}{2}\log(2\pi)-\frac{1}{12\ell+1}<0.88\), \(\frac{1}{2}\log(2\pi)-\log 6+\frac{1}{12k}+1<0.16\) and \(\frac{1}{2}\log k-k<-0.81k\). We can simplify further by using the inequality \[\left(\frac{1}{2}\log\ell+2\log n+\frac{k+1}{n}+0.88\right)<0.00004\ell\] to obtain \[\log\left(\frac{(k!)^{\ell}}{6^{\ell-1}(\ell-1)!}n^{2-\ell}e^{(k+1)/n}\right)<-\ell\log\ell+\ell(k\log k-0.81k-\log n+0.17).\] A similar calculation for \(k=2\) gives us explicitly: \[\log\left(\sum_{j=1}^{2}|L_{u,j}^{*}|\right)\leq-\ell\log\ell+\ell(2\log 2+\frac{1}{2}\log 2-2+\frac{1}{24}-\log(3\sqrt{3})+\frac{1}{2}\log(2\pi)+1-\log n)\] \[+\log(3\sqrt{3})-\frac{1}{2}\log(2\pi)+\frac{1}{2}\log\ell-\frac{1}{12\ell+1}+2\log n+\frac{k+1}{n}\] \[\leq-\ell\log\ell+\ell(0.0456-\log n)+0.0035\ell\] \[\leq-\ell\log\ell-0.647\ell+0.0035\ell\leq-\ell\log\ell-0.64\ell.\] ## 7. Transcendence measure for \(e^{1/n}\) We are now ready to put the bounds together. For \(k\geq 3\) we have \[q(\ell)=\ell k\log\ell+\ell\left[k\log k+k\log n+0.72k+0.000003\right]\] and for \(k=2\) \[q(\ell)=2\ell\log\ell+\ell(3.377257+2\log n).\] Estimating the sum of the \(L^{*}_{u,j}\), we obtained, for \(k\geq 3\), \[-r(\ell)=-\ell\log\ell+\ell(k\log k-0.81k-\log n+0.17)\] and, for \(k=2\), \[-r(\ell)=-\ell\log\ell-0.64\ell.\] Using the notation of Section 3 in [3], equations (8) and (9), we have \[s(n,k)=(k+n)(\log(k+n))^{2}\quad\text{(this function plays the role of }s(m)\text{ there)}\] \[a=k\] \[b=k\log k+k\log n+0.72k+0.000003\] \[c=1\] \[d=k\log k-0.81k-\log n+0.17.\] Now, with the notation of Section 3 in [3], equation (10), we have \[B=b+\frac{ad}{c}=k\log k+k\log n+0.72k+0.000003+k(k\log k-0.81k-\log n+0.17)\] \[=k\log k+0.89k+0.000003+k^{2}\log k-0.81k^{2}\] \[C=a=k\] \[D=a+b+ae^{-s(n,k)}=k+k\log k+k\log n+0.72k+0.000003+\frac{k}{e^{(k+n)(\log(k+n))^{2}}}\] \[F^{-1}=2e^{D}\] \[v=1-\frac{k\log k-0.81k-\log n+0.17}{(k+n)(\log(k+n))^{2}}\] \[n_{1}=e^{(n+k)(\log(n+k))^{2}}.\] Now we have \[|\lambda_{0}+\lambda_{1}e^{1/n}+\cdots+\lambda_{k}e^{k/n}|>F(2H)^{-a/c-\epsilon(H)},\] where \[\epsilon(H)=\frac{1}{\log(2H)}\left(Bz\left(\frac{\log(2H)}{v}\right)+C\log\left(z\left(\frac{\log(2H)}{v}\right)\right)\right).\] The term \(H^{-a/c}=H^{-k}\) will form the main term, and everything else will be collected into the second term in the exponent of \(H\). 
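As a quick numerical sanity check (not part of the proof), the constants above can be evaluated mechanically. The following Python sketch, with naming of our own choosing, computes \(s(n,k)\), \(B\), \(C\), \(D\), \(u\) and \(v\) directly from the definitions for given \(k\geq 3\) and \(n\geq 2\):

```python
import math

def constants(k, n):
    """Evaluate s(n,k), a, b, c, d and the derived B, C, D, u, v from the
    definitions above (the k >= 3 expressions for b and d are used)."""
    s = (k + n) * math.log(k + n) ** 2
    a, c = k, 1
    b = k * math.log(k) + k * math.log(n) + 0.72 * k + 0.000003
    d = k * math.log(k) - 0.81 * k - math.log(n) + 0.17
    B = b + a * d / c   # = k log k + 0.89 k + 0.000003 + k^2 log k - 0.81 k^2
    C = a
    D = a + b + a * math.exp(-s)
    u = 1 + math.log(s) / s
    v = 1 - d / s
    return {"s": s, "B": B, "C": C, "D": D, "u": u, "v": v}

# Example: the constants entering the bound for (n, k) = (2, 3).
print(constants(k=3, n=2))
```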
This second term will be formed from the factors \[F2^{-a/c}(2H)^{-\epsilon(H)}.\] For \(H\) large enough, we have \[1<|\Lambda|2(2H)^{\frac{a}{c}}e^{\epsilon(H)\log(2H)+D}=|\Lambda|H^{\frac{a}{c}+Y}=|\Lambda|H^{k+Y},\] where \[Y:=\frac{1}{\log H}\left(Bz\left(\frac{\log(2H)}{v}\right)+C\log\left(z\left(\frac{\log(2H)}{v}\right)\right)+D+(k+1)\log 2\right)\] \[\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+k\log\left(\frac{u}{v}\frac{\log(2H)}{\log\log(2H)}\right)+D+(k+1)\log 2\right)\] and \[u=1+\frac{\log(s(n,k))}{s(n,k)}.\] We use the fact that \(\log H\geq s(n,k)e^{s(n,k)}\). Now we need to estimate the terms involving \(D\) and \(B\). \[D+(k+1)\log 2=k+k\log k+k\log n+0.72k+0.000003+\frac{k}{e^{(k+n)(\log(k+n))^{2}}}+(k+1)\log 2\] \[=k+k\log k+k\log n+k\left(0.72+\frac{0.000003}{k}+\frac{1}{e^{(k+n)(\log(k+n))^{2}}}+\log 2+\frac{\log 2}{k}\right)\] \[\leq k\log k+k\log n+3.4k.\] For the next term, we observe \[k\log\left(\frac{u}{v}\frac{\log(2H)}{\log\log(2H)}\right)\leq k\log(u\log(2H)).\] Therefore, we get \[Y\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+k\log(u\log(2H))+k\log k+k\log n+3.4k\right)\] \[\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+k\log(2\log H)+k\log(2\log 2)+k\log k+k\log n+3.4k\right)\] \[\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+k\log(2\log H)+\frac{k}{2}+k\log k+k\log n+3.4k\right)\] \[\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+\frac{69}{10}k\log(2\log H)\right)\] \[\leq\frac{1}{\log\log H}\left(\frac{\log(2H)}{\log H}\cdot\frac{uB}{v}+\frac{6.9\log\log H\cdot k\log(2\log H)}{\log H}\right)\] \[=\frac{u}{v\log\log H}\left(B+\frac{1}{\log H}\left(\log(2)B+\frac{v\log\log H\cdot 6.9k\log(2\log H)}{u}\right)\right)\] We have \[Y\leq\left\{\begin{array}{rl}\frac{u}{v\log\log H}\left(B+0.744754115\right),&\text{for }k=2\\ \frac{u}{v\log\log H}\left(B+0.04386773\right),&\text{for }k=3\\ \frac{u}{v\log\log H}\left(B+0.00075786\right),&\text{for }k=4\\ \frac{u}{v\log\log H}\left(B+0.00000412\right),&\text{for }k=5\\ \frac{u}{v\log\log H}\left(B+7.976\times 10^{-9}\right),&\text{for }k=6\end{array}\right.\] For \(k\geq 6\), we observe \[Y\leq\frac{u}{v\log\log H}\left(B+10^{-8}\right).\] For small values of \(k\), we proceed case by case. For \(k=2\), take \(n=2\) and recall \[b=3.377257+2\log 2,\qquad d=-0.64,\qquad a=2\text{ and }c=1,\] which implies \[B=3.878864\quad\text{and}\quad D=7.159781,\qquad\frac{u}{v}\leq 1.151906.\] We also have \(\log H\geq s(2,2)e^{s(2,2)}\approx 7.69\times e^{7.69}\), so \[Y\leq\frac{1}{\log H}\left(\frac{uB}{v}\frac{\log(2H)}{\log\log(2H)}+2\log\left(\frac{u}{v}\frac{\log(2H)}{\log\log(2H)}\right)+D+3\log 2\right)\leq\frac{9.202255}{\log\log H}\] Let us now look at the case \(k\geq 3\). For simplicity's sake, write \[Y\leq\frac{u}{v\log\log H}(B+\theta)\] Define \[f(n,k)=\frac{u}{vk^{2}\log k}\left(B+\theta\right)=\frac{\left(1+\frac{\log((k+n)(\log(k+n))^{2})}{(k+n)(\log(k+n))^{2}}\right)\left(1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}\right)}{1-\frac{k\log k-0.81k-\log n+0.17}{(k+n)(\log(k+n))^{2}}}\] The expression can be further simplified to \[f(n,k)=\frac{\left(1+\frac{1}{(k+n)\log(k+n)}+\frac{2\log\log(k+n)}{(k+n)(\log(k+n))^{2}}\right)\left(1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}\right)}{1-\frac{k\log k-0.81k-\log n+0.17}{(k+n)(\log(k+n))^{2}}}\] Let us look at the numerator. 
We have \[\left(1+\frac{1}{(k+n)\log(k+n)}+\frac{2\log\log(k+n)}{(k+n)(\log(k+n))^{2}}\right)\left(1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}\right)\] \[=1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}+\frac{1}{(k+n)\log(k+n)}+\frac{1}{k(k+n)\log(k+n)}+\frac{0.89}{k(k+n)\log k\log(k+n)}\] \[+\frac{0.000003+\theta}{k^{2}\log k(k+n)\log(k+n)}-\frac{0.81}{(k+n)\log(k+n)\log k}+\frac{2\log\log(k+n)}{(k+n)(\log(k+n))^{2}}+\frac{2\log\log(k+n)}{k(k+n)(\log(k+n))^{2}}\] \[+\frac{2\cdot 0.89\log\log(k+n)}{k(k+n)\log k(\log(k+n))^{2}}+\frac{2\cdot(0.000003+\theta)\log\log(k+n)}{k^{2}(k+n)\log k(\log(k+n))^{2}}-\frac{2\cdot 0.81\log\log(k+n)}{(k+n)(\log(k+n))^{2}\log k}.\] We can now verify using WolframAlpha that \[\frac{1}{k(k+n)\log(k+n)}+\frac{0.89}{k(k+n)\log k\log(k+n)}+\frac{0.000003+\theta}{k^{2}\log k(k+n)\log(k+n)}-\frac{0.81}{(k+n)\log(k+n)\log k}<0\] and \[\frac{2\log\log(k+n)}{k(k+n)(\log(k+n))^{2}}+\frac{2\cdot 0.89\log\log(k+n)}{k(k+n)\log k(\log(k+n))^{2}}+\frac{2\cdot(0.000003+\theta)\log\log(k+n)}{k^{2}(k+n)\log k(\log(k+n))^{2}}-\frac{2\cdot 0.81\log\log(k+n)}{(k+n)(\log(k+n))^{2}\log k}<0\] for \(k\geq 3\). Furthermore, the denominator can be written as \[1-\frac{k\log k}{(k+n)(\log(k+n))^{2}}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}+\frac{\log n}{(k+n)(\log(k+n))^{2}}-\frac{0.17}{(k+n)(\log(k+n))^{2}}\] \[>1-\frac{k\log k}{(k+n)(\log(k+n))^{2}}+\frac{0.81k}{(k+n)(\log(k+n))^{2}},\] since \(\frac{\log n}{(k+n)(\log(k+n))^{2}}-\frac{0.17}{(k+n)(\log(k+n))^{2}}>0\). We can thus estimate \[f(n,k)<\frac{1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}+\frac{1}{(k+n)\log(k+n)}+\frac{2\log\log(k+n)}{(k+n)(\log(k+n))^{2}}}{1-\frac{k\log k}{(k+n)(\log(k+n))^{2}}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}}.\] If \(k=3\), we have \(f(2,3)<1.145\) and \(f(3,3)<1.08\), so \(f(2,3)\) gives the larger value. For \(k=4\), we have \(f(2,4)<1.114\), \(f(3,4)<1.05\) and \(f(4,4)<1\), so \(f(2,4)\) yields the largest bound. For \(k\geq 5\) we use the estimates \[\frac{1}{(k+n)\log(k+n)}+\frac{2\log\log(k+n)}{(k+n)(\log(k+n))^{2}}<\frac{0.81k}{(k+n)(\log(k+n))^{2}}\] (after multiplying both sides by \((k+n)(\log(k+n))^{2}\), the left-hand side becomes \(\log(k+n)+2\log\log(k+n)\leq(1+\frac{2}{e})\log(k+n)<1.74\log(k+n)\), while the right-hand side \(0.81k\) is at least \(1.75\log(k+n)\) if \(k\geq 5\) and \(n\leq k\)) and \[\frac{k\log k}{(k+n)(\log(k+n))^{2}}<\frac{1}{\log k},\] to further simplify the expression: \[f(n,k)<\frac{1+\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}-\frac{0.81}{\log k}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}}{1-\frac{1}{\log k}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}}\] Now \[\frac{1}{k}+\frac{0.89}{k\log k}+\frac{0.000003+\theta}{k^{2}\log k}=\frac{1}{\log k}\left(\frac{\log k}{k}+\frac{0.89}{k}+\frac{0.000003+\theta}{k^{2}}\right)<\frac{1}{\log k}\left(0.3219+0.178+0.0000003\right)<\frac{0.5}{\log k},\] where the terms are estimated using \(k\geq 5\). Hence \[f(n,k)<\frac{1-\frac{0.31}{\log k}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}}{1-\frac{1}{\log k}+\frac{0.81k}{(k+n)(\log(k+n))^{2}}}=1+\frac{0.69}{\log k-1+\frac{0.81k\log k}{(k+n)(\log(k+n))^{2}}}<1+\frac{0.69}{\log k-1}\] and therefore \[Y\leq\frac{k^{2}\log k}{\log\log H}f(n,k)<\frac{k^{2}\log k}{\log\log H}\left(1+\frac{0.69}{\log k-1}\right).\]
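The case-by-case values of \(f(n,k)\) quoted above can be reproduced numerically. A minimal Python sketch of such a check (the function name and structure are ours), evaluating the displayed upper bound on \(f(n,k)\) with \(\theta\) read off the table for \(Y\), is:

```python
import math

# theta values read off the table for Y above, indexed by k
THETA = {3: 0.04386773, 4: 0.00075786, 5: 0.00000412}

def f_bound(n, k, theta):
    """The displayed upper bound on f(n, k)."""
    log_kn = math.log(k + n)
    num = (1 + 1 / k + 0.89 / (k * math.log(k))
           + (0.000003 + theta) / (k ** 2 * math.log(k))
           - 0.81 / math.log(k)
           + 1 / ((k + n) * log_kn)
           + 2 * math.log(log_kn) / ((k + n) * log_kn ** 2))
    den = (1 - k * math.log(k) / ((k + n) * log_kn ** 2)
           + 0.81 * k / ((k + n) * log_kn ** 2))
    return num / den

# The printed values should come out below 1.145, 1.08, 1.114, 1.05 and 1,
# respectively, matching the claims in the text.
for n, k in [(2, 3), (3, 3), (2, 4), (3, 4), (4, 4)]:
    print(f"f({n},{k}) < {f_bound(n, k, THETA[k]):.4f}")
```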
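The same style of numerical plausibility check applies to the polynomial maxima used earlier (the table of \(c(n,k)\) and Lemma 12). A crude grid search, ours and not a rigorous bound, is enough to confirm the tabulated values:

```python
import math

def grid_max(n, k, steps=200_000):
    """Grid-search the maximum over [0, k/n] of |prod_{s=0}^{k} (s/n - y)|."""
    best = 0.0
    for i in range(steps + 1):
        y = (k / n) * i / steps
        p = 1.0
        for s in range(k + 1):
            p *= abs(s / n - y)
        best = max(best, p)
    return best

# Each grid maximum should stay below the tabulated c(n, k)
# (0.049, 1/16, 1/81, 0.114, 0.015, 0.004) and, after rescaling as in
# Lemma 12, below k!/(6 n^(k+1)) for k >= 3 and 2/(3*sqrt(3)*n^3) for k = 2.
for n, k in [(2, 2), (2, 3), (3, 3), (2, 4), (3, 4), (4, 4)]:
    print(f"(n,k)=({n},{k}): {grid_max(n, k):.6f}")
```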